All events, All years

Denise Cai: Linking memories across time & Tristan Shuman: Breakdown of spatial coding and interneuron synchronization in epileptic mice

Lecture
Date:
Thursday, January 9, 2020
Hour: 14:30 - 15:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Denise Cai and Tristan Shuman
|
Mount Sinai School of Medicine, New York

Denise Cai: Linking memories across time
The compilation of memories, collected and aggregated across a lifetime, defines our human experience. My lab is interested in dissecting how memories are stored, updated, integrated and retrieved across a lifetime. Recent studies suggest that a shared neural ensemble may link distinct memories encoded close in time. Using in vivo calcium imaging (with open-source Miniscopes in freely behaving mice), the TetTag transgenic system, chemogenetics, electrophysiology and novel behavioral designs, we tested how hippocampal networks temporally link memories. Multiple convergent findings suggest that contextual memories encoded close in time are linked by directing storage into overlapping hippocampal ensembles, such that the recall of one memory can trigger the recall of another, temporally related memory. Alterations of this process (e.g., in aging or PTSD) affect the temporal structure of memories, thus impairing efficient recall of related information.

Tristan Shuman: Breakdown of spatial coding and interneuron synchronization in epileptic mice
Temporal lobe epilepsy causes severe cognitive deficits, yet the circuit mechanisms that alter cognition remain unknown. We hypothesized that the death and reorganization of inhibitory connections during epileptogenesis may disrupt the synchrony of hippocampal inhibition. To test this, we recorded simultaneously from CA1 and dentate gyrus (DG) with silicon probes in pilocarpine-treated epileptic mice during head-fixed virtual navigation. We found desynchronized interneuron firing between CA1 and DG in epileptic mice. Since hippocampal interneurons control information processing, we tested whether CA1 spatial coding was altered in this desynchronized circuit using a novel wire-free Miniscope. We found that CA1 place cells in epileptic mice were unstable and completely remapped across a week. This place cell instability emerged ~6 weeks after status epilepticus, well after the onset of chronic spontaneous seizures and interneuron death. Finally, our CA1 network model showed that desynchronized inputs can impair the information content and stability of CA1 place cells. Together, these results demonstrate that temporally precise intra-hippocampal communication is critical for spatial processing and that hippocampal desynchronization contributes to spatial coding deficits in epileptic mice.
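To make the idea of overlapping hippocampal ensembles concrete, here is a minimal sketch (not the authors' analysis pipeline; the cell counts, event rates, and data are invented) that measures how much two contexts' active-cell ensembles overlap in a binarized calcium-event matrix and compares that overlap to a shuffle-based chance level.

```python
# Hypothetical sketch: overlap between the cell ensembles active in two contexts,
# from a binarized calcium-event matrix (cells x sessions). All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 200
# events[c, s] = True if cell c emitted calcium events in session s (0 = context A, 1 = context B)
events = rng.random((n_cells, 2)) < 0.3

ensemble_a = set(np.flatnonzero(events[:, 0]))
ensemble_b = set(np.flatnonzero(events[:, 1]))

def jaccard(a, b):
    """Fraction of active cells shared by the two ensembles."""
    return len(a & b) / len(a | b)

observed = jaccard(ensemble_a, ensemble_b)

# Chance level: repeatedly replace ensemble B with a random set of cells of the same size.
chance = np.mean([
    jaccard(ensemble_a, set(rng.choice(n_cells, size=len(ensemble_b), replace=False)))
    for _ in range(1000)
])
print(f"observed overlap: {observed:.2f}  chance: {chance:.2f}")
```

Under the linking hypothesis, contexts encoded close in time should show overlap above this chance level, while contexts encoded far apart should not.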

Imaging deep: sensory and state coding in subcortical circuits

Lecture
Date:
Thursday, January 9, 2020
Hour: 11:00
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Jan Grundemann
|
Dept of Biomedicine, University of Basel

Internal states, including affective or homeostatic states, are important behavioral motivators. The amygdala is a key regulator of motivated behaviors, yet how distinct internal states are represented in amygdala circuits is unknown. Here, by longitudinally imaging neural calcium dynamics across different environments in freely moving mice, we identify changes in the activity levels of two major, non-overlapping populations of principal neurons in the basal amygdala (BA) that predict switches between exploratory and non-exploratory (defensive, anxiety-like) states. Moreover, the amygdala broadcasts state information via several output pathways to larger brain networks, and sensory responses in BA occur independently of behavioral state encoding. Thus, the brain processes external stimuli and internal states orthogonally, which may facilitate rapid and flexible selection of appropriate, state-dependent behavioral responses.
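The claim that two non-overlapping BA populations predict behavioral-state switches can be illustrated with a simple decoding exercise. The sketch below is hypothetical (synthetic data, logistic regression as a generic decoder), not the study's actual analysis.

```python
# Hypothetical sketch: decode exploratory vs. non-exploratory state from the mean
# calcium activity of two simulated BA principal-neuron populations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_frames = 1000
state = rng.integers(0, 2, n_frames)                 # 0 = exploratory, 1 = non-exploratory
pop1 = state + rng.normal(0, 1, n_frames)            # population 1 more active in state 1
pop2 = (1 - state) + rng.normal(0, 1, n_frames)      # population 2 more active in state 0
X = np.column_stack([pop1, pop2])

accuracy = cross_val_score(LogisticRegression(), X, state, cv=5).mean()
print(f"cross-validated state-decoding accuracy: {accuracy:.2f}")
```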

The Use of Mental Imagery in Enhancing Human Motor and Cognitive Functions: From Dancers to Parkinson’s Disease

Lecture
Date:
Wednesday, January 1, 2020
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Amit Abraham
|
PhD, MAPhty (Musculoskeletal), B.P.T., Dept of General Medicine & Geriatrics, Emory School of Medicine

In recent years, a growing body of scientific and clinical evidence points to the effectiveness of mental imagery (MI) in enhancing motor and non-motor aspects of performance in a variety of populations, including athletes, dancers, and people with neurodegenerative conditions such as Parkinson's disease (PD). However, MI's mechanisms of effect are not yet fully understood, and best practices for MI, along with the potential benefits of implementing such approaches in sports and neurorehabilitation, remain in their infancy. This talk will focus on three MI-based approaches to movement retraining that I study (motor imagery practice, dynamic neurocognitive imagery, and the Gaga movement language), which use a variety of neuro-cognitive elements, including problem-solving, proprioception, and internally guided movement, along with anatomical and metaphorical imagery, self-touch, and self-talk. I will give a brief background on MI and its beneficial effects on human motor and cognitive performance, followed by a review of my research into MI for PD rehabilitation and dance training. I will specifically discuss our work on MI and its association with body schema in people with PD. Lastly, future directions in basic, translational, and clinical research will be discussed.

Wearable high resolution electrophysiology for recording freely behaving humans

Lecture
Date:
Tuesday, December 31, 2019
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Yael Hanein
|
School of Electrical Engineering, Tel Aviv University

Electroencephalography and surface electromyography are notoriously cumbersome technologies. A typical setup may involve bulky electrodes, dangling wires, and a large amplifier unit. The wide adoption of these technologies across numerous applications has accordingly been fairly limited. Thanks to the availability of printed electronics, it is now possible to dramatically simplify these techniques. Elegant electrode arrays with unprecedented performance can be readily produced, eliminating the need to handle multiple electrodes and wires. Specifically, in this presentation I will discuss how printed electronics can improve signal transmission at the electrode-skin interface, facilitate electrode-skin stability, and enhance user convenience during electrode placement while allowing prolonged use. Customizing electrode array designs and implementing blind source separation methods can also improve recording resolution, reduce variability between individuals, and minimize signal cross-talk between nearby electrodes. Finally, I will outline several important applications in the field of neuroscience and how each can benefit from the convergence of electrophysiology and printed electronics.
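Blind source separation, mentioned above as a way to reduce cross-talk between nearby electrodes, can be sketched with independent component analysis. The example below is a hypothetical illustration (synthetic EMG-like sources and an invented mixing matrix), not Prof. Hanein's pipeline; FastICA is used as one standard separation method.

```python
# Hypothetical sketch: unmix multi-channel surface-EMG-like recordings in which each
# simulated electrode picks up a weighted mixture of two independent sources.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 5000)
# Two independent "muscle" sources: noise bursts gated at different times.
s1 = rng.normal(0, 1, t.size) * (np.sin(2 * np.pi * 0.5 * t) > 0)
s2 = rng.normal(0, 1, t.size) * (np.sin(2 * np.pi * 0.3 * t + 1.0) > 0)
sources = np.column_stack([s1, s2])

# Each of three electrodes records a different mixture of the two sources (cross-talk).
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0],
                   [0.8, 0.8]])
recordings = sources @ mixing.T

# FastICA recovers the underlying sources up to sign and ordering.
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
recovered = ica.fit_transform(recordings)
print(recovered.shape)  # (5000, 2) estimated independent sources
```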

How do We Recognize Faces? Insights from biological and artificial face recognition systems

Lecture
Date:
Tuesday, December 24, 2019
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Galit Yovel
|
School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University

Face recognition is a computationally challenging classification task that requires generalization across different views of the same identity as well as discrimination across different identities of a relatively homogeneous set of visual stimuli. How does the brain resolve this taxing classification task? It is well established that faces are processed by specialized neural mechanisms in high-level visual cortex. Nevertheless, it is not clear how this divergence into a face-specific and an object-general system contributes to face recognition. Recent advances in machine face recognition, together with our understanding of how humans recognize faces, enable us to address this question. In particular, I will show that a deep convolutional neural network (DCNN) that is trained on face recognition, but not a DCNN that is trained on object recognition, is sensitive to the same view-invariant facial features that humans use for face recognition. Similar to the hierarchical architecture of the visual system, which diverges into a face and an object system at high-level visual cortex, a human-like, view-invariant face representation emerges only at higher layers of the face-trained but not the object-trained neural network. This view-invariant face representation is specific to the category of faces that the system was trained with, in both humans and machines. I will therefore further emphasize the important role of experience and suggest that human face recognition depends on our social experience with familiar faces ("supervised learning") rather than passive perceptual exposure to unfamiliar faces ("unsupervised learning"), highlighting the important role of social cognition in face recognition.
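One way to make "a view-invariant representation emerges only at higher layers" concrete is a layer-wise invariance index: the gap between the similarity of same-identity, different-view embeddings and the similarity of different-identity embeddings. The sketch below is a hypothetical illustration on synthetic embeddings, not the actual face- and object-trained DCNNs from the talk.

```python
# Hypothetical sketch: a layer-wise view-invariance index on synthetic embeddings.
import numpy as np

def view_invariance_index(embeddings, identities):
    """embeddings: (n_images, dim) activations of one layer; identities: (n_images,) labels.
    Returns mean same-identity similarity minus mean different-identity similarity."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T                                        # cosine similarity matrix
    same = identities[:, None] == identities[None, :]
    off_diag = ~np.eye(len(identities), dtype=bool)
    return sim[same & off_diag].mean() - sim[~same].mean()

# Toy data: 10 identities x 5 views, with the identity signal growing in "deeper" layers.
rng = np.random.default_rng(3)
ids = np.repeat(np.arange(10), 5)
for layer, id_strength in enumerate([0.1, 0.5, 2.0]):    # early -> late layer
    emb = rng.normal(size=(50, 64)) + id_strength * rng.normal(size=(10, 64))[ids]
    print(f"layer {layer}: invariance index = {view_invariance_index(emb, ids):.2f}")
```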

A visual motion detector: From the connectome to a theory of transformation learning

Lecture
Date:
Monday, December 23, 2019
Hour: 12:45
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Dmitri "Mitya" Chklovskii
|
Simons Foundation's Flatiron Institute and NYU Medical Center

Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is the unsupervised learning of a visual motion detector from pairs of consecutive video frames. Here, by optimizing a principled objective function, we derive an unsupervised algorithm that maps onto a biologically plausible neural network. When trained on video frames, the neural network recapitulates the reconstructed connectome of the fly motion detector. In particular, local motion detectors combine information from at least three adjacent pixels, which contradicts the celebrated Hassenstein-Reichardt model.
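For comparison, the celebrated model mentioned above is the two-point Hassenstein-Reichardt correlator, which multiplies the delayed signal from one pixel with the undelayed signal from its neighbor and subtracts the mirror-image term. The sketch below implements that classic two-input detector on an invented toy stimulus; it is included only to show the baseline that a three-pixel detector departs from.

```python
# The classic two-point Hassenstein-Reichardt correlator (toy stimulus, sign convention arbitrary).
import numpy as np

def hassenstein_reichardt(left, right, delay=1):
    """Opponent correlation of two adjacent pixels' intensity time series.
    Positive output signals motion from left to right, negative the reverse."""
    left_delayed = np.roll(left, delay)
    right_delayed = np.roll(right, delay)
    left_delayed[:delay] = 0      # zero the samples wrapped around by np.roll
    right_delayed[:delay] = 0
    return left_delayed * right - right_delayed * left

# A bright edge sweeping rightward: the right pixel sees it one frame after the left one.
t = np.arange(20)
left = (t >= 5).astype(float)
right = (t >= 6).astype(float)
print(hassenstein_reichardt(left, right).sum() > 0)  # True: net rightward signal
```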

Decipher the properties of sex-shared yet dimorphic neuronal circuits

Lecture
Date:
Wednesday, December 18, 2019
Hour: 15:15
Location:
The David Lopatie Hall of Graduate Studies
Vladyslava Pechuk (MSc Thesis Defense/PhD Proposal)
|
Dr. Meital Oren Lab, Dept of Neurobiology

The nervous system of sexually reproducing species is built to accommodate their sex-specific needs and thus contains sexually dimorphic properties. Males and females respond to environmental sensory cues and transform the input into sexually dimorphic traits. New findings reveal a significant difference in the way the two sexes of the nematode C. elegans respond to aversive stimuli. Further analysis of the function of the circuit for aversive behaviors unveiled how stimuli elicit non-dimorphic sensory neuronal activity followed by dimorphic postsynaptic interneuron activity, generating the sexually dimorphic behavior. Here, we propose to uncover how genetic sex defines the properties of the sex-shared circuit for aversive behaviors. We will explore the circuit at the behavioral, connectome and genetic levels. Using calcium imaging, optogenetics, synaptic trans-labeling, transcriptome profiling and a candidate-gene approach, we will map the functional connections and define the dimorphic responses of all the cells in the avoidance circuit in both sexes. Since males and females share most of the nervous system in both vertebrates and invertebrates, studies of the development of dimorphic aspects of the shared nervous system are crucial for understanding the effects of sex on brain and behavior, and specifically how changes in connectivity generate dimorphic behaviors and how both are modulated by genetic sex.

Hidden neural states underlie canary song syntax

Lecture
Date:
Tuesday, December 17, 2019
Hour: 12:15
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Yarden Cohen
|
Dept of Biology, Boston University

Songbirds are outstanding models of motor sequence generation, but commonly studied species do not share the long-range correlations of human behavior: skills like speech, where sequences of actions follow syntactic rules in which transitions between elements depend on the identity and order of past actions. To support long-range correlations, the 'many-to-one' hypothesis suggests that redundant premotor neural activity patterns, called 'hidden states', carry short-term memory of preceding actions. To test this hypothesis, we recorded from the premotor nucleus HVC in a rarely studied species, the canary, whose complex sequences of song syllables follow long-range syntax rules spanning several seconds. In song sequences spanning up to four seconds, we found neurons whose activity depends on the identity of previous or upcoming transitions, reflecting hidden states that encode song context beyond the ongoing behavior and demonstrating a deep many-to-one mapping between HVC states and song syllables. We find that context-dependent activity correlates more often with the song's past than its future, occurs selectively at history-dependent transitions, and also encodes timing information. Together, these findings reveal a novel pattern of neural dynamics that can support structured, context-dependent song transitions and validate predictions of syntax generation by hidden neural states in a complex singer.
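The 'many-to-one' idea can be illustrated with a toy generator in which two hidden states emit the same syllable but carry different memories of what was sung before, so the transition out of that syllable depends on history. The syllables and probabilities below are invented for illustration, not canary data.

```python
# Hypothetical toy model: hidden states implementing history-dependent song syntax.
import random

# hidden state -> (emitted syllable, {next hidden state: transition probability}).
# "B_afterA" and "B_afterC" both emit syllable "B" (many-to-one) but remember
# different contexts, so the transition after "B" depends on what preceded it.
HIDDEN_STATES = {
    "A":        ("A", {"B_afterA": 1.0}),
    "C":        ("C", {"B_afterC": 1.0}),
    "B_afterA": ("B", {"D": 0.9, "E": 0.1}),
    "B_afterC": ("B", {"D": 0.1, "E": 0.9}),
    "D":        ("D", {"A": 0.5, "C": 0.5}),
    "E":        ("E", {"A": 0.5, "C": 0.5}),
}

def sing(n_syllables, start="A", seed=0):
    rng = random.Random(seed)
    state, song = start, []
    for _ in range(n_syllables):
        syllable, transitions = HIDDEN_STATES[state]
        song.append(syllable)
        state = rng.choices(list(transitions), weights=list(transitions.values()))[0]
    return "".join(song)

print(sing(30))  # the syllable that follows each "B" is predicted by the syllable before it
```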

The temporal structure of the code of large neural populations

Lecture
Date:
Monday, December 9, 2019
Hour: 10:30
Location:
Nella and Leon Benoziyo Building for Brain Research
Ehud Karpas (PhD Thesis Defense)
|
Elad Schneidman Lab, Dept of Neurobiology, WIS

The study of the neural code seeks to understand how information is stored and processed in the brain, in search of the basic principles of this "language". The study of population codes aims to understand how neural populations collectively encode information and to map the interactions between neurons. Previous studies explored the firing rates of single cells and how they evolve with time; others have shown that neural populations are correlated and explored the spatial activity patterns of large groups. In this work we combine these approaches and study the population activity of large groups of neurons and how it evolves with time. We studied the fine temporal structure of the spiking patterns of groups of up to 100 simultaneously recorded units in the prefrontal cortex of monkeys performing a visual discrimination task. We characterized the population activity using 10 ms time bins and found that the population activity patterns (codebooks) were strongly shaped by spatial correlations. Further, using a novel extension of models that describe spatio-temporal population activity patterns, we show that temporal sequences of population activity patterns have strong history-dependence. Together, the large impact of spatial and temporal correlations makes the observed sequences of activity patterns many orders of magnitude more likely to appear than predicted by models that ignore these correlations and rely only on the population rates. Surprisingly, despite these strong correlations, decoders of behavior that were trained ignoring these correlations perform as well as decoders that were trained to capture them. The difference in the role of correlations in population encoding and decoding suggests that one of the goals of the complex encoding scheme in the prefrontal cortex may be to create a code that can be read by simple downstream decoders that do not have to learn correlations.
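The comparison between observed population activity patterns and a rate-only model can be sketched as follows. The spike trains, bin count, and correlation structure below are synthetic, and the independent model is the simplest rate-only baseline, not the thesis's extended spatio-temporal models.

```python
# Hypothetical sketch: bin spikes into 10-ms binary population "words" and compare each
# word's observed frequency to an independent (rate-only) model.
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
n_units, n_bins = 10, 50_000
base = rng.random(n_units) * 0.1              # per-unit firing probability per 10-ms bin
shared = rng.random(n_bins) < 0.2             # a common drive that correlates the units
spikes = (rng.random((n_units, n_bins)) < base[:, None] * (1 + 3 * shared)).astype(int)

words = ["".join(map(str, spikes[:, t])) for t in range(n_bins)]   # one word per time bin
observed = Counter(words)
rates = spikes.mean(axis=1)                   # marginal rate of each unit

def p_independent(word):
    """Probability of a word if units fired independently at their measured rates."""
    return np.prod([r if b == "1" else 1 - r for b, r in zip(word, rates)])

for word, count in observed.most_common(5):
    print(f"{word}: observed {count / n_bins:.4f}, independent model {p_independent(word):.4f}")
```

Because of the shared drive, words with several coincident spikes occur far more often than the independent model predicts, which is the kind of gap the abstract refers to.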

Rodents' social recognition: what the nose knows…and what it doesn't

Lecture
Date:
Tuesday, December 3, 2019
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Shlomo Wagner
|
Sagol Department of Neurobiology, University of Haifa

The ability to recognize individual conspecifics, termed social recognition, is crucial for the survival of the individual, as it guides appropriate interactions with its social environment. In humans, social recognition can be based upon cues arriving from a single sensory modality. For example, humans can recognize a person just by looking at their face (visual modality) or hearing their voice (auditory modality). Such single-modality-based social recognition seems to hold for other primates as well, yet how general this ability is among mammals is not clear. Mice and rats, the main laboratory mammalian models in the field of neuroscience, are social species known to exhibit social recognition abilities, widely assumed to be mediated by stimulus-derived chemosensory cues received by the main and accessory olfactory systems of the subject. In the lecture, I will challenge this common assumption and show evidence that rodents' social recognition is based upon the integration of olfactory, auditory and somatosensory cues, and hence requires active behavior on the part of the social stimulus. In that sense, social recognition in rodents seems to be fundamentally different from social recognition in humans.

The Neurobiology of Personality: Using AI to link Genes, Behavior, and Positive-Psychology

Lecture
Date:
Tuesday, November 26, 2019
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Oren Forkosh
|
Dept of Animal Sciences, Faculty of Agriculture, The Hebrew University, Rehovot

Individual differences are an essential property of all living things, and personality provides a unique glimpse into the biology underlying behavioral variability. Yet, because of the lack of a systematic approach to personality, most work on animal personality still ends up examining a limited subset of subjectively chosen behavioral readouts. Lately, we have shown how personality can be inferred directly and objectively from a high-dimensional natural behavioral space. While this approach is not species-specific, we demonstrated it in mice, one of the most common model animals. The mice were video-recorded over several days, and their behavior was automatically analyzed in depth. Altogether, the computer identified 60 separate behaviors, such as approaching others, chasing or fleeing, sharing food or keeping others away from food, exploring, or hiding. We found the mice's personalities by working backward from behavior and extracting the underlying traits that differ among individuals while being stable over time and across contexts. We validated that traits found this way (which we term identity domains) are stable across social contexts, do not change with age, explain the variability in performance on classical tests, and correlate significantly with gene expression in brain regions related to personality. Expanding this method to human behavior, using location and physiological data from cellphones and smartwatches, revealed a highly structured personality space that resembles that of the mice. This method allows for better-informed mechanistic investigations into the biology of individual differences, systematic comparison of behaviors across species, and more personalized psychiatry. Recently, we have also been employing this approach to quantify subjective wellness and welfare in both people and animals, working towards a biology of happiness.
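The trait-extraction step described above can be approximated with a standard discriminant analysis: find linear combinations of the behavioral readouts that separate individuals while staying stable across days. The sketch below uses synthetic data and scikit-learn's LinearDiscriminantAnalysis as a stand-in; it is an illustrative approximation, not the published pipeline.

```python
# Hypothetical sketch: extract trait-like "identity domain" axes from repeated behavioral
# readouts, using the individual's identity as the class label.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_mice, n_days, n_behaviors = 20, 10, 60
trait = rng.normal(size=(n_mice, 2))                    # 2 latent traits per mouse
loadings = rng.normal(size=(2, n_behaviors))            # how traits map onto the 60 behaviors
day_noise = rng.normal(scale=2.0, size=(n_mice, n_days, n_behaviors))
readouts = (trait @ loadings)[:, None, :] + day_noise   # (n_mice, n_days, n_behaviors)

X = readouts.reshape(n_mice * n_days, n_behaviors)
mouse_id = np.repeat(np.arange(n_mice), n_days)

lda = LinearDiscriminantAnalysis(n_components=2)
identity_domains = lda.fit_transform(X, mouse_id)       # per-observation scores on 2 trait axes
print(identity_domains.shape)                           # (200, 2)
```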
