
Designing Language Models to Think Like Humans

Lecture
Date:
Thursday, July 11, 2024
Hour: 11:00 - 12:00
Location:
Gerhard M.J. Schmidt Lecture Hall
Dr. Chen Shani
|
Post-doctoral researcher, NLP group, Stanford University

While language models (LMs) show impressive text-manipulation capabilities, they lack commonsense and reasoning abilities and are known to be brittle. In this talk, I will suggest a different LM design paradigm, inspired by how humans understand language. I will present two papers, both shedding light on human-inspired NLP architectures aimed at capturing meaning beyond words.

The first paper [1] addresses the lack of commonsense and reasoning abilities by proposing a paradigm shift in language understanding, drawing inspiration from embodied cognitive linguistics (ECL). In this position paper we propose a new architecture that treats language as inherently executable, grounded in embodied interaction, and driven by metaphoric reasoning.

The second paper [2] shows that LMs are brittle and far from human performance in their concept-understanding and abstraction capabilities. We argue this is due to their token-based training objectives, and implement a concept-aware post-processing manipulation, showing that it matches human intuition better. We then pave the way for more concept-aware training paradigms.

[1] Ronen Tamari, Chen Shani, Tom Hope, Miriam R. L. Petruck, Omri Abend, and Dafna Shahaf. 2020. Language (Re)modelling: Towards Embodied Language Understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), pages 6268–6281, Online. Association for Computational Linguistics.

[2] Chen Shani, Jilles Vreeken, and Dafna Shahaf. 2023. Towards Concept-Aware Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 13158–13170.

Bio: Chen Shani is a post-doctoral researcher at Stanford's NLP group, collaborating with Prof. Dan Jurafsky. Previously, she pursued her Ph.D. at the Hebrew University under the guidance of Prof. Dafna Shahaf and worked at Amazon Research. Her focus lies at the intersection of humans and NLP, where she applies insights from human cognition to improve NLP systems.

This decision, not just the average decision: Factors contributing to one single perceptual judgment

Lecture
Date:
Tuesday, July 9, 2024
Hour: 12:30 - 13:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Mathew E. Diamond
|
Cognitive Neuroscience, SISSA Trieste, Italy

While cognitive neuroscientists have uncovered principles of perceptual decision-making by analyzing choices and neuronal firing across thousands of trials, we do not yet know the behavioral or neuronal dynamics underlying one SINGLE choice. For instance, why might a subject judge a given stimulus as category A 70% of the time but as category B 30% of the time? Until we can work out precisely what determines single decisions – this choice, right now – the mechanisms of real-world decision-making will remain unknown. In tactile psychophysical tasks with rats and humans, we are trying to sort out the factors that explain the variability in judgments (across trials) of the identical stimulus input. We identify four factors: (i) trial-to-trial fluctuations in sensory coding, (ii) temporal context, namely, the history of preceding stimuli and choices, (iii) attention, and (iv) bias (predictions originating in beliefs about the environment’s probabilistic structure). The strategy is to bring these factors under experimental control, rather than leaving them to vary according to uninterrogated states within the subject. Psychophysical data from rats and humans show that large chunks of variability are accounted for by these factors; evidence from cortical neuronal populations in rats provides some mechanistic grounding.

Reading Minds & Machines-AND-The Wisdom of a Crowd of Brains

Lecture
Date:
Tuesday, June 25, 2024
Hour: 12:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Michal Irani
|
Dept of Computer Science & Applied Mathematics, WIS

1. Can we reconstruct images that a person saw, directly from his/her fMRI brain recordings? 2. Can we reconstruct the training data that a deep network was trained on, directly from the parameters of the network? The answer to both of these intriguing questions is “Yes!” In this talk I will show how these can be done. I will then show how exploring the two domains in tandem can potentially lead to significant breakthroughs in both fields. More specifically: (i) I will show how combining the power of Brains & Machines can potentially be used to bridge the gap between those two domains. (ii) Combining the power of Multiple Brains (scanned on different fMRI scanners with NO shared stimuli) can lead to new breakthroughs and discoveries in brain science. We refer to this as “the Wisdom of a Crowd of Brains”. In particular, we show that a Universal Encoder can be trained on multiple brains with no shared data, and that information can be functionally mapped between different brains.

Memory and Obliviscence: From Random to Structured Material

Lecture
Date:
Sunday, June 23, 2024
Hour: 14:15 - 15:30
Location:
Nella and Leon Benoziyo Building for Brain Research
Antonis Georgiou - Student Seminar - PhD Thesis Defense
|
Advisor: Prof. Misha Tsodyks, Dept of Brain Sciences, WIS

The study of human memory is a rich field with a history that spans over a century, traditionally investigated through the prism of psychology. Drawing inspiration from this vast pool of findings, we approached the subject with a more physics-oriented mindset based on first principles. To this end, we combined mathematical modelling of established ideas from the psychology literature with large-scale experimentation. In particular, we created a model based on the concept of retroactive interference, which states that newly encoded items hinder the retention of older ones in memory. We show that this simple mechanism is sufficient to describe a variety of experimental data on recognition memory with different categories of verbal and pictorial stimuli. The model has a single free parameter and can be solved analytically. We then focus on recall and recognition memory of stories. This transition from discrete random lists to coherent continuous stimuli such as stories introduces a new challenge for the quantification and analysis of the results. To address this, we developed a pipeline that employs large language models and showed that it performs comparably to human evaluators. Using this tool, we were able to show that recall scales linearly with recognition and story size over the range we examined. Finally, we discovered that when stories are presented in a scrambled manner, even though recall performance drops, subjects seem to reconstruct the material in their recall in alignment with the unscrambled version.

Elucidating convergence and divergence of neural mechanisms: from genes to behavior

Lecture
Date:
Thursday, June 13, 2024
Hour: 14:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Asaf Gat - Student Seminar - PhD Thesis Defense
|
Dr. Meital Oren Lab

The capacity of animals to respond to stimuli in their surroundings is crucial for their survival. In mammals, complex evaluations of the environment require large numbers and different subtypes of neurons. The nematode C. elegans utilizes its compact nervous system to process environmental cues and tune behavior. Integration of opposing spatial information and adaptation to distinct types of addictive substances are only a few of the challenges that require efficient and effective use of the worm’s compact nervous system. We describe how distinct environmental cues can converge onto common neural networks and molecular mechanisms yet generate diverse neuronal and behavioral responses. Using a multidisciplinary approach, we completed several parallel aims, including the development of two novel research methods.

Memory consolidation and generalization during sleep

Lecture
Date:
Wednesday, June 5, 2024
Hour: 10:00 - 11:00
Location:
Nella and Leon Benoziyo Building for Brain Research
Ella Bar - Student Seminar - PhD Thesis Defense
|
Prof. Rony Paz Lab & Prof. Yuval Nir, Tel Aviv University

During sleep, our memories are reactivated and consolidated in an active process that significantly influences our memory and decision-making. In this talk, I will present two studies on sleep-memory consolidation. The first study investigated the local versus global properties of sleep memory consolidation within the brain. By exploiting the unique functional neuroanatomy of the olfactory system, we were able to manipulate sleep oscillations and enhance memories locally within a single hemisphere during sleep. These findings underscore the local nature of sleep memory consolidation, which can be selectively manipulated within the brain, thereby creating an important link between theories of local sleep and learning. The second study explored the relationship between generalization processes and sleep, acknowledging that overgeneralization of negative stimuli and disruptions in sleep quality contribute to anxiety and PTSD. Specifically, we studied participants' responses to stimuli associated with positive, negative, or neutral outcomes. Our findings revealed significant correlations between brain activity, as detected by fMRI, during the association of a stimulus with an outcome and the perceptual generalization of these stimuli. While activity in limbic brain areas was correlated with immediate negative stimulus generalization, we observed that activation in these areas predicted recovery and positively related generalization following sleep. Moreover, using high-density EEG recordings, we identified specific sleep oscillations correlated with this recovery generalization. These results highlight the crucial role of sleep in both generalization processes and the restoration of balanced responses to stimuli. Understanding these mechanisms can offer valuable insights into developing therapeutic strategies for anxiety and PTSD.

Blood flow perturbations and their impact on brain structure and function: from microstrokes to heartbeats

Lecture
Date:
Tuesday, June 4, 2024
Hour: 12:30 - 13:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Pablo Blinder
|
Dept of Neurobiology, Tel Aviv University

Vasodynamics of cortical arterioles and what they inform us about neuronal activity

Lecture
Date:
Tuesday, May 28, 2024
Hour: 12:30 - 13:15
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. David Kleinfeld  
|
University of California at San Diego

The evolution and development of critical periods of cortical plasticity

Lecture
Date:
Tuesday, May 7, 2024
Hour: 12:30 - 13:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Joshua Trachtenberg 
|
Department of Neurobiology, David Geffen School of Medicine at UCLA

Consciousness and the brain: comparing and testing neuroscientific theories of consciousness

Lecture
Date:
Tuesday, April 16, 2024
Hour: 12:30 - 13:30
Location:
Gerhard M.J. Schmidt Lecture Hall
Prof. Liad Mudrik
|
Sagol School of Neuroscience, School of Psychological Sciences, Tel Aviv University

For centuries, consciousness was considered to be outside the reach of scientific investigation. Yet in recent decades, more and more studies have tried to probe the neural correlates of conscious experience, and several neuronally inspired theories of consciousness have emerged. In this talk, I will focus on four leading theories of consciousness: Global Neuronal Workspace (GNW), Integrated Information Theory (IIT), Recurrent Processing Theory (RPT), and Higher Order Theory (HOT). I will first briefly present the guiding principles of these theories. Then, I will provide a bird's-eye view of the field, using the results of a large-scale quantitative and analytical review we conducted, examining all studies that either empirically tested these theories or interpreted their findings with respect to at least one of them. Finally, I will describe the first results of the Cogitate consortium, an adversarial collaboration aimed at testing GNW and IIT.
