Past Events
International Lecture Series
February 21st, 2024, 4:15-5:45 pm: Antonia Bott on "Context effects on dynamic belief updating in psychosis and paranoia"
March 13th, 2024, 9:45-11 am: Peter Dayan on "Mindgames"
March 13th, 2024, 11:15 am-12:30 pm: Andreea Diaconescu on "Aberrant perception of environmental volatility in early psychosis"
June 26th, 2024, 4:15-5:45 pm: Philipp Sterzer on "Now you see it… now you don’t: Temporal fluctuations in perceptual inference and their role in psychosis"
July 17th, 2024, 4:15-5:45 pm: Klaas Enno Stephan on "Translational Neuromodeling, Computational Psychiatry & Computational Psychosomatics" (postponed from April 24th)
September 23rd, 2024, 1:00-2:30 pm: Joshua Gold on "Mechanisms of Adaptive Inference"
October 30th, 2024, 4:15-5:45 pm: Sören Krach on "Affected beliefs: Neurocomputational mechanisms and clinical implications"
February 26th, 2025, 4:15-5:45 pm: Michael J. Frank on "Strategies for managing memory uncertainty to improve effective capacity in biological and artificial neural networks"
Abstract:
How and why is working memory (WM) capacity limited? Traditional cognitive accounts focus either on limits on the number of items that can be stored (slots models) or on the loss of precision with increasing load (resource models). I will present a neural network model of corticostriatal circuitry that can learn to reuse the same neural populations to store multiple items, leading to resource-like constraints within a slot-like system and inducing a tradeoff between the quantity and precision of stored information. Such “chunking” strategies are adapted as a function of reinforcement learning and WM task demands, mimicking human performance and normative models. These simulations also suggest a computational rather than anatomical limit to WM capacity. As such, I will also describe a new line of work linking mechanisms of WM gating in biological networks to those that can emerge in the transformer neural networks underlying language models. Although these networks have no hard memory limits, we find that storing and accessing multiple items still requires an efficient gating policy, resembling the constraints found in frontostriatal models. When learned effectively, these gating strategies support enhanced generalization and increase the models' effective capacity to store and access multiple items in memory.
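As a toy illustration of the quantity-precision tradeoff described in the abstract (not Frank's corticostriatal model itself; the merging rule, the threshold, and the error metric are all illustrative assumptions), a minimal Python sketch:

import numpy as np

rng = np.random.default_rng(0)

def recall_error(items, n_slots, chunk_threshold=0.3):
    # Process items in sorted order. An item within chunk_threshold of
    # the last stored value is merged ("chunked") into that slot as a
    # running average, so more items fit at the cost of per-item
    # precision; items arriving once all slots are full are lost.
    # Recall error per item = distance to the nearest stored value.
    stored = []
    for x in sorted(items):
        if stored and abs(x - stored[-1]) < chunk_threshold:
            stored[-1] = (stored[-1] + x) / 2   # chunk: average into last slot
        elif len(stored) < n_slots:
            stored.append(x)                    # use a fresh slot
        # else: no free slot and too far to chunk -> item is lost
    return np.mean([min(abs(x - s) for s in stored) for x in items])

items = rng.uniform(0, 1, size=6)               # 6 items, 3 slots
print("chunking allowed:", recall_error(items, n_slots=3))
print("pure slots      :", recall_error(items, n_slots=3, chunk_threshold=0.0))

Setting the threshold to zero disables merging and recovers a pure slot model, so the comparison isolates what chunking buys: more items represented, each less precisely.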
March 19th, 2025, 4:15-5:45 pm: Alex Pike on "Theory-driven computational psychiatry: testing assumptions"
Pike, Alexandra C.¹,²; Board, R.²; Travers, E.²; Croal, M.²; Lam, Y.C.²; Valton, V.²; Robinson, O.J.²,³; Roiser, J.P.²
¹ Department of Psychology, Institute of Mental Health Research York, and York Biomedical Research Institute, University of York, York, YO10 5DD
² Neuroscience and Mental Health Group, Institute of Cognitive Neuroscience, University College London
³ Department of Clinical, Educational and Health Psychology, University College London
Theory-driven computational psychiatry attempts to use cognitive models of computation to understand how mental health problems might relate to (or even be caused by) changes in cognitive processes such as learning and decision-making. The potential applications and relevance of this approach, however, are contingent on several assumptions, including that computational parameters are reliable over time, are meaningful across task contexts, and relate to symptoms. We recruited a large online sample of participants (n=548) using Prolific, who completed seven mental health questionnaires and five tasks. An unselected subset was re-invited to complete the five tasks fourteen days later. Five models (including a null model, or model of no interest) were fit to each of these tasks. The parameters from the best-fitting models showed moderate to high levels of test-retest reliability (mean ICC=0.50, sd=0.15). Parameters that were theoretically or mathematically similar to each other were, however, generally unrelated, with relationships found only within the class of ‘inverse temperature’-like parameters (r values of 0.11 to 0.24), indicating that in general our parameters do not generalise across task contexts. Finally, only parameters from two of the five tasks (four-armed bandit and cognitive effort) related to symptoms, and those relationships were modest (the largest being r=−.12), indicating that computational parameters may be less relevant to mental health symptoms than our theories suggest. To conclude, at least some of the assumptions we make when advocating for the use of computational models in psychiatry are not met. This is likely to limit the clinical and translational utility of the approach. We argue that researchers in the field should assess the psychometric properties of the tasks and models they use more closely, and report them.
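For readers unfamiliar with the reliability metric, a minimal sketch of one standard test-retest ICC, Shrout and Fleiss's two-way random-effects ICC(2,1), applied to simulated parameter estimates. The choice of variant and the simulated data are assumptions for illustration; the abstract does not specify which ICC was computed:

import numpy as np

def icc_2_1(x):
    # Two-way random-effects ICC(2,1) (Shrout & Fleiss, 1979).
    # x: (n_subjects, k_sessions) array of parameter estimates,
    # e.g. a learning rate fit at test and at retest.
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()    # between sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Simulated test-retest data: a stable trait plus independent session noise
rng = np.random.default_rng(1)
trait = rng.normal(0.5, 0.15, size=548)        # "true" per-subject parameter
sessions = np.column_stack([trait + rng.normal(0, 0.15, 548)
                            for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")   # ~0.5, i.e. "moderate"

With trait and noise variance set equal, the expected ICC is about 0.5, matching the "moderate" mean reliability reported in the abstract.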
March 27th, 2025, 9:30-11 am: Jill O'Reilly on "Defining the bounds of the hypothesis space for learning"
Reinforcement learning models describe how we learn associations between possible causes and outcomes in our environment. But in a rich environment, if we consider all possible cause-outcome pairings, there is a combinatorial explosion of possible associations to learn. A longstanding problem, therefore, is how we determine which candidate causes are worth considering and which outcomes can be predicted. These questions motivated the ‘attentional learning’ theories of Mackintosh and of Pearce and Hall in the 1970s and 1980s, which have received relatively little attention compared to the contemporaneous Rescorla-Wagner model. In this talk I will present a modern Bayesian take on the attentional learning theories. I will introduce the idea of the hypothesis space, the set of causes and outcomes that are eligible for learning, as central to Bayesian models of learning. I will consider the consequences of expanding or collapsing this hypothesis space for learning in a Bayesian framework. I will argue that global features of the motivational environment, in particular the baseline reward rate, determine whether and when the hypothesis space should be expanded or reduced. I will describe experimental work demonstrating that changes in hypothesis space are indeed linked to global reward rate, resulting in qualitatively different forms of learning in ‘rising’ and ‘falling’ environments. I will also present work investigating the cognitive and neural mechanisms by which the expansion or contraction of the hypothesis space might be achieved.
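To make the combinatorial point concrete, a minimal Python sketch of a Rescorla-Wagner learner whose delta-rule update is applied only to cues inside the current hypothesis space. The cue names and the eligibility mask are illustrative assumptions, not the models from the talk:

def rw_update(V, cues, reward, eligible, alpha=0.1):
    # One Rescorla-Wagner step restricted to a hypothesis space:
    # only cues inside `eligible` are treated as candidate causes.
    considered = cues & eligible
    prediction = sum(V[c] for c in considered)  # summed associative strength
    delta = reward - prediction                 # prediction error
    for c in considered:
        V[c] += alpha * delta                   # delta-rule update
    return delta

cues_all = {"light", "tone", "context", "odor"}
V = {c: 0.0 for c in cues_all}
space = {"light", "tone"}          # a narrow hypothesis space

for _ in range(200):
    # "light" and "context" co-occur and reward always follows, but
    # "context" lies outside the hypothesis space and is never updated.
    rw_update(V, {"light", "context"}, reward=1.0, eligible=space)

print(V)   # "light" absorbs all the credit; "context" stays at 0.0

Expanding `eligible` to include "context" would let it compete for credit, illustrating how the bounds of the hypothesis space determine which associations can be learned at all.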