
Research

How do people make sense of incomplete and noisy observations? How do humans make decisions in an uncertain world, and how do they learn from their mistakes? We investigate these problems in health and disease using computational and experimental tools.

Computationally, we develop formal models of these processes in the brain using tools from reinforcement learning and Bayesian machine learning. Experimentally, we use these models to generate quantitative predictions about the underlying neural dynamics, which can be tested with a range of experimental techniques and data modalities. We combine behavioral studies, functional magnetic resonance imaging (fMRI), eye tracking, and virtual reality in both healthy participants and patients, and we collaborate with other labs to test these models in animals. See below for two ongoing projects.


Solving complex planning problems efficiently and flexibly requires reusing expensive previous computations. The brain can do this, but how? We have developed a model, termed linear reinforcement learning, that addresses this question. The model reuses previous computations to enable biologically realistic, flexible choice at the expense of specific, quantifiable control costs. It exhibits the same patterns of flexibility and inflexibility seen in animals, and it unifies a number of hitherto distinct areas, including planning, entorhinal maps (grid cells and border cells), and cognitive control.
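To make the reuse concrete, here is a minimal sketch of the linearly solvable computation at the heart of linear RL. The cached matrix M (the default representation) depends only on the default policy and the costs at nonterminal states, so when goal rewards change, new values follow from a cheap matrix-vector product rather than a full re-plan. The function and variable names, and the toy three-state chain, are illustrative assumptions, not the lab's published code:

```python
import numpy as np

def linear_rl_values(P, r, is_terminal, lam=1.0):
    """Values under linear RL for a linearly solvable MDP.

    P: transition matrix of the default policy; r: state rewards
    (per-step costs are negative); lam: strength of the control cost."""
    N, T = ~is_terminal, is_terminal
    # Default representation: depends only on the default policy and
    # nonterminal costs, so it can be cached and reused across goals.
    M = np.linalg.inv(np.diag(np.exp(-r[N] / lam)) - P[np.ix_(N, N)])
    # Replanning for new terminal rewards is just a matrix-vector product.
    z = M @ P[np.ix_(N, T)] @ np.exp(r[T] / lam)
    v = r.astype(float)
    v[N] = lam * np.log(z)   # values at nonterminal states
    return v

# Toy three-state chain: s0 -> s1 -> goal under the default policy.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
r = np.array([-1.0, -1.0, 5.0])                 # step costs, goal reward
is_terminal = np.array([False, False, True])
print(linear_rl_values(P, r, is_terminal))      # -> [3., 4., 5.]
```

Changing the goal reward and rerunning reuses the same M; only the final matrix-vector product is recomputed, which is the sense in which expensive planning computations are reused.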

A key insight from Bayesian machine learning is that adaptive learning requires distinguishing between two types of noise: the speed of change in the environment (volatility) and moment-to-moment stochasticity. Although this is a computationally difficult problem, the brain can solve it; the question is, how? We addressed this issue by developing a model that learns to distinguish volatility from stochasticity solely through observations over time. The model has two modules, one for stochasticity and one for volatility, which compete within the model to explain experienced noise. This enables us to explain human and animal data across a wide range of seemingly unrelated neuroscientific and behavioral phenomena. It also provides a rich set of hypotheses about pathological learning in psychiatric conditions: damage to one module alters behavior with respect to the other factor, because the model misattributes all noise to the remaining intact module.
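As a rough illustration, the sketch below shows the core trade-off in a simple Kalman filter: the learning rate grows with estimated volatility and shrinks with estimated stochasticity. In the full model, volatility and stochasticity are themselves inferred from the observations by the two competing modules; here they are fixed by hand, and the parameter values are arbitrary:

```python
import numpy as np

def kalman_update(m, w, y, v, s):
    """One trial: belief mean m, belief variance w, observation y,
    volatility v (process noise), stochasticity s (observation noise)."""
    k = (w + v) / (w + v + s)   # learning rate: rises with v, falls with s
    m = m + k * (y - m)         # move the estimate toward the observation
    w = (1 - k) * (w + v)       # updated uncertainty
    return m, w, k

rng = np.random.default_rng(0)
m, w = 0.0, 1.0
for _ in range(200):
    y = rng.normal(0.0, 2.0)   # stable but noisy world: the noise is stochasticity
    m, w, k = kalman_update(m, w, y, v=0.01, s=4.0)
print(f"learning rate with an intact stochasticity estimate: k = {k:.2f}")
# If the stochasticity module is "lesioned" (s pinned near zero), the same
# noise must be attributed to volatility and k stays near 1: the agent
# chases noise, a signature of the pathological learning described above.
```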
