Learning and Decision Making
To interact with the world, the brain relies on internal models to make decisions: how places are organized in an environment, how objects may interact, which events cause which other events, what quantities give rise to other quantities, what actions lead to rewards or punishments, and which scenarios are more or less similar to one another. In spatial navigation and goal-driven decision making, such models are often called cognitive maps or state spaces; in the Bayesian analysis of perception and cognition, they may be called generative models. We are interested in how cognitive maps and generative models are learned in the first place, how they are represented in the brain, and how the brain uses them to learn and decide more efficiently. The models we currently consider fall mostly within the frameworks of reinforcement learning and Bayesian inference.
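As a toy illustration of how these two frameworks can combine, the sketch below shows an agent that learns a "cognitive map" of a small ring-shaped world as a Bayesian transition model (Dirichlet counts over next states) and then plans with the learned model using value iteration. Everything here, including the environment, variable names, and parameter values, is a hypothetical example for exposition, not code from the lab.

```python
import numpy as np

n_states, n_actions = 4, 2  # a 4-state ring; actions: 0 = step left, 1 = step right
rng = np.random.default_rng(0)

# True transition rule, unknown to the agent.
def step(s, a):
    return (s - 1) % n_states if a == 0 else (s + 1) % n_states

# Bayesian transition model: Dirichlet counts over next states,
# starting from a uniform prior of 1 pseudo-count per outcome.
counts = np.ones((n_states, n_actions, n_states))

s = 0
for _ in range(500):           # random exploration of the world
    a = rng.integers(n_actions)
    s_next = step(s, a)
    counts[s, a, s_next] += 1  # posterior update from experience
    s = s_next

# Posterior-mean transition probabilities: the learned cognitive map.
T = counts / counts.sum(axis=-1, keepdims=True)

# Model-based planning: value iteration toward a rewarded state.
reward = np.array([0.0, 0.0, 0.0, 1.0])  # reward only for reaching state 3
V = np.zeros(n_states)
for _ in range(100):
    Q = (T * (reward[None, None, :] + 0.9 * V[None, None, :])).sum(axis=-1)
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)  # greedy action in each state
print(policy)
```

Because the ring wraps around, the greedy policy steps left from state 0 and right from state 2, taking the shortest path to the rewarded state in each case.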
Brain images are complex, high-dimensional, and noisy data. To understand brain function through neuroimaging, one needs sophisticated algorithms that accurately capture the statistical properties of both the signal and the noise. Our preferred approach is based on probabilistic graphical models. We develop novel algorithms for analyzing brain imaging data (primarily fMRI), including a Bayesian algorithm for neural representational similarity analysis and a realistic fMRI simulator, and we contribute to open-source packages such as BrainIAK.
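To give a sense of the underlying computation, the sketch below shows the classic (non-Bayesian) form of representational similarity analysis: each experimental condition is summarized by a pattern of activity across voxels, and a representational dissimilarity matrix records one minus the Pearson correlation between every pair of condition patterns. The data here are simulated and the variable names are illustrative; this is a minimal sketch of the idea, not the lab's Bayesian algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_conditions, n_voxels = 6, 50

# Simulated condition-by-voxel activity patterns
# (e.g., regression coefficients from an fMRI analysis).
patterns = rng.standard_normal((n_conditions, n_voxels))

# Representational dissimilarity matrix (RDM):
# 1 - Pearson correlation between each pair of condition patterns.
rdm = 1.0 - np.corrcoef(patterns)

print(rdm.shape)  # (6, 6)
```

The RDM is symmetric with zeros on the diagonal; comparing RDMs from different brain regions, models, or subjects is what makes this representation useful.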
We are currently leveraging the lab's expertise in advanced fMRI analysis and deep learning to study the contents and dynamics of spontaneous thoughts, with the long-term goal of understanding their relation to psychiatric disorders.
In collaboration with other researchers at the International Research Center for Neurointelligence with expertise in neurophysiology, computation, and psychiatry, we are developing new cognitive tasks to understand the mechanisms of psychiatric disorders such as schizophrenia.
Brain-Inspired Deep Networks
We are interested in building neural networks that learn the way infants do: mostly unsupervised and self-driven. This line of research shares the same goal as our study of human learning and decision making: learning an interpretable model of the world.