Dr. Zhang currently leads the Cognitive Computational Neuroscience and Brain Imaging Group at the School of Psychology and the Shanghai Mental Health Center, Shanghai Jiao Tong University. He has long worked at the intersection of brain science and brain-like intelligence. His research focuses on the neural computational mechanisms of the human brain and artificial intelligence, combining psychophysics, Bayesian probabilistic modeling, deep learning modeling, neuromodulation, and functional magnetic resonance imaging. He has published cognitive neuroscience papers in journals including PNAS, eLife, J Neurosci, NeuroImage, and PLoS Comput Biol, and his research on brain-like computation has also appeared at top machine learning conferences (ICML and IJCAI). He serves as a reviewer for brain science journals such as eLife and Cerebral Cortex and for machine learning conferences such as ICML, NeurIPS, IJCAI, ICLR, and CVPR, and is an Area Chair for NeurIPS 2024.
The past decade has seen a surge in the use of sophisticated AI models to reverse-engineer the human mind and behavior. This NeuroAI approach has greatly advanced interdisciplinary research between neuroscience and AI. This talk focuses on using the NeuroAI approach to elucidate human learning mechanisms and consists of two parts. First, I will present our work on the relationships between the primate visual system and artificial visual systems (i.e., deep neural networks) during the learning of simple visual discrimination tasks. Our deep learning models of biological visual learning reproduce a wide range of neural phenomena observed in the primate visual system during perceptual learning, and the novel predictions generated by our models are further validated against multivariate neuroimaging data in humans and multi-electrode recording data in macaques. In the second part, I will discuss our recent work on the neural and computational mechanisms by which the human brain mitigates catastrophic forgetting during continual multitask learning. Leveraging neural network modeling of human learning behavior, we show that the human brain consolidates learned knowledge via elastic weight consolidation rather than via alternatives such as memory replay. These studies have profound implications for interdisciplinary research at the intersection of neuroscience and artificial intelligence.
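For readers unfamiliar with elastic weight consolidation (EWC), the sketch below illustrates the general idea of the method (Kirkpatrick et al., 2017) referenced in the abstract: a quadratic penalty anchors parameters that were important for a previously learned task. This is only an illustrative, minimal PyTorch sketch, not the speaker's actual model; the names model, data_loader, old_params, and lambda_ewc are hypothetical placeholders.

```python
# Minimal sketch of the EWC penalty (illustrative only, not the speaker's code).
import torch
import torch.nn.functional as F


def fisher_diagonal(model, data_loader):
    """Approximate the diagonal Fisher information of each parameter,
    averaged over batches from the previously learned task."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    model.eval()
    n_batches = 0
    for x, y in data_loader:
        model.zero_grad()
        log_probs = F.log_softmax(model(x), dim=1)
        # Squared gradients of the log-likelihood approximate the Fisher diagonal.
        F.nll_loss(log_probs, y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: f / max(n_batches, 1) for n, f in fisher.items()}


def ewc_penalty(model, fisher, old_params, lambda_ewc=1000.0):
    """Quadratic anchor on parameters important for the old task:
    (lambda / 2) * sum_i F_i * (theta_i - theta_old_i) ** 2."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lambda_ewc * penalty
```

In this sketch, training on a new task would minimize task_loss + ewc_penalty(model, fisher, old_params), where old_params is a snapshot of the parameters taken after the previous task, e.g. {n: p.detach().clone() for n, p in model.named_parameters()}.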