Animals use self-supervision, commonsense reasoning, and interaction to make sense of sensory input and perform embodied tasks without large amounts of explicit labels or instructions. Most state-of-the-art embodied systems, by contrast, require millions of human annotations and cannot generalize previously learned knowledge to reason accurately about novel inputs or tasks. My research focuses on embodied artificial agents that learn, act, and reason through interaction and active exploration, drawing on the psychological and neuroscience literature where useful.
Computer vision models of the primate visual system
Deep neural networks optimized for visual tasks have been shown to be good predictive models of neural responses in visual areas, as measured by fMRI and electrophysiology. By modeling the representations and behaviors of primates with AI systems optimized for different tasks and inputs, we can better understand the neural representations underlying the processing of naturalistic stimuli in primates.
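To make the "predictive model" claim concrete: a common analysis in this literature is an encoding model, which regresses measured neural responses (e.g., fMRI voxels) onto a network's activations for the same stimuli and evaluates prediction on held-out stimuli. The sketch below is purely illustrative, using synthetic data in place of real DNN features and neural recordings; all array sizes, the noise level, and the ridge penalty are assumptions, not values from any actual study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels = 200, 50, 10

# Stand-in for DNN activations to each stimulus; real analyses would
# extract features from a layer of a task-optimized network.
X = rng.standard_normal((n_stimuli, n_features))

# Simulated neural responses: a linear readout of the features plus noise.
true_W = rng.standard_normal((n_features, n_voxels))
Y = X @ true_W + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

# Split stimuli into train and held-out test sets.
train, test = slice(0, 150), slice(150, 200)

# Closed-form ridge regression: W = (X'X + lam*I)^(-1) X'Y
lam = 1.0
W = np.linalg.solve(
    X[train].T @ X[train] + lam * np.eye(n_features),
    X[train].T @ Y[train],
)

# Score the model by the correlation between predicted and actual
# responses on held-out stimuli, computed per voxel.
pred = X[test] @ W
r = [np.corrcoef(pred[:, v], Y[test, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out prediction correlation: {np.mean(r):.2f}")
```

The per-voxel held-out correlation is the quantity typically compared across networks: models optimized for different tasks or trained on different inputs yield features that predict a given visual area better or worse, which is what licenses the comparison between AI systems and primate representations described above.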