NeuroCog 2025
November 17-18, 2025
Event registration

Submission closes September 30th and registration closes October 15th

November 17-18, 2025 • Brussels (BE)

ARTIFICIAL INTELLIGENCE AND THE HUMAN BRAIN

NEUROCOG is an international biannual conference in cognitive neuroscience that provides a forum for recent advances across all domains of the field. The 2025 conference topic is AI and the human brain. The two-day conference is organised around six invited talks given by leaders in their fields. Because NEUROCOG aims to create a sense of community among researchers, each keynote talk will be followed by an extended discussion period to encourage interaction between the speaker and the audience. With the same goal of fostering interaction and discussion among attendees, there will be no parallel talk sessions, and the number of individual talks, like the number of attendees, will be limited. Finally, two guided poster sessions will be organised, with several poster prizes awarded on a competitive basis.

REGISTRATION AND POSTER SUBMISSION FROM JUNE 1ST 2025

Speakers

University of Oxford

Chris Summerfield

Using neural networks to understand human learning

Neural networks have been proposed as theories of perception and cognition. However, most studies have focussed on comparing representations in biological and artificial learners once learning is complete. Here, I will describe three projects in which we study the dynamics of learning, and sensitivity to training curricula, in humans and neural networks. In the first project, we understand why humans learning to integrate multiple pieces of information benefit from a ‘divide and conquer’ strategy, and use a neural network to design a curriculum that successfully accelerates human learning. In the second project, I show how humans and neural networks have remarkably similar patterns of transfer and interference during continual learning. In the final project, I show how humans and transformer networks have very similar sensitivity to the data diet on which they are trained.

https://www.psy.ox.ac.uk/people/christopher-summerfield

Massachusetts Institute of Technology

Evelina Fedorenko

Language and thought in humans and machines

I seek to understand how humans understand and produce language, and how language relates to the rest of human cognition. I will discuss the ‘core’ language network, which includes left-hemisphere frontal and temporal areas, and show that this network is ubiquitously engaged during language processing across input and output modalities. Importantly, the language network is sharply distinct from higher-level systems of knowledge and reasoning. First, the language areas show little neural activity when individuals solve math problems, infer patterns from data, or reason about others’ minds. And second, some individuals with severe aphasia lose the ability to understand and produce language but can still do math, play chess, and reason about the world. Thus, language does not appear to be necessary for thinking and reasoning. Human thinking instead relies on several brain systems, including the network that supports social reasoning, abstract formal reasoning and fluid intelligence. These systems are sometimes engaged when we use language in the real world but are not language-selective. The separation between language and thought may have implications for what we can expect from neural network models trained solely on linguistic input.

https://www.evlab.mit.edu

University of Bristol

Jeff Bowers

Deep Problems with Neural Network Models of Human Vision and Language

Deep neural networks (DNNs) developed in computer science are successful in a range of vision and language tasks, and at the same time, they can predict the behavioural and brain responses of humans (and macaques) better than alternative models. This has led to the common claim that DNNs and brains perform vision and language tasks using similar representations, and more generally, DNNs are thought to advance theories in psychology and neuroscience. However, the good behavioural and brain predictions tend to be observed in correlational studies, and correlations do not imply causation, let alone support the claim that two systems are mechanistically similar. What is needed are experiments that manipulate independent variables to test specific hypotheses about how DNNs and humans perform vision and language tasks. When this is done, DNNs are shown to provide poor models of human vision and language. If DNNs are going to be useful tools in psychology and neuroscience, researchers need to give up correlational studies and focus on experiments that test hypotheses characterizing the performance of both DNNs and humans.

University of Amsterdam

Iris Groen

Representational alignment between DNNs and brains: From objects to affordances

The unprecedented ability of deep neural networks (DNNs) to predict neural responses in the human visual cortex has led to excitement about these models’ potential to capture human visual perception. However, studies demonstrating representational alignment of DNNs with humans typically analyze brain responses to static, object-centered images and DNNs trained on object labels. Real-life visual perception encompasses (a lot) more than object recognition: it requires processing of a continuous stream of complex, dynamic scenes, as part of a constant perception-action loop. In this talk, I will discuss our recent work aiming to assess representational alignment between DNNs and the human brain beyond object recognition. I will focus in particular on scene affordance perception, showing convergent fMRI and EEG evidence for a gap in alignment between DNNs and human perception. These results highlight that a relatively simple change in task – from object to affordance labeling – results in a very different representational space that is easily accessed by humans, but not naturally represented in DNNs. I will also show how DNN alignment extends to video perception, and how we can leverage alignment to probe neural representations via brain-guided image generation.

www.irisgroen.com

Max Planck Institute for Software Systems

Mariya Toneva

Improving Large Language Models as Model Organisms of Language in the Human Brain

Language is one of the richest and most complex human cognitive capacities. Yet, we lack a model organism to study its underlying neural mechanisms: unlike other important cognitive capacities, such as vision or memory, language does not have a clear counterpart in non-human animals, leaving a gap in our ability to develop and test mechanistic hypotheses. In recent years, large language models (LLMs) have emerged as the closest computational analogs we have, but how can we use them effectively as model organisms for language in the human brain? In this talk, I will discuss the promise and challenges of this approach. I will present our recent work on brain-tuning, which uses naturalistic brain recordings to refine LLMs so that their internal representations and processing better align with human neural data. But beyond representational similarity, a key question remains: do LLMs rely on mechanisms that are similar to those in the brain? And at what level of abstraction should we assess this similarity? This research direction aims to transform LLMs from mere engineering artifacts into powerful scientific tools for uncovering how the brain supports our most distinctive cognitive ability.

http://mtoneva.com/

University of Hamburg

Nicolas Schuck

Replay for Learning and Generalisation in Humans

Replay, the sequential neural reactivation of previous activity patterns, has been investigated in many studies in rodents, but we know remarkably little about it in humans. In this talk I will describe our work using fMRI to track sequential neural reactivation in the human brain. In an implicit learning study, we find that visual cortex reactivates multi-step stimulus sequences in backward order during micro-pauses of merely 10 seconds. We also find that such reactivation is related to learning multi-step relationships between stimuli, and occurs independently of whether participants are aware of the sequence. In another study, we investigated value-based decision making about correlated but changing bandits. We find that humans are able to quickly learn the bandit correlation structure and use it to make one-shot inferences about value changes in unseen bandits. In parallel, we find replay in visual cortex and hippocampus during brief micro-pauses that specifically supports one-shot generalisation. Hence, neural activity during very brief pauses from the task shows that, seconds after the last stimulus was displayed, the brain meaningfully and systematically reactivates past experiences in a manner that supports current cognitive computations.

https://www.mpib-berlin.mpg.de/staff/nicolas-schuck

Location

Académie royale des Sciences, des Lettres et des Beaux-Arts de Belgique
Address:
Rue Ducale 1, 1000 Brussels, Belgium

Organising committee
Tom Verguts
Professor
Ghent University
Wim Notebaert
Professor
Ghent University
Senne Braem
Professor
Ghent University
Clay Holroyd
Professor
Ghent University
Louisa Bogaerts
Professor
Ghent University
Wim Fias
Professor
Ghent University
Rose Bruffaerts
Professor
Antwerp University
Hans Op de Beeck
Professor
KU Leuven
Eva Van den Bussche
Professor
KU Leuven
Olivier Collignon
Professor
Catholic University of Louvain
Valérie Goffaux
Professor
Catholic University of Louvain
Adélaïde de Heering
Professor
Université Libre de Bruxelles