We investigate how the human brain perceives the sounds and words of speech. We listen not only with our ears but also with our eyes, using lip movements, facial expressions, and hand gestures to perceive speech.
A key question in our group concerns how the temporal alignment between gesture and speech shapes what we hear.
We use behavioral, eye-tracking, virtual reality, and neuroimaging methods in our experiments.
We also develop and openly distribute research tools that support and speed up data collection, annotation, and analysis.
Chengjia joined the group in September 2023 to test how the timing of hand gestures influences speech perception in more naturalistic listening conditions.
Matteo joined the group in June 2023 to test the neural correlates of audiovisual integration of gestural timing and spoken prosody, as well as their alteration in Autism Spectrum Disorder (ASD).
Patrick joined the group in May 2023 to work on a cross-linguistic comparison of gesture-speech temporal alignment in production and perception.
Try out these audiovisual illusions, speech tricks, and other hocus-pocus…