We investigate how the human brain perceives the sounds and words of speech.
We listen not only with our ears but also with our eyes, using lip movements and carefully timed hand gestures to perceive speech.
A key question in our group concerns how the temporal alignment between gesture and speech shapes what we hear.
We use behavioral, eye-tracking, virtual reality, and neuroimaging methods in our experiments.
We also develop and openly distribute research tools that support and speed up data collection, annotation, and analysis.
How is it possible that we can easily have a conversation with someone even if that someone shouts over several other talkers, speaks in busy street noise, wears a face mask, talks very fast, has a strange accent, or produces uhm’s all the time?
The human brain is uniquely equipped to successfully perceive the speech of those around us – even in quite challenging listening environments. At the SPEAC lab, we investigate the psychological and neurobiological mechanisms that underlie the exceptional human behavior of spoken communication. We focus specifically on how humans integrate input from multiple modalities, including visual cues such as lip movements and carefully timed hand gestures.
The work we do contributes to a better understanding of how audiovisual spoken communication can, most of the time, take place so smoothly. We ask, for instance: how do seemingly meaningless up-and-down hand movements, known as beat gestures, influence which words we hear? How do listeners manage to understand talkers in challenging listening conditions, such as in loud background noise or with competing talkers around? How do listeners ‘tune in’ to a particular talker with their own peculiar pronunciation habits? What is the role of context (acoustic, semantic, and situational) in speech processing? Finally, we also develop methodological tools and large audio/video collections to facilitate research in the speech sciences.
The kinds of behavioral experiments we run include (i) playing participants artificially manipulated videos in a speech categorization task (what’s this word?); (ii) speech-in-noise intelligibility experiments (type out the sentence); and (iii) various psycholinguistic paradigms such as repetition priming (e.g., lexical decision). We use eye-tracking to study the time course of speech processing on a millisecond timescale (e.g., the visual world paradigm). We also apply neuroimaging and neurostimulation techniques (EEG, MEG, tACS) to uncover the neurobiological mechanisms involved in the temporal decoding of speech.
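To give a concrete flavor of how categorization data from such experiments can be analyzed, here is a minimal illustrative sketch in Python (not our actual analysis pipeline; the continuum steps, response proportions, and function names are hypothetical). It fits a logistic psychometric function to two-alternative responses collected along an acoustic continuum; comparing the fitted category boundary across conditions, for instance with versus without a beat gesture, is one way to quantify how visual cues shift what listeners report hearing.

```python
# Illustrative sketch only: fit a logistic psychometric function to
# two-alternative categorization responses along an acoustic continuum.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, midpoint, slope):
    """P('word A' response) at continuum step x (logistic function)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

# Hypothetical data: 7 continuum steps, proportion of 'word A' responses.
steps = np.arange(1, 8)
prop_a = np.array([0.95, 0.90, 0.75, 0.50, 0.30, 0.10, 0.05])

# Estimate the category boundary (midpoint) and categorization
# sharpness (slope) from the response proportions.
params, _ = curve_fit(psychometric, steps, prop_a, p0=[4.0, -1.0])
midpoint, slope = params
print(f"category boundary at step {midpoint:.2f}, slope {slope:.2f}")
```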
At present, the SPEAC lab is funded by an ERC Starting Grant (HearingHands; 101040276). This grant was awarded to Hans Rutger Bosker in 2021 and runs from September 2022 to September 2027.
Check out some of our demos, showcasing the kinds of experiments we run…