Our work uses state-of-the-art methods including psychophysics, eye tracking, electrophysiology (EEG), functional and structural brain imaging, computational modelling, Virtual Reality (VR), brain stimulation (e.g. TMS) and neuropsychological studies of patients with acquired cognitive, sensory and motor impairments (e.g. stroke) and neurodegenerative illnesses (e.g. Parkinson’s disease).
The pain research team investigates brain mechanisms by which the conscious experience of pain (both experimental and clinical) is generated and how brain states influence pain-related behaviour.
Our aim is to identify brain mechanisms and biomarkers that can both predict outcomes from clinical interventions (for example, spinal cord stimulation or cognitive-behavioural treatments) and aid in the development of new treatments. Our strategy is to translate theory and evidence from cognitive neuroscience and pain psychology into the development of new clinical methods for improving the long-term prognosis of patients with chronic pain. We have expertise in the use of human experimental pain models and the application of electrophysiology and neuroimaging to analyse brain structure and pain processing.
For more information please visit the personal webpages of Andrej Stancak, Chris Brown and Nicholas Fallon.
We study the neurobiological and psychological foundations of low-level and high-level vision and attention systems in the human brain, using a wide range of techniques: psychophysics, eye tracking, EEG, fMRI, brain stimulation (e.g. TMS) and neuropsychological studies with patients (e.g. those with glaucoma, Parkinson’s disease, stroke and other visual deficits). Our aim is to translate basic vision research into non-invasive behavioural interventions. We also work with machine vision and robotics collaborators on the development of biologically inspired robotics and machine vision applications.
For more information please visit the personal webpages of Charles Leek, Sophie Wuerger, Alexis Makin, Giulia Rampone and Laurence Tidbury.
Multisensory Research, Virtual and Augmented Reality
All animals, even the simplest ones, have multiple sensory systems to experience their bodies and surroundings. Information from all the senses is combined by the brain to get faster, more precise and more accurate perceptual estimates.
We investigate the computational principles of multisensory integration by combining behavioural methods (psychophysics, reaction times, motion and eye tracking, etc.), computational modelling (Bayesian ideal observer models, biological cybernetics and neural models) and physiological measures (MEG, fMRI, EEG). Our multisensory research, in conjunction with the state-of-the-art VR system at the Virtual Engineering Centre (VEC) and Digital Innovation Facilities (DIF), has direct applications for learning, rehabilitation and novel interactive technologies.
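The Bayesian ideal observer models mentioned above can be illustrated with a minimal sketch of the standard maximum-likelihood cue-combination rule: two independent Gaussian cues are fused by weighting each in proportion to its reliability (inverse variance), so the fused estimate is both drawn toward the more reliable cue and more precise than either cue alone. All numbers below are purely illustrative.

```python
# Minimal sketch of maximum-likelihood (reliability-weighted) cue
# combination, the core of Bayesian ideal observer models of
# multisensory integration. All values are illustrative only.

def fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two independent Gaussian cues into one estimate."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # reliability weight of cue A
    w_b = 1 - w_a                                # reliability weight of cue B
    mu = w_a * mu_a + w_b * mu_b                 # fused (weighted-mean) estimate
    var = 1 / (1 / var_a + 1 / var_b)            # fused variance
    return mu, var

# Example: a precise visual cue and a noisier auditory cue.
mu, var = fuse(mu_a=0.0, var_a=1.0, mu_b=4.0, var_b=4.0)
# The fused estimate lies nearer the more reliable (visual) cue, and the
# fused variance, 1 / (1/1 + 1/4) = 0.8, is below both single-cue variances.
print(mu, var)
```

The drop in fused variance is the formal counterpart of the claim above that combining the senses yields faster, more precise and more accurate perceptual estimates.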
For more information please visit the personal webpages of Cesare Parise, Emmanuel Biau and Sophie Wuerger.
Neurobiology of Language, Speech and Memory
Our goal is to establish how the human brain is able to perceive, understand and remember our complex real-world experiences, with a particular focus on naturalistic language experiences, both spoken and signed language, and its interactions with memory.
Our research makes use of multiple methodologies, including functional and structural MRI (fMRI, DTI), EEG, computational modelling and TMS to provide convergent evidence on the neural basis of language and memory in healthy and patient populations, and to translate these findings from cognitive neuroscience to clinical settings.
We also investigate how language and memory functions break down in individuals with brain damage to obtain unique insights, informative for cognitive neuroscience models of language and memory.
For more information please visit the personal webpages of Emmanuel Biau, Francesca Branzi, or Dimitris Tsivilis.