
Professor Sophie Wuerger Ph.D.

Professor of Psychology



Achromatic and chromatic contrast sensitivity as a function of light level

Chromatic and spatial processing are tightly linked in the visual system. The S-cone system is generally poor at resolving fine spatial detail owing to the paucity of S cones in the retina. The other two pathways, the chromatic (red-green) and achromatic (luminance) channels, both of which receive input from the L and M cones, also exhibit different spatial characteristics due to post-receptoral spatial processing. The spatio-chromatic properties of these channels have been studied extensively in detection and discrimination tasks [1-6], but much less is known about how their different spatial characteristics affect suprathreshold appearance.

Second, how the chromatic visual system operates under light levels reflecting real-world illumination conditions, from dusk to bright sunlight, is still unknown. We have characterised the contrast sensitivity of the visual system for both achromatic and chromatic stimuli (red-green, yellow-blue) as a function of light level [7], and have also studied how contrast sensitivity interacts with light level in older adults [8].

The overall aim of our project is to create a spatio-chromatic model of colour vision, capable of robust predictions across the luminance range from 0.02 to 10000 cd/m². One of our applications will be a retargeting model for different age groups, taking into account age-related changes in the optics of the eye and in post-receptoral processing. Updates, data and code are available on
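As a toy illustration of the kind of luminance-dependent sensitivity model we are aiming for, the sketch below uses a log-parabola contrast sensitivity function whose peak sensitivity and peak frequency shift with light level. The functional form, parameter names and values are illustrative assumptions for this sketch, not the fitted model from [7].

```python
import numpy as np

def csf(freq_cpd, luminance_cd_m2,
        s_max=200.0, f_peak_ref=3.0, bandwidth=1.2):
    """Illustrative log-parabola contrast sensitivity function.

    Sensitivity is highest near a peak spatial frequency and falls off
    symmetrically in log-frequency; peak sensitivity and peak frequency
    both rise with luminance. All parameter values are assumptions
    chosen for illustration only.
    """
    # Peak sensitivity grows with luminance and saturates at high light levels.
    s_peak = s_max * luminance_cd_m2 / (luminance_cd_m2 + 50.0)
    # Peak frequency shifts upward slightly as luminance increases.
    f_peak = f_peak_ref * (luminance_cd_m2 / (luminance_cd_m2 + 10.0) + 0.5)
    log_ratio = np.log10(freq_cpd / f_peak)
    return s_peak * 10.0 ** (-(log_ratio / bandwidth) ** 2)

# Sensitivity at 2 cycles/deg under mesopic vs photopic illumination:
print(csf(2.0, 0.5), csf(2.0, 1000.0))
```

With this parameterisation, sensitivity at a fixed spatial frequency increases by roughly two log units between 0.5 and 1000 cd/m², qualitatively mirroring the mesopic-to-photopic improvement measured in [7].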
Dr R. Mantiuk, Computer Science, University of Cambridge
Dr J. Martinovic, Psychology, University of Aberdeen
Prof. G. Finlayson, Computer Science, University of East Anglia
Apple Inc.
[1] Wuerger, S.M. and Morgan, M.J. (1999). The input of the long- and medium-wavelength-sensitive cones to orientation discrimination. Journal of the Optical Society of America A, 16(3), 443-454.
[2] Wuerger, S.M., Morgan M.J., Westland, S., and Owens, H. (2000). The spatio-chromatic sensitivity of the human visual system. New Journal of Physics: Physiological Measurements, 21(11), 505-513.
[3] Wuerger, S.M., Owens, H, and Westland, S. (2001). Blur Tolerance for luminance and chromatic stimuli. Journal of the Optical Society of America A, 18(6), 1231-1239.
[4] Wuerger, S.M., Watson, A.B., and Ahumada, A. (2002). Towards a spatio-chromatic standard observer for detection, in Human Vision and Electronic Imaging VII, ed. B. E. Rogowitz and T.N. Pappas, Proceedings of SPIE, San Jose, CA, USA, Vol. 4662, pp. 159-172.
[5] Martinovic, Mordal, & Wuerger (2011). Event-related potentials reveal an early advantage for luminance contours in the processing of objects. Journal of Vision, 11(7), 1-15.
[6] Kosilo, M., Wuerger, S. M., Craddock, M., Jennings, B. J., Hunt, A. R., & Martinovic, J. (2013). Low-level and high-level modulations of fixational saccades and high frequency oscillatory brain activity in a visual object classification task. Frontiers in Psychology, 4.
[7] Wuerger, S., Ashraf, M., Kim, M., Martinovic, J., Pérez-Ortiz, M., & Mantiuk, R. K. (2020). Spatio-chromatic contrast sensitivity under mesopic and photopic light levels. Journal of Vision, 20(4), 23. doi:
[8] Ashraf, M., Kim, M. J., Wuerger, S., & Mantiuk, R. (2020). Spatio-chromatic contrast sensitivity across the lifespan: interactions between age and light level in high dynamic range. Color and Imaging Conference, Chiba, Japan.
Acknowledgement of Support
EPSRC GR/L75795/01
EPSRC EP/P007503/1 (2017-2021)


Crossmodal associations between olfactory and visual stimuli


How does the brain combine inputs from the different sensory modalities?

Audiovisual Integration
One of our main behavioural findings is that spatial and motion congruency (same hemifield; same motion direction) is a crucial factor when dynamic signals from the auditory and visual modalities are combined: integration is most effective (closest to linear summation) when the signals are co-localised and move in the same direction (H+D+); when they are inconsistent in motion direction or location, an independent-decisions model accounts best for the data [1-7].

In many instances, auditory and visual inputs carry not only information about motion direction or location, but also semantic information. A good example is lip reading, where both the auditory and visual signals carry semantic information, namely speech. Another example is biological motion, often studied with a point-light walker; the human brain is good at extracting body motions from reduced visual and auditory signals, e.g. a small set of moving dots [8-12].

In addition to semantic and spatial congruency, temporal alignment is an important heuristic for sensory integration and for whether the input from different senses is perceived as a single event. The perceived simultaneity of auditory and visual events depends to some extent on the intensities of the unimodal stimuli, partly because of different processing speeds in the auditory and visual systems. We have shown that the intensity-dependence of perceived synchrony is explained by early intensity-dependent processing latencies of the unimodal signals [13]. Furthermore, perceived synchrony is plastic and can be altered by training; we have shown that this perceptual training is specific to the intensity of the stimuli and does not generalise across intensities [14].
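The two benchmark models contrasted above, linear summation versus independent decisions, can be sketched in a few lines. The mapping from a unimodal sensitivity d′ to detection probability, and the criterion placed at d′/2, are simplifying assumptions for illustration, not the exact observer models fitted in [1-7].

```python
import math

def p_detect(dprime):
    """Map a unimodal sensitivity d' to detection probability via the
    cumulative normal, with the decision criterion at d'/2 (an
    illustrative assumption)."""
    return 0.5 * (1.0 + math.erf((dprime / 2.0) / math.sqrt(2.0)))

def linear_summation(dp_aud, dp_vis):
    """Bimodal prediction if auditory and visual signals are summed
    into one internal response before a single detection decision."""
    return p_detect(dp_aud + dp_vis)

def independent_decisions(dp_aud, dp_vis):
    """Bimodal prediction if each modality is monitored separately and
    the observer responds 'signal' when either channel detects it."""
    p_a, p_v = p_detect(dp_aud), p_detect(dp_vis)
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

# Both bimodal predictions exceed either unimodal detection probability:
print(p_detect(1.0),                      # ≈ 0.69
      linear_summation(1.0, 1.0),         # ≈ 0.84
      independent_decisions(1.0, 1.0))    # ≈ 0.90
```

Comparing measured bimodal performance against predictions of this kind is what allows congruent (H+D+) conditions to be classified as genuine integration and incongruent conditions as independent decisions.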
Smelling Sensations: interactions between odours and other senses
Olfaction is ingrained in the fabric of our daily life and constitutes an integral part of our perceptual reality. We investigated crossmodal correspondences between ten olfactory stimuli and other modalities (angularity of shapes, smoothness of texture, pleasantness, pitch, colours, musical genres and emotional dimensions). Robust associations were found for most pairings, apart from musical genres [15]. Applications in virtual reality will be explored, with odour stimulation as a tool to enhance immersiveness [16].

Georg Meyer, University of Liverpool
Mark Greenlee, University of Regensburg
Neil Harrison, Hope University
Ryan Horsfall, University of Manchester
Ryan Ward, Electrical Engineering, University of Liverpool
Alan Marshall, Electrical Engineering, University of Liverpool
[1] Meyer, G.F. and Wuerger, S.M (2001). Crossmodal integration of auditory and visual motion signals, NeuroReport, 12, 2557-2560.
[2] Wuerger, S.M., Hofbauer, M. and Meyer G. (2003) The integration of auditory and visual motion signals at threshold, Perception & Psychophysics 65(8), 1188-1196
[3] Meyer, G.F., Mulligan, J., and Wuerger, S.M (2004). Continuous Audio-visual digit recognition using N-best decision fusion, Information Fusion, 5, 91-101
[4] Hofbauer, M., Wuerger, S. M., Meyer, G. F., Roehrbein, M., Schill, K., & Zetzsche, C. (2004). Catching audio-visual mice: Predicting the arrival time of auditory-visual motion signals. Cognitive, Affective & Behavioral Neuroscience, 4(2), 241–250
[5] Meyer, G. F., Wuerger, S. M.,. Roehrbein, M., & Zetzsche, C. (2005). Low-level Integration of Auditory and Visual Motion Signals Requires Spatial Co-localisation, Experimental Brain Research, 166 (3-4), 538-547.
[6] Wuerger, S.M., Meyer, G., Hofbauer, M., Schill, K. and C. Zetzsche (2010). Motion extrapolation of auditory-visual targets, Information Fusion, 11, 45–50.
[7] Harrison, N. R., Wuerger, S. M., & Meyer, G. F. (2011). Reaction time facilitation for horizontally moving auditory-visual stimuli. Journal of Vision, 10(14), 1-21.
[8] Meyer, G., Greenlee, M., & Wuerger, S. (2011) Interactions between Auditory and Visual Semantic Stimulus Classes: Evidence for Common Processing Networks for Speech and Body Actions. Journal of Cognitive Neuroscience, 23(9), 2271-2288.
[9] Wuerger, S.M., Crocker-Buque, A., and Meyer G.(2011) Evidence for auditory-visual processing specific to biological motion, Seeing and Perceiving, 25, pp. 15-28.
[10] Wuerger, S., Parkes, L., Lewis, P.A., Crocker-Buque, A., Rutschmann, R., & Meyer, G. F. (2012). Premotor Cortex Is Sensitive to Auditory–Visual Congruence for Biological Motion. Journal of Cognitive Neuroscience, 24(3), pp. 575-587. doi:10.1162/jocn_a_00173
[11] Meyer, G. F., Harrison, N. R., & Wuerger, S. M. (2013). The time course of auditory–visual processing of speech and body actions: Evidence for the simultaneous activation of an extended neural network for semantic processing. Neuropsychologia, 51(9), 1716-1725. doi:
[12] Harrison, N. R., Witheridge, S., Makin, A., Wuerger, S., Pegna, A. J., & Meyer, G. (2015). The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth. Neuropsychologia. doi:10.1016/j.neuropsychologia.2015.09.023
[13] Horsfall, R., Wuerger, S., & Meyer, G. (2020a). Visual intensity-dependent response latencies predict perceived audio-visual simultaneity. Journal of Mathematical Psychology. Data are available on Mendeley.
[14] Horsfall, R. P., Wuerger, S. M., & Meyer, G. F. (2020b). Narrowing of the audio-visual temporal binding window due to perceptual training is specific to high visual intensity stimuli. i-Perception (accepted for publication).

[15] Ward, R. J., Wuerger, S., & Marshall, A. (2020a). Smelling sensations: olfactory crossmodal correspondences. Journal of Perceptual Imaging (accepted). bioRxiv 2020.04.15.042630; doi:
[16] Ward, R. J., Jjunju, P. M., Griffith, E. J., Wuerger, S. M., & Marshall, A. (2020b). Artificial Odour-Vision Synaesthesia via Olfactory Sensory Augmentation. IEEE Sensors Journal (accepted).

Acknowledgement of Support
The Wellcome Trust
The Royal Society


Acquisition of skin images and 3D printing process

Understanding human skin appearance is a subject of great interest in science, medicine and technology. In medicine, skin appearance is a vital factor in surgical/prosthetic reconstruction, medical make-up/tattooing and disease diagnosis. With the 3D printing of human skin now on the horizon, the process of matching natural and manufactured skin samples has become essential: a robust, accurate and efficient imaging system is required that acquires the relevant skin information, predicts a good match, and translates this information through this new and innovative manufacturing process. A major problem with manufactured skin is that the match to the individual's natural skin must not only be accurate under a particular ambient illumination, but must also be preserved when the individual moves between different environments, e.g. from office or LED lighting into daylight. To achieve this illumination invariance, the physical properties of the skin need to be taken into account.

Two of our team members, Professor Yates (Manchester) and Dr. Xiao (Leeds), have pioneered the use of additive manufacture for prostheses. With funding from the EPSRC we were able to refine their technology, with the following outcomes:
(i) Proof-of-concept for the acquisition and additive manufacture of skin using a 3dMD photogrammetry facial system (3dMD, Atlanta, GA, USA) and a powder-based 3D printing system (ZCorp Z510) [1,3,6]
(ii) Assessment of the skin measurement reliability of two instruments (PhotoResearch PR650 spectroradiometer; Konica Minolta CM700d spectrophotometer) [5]
(iii) An algorithm for the reconstruction of full skin reflectance spectra from camera images [2]
(iv) A publicly accessible database characterising the variability of natural skin colour [4]
(v) Appearance evaluation of natural and manufactured skin using spectral and perceptual error metrics [6,7]
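The core idea behind outcome (iii) can be sketched as a least-squares linear map from camera responses to full reflectance spectra, learned on training patches and then applied to new camera responses. All data, sensitivities and dimensions below are synthetic stand-ins; the actual method in [2] is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 50 training patches, reflectance sampled at 31
# wavelengths (400-700 nm in 10 nm steps), imaged by a 3-channel camera.
n_patches, n_wavelengths, n_channels = 50, 31, 3
train_spectra = 0.2 + 0.6 * rng.random((n_patches, n_wavelengths))
camera_sens = rng.random((n_wavelengths, n_channels))  # channel sensitivities
train_rgb = train_spectra @ camera_sens                # simulated camera responses

# Least-squares linear map from 3 camera responses to a 31-point spectrum.
M, *_ = np.linalg.lstsq(train_rgb, train_spectra, rcond=None)

# Reconstruct the spectrum of a new patch from its camera response alone.
new_rgb = (0.2 + 0.6 * rng.random(n_wavelengths)) @ camera_sens
reconstructed = new_rgb @ M    # shape: (31,)
```

In practice this works far better for skin than for arbitrary spectra, because skin reflectances are smooth and lie close to a low-dimensional subspace, which is what makes recovery from only three camera channels feasible.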

Future challenges: Our previous manufacturing method relied on post-processing techniques (infiltration with silicone) to increase the flexibility and durability of the manufactured skin, which also led to poorly controlled textures and highlights. While the colour accuracy (comparing real skin with additively manufactured skin) is not far from an acceptable range and is not strongly dependent on the spectral distribution of the light source, the overall appearance of the manufactured prosthesis is deficient due to poorly controlled glossiness/highlights. With recent developments in 3D printing, allowing direct deposition of photocurable polymers with layer thickness < 15 µm and material specification at voxel resolution, we can address these shortcomings to optimise the optical and biomechanical properties.

Lab Equipment: For 3D image acquisition, we use a 3-pod 3dMD photogrammetry system, in conjunction with a VeriVide ceiling lighting system that allows for controlled illumination. For calibration purposes we use various measurement devices, including the PhotoResearch PR670 spectroradiometer and the Konica Minolta CM-700d spectrophotometer (with the CM-SA skin analysis software).

[1] K. Xiao, A. Sohaib, P.-L. Sun, J. M. Yates, C. Li, and S. Wuerger, “A colour image reproduction framework for 3D colour printing,” Proc. SPIE, vol. 10153, no. 2, 2016.

[2] K. Xiao, Y. Zhu, C. Li, D. Connah, J. M. Yates, and S. Wuerger, “Improved method for skin reflectance reconstruction from camera images,” Opt. Express, vol. 24, no. 13, pp. 14934–14950, 2016.

[3] K. Xiao, S. Wuerger, F. Mostafa, A. Sohaib, and J. M. Yates, “Colour Image Reproduction for 3D Printing Facial Prostheses,” in New Trends in 3D Printing, 2016, pp. 89–109.

[4] K. Xiao et al., “Characterising the variations in ethnic skin colours: A new calibrated data base for human skin,” Skin Res. Technol., 2016.

[5] M. Wang, K. Xiao, M. R. Luo, M. Pointer, V. Cheung, and S. Wuerger, “An investigation into the variability of skin colour measurements,” Color Res. Appl., 2018.

[6] A. Sohaib, K. Amano, K. Xiao, J. M. Yates, C. Whitford, and S. Wuerger, “Colour quality of facial prostheses in additive manufacturing,” Int. J. Adv. Manuf. Technol., vol. 96, no. 1–4, 2018.

[7] Chauhan, T., Xiao, K., & Wuerger, S. (2019). Chromatic and luminance sensitivity for skin and skinlike textures. Journal of Vision, 19(1), 13.

Data sets: Colorimetric (LAB) skin values for four ethnicities (Caucasian, Chinese, Kurdish, Thai): (download [link to]). Details in pub #4.
Database containing skin spectra: (download [link to]). Details in pub #2.

Acknowledgement of support:
Engineering and Physical Sciences Research Council (EPSRC) EP/K040057/1 (2013-2017)
Royal Academy of Engineering (2015-16)
EPSRC IAA EP/K503952 (2016-2017)

More information:

Research Grants

A spatio-chromatic colour appearance model for retargeting high dynamic range image appearance across viewing conditions


May 2017 - December 2021

Correlating visual deficits in the magnocellular, parvocellular and koniocellular pathway with in-vivo LGN activity in glaucoma


September 2011 - November 2012

Sleep Mask pilot production capability development, and safety trials of a home based treatment and monitoring model as a primary care intervention for Diabetic Retinopathy and Wet Age-Related Macular Degeneration.


July 2012 - July 2015

Measuring and Reproducing the 3D Appearance of Human Facial Skin under varying Illumination Conditions.


October 2013 - April 2017

Research Collaborations

Dr. Rafal Mantiuk

External: University of Cambridge

A spatio-chromatic colour appearance model for retargeting high-dynamic-range image appearance across viewing conditions

Prof Julian Yates

External: University of Manchester

Additive Manufacture of Facial Prosthetics

Dr. Jasna Martinovic

External: The University of Aberdeen

Spatio-chromatic vision

Anshoo Choudary


Functional imaging in Glaucoma patients

Dr. Dimos Karatzas

External: Universidad Autonoma Barcelona

Dr. Yannis Goulermas


Multivariate pattern classification techniques for brain imaging

Dr. Laura Parkes

External: University of Manchester, UK

fMRI: novel multivariate classification techniques