Overview
Neuromorphic computing has emerged as an alternative compute model to the traditional von Neumann model. Classical computational complexity considers only time and space as resources; however, in modern ‘edge’ computing, energy is also critical. We ask, “What kinds of problems are efficiently solved on a neuromorphic computer? Which are not? Are these the same problems that can be solved efficiently on a von Neumann computer?”
About this opportunity
This is a theoretical project which seeks to develop complexity classes that better model computation in non-von Neumann architectures, specifically spiking neural networks. The project is undertaken in the School of Computer Science and Informatics at the University of Liverpool with Dr David Purser and Dr John Sylvester, in collaboration with Dstl – the Defence Science and Technology Laboratory.
The rapid growth of artificial intelligence (AI) and machine learning has placed increasing demands on computational power and energy efficiency. Conventional von Neumann architectures, which separate memory and computation, struggle to meet these demands due to data movement bottlenecks, high power consumption, and limited scalability. These bottlenecks will become particularly acute as AI workloads increasingly shift toward low-power edge devices, where energy efficiency and compact hardware are critical. Neuromorphic computing draws inspiration from the structure and efficiency of biological systems, potentially enabling high-performance computing and AI in power-constrained environments.
Neuromorphic computing hardware and algorithms aim to emulate key principles of neural information processing, including massive parallelism, event-driven operation, local memory, and adaptive learning. At the core of this paradigm are spiking neural networks (SNNs), in which neurons communicate via discrete, time-encoded spikes, closely mirroring the dynamics of biological neurons. Unlike artificial neural networks (ANNs) that use continuous-valued activations, SNNs process information asynchronously through sparse events, integrating incoming spikes over time and emitting an output when a threshold is reached. This temporal, event-driven approach enables low-power computation, real-time processing, and robust performance in noisy environments.
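The integrate-and-fire behaviour described above can be illustrated with a minimal sketch of a discrete-time leaky integrate-and-fire (LIF) neuron, one of the simplest SNN neuron models. All parameter values here (leak factor, input weight, threshold) are arbitrary illustrative choices, not part of the project description.

```python
def lif_neuron(input_spikes, leak=0.9, weight=0.5, threshold=1.0):
    """Integrate weighted input spikes over time; emit a spike (1) and
    reset the membrane potential when it crosses the threshold."""
    potential = 0.0
    output = []
    for s in input_spikes:
        # Leaky integration: the potential decays each step and
        # accumulates the weighted contribution of any incoming spike.
        potential = leak * potential + weight * s
        if potential >= threshold:
            output.append(1)    # fire an output spike
            potential = 0.0     # reset after spiking
        else:
            output.append(0)    # stay silent
    return output

# A dense input spike train drives the neuron over threshold every few steps.
print(lif_neuron([1, 1, 1, 0, 1, 1, 1, 0]))  # → [0, 0, 1, 0, 0, 0, 1, 0]
```

Note that computation here is event-driven: in the absence of input spikes the potential simply decays, which is the source of the sparsity and low power consumption discussed above.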
Despite rapid progress in neuromorphic hardware and algorithms, a principled understanding of computational complexity in neuromorphic computing remains largely underdeveloped. In classical computing, complexity theory provides a foundational framework for characterising the intrinsic difficulty of problems, independent of specific hardware implementations, and has guided both algorithm design and architectural innovation. An analogous theory is needed for neuromorphic systems. Existing complexity measures—such as time and space complexity defined for sequential, clocked machines—do not readily capture key features of neuromorphic computation, including spike timing, parallelism, communication cost, and energy consumption. The project aims to develop a neuromorphic complexity framework that will enable rigorous comparison between spiking neural networks and classical models, clarify what classes of problems SNNs can solve efficiently, and identify fundamental trade-offs between accuracy, latency, and energy.
Neuromorphic computing holds strong potential across a wide range of applications where energy efficiency, low latency, and real-time adaptability are critical, such as robotics, autonomous systems, embedded sensors, wearable and implantable devices, smart sensors, and Internet-of-Things (IoT) systems. Its event-driven and low-power characteristics make it particularly well suited for edge computing, enabling intelligent processing directly on devices with limited power and communication bandwidth.
Building upon principles of computational complexity and theoretical computer science, this PhD project seeks to develop a robust understanding of the neuromorphic model of computation. We will investigate its unique capabilities and constraints to identify the most appropriate applications for this new technology. The core of the research involves constructing a formal mathematical framework that precisely characterises resource dependencies (time, space, and energy) in end-to-end neuromorphic architectures. By applying this framework specifically to spiking neural networks, we aim either to reconstruct computational complexity theory or to extend it by defining new ‘neuromorphic’ complexity classes with rigorous membership criteria.
The School of Computer Science and Informatics at the University of Liverpool provides an exceptionally strong environment for pursuing foundational research, with internationally recognised expertise in theoretical computer science across three subject groups: Algorithms and Computing Systems, Trustworthy Computing, and Artificial Intelligence.
In REF 2021, the unit was ranked 5th in the UK for world-leading (4*) research outputs, with 100% of the research environment rated as either world-leading (4*) or internationally excellent (3*), demonstrating the strength and vitality of the environment for sustaining world-class research.
Defence Science and Technology Laboratory (Dstl). As the Ministry of Defence (MOD)’s in-government science and technology organisation, Dstl provides unique expertise, insight and innovation to maintain UK warfighting readiness in an increasingly dangerous and complex world. As MOD’s science and technology leader, Dstl provides expert advice, analysis and capability across a wide range of applications, fulfilling its responsibility to further technological advances in UK sovereign capabilities and to support UK defence.