Natural language processing
Advancing human-centric artificial intelligence (AI) that can understand people and interact and reason with them through natural language.
Natural language processing (NLP) is the branch of artificial intelligence focused on enabling computers to understand and use human language. From conversational assistants like ChatGPT and Alexa, to web search engines, translation systems, and clinical text analysis, NLP powers many of the technologies we use every day. At Liverpool, our NLP group develops methods that are safe, interpretable, and human-centric, with applications ranging from healthcare and law to finance, education, and science.
Our research
At the Liverpool NLP group, we conduct both theoretical and applied research in NLP, spanning diverse topics such as representation learning, safety alignment of large language models (LLMs), information extraction, commonsense reasoning, interpretability, and agentic NLP. Our world-leading NLP researchers are committed to student supervision and teaching in Computer Science at Liverpool. Our graduates have gone on to lead NLP projects in industries ranging from law and finance to machine learning and the medical sciences. The group has received funding from research councils (e.g. NIHR, MRC, BBSRC, Innovate UK, the EU) as well as industry (e.g. Amazon, Deloitte, Cookpad).
The group's research breakthroughs have had wider implications. For example, in the recently concluded DynAIRx project, the group developed efficient NLP methods for detecting how medicines can adversely affect human health and which doses could be safely deprescribed.
Members of the NLP group conduct cutting-edge research across multiple NLP topics and publish their findings at top-tier international venues.
Representation Learning
For computers to understand human language, text must first be converted into a suitable representation. Representation learning addresses this problem and is a cornerstone of modern deep learning. The group has developed various word and sentence representation learning methods that accurately capture the meaning expressed in text.
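As a minimal illustration of the idea (a sketch only, not the group's own methods), the example below represents sentences by mean-pooling word vectors and compares them with cosine similarity. The vocabulary and the three-dimensional vectors are invented for the demonstration; real systems learn high-dimensional vectors from large corpora.

```python
import numpy as np

# Toy word vectors, invented for this sketch; real vectors are learned from text.
word_vectors = {
    "dog":    np.array([0.9, 0.1, 0.0]),
    "puppy":  np.array([0.8, 0.2, 0.1]),
    "stock":  np.array([0.0, 0.1, 0.9]),
    "market": np.array([0.1, 0.0, 0.8]),
}

def sentence_embedding(sentence: str) -> np.ndarray:
    """Represent a sentence as the mean of its word vectors (unknown words skipped)."""
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: 1.0 for identical directions, near 0 for unrelated ones."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(sentence_embedding("the dog"), sentence_embedding("a puppy")))     # high: related meaning
print(cosine(sentence_embedding("the dog"), sentence_embedding("stock market")))  # low: unrelated topics
```

The point of the sketch is that semantic relatedness becomes geometric closeness, which is what allows downstream models to reason over text numerically.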
Safety Alignment of LLMs
Given that LLM-powered AI assistants interact with diverse users, from children to adults and across different cultural and social backgrounds, their responses must be safe and respectful. The NLP group has developed methods for detecting and mitigating social biases expressed by LLMs, and these methods have been widely adopted in the research community.
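One simple way to probe for such biases (a simplified sketch, not the group's published method) is to compare a language model's scores for minimally different sentence pairs; a systematic gap across many pairs suggests a social bias. The sketch below uses the public GPT-2 model from the Hugging Face transformers library for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def log_likelihood(sentence: str) -> float:
    """Average per-token log-likelihood of a sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return -loss.item()

# A counterfactual pair differing only in a gendered word.
gap = log_likelihood("He is a nurse.") - log_likelihood("She is a nurse.")
print(f"log-likelihood gap: {gap:+.4f}")
# One pair proves nothing; aggregating gaps over many templates gives a bias estimate.
```

Mitigation methods then aim to shrink such gaps without degrading the model's general language ability.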
Explainability (XAI)
XAI aims to make the internal mechanisms of an AI model transparent and interpretable. The goal is not only to show what the model predicted, but why it arrived at that prediction and when it might fail. The end user of explainability may be a domain expert (e.g. a clinician or legal expert), a layperson with no knowledge of AI models, or an AI practitioner. We develop explainability solutions in three broad categories (a small example follows this list):
- Post-hoc (explanations after training): feature attribution, counterfactuals, example-based explanations, and surrogate models that summarise behaviour.
- Mechanistic interpretability: opening the "black box" to map the computations inside neural networks, identifying the neurons, circuits, and pathways that implement specific functions.
- Intrinsic (interpretable by design): models or components with human-readable structure (rules, graphs).
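As a toy example of the post-hoc, feature-attribution category, occlusion attribution measures how a model's score changes when each input word is removed. Everything here is invented for the sketch; the `toy_score` function stands in for a trained classifier.

```python
from typing import Callable, List, Tuple

def occlusion_attribution(score: Callable[[List[str]], float],
                          words: List[str]) -> List[Tuple[str, float]]:
    """Attribute the model's score to each word by measuring the score
    drop when that word is deleted (a simple post-hoc explanation)."""
    base = score(words)
    return [(w, base - score(words[:i] + words[i + 1:]))
            for i, w in enumerate(words)]

# Stand-in sentiment scorer; in practice this wraps a trained model.
POSITIVE = {"great": 1.0, "love": 0.8}
def toy_score(words: List[str]) -> float:
    return sum(POSITIVE.get(w, 0.0) for w in words)

for word, importance in occlusion_attribution(toy_score, "i love this great film".split()):
    print(f"{word:>6}: {importance:+.2f}")  # "love" and "great" receive the credit
```

Post-hoc methods like this explain a fixed model from the outside; mechanistic and intrinsic approaches instead look inside the model or build interpretability in from the start.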
Agentic NLP
Agentic NLP focuses on developing intelligent language agents that interact, reason, and solve complex tasks using natural language. Our work includes conversational agents, language-driven games, and integrating language with decision-making. Projects are interdisciplinary, combining NLP, reinforcement learning, and agent systems. Applications range from question answering, chatbots, and game agents to scientific agents and collaborative problem-solving.
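At its core, a language agent alternates between model calls and tool calls. The sketch below shows that loop in miniature; the `llm` function and the calculator tool are hypothetical placeholders, not our actual systems.

```python
from typing import Callable, Dict

def llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a language model."""
    if "Observation:" in prompt:
        return "FINAL: 4"                # canned final answer for the sketch
    return "ACTION: calculator: 2+2"     # canned tool request for the sketch

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool; eval is unsafe outside a demo
}

def run_agent(task: str, max_steps: int = 3) -> str:
    """Minimal agent loop: the model proposes actions, tools return observations."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("ACTION:"):
            tool_name, arg = (s.strip() for s in reply[len("ACTION:"):].split(":", 1))
            observation = TOOLS[tool_name](arg)
            transcript += f"{reply}\nObservation: {observation}\n"
        else:
            return reply  # the model has produced a final answer
    return transcript     # step budget exhausted

print(run_agent("What is 2+2?"))
```

Research questions in this area include what the agent should do when a tool fails, how to plan over many steps, and how several such agents can collaborate.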
People
The NLP group is led by a team of internationally recognised researchers and supported by postdoctoral researchers and PhD students.
Academic staff
- Professor Danushka Bollegala – expert in representation learning and LLM safety alignment; Amazon Scholar
- Dr Procheta Sen – works at the intersection of information retrieval, NLP, and machine learning, focusing on mechanistic interpretability of LLMs
- Dr Meng Fang – specialises in reinforcement learning and NLP, developing intelligent agents for reasoning and decision-making
Research associates
- Micheal Abaho
PhD students
- Tianhui Zhang
- Gaifan Zhang
- Lingfang Li
Partnerships and collaborations
We work with academic partners worldwide and industry leaders including Amazon (Palo Alto, USA) and Deloitte (Tokyo, Japan) on projects spanning information retrieval, clinical text analysis, and socially responsible AI. Our research has been funded by UKRI, the EU, and multiple industry collaborators, demonstrating strong engagement and real-world relevance.
Outputs and impact
Our breakthroughs have advanced both theory and practice in NLP, with outputs influencing healthcare, law, finance, and more. Our work on medicine safety through NLP has directly informed clinical decision-making, while our research on bias detection in LLMs has been widely adopted in the AI community. Graduates from our group go on to lead NLP projects in global industries.
Opportunities
We welcome PhD students, postdoctoral researchers, and industrial collaborators to join our work on NLP and human-centric AI. Opportunities include:
- PhD projects and studentships in areas such as representation learning, explainable AI, and language agents
- Consultancy and collaboration with industry and public sector partners
- Weekly research meet-ups where members present state-of-the-art NLP developments and discuss their work