
Why the right approach to AI at Liverpool matters

Posted on: 18 March 2026 by Professor Richard Black, Deputy Vice-Chancellor


Artificial intelligence (AI) is no longer a thing of the future, understood by only a small number of people while the rest of us wonder what all the fuss is about. It is already part of our everyday working lives - whether we actively choose to use it or not.

You don’t need to be a technical expert to be using artificial intelligence. It is with us - whether we like it or not - as part of teaching, learning, research and professional practice. It is embedded in many of the tools we use every day: email software that predicts text, systems that recommend content, and platforms that help analyse data more quickly.

AI is seemingly unavoidable. But that does not make it unmanageable.

The real question for universities like ours is not whether AI will be allowed, but how it is used well, safely and responsibly.

Getting that right matters. But it isn’t easy, because AI is not a single tool or a quick fix. It is a developing ecosystem that touches on academic standards, professional judgement, ethics, data protection and trust.

AI at Liverpool

At our recent all‑staff open meeting, we shared an update on how the University is approaching AI. The message is a simple one: as an institution, we cannot ignore or reject AI - to do so would be to let our students down. But we also need to work patiently and iteratively to get it right.

Public discussion about AI often swings between excitement and anxiety. There is no doubt that AI has real potential. The University’s AI for life research frontier is demonstrating how AI can improve lives, strengthen communities, and drive inclusive economic growth. AI is central to some of our most amazing research, for example in the Materials Innovation Factory and Civic Health Innovation Labs.

It also has uses for much more mundane, and sometimes fun, activities. But if we don’t use it responsibly, it can become ethically questionable, a security risk and an unnecessary expense.

Our approach starts from a simple principle: AI should support the University’s academic mission, not shape it. That means focusing less on individual tools and more on shared understanding, good judgement and institutional coordination. It means encouraging experimentation and learning, as well as putting in place an infrastructure where all can benefit. But above all, it means acting responsibly.

What is our University doing about AI?

Because AI is evolving quickly, we are moving quickly but cautiously. We are not rushing to adopt a single product or strike a deal that would tie our hands in the future. We are testing the waters and seeking to learn lessons.

For example, a trial of the paid-for version of Microsoft Copilot this year has demonstrated a number of valuable use cases, but also identified where things don’t work so well. Our students are also trying things out - a recent HEPI report found that three quarters of 13-18 year olds have already used AI, suggesting most of our students will have at least some experience of AI before coming to university. We can all explore what the opportunities might be - but it’s important to do so with a keen eye on the cost implications, the sustainability of what we do, and the imperative of keeping control of our data.

We are also learning and trying to encourage everyone to do the same. AI is triggering one of the biggest changes to the world of work since the arrival of the personal computer. We can’t look on from the sidelines, nor will most of us master the technology just by playing with it. That’s why we’ve set up a series of online training sessions for staff, delivered by Jisc, and developed initial KnowHow modules for students. The training is platform-agnostic, which is important. Based on our experience so far, we hope to move to in-person training for staff soon and expand our offer to students.

The AI train is leaving from platforms 1, 2, 3, 4 and 5

So what platform are we using? There isn’t a simple answer to this. Universities are complex environments. What makes sense in one discipline, service or context may not be appropriate in another. There are also a lot of products on the market, and it is far from clear which ones will be around in the long term. For that reason, we are seeking a hybrid environment: one where we are not tied to a single provider in the long term, but where all staff and students will have access to a good level of AI support. We have also developed a new AI Hub, providing guidance and advice, and we are working to ensure that all our academic programmes include the use of AI - but also contain components where AI is questioned, and indeed not used at all.

This work is being shaped through cross‑University collaboration via our AI and AI in Education Working Group, involving academic, professional services and research colleagues. It is closely aligned with our Digital Strategy and the ambitions set out in Liverpool 2031, ensuring that AI supports - rather than distracts from - our long‑term goals for education, research and how the University operates. External collaboration remains critical, and we are working with a number of expert partners and the Liverpool City Region. We are also in close dialogue with other universities that are facing the same challenges - and the same opportunities - around AI.

Looking ahead – keeping our people at the centre

AI can do a lot. Indeed, it is important enough that students who graduate without knowing how to use AI responsibly will find it hard to get jobs. But let’s be clear: teaching quality, research excellence and effective professional practice - within the University and beyond - continue to rest with people. AI can assist, augment and streamline work, but responsibility and accountability remain human. A responsible approach to AI makes this explicit, rather than leaving it unclear or assumed.

Our approach to AI will continue to evolve as the technology, and our understanding of it, develops. What will not change is the emphasis on clarity, coordination and trust.

Further guidance, communications and training will be shared over time, allowing colleagues to engage at a pace that suits their role and level of interest. That will include more blogs like this, written by people who know much more about AI than I do, which will explore in more detail how we are approaching AI in education, in research, and in our professional work.

In the meantime, I want to thank colleagues who have helped us to move so far, so quickly. We are prioritising getting it right, but we also can’t wait for everything to be clear - we owe it to our students to help them adapt to this new world now, rather than at some time in the future.

ChatGPT didn’t write this blog, but I did ask another AI tool how I should end it. The suggestion was: “At Liverpool, we aren't just observing the AI revolution; we are shaping it for the good of all.” An ambitious claim - but not one that is completely beyond our reach.
