University guidance on generative artificial intelligence for research
The University supports the responsible and ethical use of generative artificial intelligence (AI) in research. However, inappropriate, undisclosed, or uncritical use of AI tools can compromise research integrity, breach legal obligations, and damage institutional and individual reputation.
The increasing use of AI across the sector means that funders, publishers, and regulators are actively monitoring compliance. The University expects all researchers and professional services colleagues to apply critical judgement, maintain transparency, and prioritise integrity at all times. Users remain fully accountable for all research outputs, submissions, analyses, and assessment materials - regardless of whether AI tools were used in their preparation.
This page outlines key risks and inappropriate use cases across the research lifecycle. It should be read alongside the University's golden rules on AI and its policies on research integrity, legal compliance, security, and data protection. The examples below are not exhaustive.
Overarching ethical and legal considerations
Across all stages of research, inappropriate AI use may raise concerns relating to:
- Research integrity - fabrication, falsification, plagiarism, misrepresentation
- Copyright and intellectual property - unlawful reuse of protected content
- Data protection and GDPR compliance - unlawful processing or transfer of personal data
- Confidentiality - breach of contractual, commercial, or peer review obligations
- Bias and discrimination - amplification of embedded societal or disciplinary bias
- Transparency and accountability - failure to disclose AI involvement.
Inappropriate uses
The following examples illustrate inappropriate uses of AI at each stage of the research lifecycle.
Planning research, grant writing and bid development
- Submitting AI-generated grant text without critical review or verification
- Failing to disclose AI use where required by a funder
- Uploading confidential project ideas, partner information, or unpublished data into public AI tools that lack data protection safeguards
- Generating fabricated references or research data
- Presenting AI-generated content as human-authored without intellectual contribution
- Non-compliance with funder guidelines.
Experimental design, data collection and analysis
- Uploading personal, sensitive, or confidential research data into AI tools that are not institutionally approved
- Inputting identifiable participant information into public generative AI systems
- Using AI to fabricate, manipulate, or selectively alter data
- Relying on AI-generated statistical analyses without validation
- Allowing AI-generated code to be implemented without review or testing
- Feeding participant data into public AI tools without consent.
Academic writing, publishing and peer review
- Submitting AI-generated manuscripts without substantive intellectual contribution
- Failing to disclose AI use where required by a journal
- Listing AI tools as authors against publisher guidance
- Uploading confidential manuscripts (as an author, reviewer, or editor) into public AI systems
- Using AI to generate peer review reports
- Fabricating citations, data, or reviewer comments
- Using AI to paraphrase others’ work to evade plagiarism detection
- Non-compliance with publisher/journal guidance on AI use.
Assessment, REF and KEF preparation
- Using AI to conduct the assessment of research outputs
- Fabricating or exaggerating impact and generating unsupported claims about reach, significance, or engagement
- Manipulating citation or performance data
- Submitting AI-generated case studies without verification
- Failing to declare AI use where required by assessment guidance
- Using AI in ways that fail to comply with national assessment framework rules (final REF2029 guidance released October 2026), the University's REF/KEF guidance (including the institutional REF code of conduct, once approved), or other University policies and guidance.