Foundations of AI
Read more about the foundations of AI, the differences between artificial intelligence and generative artificial intelligence, and the tools available at the University.
Artificial intelligence
Artificial intelligence concerns the study and design of computer systems for performing tasks that would normally require human intelligence. The field of AI covers multiple techniques to capture different aspects of intelligence such as decision making, reasoning, learning and communicating. Generative AI is one particular sub-field within the more general field of AI.
Generative artificial intelligence
Generative artificial intelligence describes AI systems that can create new content (text, images, code, audio, video, and more) in response to a ‘prompt’, across many platforms and tools. Generative AI systems work by identifying patterns in their training data and generating statistically likely outputs based on those patterns. Advanced systems can also reason through problems step-by-step and refine their responses before presenting them.
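To make the idea of ‘statistically likely outputs’ concrete, the toy sketch below generates text by sampling the next word from frequencies counted in a tiny training corpus. It is a deliberately simplified illustration: real generative AI systems use large neural networks trained on vast datasets, but the basic principle of predicting the next element from patterns in the data is the same.

```python
# Toy illustration only: generate "statistically likely" text by sampling
# the next word from frequencies observed in a tiny training corpus.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words follow each word in the training data.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word = "the"
output = [word]
for _ in range(5):
    options = follows.get(word)
    if not options:           # no observed continuation; stop generating
        break
    word = random.choice(options)  # sample a statistically likely next word
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat"
```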
While generative AI tools can support exploring ideas, improving productivity, and enhancing learning, their outputs can be incomplete, incorrect, biased, or lack proper attribution. In academic contexts, users must also consider implications for academic integrity, critical thinking development, and data confidentiality. Generative AI should therefore always be used with appropriate care, critical evaluation, and human judgement in line with university policies and disciplinary expectations.
University of Liverpool tools
Microsoft Copilot
For University work, staff and students should use Microsoft Copilot – the endorsed generative AI chat tool. Copilot is provided through the University's Microsoft 365 environment, with data governance and safeguards aligned to our institutional requirements.
Access and security
All staff and students have access to Copilot Chat through Microsoft's Edge browser or at copilot.microsoft.com. It can also be used through the Microsoft 365 Copilot desktop app on Managed Windows Service (MWS) computers. When accessed on an MWS device (when you are signed in with your University account), Copilot Chat has Microsoft's Enterprise data protection enabled – indicated by the green shield icon in the top-right corner of the window. This ensures that:
- Interactions are secure and compliant with institutional requirements
- Any documents, spreadsheets, or PDFs uploaded, along with responses received, are stored securely in your OneDrive
- Microsoft does not use this data to train its foundation models; it is used only to serve the specific chat request. It is also encrypted at rest and in transit, isolated from other organisations’ data, and covered by our enterprise data protection agreement with Microsoft
- Data is not shared outside the organisation.
Please note: there’s an important difference between the free version of Copilot Chat and the paid Copilot for Microsoft 365 licence. With the paid licence, any files you upload in chat are saved to your OneDrive in a folder called “Microsoft Copilot Chat Files”. The free version does not save uploads to OneDrive because it does not have access to our Microsoft 365 storage. Microsoft states that files uploaded to the free version are stored securely and then deleted after a period of time (potentially up to around 18 months, similar to the consumer Copilot Chat).
Capabilities
Copilot can support tasks such as drafting and refining text, summarising information, generating ideas, structuring plans, analysing data, and brainstorming, drawing on basic file uploads and web searches.
Enhanced functionality
M365 Copilot is also available on a subscription basis, providing enhanced functionality and direct integration with the Microsoft 365 suite (Word, Excel, Teams, PowerPoint, etc.), including the ability to search and work with any document you have permission to access on the University's tenancy. This tool is currently being trialled as part of a Proof of Concept.
User responsibilities
All users remain responsible for:
- Checking accuracy and quality of all outputs
- Ensuring academic integrity by properly attributing AI use in line with guidance
- Applying critical judgement to ensure outputs meet academic and professional standards.
Embedded generative AI
Not every piece of generative AI software will consist of a standalone application or dedicated website where users interact with the technology to produce outputs. Some software companies are working to embed features within their products that are powered by generative AI to add new, or enhance existing, features.
Examples include the AI-powered image and content generation tools embedded within Canva and the AI-powered features of Grammarly. Some standalone tools also appear in embedded form: Google's Gemini is accessible within Google Docs as an assistive tool as well as through its standalone application, and Copilot can be accessed via Microsoft 365 applications with the appropriate subscription.
Some of these AI-powered features in existing applications are reserved for premium-tier users and will incur an extra financial cost to access.
Prompt literacy
Prompt literacy is not just a technical skill; it is fast becoming a critical communication competency. It is the ability to effectively guide generative AI systems to produce useful, accurate, and relevant outputs. Like more familiar skills such as academic writing or statistical analysis, it improves with understanding and practice.
1. The anatomy of an effective prompt
A vague prompt will yield vague results. To get high-quality outputs suitable for work, treat your prompt like a brief for a competent research assistant.
Use the context, task, constraints, inputs, output (CTCIO) framework to build your prompt, as shown in the sketch after the table:
| Component | Function | Example |
|---|---|---|
| Context | Who is this for? What is the professional role or institutional setting? | "Acting as the Quality Assurance Officer preparing the 2026 Undergraduate Course Handbook..." |
| Task | What specific action do you want? | "...standardise the provided rough notes into a formal module description for prospective students..." |
| Constraints | Tone, length, regulatory style, compliance requirements, exclusions. | "...using a professional, inviting, yet formal tone. Ensure all learning outcomes start with active verbs (Bloom’s Taxonomy). Avoid first-person phrasing ('I will teach'). Max 150 words." |
| Inputs | Raw data or text you provide. | "Here are the lecturer's rough bullet points regarding the 'Intro to Macroeconomics' syllabus: [PASTE TOPIC LIST]" |
| Output | Structure, file format, or institutional templating requirements. | "Format as three distinct sections: 'Course Overview', 'Key Learning Outcomes', and 'Assessment Methods'." |
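To see how the components combine in practice, the sketch below assembles the five CTCIO parts into a single prompt string, reusing the example wording from the table above. The structure is illustrative, not a required format:

```python
# Illustrative only: building a prompt from the five CTCIO components.
# The wording is the example text from the table above, not a fixed template.
context = ("Acting as the Quality Assurance Officer preparing the 2026 "
           "Undergraduate Course Handbook,")
task = ("standardise the provided rough notes into a formal module "
        "description for prospective students,")
constraints = ("using a professional, inviting, yet formal tone. Ensure all "
               "learning outcomes start with active verbs (Bloom's Taxonomy). "
               "Avoid first-person phrasing ('I will teach'). Max 150 words.")
inputs = ("Here are the lecturer's rough bullet points regarding the "
          "'Intro to Macroeconomics' syllabus: [PASTE TOPIC LIST]")
output_spec = ("Format as three distinct sections: 'Course Overview', "
               "'Key Learning Outcomes', and 'Assessment Methods'.")

# Join the parts in CTCIO order and paste the result into your chat tool.
prompt = " ".join([context, task, constraints, inputs, output_spec])
print(prompt)
```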
2. Additional prompting strategies
Once you have the basics, use these strategies to refine the output for complex academic tasks.
Iterative refinement
Treat prompting as a conversation, not a one-off command. Start broad, review the output, and then narrow your focus.
Example: "That is too descriptive. Rewrite the second paragraph to be more analytical."
Request reasoning ('chain of thought')
Don't just ask for the answer; ask for the logic. This helps you spot errors in the AI's process.
Example: 'Explain the assumptions behind your previous response' or 'Show your working step-by-step.'
Adding constraints or ‘anti-prompts’
Explicitly stating what not to do is often as powerful as stating what to do. Example: 'Do not invent citations. If you do not know a source, state that you do not know.'
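These strategies combine naturally in a single request. The sketch below is one illustrative wording assembled from the examples in this section, not a required formula:

```python
# Illustrative only: a prompt combining a reasoning request
# ('chain of thought') with explicit anti-prompt constraints.
prompt = (
    "Assess the strengths and weaknesses of the attached literature review. "
    # Ask for the logic, not just the final answer:
    "Show your working step-by-step and explain the assumptions behind "
    "your response. "
    # State explicitly what not to do:
    "Do not invent citations. If you do not know a source, state that "
    "you do not know."
)
print(prompt)
```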
You can also find useful advice on prompting in this online KnowHow tutorial.
Capabilities, limitations and risks
When considering generative AI, it is important to understand not only its capabilities but also its limitations and risks.
| Capabilities | Risks |
|---|---|
| Generates unique content in various formats (text, images, code, etc.) | May produce incorrect, biased, or inappropriate outputs ('hallucinations') |
| Supports a range of tasks: summarisation, brainstorming, translation, coding, etc. | Cannot verify truth, accuracy, or copyright status |
| Enhances productivity and creativity | Lacks up-to-date knowledge (unless connected to the internet) |
| Embedded in everyday tools (e.g. Office 365, Canvas) | May reinforce user or algorithmic biases, or output culturally/politically biased content |
| Can streamline admin, teaching, and research tasks | Security, privacy, and copyright risks; environmental impact |
Please note: generative AI does not 'understand' content as humans do. It predicts the next word or element based on patterns in data, which means it can produce plausible but incorrect or misleading outputs ('hallucinations'). Always verify generative AI outputs before using them as fact, especially in academic, research, or public-facing contexts.