Using AI Effectively: What Helps, What Hurts

Chris McCall-Twentyman
Management School

This case study summarises an academic skills session titled “Using AI Effectively: What Helps, What Hurts,” delivered to undergraduate students at the University of Liverpool Management School.

The session aimed to help students engage with generative AI tools like ChatGPT in a reflective, critical, and ethical manner. Implemented as a lecture-style talk with interactive discussion, it explored practical uses of AI, prompt engineering, academic integrity, and student agency. Student dialogue played a key role in shaping the session, with real-time questions and reflections fostering deeper engagement. By addressing digital fluency and ethical academic practice, the session enhanced programme provision and aligned with the University’s broader commitment to responsible use of emerging technologies; it has since prompted planning for follow-up sessions focused on AI literacy and academic writing.

Please briefly describe the activity undertaken for the case study

In October 2025, I delivered a 50-minute session titled “Using AI Effectively: What Helps, What Hurts” to a mixed-year undergraduate audience in the University of Liverpool Management School. The session launched our Academic Masterclass series for the semester, a programme designed to develop students’ academic skills, which sits within ULMS Extra, our wider extracurricular initiative supporting student development. The session aimed to help students engage with generative AI (GenAI) tools like ChatGPT in a reflective, critical, and ethical way. Through a lecture-style talk supported by real-world examples, I encouraged students to explore both the advantages and limitations of AI in academic study, while raising awareness of overreliance, shallow output, and academic integrity concerns. Given how embedded AI increasingly is in student routines (a theme echoed in conversations with peers and students), it became clear that a session was needed that assumed student use and focused instead on guiding students toward more effective, ethical, and reflective practices. Consequently, the session positioned AI not as a shortcut but as a study skill: a tool for clarifying complex ideas, sparking insight, and supporting deeper engagement with learning.

How was the activity implemented?

The session built upon a previous Academic Masterclass that explored GenAI tools, a session which had proven extremely popular. It also drew on my own professional and personal experience using AI, and on research which highlights the benefits of dialogue-based engagement with GenAI tools, as opposed to passively accepting their outputs. The session was delivered in a live, lecture-style format, consistent with other Academic Masterclass sessions in the Management School, and was designed to provide a supportive and critical introduction to AI use within academic study.

The session was listed on Handshake, and undergraduate students across programmes were invited via Canvas announcements on their programme pages. A total of 87 students registered, with 31 attending the live session. Those in attendance were receptive and actively engaged, contributing to an open discussion about their prior experiences using AI in their studies. To encourage early participation, I opened with a low-stakes whole-class discussion question and used informal check-ins (e.g., “Who’s tried using AI for study before?”) to reduce pressure. I did not use small groups, as the cohort responded well to guided whole-class dialogue.

Initially, there was some hesitation in admitting to using AI for academic purposes, but once a few students shared examples, confidence grew and the conversation deepened. One student reflected on a peer who had used AI to write and submit an assignment that passed. This prompted a critical unpacking of what it means to “use AI to write an assignment”, raising questions about prompt quality, citation checking, and academic integrity.

We also explored the emerging practice of using GenAI to generate the prompts that students then feed back into the tool. I highlighted that while this can improve output quality, it risks distancing the user from the thinking process, which led to a discussion on agency and the importance of retaining ownership of one’s own thinking.

Students then shared examples of hallucinations, instances where AI tools produce information that is factually incorrect or fabricated, which I supplemented with my own, reinforcing the need for fact-checking. A simple fact-checking routine could include:

  • cross-checking claims with trusted academic sources;
  • using library search tools to confirm authors and theories;
  • checking factual detail through reputable databases and websites;
  • applying lateral reading strategies to assess credibility.

Toward the end of the session, a student asked about AI’s future impact on jobs and society. This opened a brief but meaningful reflection on ethical leadership and the role students will play in shaping AI’s trajectory.
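To illustrate the kind of dialogic, self-checking exchange the session encouraged, the sequence below is a hypothetical example rather than a transcript from the session:

  • Opening prompt: “Explain the difference between transformational and transactional leadership in plain terms.”
  • Follow-up: “Apply that distinction to a retail business facing rapid change. Where does the transactional style break down?”
  • Verification: “You referred to a foundational study here. Who is the author and where was it published? I want to confirm it through the library before citing it.”

At each step the student steers the exchange, tests the output, and verifies the claims, so the tool supplies material for thinking rather than a finished answer.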

Has this activity improved programme provision and student experience, if so, how?

The session improved both programme provision and the student experience by offering a timely and relevant opportunity to explore AI in a way that felt practical, non-judgemental, and academically grounded. It addressed the reality that many students are already using tools like ChatGPT, but often without guidance on how to use them critically or ethically.

The session gave students a space to reflect on their own use of AI and to explore how it might support their understanding, productivity, and academic skills. Rather than framing AI as a shortcut or a risk, it was presented as a tool that can encourage deeper thinking, clarify complex ideas, and support independent study. At the same time, students were encouraged to think critically about shallow output, misinformation, and the temptation to outsource thinking to technology.

From a programme perspective, the session offered something new and highly relevant to students' lived experience. It introduced a conversation around AI literacy and academic integrity that aligns well with the direction of the university and the wider sector. It also brought students into that conversation in a way that felt inclusive and supportive.

There was strong student interest throughout the session, with students asking thoughtful questions during and after the talk. One student even posted a reflective summary on LinkedIn, which prompted interest from colleagues and academic leads. Several students asked whether there would be more sessions on AI, and this has encouraged further planning around topics like AI and academic writing. In this sense, the session helped open the door to a wider, ongoing conversation.

In future iterations, it may be valuable to capture changes in student confidence or digital capability using diagnostic tools such as the Jisc Discovery Tool, allowing the session’s impact on AI literacy to be monitored over time.

Did you experience any challenges in implementation, if so, how did you overcome these?

One key challenge was navigating the line between supportive use of AI and full outsourcing of academic work. The University’s GenAI policy outlines clear principles, but the practical application of those principles remains complex for both staff and students. To address this, the session emphasised transparency, critical engagement, and reflective practices, positioning AI as a thinking partner rather than a content generator. Students were encouraged to ask not just “Can I use AI?” but “How am I using it, and am I still thinking for myself?” The session applied principles of Vygotsky’s Zone of Proximal Development (ZPD), in which AI provides scaffolding for students to develop and deepen their theoretical understanding so that they can progress to independent critical analysis and reflection.

A second challenge was managing demand and capacity. The session was oversubscribed, reflecting strong student interest in the topic. Anticipating no-shows, I allowed for overbooking and am now planning follow-up sessions to ensure wider access. The slides were made available on the Learning and Teaching Support Officers’ Canvas page, alongside other study resources, so they could be accessed asynchronously. This experience highlighted the need for scalable, inclusive approaches to digital literacy education.

In preparing for the session, I accounted for personal challenges related to ADHD by using structured slide anchors and encouraging student interaction. These strategies supported session flow and created a more dialogic, inclusive learning environment. This experience also highlighted the importance of designing sessions that are cognitively accessible for both students and educators.

How does this case study relate to the Hallmarks and Attributes you have selected?

Core Value: Inclusivity

The session was designed using Universal Design for Learning (UDL) principles, including clear structure, varied modes of engagement, and cognitive scaffolding, to support students with diverse levels of AI experience. This ensured that all students could participate confidently, regardless of their starting point.

Hallmark: Active learning

The session promoted active learning by encouraging students to engage in iterative, dialogic exchanges with AI tools. Rather than passively accepting AI-generated content, students were guided to question, refine, and reflect on outputs in real time. This approach mirrors Freire’s concept of dialogic teaching, where learning emerges through questioning, reflection, and mutual exploration. By treating AI as a conversational partner, students were supported in developing deeper understanding through active engagement.

Graduate Attribute: Confidence

The session supported the development of student confidence by creating a non-judgemental space to explore AI use openly. By encouraging students to share their experiences, ask questions, and engage in discussion, the session helped demystify GenAI tools and reduce anxiety around their use. Students were empowered to reflect on their own practices and consider how AI could support, rather than replace, their academic thinking. This fostered a sense of agency and ownership over their learning.

Graduate Attribute: Digital Fluency

Students were guided to use AI tools critically and reflectively, developing an understanding of how to interact with generative systems in a way that supports academic integrity and deeper learning. The session emphasised iterative engagement, prompt refinement, and critical evaluation of outputs, helping students move beyond surface-level use. This approach aligns with the development of digital fluency, enabling students to navigate emerging technologies with confidence and ethical awareness.

How could this case study be transferred to other disciplines?

The responsible use of GenAI tools spans all disciplines, both academic and professional. While the nature of the interaction will differ, there are core principles which are easily transferable:

  • Framing GenAI as a learning partner rather than a shortcut to an output, an increasingly essential mindset.
  • Teaching students how to engage critically with tools like ChatGPT and Copilot.
  • Emphasising discipline-specific risks (e.g. citation errors in Law, oversimplification in Science, bias in Social Sciences).

This session started with understanding what students already do, provided context-relevant information and examples, and emphasised a reflective and dialogic approach. I believe this format will transfer well to other disciplines. A future step is to develop a short, shareable session template (such as a lesson outline, example conversation flows, or a slide deck) that could be uploaded to FigShare under a CC BY-SA licence. This would support staff across disciplines in adapting the approach for their own contexts; a sketch of one such conversation flow follows.
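As a hypothetical illustration of what an example conversation flow in the template might look like, adapted here for Law:

  • Ask the tool to summarise a legal doctrine in plain language.
  • Ask it to identify the leading case and explain its significance.
  • Verify the case citation independently through a legal database before relying on it.
  • Reflect on what the summary oversimplified or omitted, and why that matters in legal reasoning.

Equivalent flows could foreground oversimplification checks in the sciences or bias checks in the social sciences.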

If someone else were to implement the activity within your case study what advice would you give them?

Begin with where students already are

Assume that students are using GenAI in some form and start from the question, “How can we use this better?” Future iterations may also include optional introductory materials for those with no prior experience, ensuring the session is accessible across different starting points.

Create a practical, non-judgemental environment

Students engaged more openly when the tone made clear that the session was reflective rather than punitive. I normalised uncertainty by acknowledging my own learning journey with GenAI, which helped reduce fear and encouraged honest discussion of real practices.

Build interaction by modelling reflective practice

Early contributions were validated to establish psychological safety and encourage wider participation. Allowing time for students to share experiences, ask questions, and analyse AI outputs together helped foster deeper engagement and a sense of ownership over the learning process.

Position the session as supportive rather than restrictive

Present AI as a thinking partner that can support understanding, while reminding students to maintain agency and uphold academic integrity. This helps create a space where students feel comfortable exploring AI tools without fear of judgement.

Keep the session focused on behaviours rather than tools

Avoid overwhelming students with multiple platforms or technical features. Prioritise transferable practices such as critical engagement, fact-checking, prompt refinement, and reflective thinking.

Plan for sustained demand

Strong interest may require repeating the session, running smaller workshops, or developing asynchronous follow-up resources so that more students can benefit.

Conclusion

The student-led dialogue and critical engagement with AI tools suggest that a dialogic approach may be particularly effective in fostering ethical and reflective AI use. By positioning AI as a thinking partner rather than a content generator, the session encouraged students to interrogate their own practices and develop metacognitive awareness: key components of digital fluency and academic integrity.

Looking ahead, future sessions will benefit from more explicit integration of dialogic teaching strategies and AI literacy frameworks to scaffold deeper learning. Based on strong student response and ongoing sector-wide conversations, this session is likely to form part of a wider programme exploring AI’s role in academic practice. Planned follow-up sessions will focus on academic writing, prompt development, and inclusive reading strategies, maintaining the same emphasis on critical engagement and practical application.

These developments continue to align with the University’s goals around digital fluency, student-centred learning, and ethical academic practice, contributing to the evolving conversation around AI in higher education. As the programme grows, there is potential to collaborate with the KnowHow team to develop asynchronous resources, or even support a small student–staff community of practice around effective AI use. Longer-term, inviting students to document how their AI practices evolve (e.g., reflective diaries) could provide valuable insights for programme development.

References

Freire, P. (1970) Pedagogy of the Oppressed. New York: Herder and Herder.

Holmes, W. and Murdoch, J. (2022) Artificial Intelligence in Education: Promise and Implications for Teaching and Learning. UNESCO.

Jisc (2023) Digital Capabilities Framework: Developing Digital Fluency in Higher Education. Bristol: Jisc.

Luckin, R. (2024) AI for Education: Towards a Future of Ethical and Effective Use. London: UCL Press.

Selwyn, N. (2023) AI in Education: Understanding the Limits and Possibilities of Artificial Intelligence. Cambridge: Polity Press.

Vygotsky, L.S. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.

Creative Commons Licence
Using AI Effectively: What Helps, What Hurts by Chris McCall-Twentyman is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.