Generative AI and the challenge of misinformation
Generative AI (GAI) tools such as ChatGPT can create fluent, humanlike text in seconds. This ability makes them powerful, but it also carries serious risks. Unlike humans, these systems do not genuinely understand the content they produce; instead, they predict the most likely sequence of words based on patterns in their training data.
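To make the idea of word-by-word prediction concrete, the sketch below is a deliberately simplified toy, not a description of ChatGPT or any real system: it counts which word follows which in a tiny training text and always emits the most frequent continuation. Every name and the training text are illustrative.

```python
# Toy "language model": it learns only which word tends to follow which,
# then greedily emits the most frequent continuation. Nothing in the
# process checks whether the resulting sentence is true.
from collections import Counter, defaultdict

training_text = (
    "the study shows the results are clear "
    "the study shows the citation is real "
    "the study shows the results are clear"
)

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    """Greedily pick the statistically most likely next word, word by word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
# Prints: "the study shows the study shows the study shows"
# The output is fluent-looking, but it simply repeats the statistically
# dominant pattern; frequency, not fact, drives every choice.
```

Real large language models are vastly more sophisticated than this sketch, but the same principle, prediction from patterns rather than reasoning about truth, underlies the risks described here.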
As AI pioneer Geoffrey Hinton has observed, their intelligence is “very different from the intelligence we have.” While GAI can process and reproduce vast amounts of text, it lacks the ability to reason about accuracy or consider consequences.
Interdisciplinary research at the University of Liverpool is tackling these risks by studying how GAI generates misinformation and exploring ways to redesign these tools for social good. Studies have shown that GAI systems can fake academic citations, fabricate book references, or distort key points from original texts. This makes them unreliable for tasks such as finding credible sources or summarising complex material.
GAI-generated misinformation is particularly dangerous because it is highly persuasive. In testing, systems such as ChatGPT have quickly produced harmful instructions, such as unsafe dieting advice, and have misrepresented academic content in ways that appeared authoritative. One study of AI-generated podcasts found that summaries omitted crucial methodological details, introduced misleading interpretations, and even invented definitions. Because these outputs are packaged in fluent, polished language, audiences are less likely to question them.
The risks extend beyond information quality to decision-making. For example, in recruitment scenarios, ChatGPT reinforced gender stereotypes by valuing male-associated traits such as “technical skills” and “quantifiable results” as markers of leadership while downplaying collaboration and interpersonal skills. This shows that AI is not inherently more objective than humans; in fact, it can amplify existing biases.
To address these challenges, Liverpool researchers are reverse-engineering GAI to detect misinformation and developing conversational designs that encourage “reasonable parrots”. Further research focuses on digital inclusion, investigating the skills needed to use AI responsibly and effectively. This work has led to a framework for AI literacy, designed to equip users with the competencies they need to engage critically with these technologies.