16 Output reliability and GenAI
GenAI is sometimes wrong
Sometimes, GenAI tools will create information that sounds correct but is actually wrong or misleading. We call this a hallucination.
Here’s why this happens: A GenAI model (like a Large Language Model, or LLM) is designed to predict the next word in a sentence based on patterns it has seen in huge amounts of data.
It doesn’t actually “know” facts the way a human does; it just knows what looks like a fact based on those patterns. As a result, it can confidently give you information that’s completely false or made up.
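To make the “pattern prediction” idea concrete, here is a deliberately tiny, hypothetical sketch in Python. The probability table is invented for illustration (a real LLM learns billions of such patterns from vast amounts of text), but it shows why a confident-sounding continuation is not the same as a true one.

```python
import random

# Invented, toy "patterns" standing in for what an LLM learns from training data.
# Each context maps to possible next words and how likely each one looks.
next_word_patterns = {
    "The study was published in": [
        ("Nature", 0.4),
        ("Science", 0.35),
        ("the Journal of Imaginary Results", 0.25),  # plausible-sounding but fake
    ],
}

def predict_next(context):
    words, weights = zip(*next_word_patterns[context])
    # The model only asks "what usually comes next?" -- it has no check
    # for whether the chosen continuation is actually true.
    return random.choices(words, weights=weights)[0]

print("The study was published in", predict_next("The study was published in"))
```

Roughly a quarter of the time, this toy model “cites” a journal that does not exist, and it does so just as confidently as when it picks a real one.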
Hallucinated citations
One common example in academic use is hallucinated citations. This often happens when you ask GenAI for academic sources on a certain topic. It might give you real-looking titles and authors that don’t actually exist!
Prompt: Give me five good research sources on fast fashion
- Kumar, Vikas, et al. “Sustainable Fashion Supply Chain: Lessons from H&M.” Journal of Cleaner Production, vol. 112, no. 4, 2016, pp. 2377-2385. ScienceDirect, doi: 10.1016/j.jclepro.2015.09.048.
- Hancock II, Joseph H. “Fast Fashion, Sustainability, and the Ethical Appeal of Luxury Brands.” Fashion Theory: The Journal of Dress, Body & Culture, vol. 17, no. 3, 2013, pp. 351-368. JSTOR, doi: 10.2752/175174113X13673404903122.
- Netter, Sarah, et al. “Sustainable Fashion Consumption and the Fast Fashion Conundrum: Fashionable Consumers and Attitudes to Sustainability in Clothing Choice.” International Journal of Consumer Studies, vol. 41, no. 3, 2017, pp. 290-298. Taylor & Francis Online.
- Liu, Tingchi, et al. “A Circular Economy Approach to the Fashion Industry: A Review.” Sustainability, vol. 12, no. 5, 2020, pp. 1788. MDPI, doi: 10.3390/su12051788.
- Rissanen, Timo, and Holly McQuillan. “Textile Waste Minimisation Through Design.” Journal of Textile Design Research and Practice, vol. 5, no. 1, 2017, pp. 1-22. Taylor & Francis Online, doi: 10.1080/20511787.2017.1281423.
Any output from GenAI should be evaluated for accuracy, but especially for things like citations. In this example, we’d want to either
- follow each DOI link in the citation (or check the DOIs programmatically, as in the sketch after this list), or
- copy and paste each citation into Google Scholar or another database.
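If you have several DOIs to check, you can automate the first step. The sketch below (Python, using the `requests` library) looks each DOI up in Crossref’s public API; the DOIs are the ones from the example output above, and a 404 response is a strong hint that the citation is fabricated. Keep in mind this is only a first pass: a DOI that does resolve still needs a second look, since a hallucinated citation can attach a real DOI to the wrong title or authors, and DOIs registered outside Crossref (for example, with DataCite) would need a different lookup.

```python
import requests

# DOIs copied from the GenAI-suggested citations above.
dois = [
    "10.1016/j.jclepro.2015.09.048",
    "10.2752/175174113X13673404903122",
    "10.3390/su12051788",
    "10.1080/20511787.2017.1281423",
]

for doi in dois:
    # Crossref returns metadata for DOIs it has registered, and 404 otherwise.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        work = resp.json()["message"]
        titles = work.get("title") or ["(no title on record)"]
        # Compare this title (and the authors) against what the GenAI claimed.
        print(f"{doi}: registered -> {titles[0]}")
    else:
        print(f"{doi}: not found ({resp.status_code}) -- citation may be hallucinated")
```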
Is this just with text output?
No, it’s not just text GenAI that can make things up; GenAI image creators can also “hallucinate.” This means they create pictures that aren’t quite right.
Because these GenAIs learn statistical patterns from huge amounts of image data rather than understanding what they depict, they often have trouble with basic details. For example, they might create humans or animals with the wrong number of fingers, toes, or even entire limbs. AI also struggles with words in images, so if you ask it to create an infographic, the spelling in any text it generates might be incorrect.
In the example image of the University of Minnesota students, several details suggest it is not a real photograph:
- Duluth is spelled incorrectly,
- the mascot for Rochester is incorrect, and
- it doesn’t show the University of Minnesota colors as much as it could.

Since AI-generated images aren’t always accurate, it’s really important to evaluate them.
You can do this by using tools like Google’s reverse image search to see if the image (or parts of it) exists elsewhere, or to find reliable sources for the information it supposedly represents.
Hallucination: the generation of outputs that may sound plausible but are either factually incorrect or unrelated to the given context.
Large Language Model (LLM): a type of Generative AI specifically designed to understand, generate, and process human language, often trained on vast amounts of text data.