"

4 Limitations of GenAI

A non-exhaustive list of limitations of GenAI and considerations when using it

Hallucinations and factual inaccuracies:

Generating plausible-sounding but incorrect or fabricated information.

  • images where humans have extra fingers,
  • fake citations in reports,
  • misinterpreted summaries of data.

Dependence on training data:

Performance is limited by the quality, quantity, and diversity of the data it was trained on.

  • No awareness of recent events (like elections, scientific discoveries, or the latest celebrity scandals).
  • Information in fast-moving fields like medicine, technology, or law may be missing or outdated because of the training data cutoff.

Bias:

Reflecting and amplifying biases present in the training data, leading to unfair or discriminatory outputs.

  • Language that was once considered appropriate but is now offensive.
  • Training data that reflects sexism, ageism, or racism.
  • One study noted that AI teaching assistants treat students differently based on racial stereotypes in the training data.

Lack of common sense and real-world understanding:

AI models don’t “understand” the world in the human sense and can produce illogical or nonsensical results.

  • Suggesting you microwave metal containers (don’t do that!),
  • recommending walking routes across highways or restricted areas (we see you, Google Maps),
  • advising unsafe electrical fixes without proper tools or precautions.
  • One systematic review found that GenAI used for mental health education and interventions lacked contextual reasoning, accuracy, cultural sensitivity, and emotional engagement with users.

Ethical concerns:

Misinformation, impersonations, copyright issues, job displacement, and the potential for misuse.

  • AI-generated images, videos, and voices (deepfakes) can be used to impersonate people convincingly.
  • (Un)intentional reproduction of copyrighted text, art, or code.
  • AI chatbots replacing people in customer service roles.
  • Governments enabling GenAI propaganda machines (through the mass generation of biased content) to sway public opinion.

Additional considerations

Academic integrity considerations:

Students, faculty, and researchers may be required to limit their use of AI on specific projects, assignments, and research based on academic integrity policies. Instructors should communicate to students what they consider appropriate AI use for each class.

The Student Conduct Code contains more information on policies at the University.

Environmental considerations:

  • Energy Consumption: Training large AI models requires immense computational power, leading to significant energy consumption.
  • Carbon Footprint: This energy consumption contributes to a substantial carbon footprint, raising concerns about sustainability.
  • Data Storage: The vast datasets used for training, and the models themselves, require massive storage, adding to the environmental impact, including demands on water supplies.

The Wall Street Journal traced how much energy a single prompt uses.

Google Gemini-generated image from the prompt: “Could you create an infographic targeted at undergraduate college students on how Generative AI systems work?” Note the errors in spelling and grammar and the nonsensical images.

 

License


GenAI+U: A Student Learning Experience Copyright © 2025 by University of Minnesota Libraries is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
