30 Case Study: AI and research papers
The scenario
They then searched for articles in library databases, finding eight that were relevant to their research question. To manage the workload, they divided the articles among themselves for reading and summarizing, pasting their individual summaries into a shared Google Doc.
As Selena reviewed the document, she noticed that Javier’s summary of one particular article seemed inaccurate; she’d read it for another class. When she approached Javier, he admitted to uploading his assigned articles to Google Gemini and prompting it to summarize them because he struggled with the scientific jargon.
Later, while drafting their introduction, Maria found herself stuck on articulating an argument about the long-term impacts of climate change. Instead of wrestling with the complex ideas, she fed her partial thoughts and several key phrases into an AI writing tool, asking it to “elaborate and connect these ideas into a compelling paragraph that sounds like it was written by a first-year writing student.” The AI-generated paragraph was indeed compelling, but it clearly contained phrasing and insights that Maria hadn’t come up with on her own.
Maria decided not to use that text and instead talked with her group members about how they could best represent the argument.
What appropriate and inappropriate uses of GenAI can you identify?
- GenAI for brainstorming – The group’s initial use of GenAI to brainstorm ideas aligns with the professor’s guidelines.
- GenAI for summarizing – Javier uploaded his articles to a GenAI tool and prompted it to summarize them. This is an inappropriate use because it bypasses a core academic task: reading, comprehending, and summarizing source material. This task requires critical thinking to identify the main arguments, key evidence, and overall purpose of an article, especially one with complex jargon. By outsourcing this task to GenAI, Javier failed to engage with the material directly, which resulted in an inaccurate summary.
- GenAI for writing output – Maria fed her partial thoughts into a GenAI tool and asked it to “elaborate and connect these ideas into a compelling paragraph.” This is a clear violation of the professor’s rule against using GenAI for writing output or tasks requiring critical thinking. However, Maria’s decision not to use the AI-generated text and instead discuss the issue with her group was an appropriate and ethical choice. It shows she recognized the problem and opted to complete the task collaboratively.