AI and Academic Integrity: Examining the Use of Artificial Intelligence by Students

The rise of sophisticated artificial intelligence (AI) tools like ChatGPT has sparked concerns about academic integrity, particularly regarding students using AI to cheat. This article analyzes the prevalence of AI-fueled cheating, its underlying causes, and the evolving responses from educational institutions. It examines data from surveys, detection tools, and expert opinions to provide a comprehensive overview of the current landscape.

The Extent of AI-Assisted Cheating

Educators are increasingly concerned about AI-fueled cheating and how to prevent it, and newly released data is shedding light on the scale of the problem. Of the more than 200 million writing assignments reviewed by Turnitin’s AI detection tool over the past year, some AI use was detected in about 1 in 10, while only 3 in 100 were generated mostly by AI. These numbers have remained relatively stable since Turnitin released its initial data in August 2023. According to Annie Chechitelli, Turnitin’s chief product officer, while some students are over-reliant on AI, the issue is not as widespread as initially feared.

This observation aligns with a Stanford University survey of 40 high schools, which found that the percentage of students admitting to cheating has remained consistent since the introduction of ChatGPT. For years before ChatGPT’s release, between 60 and 70 percent of students admitted to cheating, and that figure held steady in the 2023 surveys, the researchers said. Turnitin’s data does, however, indicate that AI was used in at least 20 percent of the writing in 11 percent of assignments.

In the UK, a Guardian investigation uncovered almost 7,000 proven cases of cheating using AI tools in 2023-24, a significant increase from the previous year. Figures up to May suggest the number will rise again this year, to about 7.5 proven cases per 1,000 students, though experts say recorded cases represent only the tip of the iceberg. This data underscores a rapidly evolving challenge for universities as they adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

Shifting Forms of Academic Misconduct

The advent of AI tools has coincided with a shift in the types of academic misconduct. The same investigation found that confirmed cases of traditional plagiarism fell from 19 per 1,000 students to 15.2 in 2023-24 and are expected to fall again to about 8.5 per 1,000, according to early figures from this academic year. In 2019-20, before the widespread availability of generative AI, plagiarism accounted for nearly two-thirds of all academic misconduct, and it intensified during the pandemic as many assessments moved online. But as AI tools have become more sophisticated and accessible, the nature of cheating has changed.

Students are now using AI to generate responses, tweaking them slightly and submitting the content as their own, which makes detection more challenging. Beyond tools like ChatGPT and Grammarly, many students are turning to "humanizers" such as Word Spinner and other paraphrasing tools that rewrite AI output to make it harder to detect.

Motivations Behind AI Use

The reasons students are turning to AI tools are more layered than simple laziness. AI tools offer instant results, reduce stress, and help manage deadlines. Some students also say they use AI to brainstorm, reword, or summarize information, not necessarily to copy. Academic expectations are higher than ever, and the pressure to succeed can push students toward shortcuts. Many students use AI simply because they aren’t sure where the line is.

According to a Stanford University survey, students believe AI should be used as an aid to understanding concepts rather than as a plagiarism tool. Students cite reasons like time pressure, academic stress, unclear rules, and the convenience of AI.

Harvey, a recent business management graduate, admitted to using AI for generating ideas, structuring assignments, and suggesting references. He emphasized that the tool was primarily used for brainstorming and idea creation, with the content being reworked in his own words. Similarly, Amelia, a music business student, found AI helpful for summarizing and brainstorming, particularly for students with learning difficulties like dyslexia.

Challenges in Detection and Institutional Responses

AI detection tools are becoming more popular with teachers, a trend that worries some experts. These tools are not foolproof: some studies show that lightly edited AI content escapes detection over 90 percent of the time, and one UK test found that 94 percent of AI-generated submissions slipped through undetected.

Turnitin claims its AI detector is 99 percent accurate at identifying AI-written text when at least 20 percent of the content is AI-generated, but many experts remain skeptical. Educators are encouraged to treat these tools as indicators rather than proof. Some research has also found that detection tools struggle to distinguish the original writing of English learners from AI-generated prose.
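
To see why experts urge this caution, consider a back-of-the-envelope sketch of how false positives scale. The numbers below are purely hypothetical and do not come from Turnitin or any published study: even a small false-positive rate, applied to the large majority of submissions that are honest, wrongly flags hundreds of students in every large batch scanned.

    # Hypothetical illustration of detector false positives at scale.
    # None of these figures come from Turnitin; all rates are assumptions.
    total_assignments = 100_000        # assignments scanned (assumed)
    ai_share = 0.10                    # fraction that truly contain AI writing (assumed)
    sensitivity = 0.99                 # chance AI-written work is flagged (assumed)
    false_positive_rate = 0.01         # chance honest work is flagged (assumed)

    ai_written = total_assignments * ai_share
    human_written = total_assignments - ai_written

    true_flags = ai_written * sensitivity               # AI work correctly flagged
    false_flags = human_written * false_positive_rate   # honest work wrongly flagged

    # Of everything flagged, what share is genuinely AI-written?
    precision = true_flags / (true_flags + false_flags)

    print(f"Total flagged: {true_flags + false_flags:,.0f}")        # 10,800
    print(f"Honest students wrongly flagged: {false_flags:,.0f}")   # 900
    print(f"Share of flags that are correct: {precision:.1%}")      # 91.7%

Under these assumed rates, roughly one flag in twelve points at a student who did nothing wrong, which is why a flag works better as a prompt for conversation than as evidence of misconduct.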

The response from academic institutions has been mixed. Some universities are revising their assessment models to reduce opportunities for AI misuse. This includes returning to handwritten exams or in-person testing. Many educators are navigating unfamiliar territory. Some have become more skeptical of students' work, leading to increased use of AI detection tools. But this shift has also created tension, with false positives sometimes damaging student-teacher trust.

Dr Peter Scarfe, an associate professor of psychology at the University of Reading, noted that while there have always been ways to cheat, AI poses a fundamentally different problem. He highlighted the difficulty of proving AI use even when it is suspected, and educators’ reluctance to risk falsely accusing students.

The Path Forward: Education, Transparency, and Clear Policies

Experts advocate a shift from punitive measures to a more constructive approach. Rather than imposing outright bans, many universities are now tracking proven AI cheating cases. Some high schools in Denmark, instead of banning ChatGPT, are using it as a teaching tool, believing it can help students improve their writing and research skills.

Nattrass recommends against schools using AI detection tools: they are too unreliable to authenticate students’ work, she said, and false positives can be devastating to individual students and breed a wider climate of mistrust. Chechitelli pointed out that no detector or test, whether a fire alarm or a medical test, is 100 percent accurate. While she said teachers should not rely solely on AI detectors to determine whether a student is using AI to cheat, she makes the case that detection tools can still provide teachers with valuable data.

As educators become more comfortable with generative AI, Chechitelli said she predicts the focus will shift from detection to transparency: how should students cite or communicate the ways they’ve used AI? When should educators encourage students to use AI in assignments? And do schools have clear policies around AI use and what, exactly, constitutes plagiarism or cheating?

Lancaster said: “University-level assessment can sometimes seem pointless to students, even if we as educators have good reason for setting this. This all comes down to helping students to understand why they are required to complete certain tasks and engaging them more actively in the assessment design process.”

Several strategies can help address the issue:

  • Define clear AI usage policies: Schools must clearly state what is and isn’t allowed.
  • Encourage transparency: Let students disclose when and how they used AI.
  • Support over surveillance: Relying only on detection tools and punishment doesn’t build trust.
  • Teach AI literacy: Equip students with the skills to understand and critically evaluate AI technology.

Demographic Differences in AI Usage

Some student groups are more likely to report using AI tools to complete college coursework than others. Business and STEM majors, men, and millennials are more likely than humanities majors, women, and Gen Z to report using the tools.

More business majors than humanities majors report having used AI tools such as ChatGPT to help complete assignments or exams (62% vs. 52%), with STEM majors in between at 59%. Men are also more likely than women to say they have had coursework that required them to use AI as part of an assignment (62% vs. 44%).
