Academic integrity and AI – Universities in a dilemma

At one time, even calculators, introduced in the 1980s, were considered a threat to academic integrity

By Prof. Stephen Wilhite

Photo: Reuters file

Published: Tue 6 Jun 2023, 11:24 PM

In March this year, three academics from Plymouth Marjon University published an academic paper entitled ‘Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT’ in the journal Innovations in Education and Teaching International. It was peer-reviewed by four other academics, who cleared it for publication. What the three co-authors of the paper did not reveal was that it had been written not by them, but by ChatGPT!

This incident is a classic example of how AI-generated content poses a threat to traditional notions of authorship and intellectual property.


Over the years, there have been many challenges to academic integrity. Essay mills selling pre-written essays and other academic work to students have been around for a long time. At one time, even calculators, introduced in the 1980s, were considered a threat to academic integrity. In the 1990s, the commercialization of the Internet raised similar concerns.

Onus on educational institutions

AI-generated content, however, presents a different level of threat and poses the biggest challenge to academic integrity today. It is becoming increasingly difficult to prove whether a document or work product was written or produced by a machine, because the standard of writing and the quality of the submitted work are often very good. Even when the language, grammar, and sophistication of the content appear better than what students working on their own could produce, faculty members supervising the work may not be able to prove conclusively that it was AI-generated.


While artificial intelligence can be a valuable resource for students, it also poses a significant challenge to academic integrity. AI tools leave room for cheating, plagiarism, and fabrication. The onus is therefore on educational institutions to ensure that students use AI ethically and in accordance with academic integrity policies.

The use of AI-generated content in coursework is leading universities to revise their academic integrity policies. For example, my own university, the American University of Ras Al Khaimah, now advises students as follows: “Academic fraud consists of any action that serves to undermine the integrity of the academic process or that gives the student an unfair advantage, including: using or attempting to use anything that constitutes unauthorized assistance. PLEASE NOTE: Faculty members may prohibit the use of generative AI, including, though not limited to, generative AI tools such as OpenAI ChatGPT and Canva, in completing assignments. When such prohibitions have been communicated by the faculty member, incorporating information from such sources into your assignment submission will be treated as a serious violation of academic integrity expectations.”

At the same time, students need to be prepared to use AI appropriately in their coursework. They need to understand the capabilities of AI tools as well as their potential biases and errors. Students need to understand that they can use AI tools as a supplement to their own research and writing, but cannot rely on them entirely. They need to appreciate the necessity of citing their sources properly and honestly, and they should feel comfortable seeking guidance from their instructors if they are unsure how to use AI tools ethically.

What constitutes academic dishonesty?

The use of AI tools does not automatically constitute academic dishonesty unless students have been explicitly told that using AI tools in completing an assignment is not allowed. Acceptable use depends on how the tools are employed. For example, if a student uses an app such as ChatGPT to generate a rough draft and then revises and updates it, the process can help the student learn the skills of fact-checking and critical thinking, since the output of such apps often contains factual errors. However, the student’s final draft would need to reference ChatGPT as a source for information contained in the paper.

Another good reason to integrate AI in the academic learning environment is the need for university graduates to acquire these skills if they want to succeed in their careers, since AI tools are being widely used in industry. It is, therefore, important to teach students how to use artificial intelligence tools responsibly, before they enter the workforce.

What worries many of us in academia is that the growing prevalence of generative AI tools will make it increasingly difficult to determine the intellectual and creative contribution of the individual learner to the work produced. Traditional referencing of sources consulted may be inadequate as a basis for documenting the learner’s unique contribution to the authored work.

Nevertheless, the integration of AI into other technology tools is accelerating. It is very likely that artificial intelligence will be built into everyday word-processing programs such as Microsoft Word or Google Docs. Increasingly, humans and AI-based technology tools will be co-writing text for a variety of uses and in a variety of contexts.

Rethinking assessments

Acknowledging the ramifications of the use of AI in the academic world, the World Economic Forum (WEF) has expressed its concern and published guidelines for educators as we enter the second era of digital technologies (‘ChatGPT and cheating: 5 ways to change how students are graded’, https://www.weforum.org/agenda/2023/03/chatgpt-and-cheating-5-ways-to-change-how-students-are-graded/).

According to WEF, these technologies present opportunities for educators to rethink assessment practices and engage students in deeper and more meaningful learning that can promote critical thinking skills.

The report says: “We believe the emergence of ChatGPT creates an opportunity for schools and post-secondary institutions to reform traditional approaches to assessing students that rely heavily on testing and written tasks focused on students’ recall, remembering and basic synthesis of content. It’s not useful or practical for institutions to outright ban AI and applications like ChatGPT.”

AI applications such as ChatGPT will spur enormous changes in contemporary education, not least in assessment systems. Even as AI software is incorporated into educational settings, degree-granting institutions must be able to defend the validity of the degrees they award by ensuring that student learning outcomes reflect knowledge and skills that students have genuinely incorporated into their cognitive frameworks.

For assessments to be valid, WEF notes, work produced using AI must be structured such that the unique contribution of the learner to the work product can be identified. To nurture in students an appreciation of the contexts in which AI tools can appropriately be used in the learning process, WEF recommends looking for opportunities to involve students in setting the learning goals for an assignment and the criteria that will be used to assess achievement of those goals. This recommendation reflects the overarching importance of students knowing how they will be graded or assessed in any formal program of study.

Authentic assignment

To verify that students have internalized knowledge and skills, even if AI tools have helped them identify and organize important information and practice required skills, assignments and their assessments need to be authentic. The Centre for Innovative Teaching and Learning at Indiana University Bloomington defines an authentic assignment as “one that requires application of what students have learned to a new situation, and that demands judgement to determine what information and skills are relevant and how they should be used” (https://citl.indiana.edu/teaching-resources/assessing-student-learning/authentic-assessment/index.html).

Examples of authentic assignments provided by the Centre include: for Business, developing and presenting a marketing plan for an imaginary company; for Computer Science, developing and demonstrating an app to solve a particular problem; for Engineering, designing an improved battery for electric cars and describing its features to a panel of experts. The key, however, is to structure even these authentic assessments such that the supervising faculty member can determine with confidence that the work presented reflects knowledge and skills the student has internalized.

Even with the increased use of authentic assessments, the submission of written work will continue to be an important means through which students demonstrate their learning. Therefore, universities must pursue multiple ways to reduce the potential for misuse of AI tools. For example, assignments can be crafted to require students to connect personal experiences or events from the class to course concepts, since AI software will not have access to those experiences or events. Students can also be asked to pair short written submissions with oral, in-class questioning about the submission.

Universities can require that writing occur during class meetings, with a zero-tolerance policy on the possession of electronic devices during such writing exercises. That is, universities can encourage “flipped” classes, with reading, viewing of lectures, videos, and so on occurring at home, but with writing about the material occurring in class.

Further, if written assignments are to be completed outside class, universities should collect an in-class sample of students’ writing as a “baseline” against which written assignments completed outside class can be compared.

However, the continuing evolution of AI software means that some AI tools will increasingly be able to mimic the writing style of individual students. Universities will therefore want to continue to identify, and make available to faculty, software for detecting AI-generated content, which means the process of updating such detection software will need to be continual. Faculty will also need to be made aware of strategies they can employ to identify AI-generated content. For example, careful evaluation of cited references may reveal “suspect” or fabricated sources.

For institutions of higher learning, the challenge is to adapt assessments of student learning so as to minimize the potential for AI-generated content to invalidate them, while also ensuring that students learn to use AI tools honestly and effectively. I am confident that faculty and their university leaders will meet this challenge, demonstrating, as they have for centuries, that they are thought leaders who welcome and promote knowledge generation and are devoted to ensuring that emerging knowledge, including generative AI, is used for the greater good.

(Prof. Stephen Wilhite is senior vice-president for Academic Affairs & Student Success/Provost, American University of Ras Al Khaimah)

