Dominating headlines since late 2022, the generative AI system ChatGPT has rapidly become one of the most controversial and fastest-growing consumer applications in history [1]. Capable of composing Shakespearean sonnets with hip-hop lyrics, drafting manuscripts with key points and strong counterarguments, or creating academic blogs worthy of publication, ChatGPT offers unrivalled potential to automate tasks and generate large bodies of text at lightning speed [2].

ChatGPT is a sophisticated language model that responds to user requests in ways that appear intuitive and conversational [3]. The model is built upon swathes of information obtained from the internet, some 300 billion words in all [4]. ChatGPT works by learning statistical patterns across this data and generating text that aligns with user prompts [5]. As a language model, ChatGPT can also retain the thread of a conversation, enabling users to ask follow-up or clarifying questions. It is this personalized, interactive dialogue that elevates it above traditional search engines.
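To make that conversational "memory" a little more concrete, the short sketch below is purely illustrative (it assumes the openai Python package and an API key, neither of which features in this article): each new question is sent to the model together with the whole previous exchange, which is what allows follow-up questions to be understood in context.

```python
# Illustrative sketch only: one way a multi-turn "conversation" is assembled
# when calling a chat model through OpenAI's API. Assumes the `openai` Python
# package is installed and an API key is set in the OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running conversation: every previous question and answer is resent with
# each new request, which is what lets the model handle follow-up questions.
messages = [
    {"role": "user", "content": "Summarise the plot of Hamlet in two sentences."}
]

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# A follow-up question only makes sense because the earlier exchange is
# included in the message history sent back to the model.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "Now rewrite that summary as a limerick."})

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message.content)
```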
Unsurprisingly, generative AI has amassed a strong army of followers eager to capitalize on its efficiency: 100 million people conversed with the chatbot in January 2023 alone [1].
But what might the lure of working smarter, not harder mean in academia?
Perilous Publishing, Or Powerful Penmanship
No longer a whisper shared in hushed academic circles, generative AI such as ChatGPT has become a powerful force proudly employed by professors and pupils alike. However, despite its popularity, uptake is not unanimous. Academics are divided.
With the ability to generate work at the touch of a button, users risk being led down a perilous path towards plagiarism and having their development stifled. The clear threat to academic integrity and original thought is sending many into a state of panic [6–8]. Editors of scientific journals are also having to wrestle with publishing ethics as ChatGPT is increasingly listed as a co-author [9].
But despite its apparent proficiency, the generative AI model has a number of particularly unnerving limitations. In the words of OpenAI, the company behind ChatGPT, the software “will occasionally make up facts or ‘hallucinate’ outputs, [which] may be inaccurate, untruthful, and otherwise misleading” [5].

The rise of generative AI shines a spotlight on a troublesome issue in academia: the exchange of papers for grades. Whilst finished articles are necessary, when product trumps process, the valuable lessons found in the act of writing can be overlooked.
“This technology [generative AI] … cuts into the university’s core purpose of cultivating a thoughtful and critically engaged citizenry. If we can now all access sophisticated and original writing with a simple prompt, why require students to write at all?” [10]
In response, and in an attempt to stem the flow of plagiarism and untruths and to protect creative thinking, some academics have enforced outright bans on generative AI systems [2].
Unethical AI
Academic integrity aside, ChatGPT’s appeal is also undermined by moral and ethical concerns.
A recent Time magazine exposé revealed that OpenAI outsources work to a firm in Kenya, whose staff are assigned the menial task of trawling through mountains of data and flagging harmful items to ensure that ChatGPT’s outputs are “safe for human consumption”. These data enrichment specialists earn less than $2 per hour [2].
Moreover, generative AI propagates systemic biases by repurposing primarily Westernized data in response to English-language prompts written by tech-savvy users with easy access to IT. For some, the commercialization of more sophisticated platforms such as ChatGPT Pro will also prove particularly exclusionary [10–13].
Embracing The Chatbot
However, others champion generative AI in academia. Among them is Sam Illingworth, Associate Professor of Learning Enhancement at Edinburgh Napier University, who argues that it would be unrealistic, and unrepresentative of future workplaces, if students did not learn to use these technologies. Illingworth and others call for a shift away from concerns about insidious plagiarism, however valid (OpenAI’s own tool for detecting AI-generated text is highly inaccurate, catching only 26% of AI-written text [11]), and toward embracing the opportunity to reshape pedagogy [4].
Teaching and assessment methods are having to be reexamined, with some suggesting that a return to traditional approaches, such as impromptu oral exams, personal reflections, or in-person written assignments, may prove effective against a proliferation of AI-generated work [12,13].
Generative AI chatbots also have the potential to become a teacher’s best friend [14]. Automating grading rubrics or assisting with lesson planning might offer a much-needed morale boost to a professional body whose expertise is being somewhat jeopardized by the emerging technology. And despite rumors of an existential threat [15,16], generative AI, for now at least, poses no immediate risk of replacing human educators; empathy and creativity are among the uniquely human qualities that are proving tricky to manufacture from binary code.
The Future Is Unknown
Much like other technologies that have provoked Sisyphean cycles of panic and innovation (think the Casio graphing calculator or Mac OS), ChatGPT and its fellow generative AI chatbots have the potential to transform the face of education [17].
As the AI arms race marches on at quickening pace, with companies delivering a near-daily bombardment of upgrades and new functionality, it is impossible to predict who, or what, might benefit from, or become a casualty of, automation in academia. The story of AI in academia remains unwritten, but as the indelible mark already left by ChatGPT suggests, it is certain to deliver a compelling narrative.
References:
1. Hu K. ChatGPT sets record for fastest-growing user base – analyst note. Reuters [Internet]. 2023 Feb 2 [cited 2023 Feb 6]; Available from: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/
2. Perrigo B. OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive. Time [Internet]. 2023 [cited 2023 Feb 13]. Available from: https://time.com/6247678/openai-chatgpt-kenya-workers/
3. OpenAI. ChatGPT: Optimizing Language Models for Dialogue [Internet]. OpenAI. 2022 [cited 2023 Feb 17]. Available from: https://openai.com/blog/chatgpt/
4. Hughes A. ChatGPT: Everything you need to know about OpenAI’s GPT-3 tool [Internet]. BBC Science Focus Magazine. 2023 [cited 2023 Feb 6]. Available from: https://www.sciencefocus.com/future-technology/gpt-3/
5. OpenAI. ChatGPT General FAQ [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://help.openai.com/en/articles/6783457-chatgpt-general-faq
6. Heidt A. ‘Arms race with automation’: professors fret about AI-generated coursework. Nature [Internet]. 2023 Jan 24 [cited 2023 Feb 6]; Available from: https://www.nature.com/articles/d41586-023-00204-z
7. Kubacka T. “Publish-or-perish” and ChatGPT: a dangerous mix [Internet]. Lookalikes and Meanders. 2023 [cited 2023 Feb 6]. Available from: https://lookalikes.substack.com/p/publish-or-perish-and-chatgpt-a-dangerous
8. Boyle K. A reason for the moral panic re AI in academia: in work, we learn prioritization of tasks, which higher ed doesn’t prize. Speed is crucial in work— it’s discouraged in school. Tools that encourage speed are bad for some established industries. Take note of who screams loudly. https://t.co/ot8YHh7H7b [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/KTmBoyle/status/1619384367637471234
9. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature [Internet]. 2023 Jan 18 [cited 2023 Feb 21];613(7945):620–1. Available from: https://www.nature.com/articles/d41586-023-00107-z
10. Southworth J. Rethinking university writing pedagogy in a world of ChatGPT [Internet]. University Affairs. 2023 [cited 2023 Feb 18]. Available from: https://www.universityaffairs.ca/opinion/in-my-opinion/rethinking-university-writing-pedagogy-in-a-world-of-chatgpt/
11. Wiggers K. OpenAI releases tool to detect AI-generated text, including from ChatGPT [Internet]. TechCrunch. 2023 [cited 2023 Feb 6]. Available from: https://techcrunch.com/2023/01/31/openai-releases-tool-to-detect-ai-generated-text-including-from-chatgpt/
12. Nature Portfolio. A poll of @Nature readers about the use of AI chatbots in academia suggests that the resulting essays are still easy to flag, and it’s possible to amend existing policies and assignments to address their use. https://t.co/lHyPtEEb7F [Internet]. Twitter. 2023 [cited 2023 Feb 6]. Available from: https://twitter.com/NaturePortfolio/status/1619751476947046408
13. Khatsenkova S. ChatGPT: Is it possible to spot AI-generated text? [Internet]. euronews. 2023 [cited 2023 Feb 6]. Available from: https://www.euronews.com/next/2023/01/19/chatgpt-is-it-possible-to-detect-ai-generated-text
14. Roose K. Don’t Ban ChatGPT in Schools. Teach With It. The New York Times [Internet]. 2023 Jan 12 [cited 2023 Feb 21]; Available from: https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
15. Thorp HH. ChatGPT is fun, but not an author. Science [Internet]. 2023 Jan 27 [cited 2023 Feb 6];379(6630):313. Available from: https://www.science.org/doi/10.1126/science.adg7879
16. Chow A, Perrigo B. The AI Arms Race Is On. Start Worrying. Time [Internet]. 2023 [cited 2023 Feb 18]. Available from: https://time.com/6255952/ai-impact-chatgpt-microsoft-google/
17. Orben A. The Sisyphean Cycle of Technology Panics. Perspect Psychol Sci [Internet]. 2020 Sep 1 [cited 2023 Feb 6];15(5):1143–57. Available from: https://doi.org/10.1177/1745691620919372

Susanna Martin BSc (Hons) is a Research Assistant at The Neuroscience Engagement and Smart Tech (NEST) lab.