It was a busy day in February. I was in my office at Monash University, squeezing in some emails with one hand and a quick bite of lunch with the other. Yeah, a typical day for an academic. That’s when I came across an email sent to me by a PhD student from another Australian university who wanted to know about a research paper I had written. They sent me the title of the paper, the abstract, and the author list.
Usually, this would prompt a straightforward reply. I would find the paper and share the PDF with them. This time, I paused. Sandwich mid-air and squinting at the screen, I tried to make sense of the details on my laptop. Sure, it’s not uncommon for academics to become confused about which of our papers appeared in which journal or conference, or when it was published. On this occasion, I almost began to question my sanity. When had I written the paper? More to the point, had I written it? After a few minutes of analysis, I concluded that it was a paper I definitely might have written. In hindsight, it was a paper I should have written. But I had not written it. ChatGPT had made up the title, the author list of people I had previously co-authored with, and a rather well-written abstract, and it had recommended this non-existent research paper to the PhD student.
This was my first brush with ChatGPT.
ChatGPT is an intelligent chatbot that answers queries, explains things, and generates creative text. It was developed by OpenAI, based on the GPT-3.5 architecture, where GPT stands for Generative Pre-trained Transformer. In essence, it is an example of what’s called a ‘large language model’: a model trained on vast amounts of text to find patterns in how words and phrases relate to one another, and which uses those patterns to predict which words should come next as it responds to user queries or prompts.
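To make the idea of next-word prediction concrete, here is a deliberately tiny sketch. It is not how ChatGPT works internally (real large language models use neural networks trained on enormous datasets), but it illustrates the same core task: count which word tends to follow which in some text, then use those counts to guess the next word. The sample corpus and function name here are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy "bigram" model: for each word, count which words follow it.
corpus = (
    "the model predicts the next word "
    "the model learns patterns in text "
    "the model generates the next word"
).split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # → "model" (its most frequent follower)
```

A model this simple can only parrot its tiny corpus, but scale the same predict-the-next-word objective up to billions of parameters and terabytes of text, and you get systems that produce fluent answers, and also fluent fabrications like the paper that never existed.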