With a little input from humans, the tool has managed to write children’s fiction, create and debug lines of code, and even pass the licensing exams needed to pursue a career in medicine, business, or law.

So, if AI is training up to ace all of our exams, what implications could this have for education?

How could AI affect education?

Teaching

ChatGPT has been touted as an asset that could assist teaching across the medical and legal spheres, as the tool shows evidence of being able to interpret unique datasets and apply high-level concepts. The rationale goes that, if ChatGPT can emulate the knowledge needed to pass exams, it could be used as a handy teaching tool for those same tests.

However, I would argue that ChatGPT’s ability to ‘understand’ and rephrase the information it’s been fed does not compare to real human knowledge. The tool is built on a large language model that predicts a relevant text output; it doesn’t innately comprehend these ideas in the way that a human can, much less a teacher, whose job it is to educate budding learners about challenging medical concepts.

And for all its smarts, the cracks do show.

By OpenAI’s own admission, ChatGPT has the tendency to respond with “plausible-sounding but incorrect or nonsensical answers”. To the uninformed reader, yes, ChatGPT may talk the talk — but at its current stage of development, with “no source of truth”, it cannot walk the walk.

Cheating and plagiarism

ChatGPT is open to the public and has undoubtedly entered mainstream awareness, so it should come as no surprise that the tool is already being put to work on tech-savvy students’ homework. For youngsters all the way up to university level, the days of missed deadlines or blaming a hungry dog are over. Instead, they can feed their maths problems and essay assignments to ChatGPT as prompts and get tailored answers back in seconds, workings and all.

Of course, students have been using traditional search engines as research tools for years. The difference is that ChatGPT provides a customised, point-for-point response to a query, whereas the likes of Google return relevant resources and leave the user to synthesise an answer for themselves.

But as Microsoft looks set to add ChatGPT’s functionality to Bing, and Google prepares to send Bard out into the world, chatbot AI could become the new normal.

There are concerns that this could significantly threaten educational standards. If ChatGPT is capable of achieving a passing grade on notoriously challenging clinical exams, it could easily be used as a workaround for all kinds of assignments, particularly as schools continue to push their working methods online.

Should parents be worried?

So, the race to have the smartest AI assistant is heating up, but where exactly are the tracks headed? Do we need to fear chatbot AI?

The global market for this kind of tech is set to grow tenfold by 2030, as more investors enter the fray and drive the industry to an expected value of $1,597 billion. There’s no doubt about it: AI is here to stay. This could be bad news for education, as the technology will only become more intelligent, and more human-like, as time goes on.

Inevitably, it will become harder to check whether students’ work really belongs to them or to chat-enabled search engines. OpenAI is developing its own detection tools to prevent plagiarism, but we could still see teachers hastily retreating to in-class, pen-and-paper methods.

Parents will need to be more diligent than ever to ensure that children are learning and applying their understanding. Admittedly, these pressures have been growing for years now, but AI has the potential to turn yet another long-standing institution on its head. It’s up to families, educators, and the AI powers that be to make sure that this tech is used as an asset for expanding human knowledge, not replacing it.

Edward Coram-James is CEO of digital marketing agency Go Up.