Opinion by Chris Griffiths & Caragh Medlicott
19 June 2023
Reading time: 5 minutes

With the rise of AI we need to learn to challenge what we read

Everyone knows that people make mistakes. In fact, when we console someone for slipping up we often remind them that they're 'only human'. In other words, we see fallibility as integral to being a person. Perhaps this is why we think of machines very differently. Whether it's a Google search or a question posed to Siri, we tend to trust the information technological tools give us. They have access to more data than we do, after all, so we assume that means they're correct.

Of course, a lot of the time they are right. And with the rise of AI, machines are getting smarter than they have ever been before. That is why it's all the more important that we learn to be vigilant, not accepting everything we read as fact but instead engaging our critical thinking skills. This is the best way to secure success and authenticity as we move out of the information age and into a new AI-powered future.

AI is not infallible

Yes, AI may be analytical and complex, and it can certainly make large quantities of information useful. But it is not infallible. At their core, popular machine learning tools such as ChatGPT are simply big sifting algorithms, capable of repackaging information to meet given requests, whether that means summarising complex scientific theories or helping to write code. There's no denying that there is great scope for working with these tools, but it's important to know they are also capable of mistakes.

AI models are trained on data input by humans, and as such they sometimes inadvertently replicate human bias. They can also make miscalculations and mistakes. As these tools evolve, such flaws will begin to be ironed out, but there is no saying how long that will take, and total infallibility is unlikely ever to be achieved. That's why anyone working with AI should learn to question what they read, even if it's just spot-checking against other sources to make sure any information given is correct.

Emotions and ethics

The debate about whether AI will ever gain sentience probably won't be settled anytime soon, and while consciousness remains the remit of humanity alone, there will always be some things which AI cannot understand. Of course, on many fronts AI has the upper hand on humanity, whether it's beating us at chess or holding vast quantities of information. But even its initial data comes from the humans who programme it; without that data, it could not create any true knowledge of its own. This is because it lacks subjective experience.

Why does that matter, you ask? It certainly doesn't make working with AI any less effective day-to-day. The issue is that AI is reliant on humans for ethical frameworks, as it has no true sense of right or wrong. It also has no emotions, and no access to the broader context of existence in the world. That means we have to take responsibility for imposing moral frameworks, and for bringing emotional context to the information AI provides. The alternative is to trust AI implicitly and suffer the consequences of ethical fallout, but avoiding this is easy so long as you're aware of the wider moral context of a piece of work when collaborating with AI.

Shades of nuance

Another reason we must learn to challenge what we read when working with AI relates to the very nature of nuance and subjectivity. For some questions, there are no universally correct answers. For example, while AI can certainly outline a historical timeline for you, or compare different historical sources, it isn't able to settle points of historical contention with any more objectivity than a human historian.

This is why critical thinking will be even more important in the AI era. Tolerating cognitive dissonance, the tension of holding two or more conflicting ideas in your head at one time, is an inherently human capacity. It comes in useful when working through tricky problems, as it allows us to consider multiple solutions at once. This is why the combined effort of AI and human ingenuity will be the most powerful way to work going forward. AI can provide any information we need at a moment's notice, and we can use our critical thinking skills to act on that information by transforming it into solutions.

An AI headstart

Given all this, and the advice to temper AI use with a certain amount of scepticism, exactly what purpose does AI serve in this new world? Well, ideally, it gives us a foundation for a different kind of work. Over the next few years, many of the jobs we see as integral now will become automated, but this won't render us redundant. Instead, it will change the nature of work, with more focus on creativity and thinking than ever before.

Those who find success in an AI future will see these new technologies with the nuance they demand. Rather than viewing them as the solution to every problem, or as a solely destructive force, we should view AI as a powerful advancement which gives us a headstart on success.

Everyone in business knows how sought-after creativity is, and while AI may not be able to produce truly groundbreaking ideas on its own, it can certainly help us find more of them!

In summary

The AI age brings with it a whole host of benefits, many of which we can't even imagine yet. While these advancements are certainly to be embraced, we can only achieve our best when we keep our critical thinking skills sharp and learn to challenge what we read. When we do so, AI lays the foundation for human innovation on a level never seen before.

Chris Griffiths is a keynote speaker, founder of the AI-powered brainstorming app ayoa.com and co-author of The Creative Thinking Handbook.
