(Image: Sam Altman, OpenAI)

AI and the power of nightmares

Billy MacInnes argues that there's no magic in ChatGPT's hallucination problem

15 September 2023

One of the occasional perils involved in writing a weekly column is that you sometimes come across a week when nothing much happens. Or, to be more precise, not much happens that seems to be of any great relevance. It’s at times like this that my thoughts turn to other subjects. This week, it’s horror films.

I have to confess at this point that horror films are not a subject that exercises me greatly in the normal course of events, but you’d be surprised what your mind will dwell on when there’s not much else to focus on.

As it happens, there is a connection between horror films and a potential extinction-level event confronting us today: the unchecked advance of artificial intelligence, a potentially cataclysmic threat to the future of the human race.

Which brings us to horror films and the zombie artificial intelligence genre. In this scenario, the zombies are the people and corporations mindlessly and recklessly advancing the creation and evolution of machine intelligence without the necessary regulation and mechanisms in place to protect human beings from the consequences.

This intelligence is being fed and nourished with the contents of our heads and the thoughts and writings of those who have gone before us. It’s literally gorging on the expression of what we believe constitutes our intelligence and our sentience – and we’re the ones feeding it.

Worse still, there is no guarantee that the intelligence in AI will be any more ‘elevated’ than ours. For example, the phenomenon of ‘hallucinations’, where AI asserts or frames incorrect information as factually correct, might be amusing to some, but it can have serious real-life consequences. It’s hard not to believe that there is a malignant purpose behind the deliberate disinformation contained in those hallucinations. This becomes especially perilous if there is no one, and nothing, in a position to correct them. Worse still when the arbiter you rely on to expose those deliberate falsehoods is also their instigator.

Conjuring expertise

The fact we are happy to call them ‘hallucinations’ rather than ‘lies’ shows how much leeway we are prepared to give AI in its infancy. This may not be as good an idea as some would like us to believe.

OpenAI CEO Sam Altman recently tried to defend the tendency of generative AI to ‘hallucinate’, stating: “If you just do the naive thing and say ‘never say anything that you’re not 100% sure about’, you can get them all to do that. But it won’t have the magic that people like so much.”

Call me naive, but I always imagined the purpose behind the creation of AI was to develop an intelligence that would never say anything it wasn’t 100% sure about, but with the ability to find answers and solutions to problems that we currently can’t.

If you want magic, go to a magic show. And if you want hallucinations, take a hallucinogen.

If someone had told us back when work first started on AI that the purpose was to create something that was not only more intelligent than us but also a better and more convincing liar than any human could ever hope to be, I expect most of us would have said: “Stop, right now.”

But of course, in horror films, people don’t stop. Instead, they always do the stupid things that you can’t believe they would, like walk downstairs into the cellar (or up into the attic) because they heard a noise down (up) there, even though the light won’t turn on and the battery in their torch is dying.

There are so many points in a horror film where you find yourself asking: “Why the hell did they do that?”

Why aren’t we asking that now?
