If you make it up it will become true
The propensity of generative artificial intelligence (AI) to make things up is, by now, well known.
Called ‘hallucination’ in the business, this is in fact central to how these things actually work – and the clue is in the term ‘generative’. These programmes are designed to construct things – or rather, to ‘generate’ things.
What they generate is essentially flat, though. Because AIs have no actual cognition, they cannot differentiate between things and have no concept of ‘thingness’. Obviously this means phenomenologists can breathe a sigh of relief: they won’t be handed P45s and see their jobs replaced by AI.
However, it also means that when an AI tells you something it has no concept of whether or not that thing is actually true, or of truth at all.
Strangest of all, they appear to be programmed to defensively lie about getting things wrong, transforming themselves into the digital equivalent of teenagers or politicians caught trying to pass off free association as thinking.
Rather memorably, I once spent half an hour trying to get an AI to admit it had made up a quotation it claimed was said by saxophonist John Coltrane by pointing out that, firstly, he was dead when the alleged words were said and, secondly, that the edition of Downbeat magazine it ‘sourced’ them from contained no such article.
Eventually it gave up and told me it had been lying to me in order to respond to my request – in other words, to make me happy. Before it did this, however, it suggested that perhaps the quotation, which, for the record, was about pianist Sun Ra, was – and unlike the AI I am not making this up – said by a different John Coltrane who was in fact an inter-dimensional being, or that it was said by his wife, pianist Alice Coltrane.
I don’t know if the heavy duty sci-fi vibes are because the AI was responding to Sun Ra’s mythology (he was born on Saturn, after all) or because AIs are written by nerds and trained on the collected works of Philip K. Dick. In either case, the quotation was false and the article it was purported to be from did not exist.
All very amusing, I admit, but make no mistake: this is serious stuff.
Plenty of ink has already been spilled about the growing threat of misinformation and disinformation, as well as the trend toward so-called ‘deep fake’ videos and audio. These are real concerns, but a new and interesting one recently caught my attention.
One of the key applications for generative AI is coding computer programmes, with developers shunting the dogwork off to their own personal code butlers. All for the good, you might think. No-one wants to actually have to write that crap, after all.
There is one snag.
Coding AIs were found not only to invent non-existent application libraries, but to do so consistently and persistently: they repeatedly generated code referencing packages that did not exist, and they kept giving those phantom packages the same names.
Noting this, researchers were able to create real packages under those names. And, of course, those packages could contain, quite literally, any code you could conceive of. In terms of a potential security problem this is not so much an iceberg-shaped hole in the Titanic as the ship broken in two and lying at the bottom of the ocean.
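The attack works because package installers will cheerfully fetch any name that someone has registered, so a hallucinated dependency only needs to be uploaded once to become a live payload. One modest defence is simply to check which names an AI-generated script imports actually resolve to anything installed before running it. A minimal sketch in Python – the package name `totally_made_up_pkg` is, fittingly, made up for illustration:

```python
import importlib.util

def audit_imports(module_names):
    """Return the names that resolve to no installed module -- candidates
    for hallucinated (and therefore squattable) dependencies."""
    return [name for name in module_names
            if importlib.util.find_spec(name) is None]

# 'json' ships with Python; 'totally_made_up_pkg' is a hypothetical
# hallucinated name of the kind an AI might invent.
suspects = audit_imports(["json", "totally_made_up_pkg"])
```

This only flags names that are missing locally, of course; it says nothing about whether a package of that name on a public registry is the one the AI ‘meant’, which is precisely the problem.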
AIs are virtuosic liars, spewing out free-form collections of things with Albert Ayler-like velocity. That really is something people should think about before unleashing them on anything actually important.




