Artificial Intelligence

OpenAI drama merely evidence of a hype machine working at full tilt

The drama at OpenAI demonstrates an immature technology being pushed by immature businesses, says Jason Walsh
Blogs

27 November 2023

A week is a long time in technology. When news broke this week that Sam Altman, chief executive of artificial intelligence (AI) darling OpenAI, had been ousted in a boardroom coup, he was rapidly snapped up by OpenAI’s commercial partner Microsoft to lead a new AI unit at the company, only to be back where he started last Friday. With Altman accused of being “not consistently candid in his communications with the board”, speculation about the cause of his canning spread across the Internet and the press.

The latest claim in this extraordinary saga is that Altman’s banishment, which came just a day after he demonstrated the latest iteration of OpenAI’s technology, followed a warning from researchers that the company had made “a powerful artificial intelligence discovery that they said could threaten humanity”.

We’ll see. But whatever the nature of the internecine dispute that caused this boardroom battle, the fact that it happened at all has demonstrated two things: the tech sector and wider business world clearly think AI is the next great leap in technology, and the companies at the forefront of AI development are immature and woefully unprepared for their day in the sun.


Early signs of this included generative AI art applications built on the back of artists’ labour, and indignation from techno-fetishists who dismissed artists’ concerns and moaned about their lawsuits. In addition, much of the rhetorical battle over whether AI is a threat to our jobs, or even to humanity itself, has demonstrated a distinct lack of caution and reason, leaving me, at least, hoping both sides lose.

But wait, there’s more: claims from the likes of Altman that genuinely intelligent machines, known as artificial general intelligence (AGI), are just around the corner not only stoke fear but are, frankly, wide of the mark. Large language model (LLM) AI is an interesting and useful technology, but the machine isn’t really thinking. In fact, it doesn’t even know what thinking is or that it exists – which, admittedly, is something it shares with some of the people pushing AI as the solution to all of our problems.

The past year has seen billions flow into anything connected to AI, from pioneers like OpenAI to a raft of more dubious enterprises, inflating another classic tech bubble. One sure sign is the comical and predictable appearance of countless AI newsletters, many of which seem to be published by people who just a year ago were hawking useless non-fungible tokens (NFTs) as the Next Big Thing. A more serious sign is the explosive share price growth of Microsoft and chip designers Nvidia and Broadcom.

Given all of this, perhaps now is the time to do some serious thinking. Dreams of hyper-profits being released by an unpaid labour force of algorithms and robots may have investors frothing at the mouth with excitement, but even leaving aside that such a thing would crash the consumption side of the economy, is it really even likely? 

The prediction that the AI apocalypse is upon us is as much a form of hype as are claims that algorithms will soon spring to life, and both are facilitated not only by a misunderstanding of technology but a deeper failure to understand the nature of intelligence. Generative AI is an interesting technology, no doubt, but right now what we need is a bit more human intelligence. Cooler heads need to prevail, including at the apparently rather excitable OpenAI.


TechCentral.ie