
The call for an AI halt disguises the real problems with tech

The open letter calling for a pause in the development of artificial intelligence only contributes to hype about AI’s capabilities, writes Jason Walsh

31 March 2023

The great and the good of the tech sector are worried. Worried about the future of humanity, no less, fearing we will be out-competed and ultimately outfoxed by machines. So worried are 1,000 luminaries by recent developments in artificial intelligence (AI) that they have penned an open letter calling for a moratorium on its development so that we can catch our breath and make sense of it.

Understanding what we are doing with AI is to be welcomed, but the open letter, published by an organisation called the Future of Life Institute, makes little sense. Indeed, the first task when it comes to making sense of AI must be understanding what AI is doing, and on that front the open letter falls rather short.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” it said. 

It is certainly true that large language model (LLM) AIs produce some curious results, including outright falsehoods, but they are not “digital minds” and they are not thinking.

Artificial general intelligence (AGI), the term retroactively coined to describe a machine that could think in a manner akin to a human, is no closer today than it was before the arrival of the latest batch of AIs. Partly, this is a question of processing power. The development of heavy-duty quantum computing might change that, but such a development is not on the cards and will not be for some time (quantum computing is an interesting area, but one recently declared uninvestable in its current state by one analyst). But it is also because no-one is actually performing any research in the area. At the moment, AGI is nothing more than a thought experiment discussed in philosophy departments in order to better understand what human cognition is.

An overnight success eighty years in the making, AI may hold tantalising promise, but its progress has been glacial and overstating its capabilities helps no-one. Generative AIs can and certainly will be used to fill up the Internet, and by extension our culture, with nonsense, but that was already happening without the assistance of ‘thinking’ machines.

The real problem with the Future of Life Institute’s letter is that, in overstating the threat of AI, it also hides the real danger to society: the growing power of the tech industry.

Just as recent demands for a ban on TikTok ring hollow when its business model of harvesting user data and algorithmically feeding users content is as American as apple pie, the Future of Life Institute’s letter is long on the threat of one specific application of technology, but light on the history of the tech sector’s failings.

The antics of venture capitalists (VCs) in the run-up to the collapse of Silicon Valley Bank (SVB) are only the latest demonstration of the sector’s increasing capture of policymakers. Having pushed the bank to the brink by ordering their charges to withdraw their money, VCs then went into an online meltdown, warning that this specialist bank’s crash would destroy the global economy.

The result was an unprecedented capital guarantee that rewarded VCs and the companies they invest in for their risky behaviour. Instead of spreading their money around, including through traditional insurance, those companies had left cash lying in SVB accounts (accompanied by sweetheart mortgage deals, for the record), the kind of basic business error your local newsagent would not make.

Rather than getting soaked, the sector got a bailout from the public. Frankly, this could only occur because of the cult of disruption and the sway that technology has over government: what administration wants to risk the next Google going bust, after all?

More broadly, big tech and its voracious appetite for data, not to mention its desire to be left to do whatever it wants with that data, was already a problem long before LLMs. The risk of AI is not that of computers outthinking us so much as of humans lazily using data to make decisions. Machine filtering of job applications (something that already goes on), for instance, is a more immediate threat to any of us than losing our jobs to an adding machine.

Returning to the open letter, if the Future of Life Institute sounds a bit, well, strange, that is because it is. The Institute expounds ‘longtermism’, an ideology that proposes humanity must concentrate on its very long-term problems, perhaps even at the cost of sacrificing the present.

A strange form of utilitarianism, longtermism is popular with would-be tech billionaires both for its actionism and for how it neatly circumvents inconvenient hindrances like democracy. As a philosophy, it appeals to the tech sector’s semi-literate leaders, raised on a diet of adolescent fiction and drinking from a firehose of investors’ money. Specifically, many adherents flatter themselves with the idea that they are getting rich in order to more effectively help the rest of us. What could be better than semi-digested nuggets of philosophy to tell you that you are, in fact, a great man?

As for the ability of LLMs to write e-mails that go unread or help students cheat on coursework, perhaps we should be producing less boilerplate in the first place. Right now, claims that AI is so powerful that it is dangerous sound more like advertising than a warning.
