AI and electric dreams
Despite dire warnings from various quarters, there is unlikely to be an AI apocalypse any time soon. The reality is much more prosaic.
14 September 2018
Artificial intelligence — it is one of those phrases that evokes promise, wonder and awe, and yet it is widely interpreted and means different things to different people.
Many dream of intelligent machines that can think like we do, and may thus allow us, in the future, not only to have intelligent, synthetic companions (or minions) but also to have repositories for our consciousness when nature calls time on our fleshy containers.
Various experiments in the area have proven certain concepts and, unfortunately, torpedoed others. Take Microsoft’s chat bot Tay, for example. It was released with great fanfare as a Twitter experiment in conversational understanding, designed to learn as it went through playful and casual conversation. Alas, after exposure to Twitter’s unique blend of users, it descended into a rather foul-mouthed, bigoted ranter, not unlike many of its conversational partners.
It was swiftly taken offline for a bit of, eh, re-education, and its return saw it continue down undesired paths, with references to drug-related behaviour in front of the 5-0.
Now, this is not meant to pick on Microsoft’s efforts (it was a brave move, after all) but rather to show that artificial intelligence is nowhere near the dream of an intelligent machine that might one day be endowed with self-awareness.
Coming at the problem from another angle, researchers from Aalto University and the University of Padua tried to use AI to combat online trolls and hate mongers.
The researchers used AI to detect hate speech in an effort to provide filters for platforms such as Twitter. Disappointingly, the AI performed very poorly when even subtle changes were made in language usage, meaning the hate got through with relatively little effort. The researchers said that “attack effectiveness varied between models and datasets, but the performance of all seven hate speech classifiers was significantly decreased by most attacks.”
It appears that minor changes in word usage and emphasis, easily recognised by both the human audience and human monitors, were entirely missed by the AI filters. That worked out well — not!
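As a toy illustration of why such evasion is so cheap — this is not the researchers’ actual models, and the blocked words are hypothetical — even a crude keyword filter, standing in for a trained classifier, can be defeated by trivial character changes. Trained models suffer a subtler version of the same brittleness:

```python
# A minimal sketch: a naive blocklist filter misses lightly altered text.
# BLOCKLIST contains hypothetical flagged terms, purely for illustration.
BLOCKLIST = {"loathsome", "vile"}

def flags(text: str) -> bool:
    """Return True if any blocked word appears verbatim in the text."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

print(flags("you are vile"))     # True: an exact match is caught
print(flags("you are v i l e"))  # False: spaced-out letters slip through
print(flags("you are vi1e"))     # False: a character substitution slips through
```

The point is that exact matching (and, to a lesser degree, learned features) keys on surface forms a human reader happily ignores.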
Unfortunately, AI is much closer to being able to deal with the drudgery of sorting out the data deluge than it is to ponder the meaning of life.
This was the argument of Phil Tee, CEO of AI platform provider Moogsoft, who at a recent event in the US decried the increasing complexity of enterprise infrastructure, complicated as it is by the likes of virtualisation and containers, and its consequent inability to handle the volume of data being generated. Organisations can do little more than simply store the data, and even that results in silos, distributed repositories and unused caches, leaving it unsuitable for the derivation of intelligence.
Tee’s assertion is that AI can provide the means by which this data, some 44TB a day on average, can be analysed and turned into useful insights to guide the business.
AI can do the tedious job of examining, sorting, aggregating and then analysing data that humans are currently unable to do, and that today’s data tools also seem to struggle with.
Tee’s comments were broadly supported by Cisco, whose SVP Jonathan Davidson, speaking at the same event, highlighted the rise in network traffic needed to handle the data deluge, as well as the further rise expected from the likes of edge computing and IoT. Again, Tee asserts that AI is the answer, bolstering automated and software-defined networking to begin to make sense of it all.
But there is a benefit to all of this. What we have learned about AI and machine learning is that the greater the amount of data you have with which to teach AI, the better it gets, broadly speaking. So, if you are trying to teach an AI machine to recognise, say, a cat, then the more cat variations you have to show it, the more accurate it is likely to be.
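That more-data-means-better-accuracy effect can be sketched with a toy classifier. The example below is illustrative only — a nearest-centroid model on synthetic 2-D points standing in for “cat” and “dog” images, with all names and numbers invented for the demo:

```python
import random

random.seed(0)

def sample(mean, n):
    """Draw n noisy 2-D points around a class mean (Gaussian noise, sd = 1)."""
    return [(random.gauss(mean[0], 1), random.gauss(mean[1], 1)) for _ in range(n)]

def centroid(pts):
    """Average position of a set of points: the 'learned' class prototype."""
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def accuracy(n_train):
    """Train a nearest-centroid classifier on n_train examples per class,
    then measure accuracy on a fresh test set of 500 points per class."""
    c_cat = centroid(sample((0, 0), n_train))  # "cat" cluster
    c_dog = centroid(sample((2, 2), n_train))  # "dog" cluster
    test = ([(p, 0) for p in sample((0, 0), 500)] +
            [(p, 1) for p in sample((2, 2), 500)])

    def predict(p):
        d_cat = (p[0] - c_cat[0]) ** 2 + (p[1] - c_cat[1]) ** 2
        d_dog = (p[0] - c_dog[0]) ** 2 + (p[1] - c_dog[1]) ** 2
        return 0 if d_cat < d_dog else 1

    return sum(predict(p) == y for p, y in test) / len(test)

print(f"5 examples per class:   {accuracy(5):.2f}")
print(f"500 examples per class: {accuracy(500):.2f}")
```

With only a handful of examples the learned prototypes land wherever the noise puts them; with hundreds, they settle near the true class centres, which is the broad pattern the article describes.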
This has proven to be the case in the likes of medical scan analysis, video analysis and facial recognition, where accuracy can approach, or in some instances even exceed, that of trained clinicians. Again, AI is not intended to replace humans in these cases, but rather to triage the deluge of data so that human resources can be most effectively used on anything that requires a bit more nous or finesse in judgment.
However, what it does highlight is that AI, currently and arguably into the near future, is still best used for narrow, focused tasks where the broad parameters can be fairly readily defined, effectively simplifying the task by reducing ambiguity. They hate ambiguity, the machines.
And this was the central point made by Toby Walsh, UNSW professor and research group leader at Data61 (CSIRO). Walsh was speaking at a CIO event in Australia, allaying fears of an AI apocalypse in which humans would be either enslaved or eradicated by the smart machines we had created, as per the warnings of the late Stephen Hawking and others.
Now, Hawking was talking more about the distant future, where autonomous machines may extend to weaponry and the like, the foundations of which are currently being laid in AI development for the likes of military drones.
But Walsh’s point is that, for now at least, AI is not that smart. He argues that while the human brain can recognise a face in milliseconds, interpret its expression and extrapolate mood from that information, machines can require thousands of instances and many hours of analysis to get up to speed on the same exercises. He also warns that in trying to make machines think more like we do, we risk building in the same biases that we are so prone to.
Walsh, more sensibly, says that instead of machines that will rise up against us, the near future will see, when computing power allows, autonomous machines skilful enough to do the 4D jobs: dirty, dangerous, difficult and dull. Like supercharged versions of your robotic vacuum cleaner, these bots will clean the streets, empty the rubbish bins, clear drains, maintain buildings, change rods in nuclear reactor cores and steer cargo ships across the vast emptiness of the world’s oceans.
So while there is an AI revolution going on at the moment, you are more likely to experience it as a superb set of data streams supplying an enterprise app in real time, a very cleverly chosen movie selection on Netflix, or an automated bin truck quietly going about your neighbourhood without a person in hi-vis leaping on and off in athletic fashion. You are less likely to experience AI in the near future in the shape of a 2m tall soldier with a single red light on its head, scanning back and forth with what sounds suspiciously like a 70s sound effect from a Gary Numan track.