Testing times for smart machines
Androids, automatons, autopilots, bots, cyborgs, droids, drones, humanoids, robots, satellites, self-driving cars, spacecraft, UAVs – some are still in the realms of science fiction, while some are practical, functioning smart machines today. Google's self-driving cars have been generating headlines recently, but the first autonomous cars were successfully tested back in the 1980s. As for the airplane autopilot, a venerable and still-evolving system, the first transatlantic crossing entirely on autopilot, including take-off and landing, was made by a US Air Force C-54 in 1947. That system was electro-mechanical, not even electronic.
So in the sense of physical machines, we have been developing ever smarter ones for a long time. The game changer, obviously enough, was the incorporation of computing power and programming. Perhaps the key point about smart physical machines is that to be autonomous they need a certain degree of intelligence built in. Arguably, there can never be too much, although single-purpose machines will presumably demand less than multi-tasking or general-purpose applications. A fetch-and-carry factory robot will hardly need the level of 'intelligence' of, say, an underwater vehicle or a military all-terrain fighting unit.
Virtual smart machines
In more recent times we have seen the inexorable rise of what can be thought of as virtual smart machines: dedicated systems or agents that perform a set of functions autonomously, or as nearly so as their instructions permit. Live credit or fraud checks in financial institutions are a prime example, aided today by advanced real-time analytics and technologies such as in-memory processing. Online betting and gaming employ similar smart systems that are automated to a very high degree. Just a bunch of algorithms really, but really smart ones, capable of growing ever smarter over time.
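A live fraud check of this kind can be sketched as a sliding-window rule over an in-memory transaction history. The `FraudChecker` class, its thresholds, and its rules below are purely illustrative assumptions for this article, not any institution's actual screening logic:

```python
from collections import defaultdict, deque
import time


class FraudChecker:
    """Toy in-memory fraud screen: flag a card when it exceeds a
    transaction-count or total-amount threshold within a time window.
    (Illustrative only - real systems use far richer models.)"""

    def __init__(self, window_s=60, max_txns=5, max_total=5000.0):
        self.window_s = window_s        # sliding window in seconds
        self.max_txns = max_txns        # allowed transactions per window
        self.max_total = max_total      # allowed total spend per window
        self.history = defaultdict(deque)  # card_id -> deque of (ts, amount)

    def check(self, card_id, amount, now=None):
        """Record a transaction and return True if it looks suspicious."""
        now = time.time() if now is None else now
        q = self.history[card_id]
        # Drop transactions that have fallen out of the sliding window
        while q and now - q[0][0] > self.window_s:
            q.popleft()
        q.append((now, amount))
        total = sum(a for _, a in q)
        return len(q) > self.max_txns or total > self.max_total
```

Because the whole history lives in memory, each check is a few list operations rather than a database round trip, which is the essence of the in-memory, real-time approach mentioned above.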
Virtual Personal Assistants (VPAs) are already combining software with artificial intelligence (AI) and machine learning to become a major strand of development in how we use, and interface with, ever more powerful ICT to perform whatever tasks we wish. VPAs are virtual smart machines in every sense except the physical. But then there is no particular barrier to linking your VPA to any physical machine, from a house-cleaning robot to a vehicle to a set of home healthcare support devices. Perhaps the VPA is an aesthetically satisfying universal interface to multiple sets of M2M controls and communications for different applications.
There are many things that smart machines can do much better than humans, starting with performing multiple routine tasks without error, at much greater speed and with no fatigue or working-hours limitations. Then there are tasks requiring complex calculations, which very few humans have been able to do 'manually' since the advent of computers. Auto-trading is a much-hyped but entirely valid example. So, alas, are missile and bomb targeting, although advanced meteorology and weather forecasting must be regarded as a benign boon to all mankind.
So too is computer-aided surgery, where delicate precision procedures such as laparoscopy represent state-of-the-art surgeon-machine collaboration. The patient is, of course, anaesthetised on a table at the right height, in a controlled environment. Ironically, that makes a side-of-the-motorway wheel replacement after a blowout a more challenging robotic task, because the potential environment is almost infinitely variable.
It is hard to discuss smart machines without touching on Artificial Intelligence and the Internet of Things (IoT). What we really mean by AI is still something of a techno-philosophical question. Does an AI instance have to pass the Turing test, for example, or is that just a side-track based on the premise that 'intelligence' has to be human-like? In the meantime, the IoT will inevitably be composed of billions of relatively dumb, limited-purpose things such as sensors reporting to a higher tier of controllers or network nodes.
The intelligence and any data collection would start at that controller level, but how much is passed upwards, and where that layer reports to, would presumably depend on the nature of the data collection and the application. Much of the monitored data will not be significant because, for example, it is consistent and static.
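One common way for a sensor tier to avoid flooding the layer above with insignificant, static readings is report-by-exception: forward a value only when it differs meaningfully from the last one reported. A minimal sketch, where the `SensorNode` class and its change threshold are assumptions for illustration:

```python
class SensorNode:
    """Report-by-exception filter: forward a reading to the controller
    tier only when it differs from the last reported value by at least
    a threshold, suppressing consistent, static data."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold   # minimum change worth reporting
        self.last_reported = None    # nothing reported yet

    def filter(self, reading):
        """Return the reading if it should be forwarded, else None."""
        if (self.last_reported is None
                or abs(reading - self.last_reported) >= self.threshold):
            self.last_reported = reading
            return reading
        return None
```

A temperature sensor polled every second would then report only on start-up and on genuine changes, leaving the controller tier to decide what, if anything, to pass further up the stack.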