I, err, robot

(Image: PCWorld)

3 March 2017

There are scary prospects in technology, and then there is a new report from IOActive.

We all know the story about technology outpacing human 1.0, particularly when it comes to something like moral implications. Well, this new report goes even further.

As we reported earlier in the week, it has emerged that robots are as bad as Internet of Things devices for security vulnerabilities.

“The researchers found that most robots used insecure communications, had authentication issues, were missing authorisation schemes, used weak cryptography, exposed private information, came with weak configurations by default and were built using vulnerable open source frameworks and libraries,” says the report.
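The report's catalogue of failings will look familiar to anyone who has reviewed rushed code. As a purely hypothetical sketch, not taken from the report itself, the "weak cryptography" finding often amounts to something like the following: storing an unsalted fast hash of a device's maintenance password rather than a salted, slow one.

```python
import hashlib
import hmac
import os

# Hypothetical illustration of "weak cryptography" versus a sounder
# alternative. None of these function names come from the report.

def weak_store(password: str) -> str:
    # Unsalted MD5: identical passwords produce identical digests,
    # and precomputed tables make recovery cheap.
    return hashlib.md5(password.encode()).hexdigest()

def better_store(password: str) -> tuple[bytes, bytes]:
    # Per-password random salt plus a deliberately slow KDF (PBKDF2).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def better_verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

# Two robots shipped with the same default password are trivially
# linkable under the weak scheme, but not under the salted one.
print(weak_store("admin") == weak_store("admin"))  # True
s1, d1 = better_store("admin")
s2, d2 = better_store("admin")
print(d1 == d2)                                    # False: different salts
print(better_verify("admin", s1, d1))              # True
```

The point of the sketch is that the sound version is barely longer than the weak one; these are shortcuts of haste, not of necessity.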

Multiple vendors
This is a very scary prospect indeed, because we are not just talking about robot vacuum cleaners or virtual hotel assistants; we are talking about robots used in “business and industrial” applications, “from multiple vendors”.

The report found that while not all robots had all the issues identified, most had several.

It would seem that in the rush to develop robots, manufacturers are taking shortcuts and failing to properly vet code, allowing these vulnerabilities and insecurities to be incorporated.

With robots now expected not merely to perform hazardous work, but to drive on our roads, operate on us and look after vital medical functions, one must ask whether due care and consideration is being given to safety and security.

Advising caution
While I am not an advocate of the Helen Lovejoy school of caution (Won’t somebody please think of the children!), I would argue that any software allowing a machine to interact with any person, service or support should be subject to deeper and greater rigours than any other, for that very reason — people’s safety will depend on it.

It all seems very far from Isaac Asimov’s vision of laws for robotics, but one can easily foresee a situation where a flawed routine, though effective in the vast majority of situations, gets replicated and reused, always carrying the potential for disaster with it. Analogues of this situation have been seen a number of times in other fields.

There was an issue with a flight control mechanism in the world’s most popular airliner that went undiscovered for twenty years and many millions of hours of service, and yet the flaw that contributed to several losses was replicated again and again through many variants before it was eventually isolated and removed.

Greater potential
The potential for such an occurrence with software in robotics is even greater, given the level of re-use and modularity that is currently employed in the field.

But then of course, we could employ robots to review the code for creating robots — but that might create some kind of singularity loop that could destroy the world anyway.

Is there a robot that makes tinfoil hats?
