Robots as bad as IoT devices for security vulnerabilities
An analysis of robots used in homes, businesses and industrial installations has revealed many of the same basic weaknesses that are common in IoT devices, raising questions about security implications for human safety.
The robotics industry has seen significant growth in recent years, and that growth is only expected to accelerate. Robots are expected to serve in many roles, from assisting people in homes, stores and medical facilities, to manufacturing things in factories and even handling security and law enforcement tasks.
“When you think of robots as computers with arms, legs, or wheels, they become kinetic IoT devices that, if hacked, can pose new serious threats we have never encountered before,” researchers from cybersecurity consultancy firm IOActive said in a new report.
“As human-robot interactions improve and evolve, new attack vectors emerge and threat scenarios expand,” the researchers said. “Mechanical extremities, peripheral devices, and human trust expand the area where cybersecurity issues could be exploited to cause harm, destroy property, or even kill.”
The research, performed by IOActive CTO Cesar Cerrudo and Senior Security Consultant Lucas Apa, involved analysing the mobile applications, operating systems, firmware images and other software used in home, business and industrial robots from multiple vendors.
The robots for which software components were tested included: the NAO and Pepper robots from SoftBank Robotics, the Alpha 1S and Alpha 2 robots from UBTECH Robotics, the ROBOTIS OP2 and THORMANG3 robots from ROBOTIS, the UR3, UR5 and UR10 robots from Universal Robots, the Baxter and Sawyer robots from Rethink Robotics and several robots using the V-Sido robot control technology from a company called Asratec.
The researchers found that most robots used insecure communications, had authentication issues, were missing authorisation schemes, used weak cryptography, exposed private information, came with weak configurations by default and were built using vulnerable open source frameworks and libraries.
While not all of the robots had all of these problems, each robot had several of them, the researchers said in their report. This led them to conclude that other robots that were not included in the assessment likely have many of the same issues.
Some robots can be controlled from mobile apps or can be programmed with software installed on computers. Other robots communicate with cloud-based services to receive software updates and applications.
If the communication channels between these various components are not secure and encrypted, attackers can potentially launch man-in-the-middle attacks and inject malicious commands or software updates to be executed on the robots.
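The kind of integrity check whose absence enables such attacks can be sketched with Python's standard library. This is an illustrative assumption, not a detail from the IOActive report: the key name, payload and MAC scheme are hypothetical, standing in for whatever authenticated channel or signed-update mechanism a vendor would actually deploy.

```python
import hashlib
import hmac

# Hypothetical shared secret provisioned at manufacture time (illustrative only).
UPDATE_KEY = b"example-robot-update-key"

def verify_update(payload: bytes, received_mac: str) -> bool:
    """Reject a firmware payload unless its MAC matches, so a
    man-in-the-middle cannot silently swap in a malicious image."""
    expected = hmac.new(UPDATE_KEY, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(expected, received_mac)

firmware = b"new firmware image bytes"
good_mac = hmac.new(UPDATE_KEY, firmware, hashlib.sha256).hexdigest()

print(verify_update(firmware, good_mac))                 # True: legitimate update
print(verify_update(firmware + b" tampered", good_mac))  # False: injected change rejected
```

In practice vendors would use public-key signatures rather than a shared key, so the verification key on the robot cannot be used to forge updates, but the principle is the same: unverified payloads never execute.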
Furthermore, many of the tested robot firmware and operating systems had remotely accessible services that provided access to different functions. Accessing some of these services did not require any authentication. Some services required authentication, but used weak mechanisms that could be easily bypassed.
“This is one of the most critical problems we found, allowing anyone to remotely and easily hack the robots,” the researchers said in their report.
Some robots did not encrypt stored passwords, cryptographic keys, credentials for third party services and other sensitive data. Others attempted to protect data with encryption, but with encryption schemes that were improperly implemented.
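A minimal sketch of what correct credential storage looks like, using only Python's standard library. The password and parameters here are illustrative assumptions; the point is that the device stores a salted, slow-to-brute-force verifier rather than the plaintext secret itself.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted verifier from a password; only the salt and
    digest are stored, never the plaintext password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the verifier and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("robot-admin-pass")   # hypothetical credential
print(check_password("robot-admin-pass", salt, digest))  # True
print(check_password("wrong-guess", salt, digest))        # False
```

The random per-password salt and the high iteration count are what make a stolen credential store expensive to crack, which is exactly the protection plaintext or home-grown encryption schemes fail to provide.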
Many of the accompanying mobile apps were found to send sensitive information like network, device and GPS details to remote services without user consent. Some robots’ default configurations included insecure features that could not be easily disabled, or default passwords that could not be changed.
Some of the authentication, authorisation and insecure communication issues were the result of vulnerabilities in open-source software frameworks, libraries and operating systems shared by robot developers. One such case is the Robot Operating System (ROS), a popular open-source framework, despite its name not actually an operating system, that is used in several robots from multiple vendors, the IOActive researchers said.
The researchers believe another problem is that in many cases robots make the jump from prototype to commercial product too fast, without cybersecurity audits or additional protections being built in.
Many of the implications of a hacked robot are similar to those of a hacked IoT device or computer: spying through microphones or cameras, providing a foothold inside internal networks to launch other attacks, exposure of personal data and stored credentials for third-party services. However, due to their kinetic abilities, robots pose other dangers as well.
Inside homes, hacked robots could be used to damage objects and hurt people through sudden movements. They could potentially start fires, unlock doors, deactivate home alarms and more. The same problems could arise from hacked robots in a business environment.
Industrial robots are even more dangerous because they’re larger and more powerful than other types of robots. They could easily kill a person and there have been accidents where people have died because industrial robots malfunctioned.
Well-known practices
“Many of the cybersecurity issues our research revealed could have been prevented by implementing well-known cybersecurity practices,” the IOActive researchers said. “We found it possible to hack these robots in multiple ways, made a considerable effort to understand the threats, and took care in prioritising the most critical of them for mitigation by the affected vendors. This knowledge enabled us to confirm our initial belief: it is time for all robot vendors to take immediate action in securing their technologies.”
This research suggests that until now robot vendors have prioritised getting products to market over security. The same pattern has played out in other industries, such as the Internet of Things, which has become a big security mess.
If cybersecurity is not taken into consideration at the beginning of a product’s lifecycle, fixing vulnerabilities after it’s already released is more complex and expensive, the IOActive researchers said. “Fortunately, since robot adoption is not yet mainstream, there is still time to improve the technology and make it more secure.”
IDG News Service