This was the premise of a recent paper published in IEEE Transactions on Robotics, co-authored by researchers at the Massachusetts Institute of Technology (MIT) and the Polytechnic University of Madrid. Their solution is based on blockchain technology, used as a form of transaction-based communication. Yes, blockchain can be used for more than cryptocurrencies.
Any robot system that’s integrated into public infrastructure, such as a fleet of connected self-driving cars or a swarm of drones for search-and-rescue operations, is exposed to malicious acts that can cause chaos. If a hacked “leader” starts sending misleading data to the other robots, they should be able to detect the “lie” and get back on track even if they’ve been temporarily misdirected. The hacked leader should also be prevented from spreading further erroneous information.
The central idea of this experimental solution is that robots within a system would be able to detect when one of them is “lying,” by identifying any inconsistency in the information trail, or blockchain.
Another advantage of a blockchain is that it provides a permanent record of all transactions, which allows a robot to eventually realize it has been misled by comparing the false instruction with the earlier transactions stored in the chain.
Each block in the chain contains not just the basic information, but also a coded digest of it and of the previous block’s contents, known as the “hash.” If the content of a block is maliciously modified, its hash changes too, breaking the block’s connection with the rest of the chain. When this disconnection occurs, the other robots in the system become aware of it and ignore the information contained in that block.
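The hash-chaining idea above can be sketched in a few lines. This is a minimal illustration, not the paper’s actual implementation; the field names, SHA-256 choice, and helper functions are assumptions made for the example.

```python
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    """Digest the block's data together with the previous block's hash."""
    return hashlib.sha256((data + prev_hash).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    """Create a block that stores its data, its link backward, and its own hash."""
    return {"data": data, "prev_hash": prev_hash,
            "hash": block_hash(data, prev_hash)}

def chain_is_valid(chain: list) -> bool:
    """A follower re-derives every hash; any tampering breaks a link."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["data"], block["prev_hash"]):
            return False
    return True

# A small chain of leader messages.
genesis = make_block("rendezvous at (0, 0)", "0" * 64)
second = make_block("rendezvous at (3, 4)", genesis["hash"])
chain = [genesis, second]
print(chain_is_valid(chain))   # the untouched chain verifies

# A compromised leader rewrites an earlier instruction...
chain[0]["data"] = "rendezvous at (9, 9)"
# ...and the stored hash no longer matches, so followers reject the chain.
print(chain_is_valid(chain))
```

Because each block’s hash folds in the previous block’s hash, changing any past message invalidates every block after it, which is what lets followers spot the inconsistency.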
Plus, in the system devised by the researchers, each leader gets a fixed number of tokens for adding transactions to the chain. When followers detect a misleading block, the leader loses a token, and once it’s out of tokens, it can no longer send messages. This would help prevent a hacked drone or self-driving vehicle from continuing to send erroneous information to the group.
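The token mechanism can be sketched as follows. This is an illustrative sketch only: the class name, the starting token count, and the penalty of one token per detected lie are assumptions, not values from the paper.

```python
class Leader:
    """A leader robot that spends trust tokens to publish transactions."""

    def __init__(self, tokens: int = 3):
        # Assumed starting budget; the paper's actual allocation may differ.
        self.tokens = tokens

    def can_transact(self) -> bool:
        # A leader with no tokens left can no longer add blocks to the chain.
        return self.tokens > 0

    def penalize(self) -> None:
        # Followers that detect a misleading block cost the leader one token.
        if self.tokens > 0:
            self.tokens -= 1

leader = Leader(tokens=2)
print(leader.can_transact())   # still trusted

leader.penalize()   # first misleading block detected
leader.penalize()   # second misleading block detected
print(leader.can_transact())   # out of tokens: the hacked leader is silenced
```

The design choice here is economic rather than cryptographic: detection (via the hash chain) and punishment (via the token budget) are separate steps, so a compromised leader is gradually starved of influence instead of needing to be identified and removed all at once.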
Eduardo Castelló, a Marie Curie Fellow in the MIT Media Lab, and lead author of the paper, hopes that this is the first step towards creating advanced security systems for connected robots. Considering how soon we could be sharing our public space with more and more autonomous vehicles, security solutions are much needed.