Researchers suggest that the use of blockchain technology as a communication tool for a team of robots could provide security and safeguard against deception.
As robots and other IoT devices become connected to the cloud, the risk of their being hacked increases. Researchers at MIT and the Polytechnic University of Madrid have suggested that using blockchain technology as a communication tool for a team of robots could provide security and safeguard against deception.
Blockchain technology is best known as the secure ledger behind cryptocurrencies. A blockchain is a list of data structures, known as blocks, that are connected in a chain. Each block contains the information it is meant to store, the “hash” of that information, and the hash of the previous block in the chain. Hashing converts an arbitrary string of text into a fixed-length string of numbers and letters that acts as a unique fingerprint.
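The chaining described above can be sketched in a few lines of Python. This is a minimal illustration, not the researchers' implementation; the `Block` class and the example signals are hypothetical.

```python
import hashlib

def sha256(text):
    # Hash arbitrary text into a fixed-length (64 hex character) digest.
    return hashlib.sha256(text.encode()).hexdigest()

class Block:
    def __init__(self, data, prev_hash):
        self.data = data                       # information the block stores
        self.prev_hash = prev_hash             # hash of the previous block
        self.hash = sha256(data + prev_hash)   # hash of this block's contents

# Build a two-block chain; each block embeds the previous block's hash,
# so altering an earlier block breaks every link that follows it.
genesis = Block("leader A signals: move north", "0" * 64)
second = Block("leader A signals: move east", genesis.hash)
```

Because `second.prev_hash` must equal `genesis.hash`, a robot that tampers with an earlier block changes its hash and leaves a visible inconsistency in the trail.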
The blockchain provides a tamper-evident record of all transactions, which in this case lets robots identify inconsistencies in the information trail. Leaders use tokens to signal movements and to add transactions to the chain, and forfeit their tokens when they are caught in a lie. This transaction-based communications system limits the number of lies a hacked robot could spread, according to Eduardo Castelló, a Marie Curie Fellow in the MIT Media Lab and lead author of the paper.
In this system, each robot leader receives a fixed number of tokens that are used to add transactions to the chain; one token is needed to add each transaction. Followers can determine that the information in a block is false by checking what the majority of leader robots signaled at that particular step, and a leader caught lying loses the tokens it spent. Once a robot is out of tokens, it can no longer send messages.
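The token economics above can be sketched as follows. This is a simplified model under assumptions not stated in the article: each leader stakes one token per signal, followers take the majority signal as ground truth, honest leaders get their token back, and minority (lying) leaders forfeit it. The `Leader` class and `run_step` function are hypothetical names for illustration.

```python
from collections import Counter

class Leader:
    def __init__(self, name, tokens=3):
        self.name = name
        self.tokens = tokens  # fixed budget; one token per transaction

def run_step(leaders, signals):
    """One follow-the-leader step: each leader with tokens remaining stakes
    one token on its signal; followers treat the majority signal as true.
    Leaders in the majority get their token back; the rest forfeit it."""
    staked = {l.name: s for l, s in zip(leaders, signals) if l.tokens > 0}
    for l in leaders:
        if l.name in staked:
            l.tokens -= 1  # one token needed to add the transaction
    majority, _ = Counter(staked.values()).most_common(1)[0]
    for l in leaders:
        if l.name in staked and staked[l.name] == majority:
            l.tokens += 1  # honest signal: token returned
    return majority

# A hacked leader C keeps lying; with 3 tokens it is silenced after 3 steps,
# while honest leaders A and B never lose tokens.
a, b, c = Leader("A"), Leader("B"), Leader("C")
for _ in range(3):
    run_step([a, b, c], ["north", "north", "south"])
```

This captures why the number of lies is bounded: a malicious leader can spread at most as many false signals as it has tokens.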
The researchers tested their system by simulating several follow-the-leader situations where the number of malicious robots was known or unknown. They found that even when follower robots were initially misled by malicious leaders, the transaction-based system enabled all followers to eventually reach their destination.
“Since we know how lies can impact the system, and the maximum harm that a malicious robot can cause in the system, we can calculate the maximum bound of how misled the swarm could be. So, we could say, if you have robots with a certain amount of battery life, it doesn’t really matter who hacks the system, the robots will have enough battery to reach their goal,” Castelló says.
“When you turn these robot systems into public robot infrastructure, you expose them to malicious actors and failures. These techniques are useful to be able to validate, audit, and understand that the system is not going to go rogue. Even if certain members of the system are hacked, it is not going to make the infrastructure collapse,” he says.
The work is described in IEEE Transactions on Robotics.