A team of scientists from Belgium may have solved one of the biggest problems in AI: how to let autonomous agents learn together without handing their data to a central authority. Their answer is a blockchain-based decentralized learning method. Although the research is still in its earliest stages, its potential implications could range from revolutionizing space exploration to posing an existential threat to humanity.
In a simulated environment, the researchers developed a way to coordinate learning between individual autonomous AI agents. The team used blockchain technology to simplify and secure agent communications, thus creating a decentralized “swarm” of learning models.
The individual learning results of each agent in the swarm were then used to develop a larger AI model. Because only the agents’ learning updates, not their raw data, passed through the blockchain, this larger system benefited from the collective intelligence of the swarm without ever having access to the individual agents’ data.
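The mechanism described here resembles the well-established “federated averaging” recipe: each agent trains on its own private data and publishes only the resulting model parameters, which are then combined into a shared model. The sketch below illustrates that general idea in Python; it is not the researchers’ code, and the function names and the toy least-squares task are assumptions made purely for illustration.

```python
# Minimal sketch of federated averaging: agents share only learned
# parameters, never raw data. Illustrative only, not the paper's system.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One local gradient step on an agent's private least-squares data."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def aggregate(agent_weights):
    """Combine the swarm's individual results into one shared model."""
    return np.mean(agent_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Each agent holds private data that never leaves its node.
private_data = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    private_data.append((X, y))

shared = np.zeros(2)
for _ in range(100):
    # Agents train locally, then publish only their updated weights.
    updates = [local_update(shared, d) for d in private_data]
    shared = aggregate(updates)

print(shared)  # close to true_w, reached without pooling any raw data
```

The key property is visible in the loop: the only values that ever cross agent boundaries are weight vectors, never the underlying data.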
AI Swarms
Machine learning, the discipline underpinning modern artificial intelligence, comes in many forms. A typical chatbot, such as OpenAI’s ChatGPT or Anthropic’s Claude, is developed using several techniques: it is pre-trained using a paradigm called “unsupervised learning” and then refined using another known as “reinforcement learning from human feedback.”
One of the main problems with this approach is that it typically requires storing training data in a centralized database. This makes it impractical for applications that require continuous, autonomous learning or where privacy is important.
The researchers built their blockchain experiment around a learning paradigm called “decentralized federated learning,” in which each agent trains a model on its own local data and shares only what it has learned. They found that they could successfully coordinate the models while keeping all of the training data decentralized.
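What makes the setup “decentralized” rather than classic federated learning is the absence of a central aggregation server. One common way to achieve this, sketched below under assumed details (a ring topology and plain gossip averaging, neither confirmed by the source), is to have each agent average parameters only with its direct peers; repeated rounds drive every agent toward a swarm-wide consensus.

```python
# Sketch of decentralized coordination without a server: gossip averaging.
# Topology and update rule are illustrative assumptions.
import numpy as np

def gossip_round(weights, neighbors):
    """Each agent replaces its weights with the mean of its own and its peers'."""
    return [
        np.mean([weights[i]] + [weights[j] for j in neighbors[i]], axis=0)
        for i in range(len(weights))
    ]

# A ring of five agents: each communicates only with its two neighbors.
n = 5
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
weights = [np.array([float(i), float(-i)]) for i in range(n)]

for _ in range(50):
    weights = gossip_round(weights, neighbors)

print(weights[0])  # every agent converges to the swarm-wide average
```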
Swarm Security
Much of the team’s research focused on the swarm’s resilience to various attack methods. Because a blockchain is a tamper-evident public ledger and the learning network in the experiment had no central point of failure, the team was able to demonstrate resilience to traditional hacking attacks.
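The tamper resistance comes from the ledger’s structure: every entry commits to a cryptographic hash of the entry before it, so quietly rewriting any past learning update invalidates every hash that follows. The toy hash chain below, illustrative only and unrelated to the team’s specific blockchain, shows the effect.

```python
# Toy hash chain: each block commits to the previous block's hash,
# so any tampering with past entries is detectable. Illustrative only.
import hashlib, json

def add_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

def verify(chain):
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        expected = hashlib.sha256(
            json.dumps({"payload": block["payload"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev_hash or block["hash"] != expected:
            return False
    return True

chain = []
add_block(chain, {"agent": 1, "update": [0.2, -0.1]})
add_block(chain, {"agent": 2, "update": [0.3, 0.0]})
print(verify(chain))                         # True
chain[0]["payload"]["update"] = [9.9, 9.9]   # a "hacked" learning update
print(verify(chain))                         # False: tampering is detectable
```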
However, they found that there is a threshold to the number of bad actors a swarm can tolerate. The researchers created scenarios featuring robots intentionally designed to harm the network: agents with malicious intent, agents operating on outdated information, and agents coded with simple instructions to disrupt the network.
While the simple and outdated agents were relatively easy to defend against, the team found that intelligent agents with malicious intent could eventually corrupt the swarm’s collective intelligence if enough of them infiltrated the network.
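This kind of threshold is familiar from Byzantine-robust aggregation. A robust combination rule, such as the coordinate-wise median used here purely for illustration (the paper’s actual defense and exact threshold may differ), shrugs off a minority of poisoned updates but breaks down once attackers form a majority.

```python
# Sketch of a "bad robot threshold": a robust aggregation rule survives
# a minority of poisoned updates but fails against a majority.
# Illustrative assumption; not the researchers' actual defense.
import numpy as np

def robust_aggregate(updates):
    """Coordinate-wise median: one classic Byzantine-robust combination rule."""
    return np.median(updates, axis=0)

rng = np.random.default_rng(0)
honest = [np.array([1.0, 1.0]) + rng.normal(scale=0.05, size=2) for _ in range(7)]
poison = np.array([100.0, -100.0])  # update from a malicious agent

for n_bad in (3, 4):  # a minority, then a majority, of a 7-agent swarm
    swarm = honest[: 7 - n_bad] + [poison] * n_bad
    print(n_bad, "malicious agents ->", robust_aggregate(swarm))
```

With three of seven agents poisoned, the median still lands near the honest value; at four of seven, the attackers control the result, the same qualitative failure mode the researchers describe.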
This research remains experimental and has been conducted only in simulation. But the time may soon come when swarms of robots can be coordinated in a decentralized manner, which could allow teams of AI agents from different companies or countries to work together to train a larger agent without sacrificing data privacy.