Perhaps you’ve driven by a field and seen a flock of tens of thousands of starlings pulsing and whipping through the air, a relatively common sight, especially in Europe. Perhaps you’ve been swimming in the ocean and have seen a “fish ball” doing basically the same thing in the water. Different animals use a type of distributed behavior, often defensive, that cannot be traced to the movements of a “leader.”
This behavior, called scale-free correlation, murmuration, or swarming, may provide an answer to what to do with the geometric increase in data collected at the edge.
How we got here
Data has proliferated largely because sensors have, gathering information from everywhere: assembly lines, smartphones, people, medical devices, and vehicles.
IDC predicts worldwide data will grow from 33 zettabytes in 2018 to 175 ZB in 2025, and a good portion of that will be collected at the edge.
Today, we create artificial intelligence models by sending data collected at the edge to a central point. The data is used to “train” the model, and then the model is pushed back out to all devices at the edge. When you ask your smartphone to find you a restaurant, the phone is using an AI model to “infer” from the microphone data what you’re asking for.
Yet, it’s often impractical to send all of the data collected at the edge to a central server for computing. Latency, heat and energy demands, compliance issues, and mounting transportation and opportunity costs are to blame.
Eventually, the impractical will become the impossible. But researchers, inspired by nature, have an answer.
An alternative, based on observation of swarming phenomena, was developed by Krishnaprasad Shastry, distinguished technologist at Hewlett Packard Enterprise’s Bangalore research lab, and his team. If birds, bees, bats, ants, and fish can move intelligently as a direct response to their environment, why can’t self-driving cars and other computing devices at the edge do the same?
Enter swarm learning, a term coined by Dr. Eng Lim Goh, vice president and chief technology officer for high-performance computing and artificial intelligence at HPE.
Google has a similar distributed learning project called federated learning. However, swarm learning has additional functionality that obviates the need for a central leader: The AI modeling is done completely by the devices at the edge. The practice combines the use of AI, edge computing, and blockchain. Simply put, it’s AI at the edge.
“The general benefit of these frameworks is that of distributed learning—that is, learning a centralized model using data originating from a large number of clients,” says Mehryar Mohri, head of the Learning Theory Team at Google and professor at the Courant Institute of Mathematical Sciences at New York University. “The distinguishing feature of federated learning is that the centralized model is trained while the data remains distributed over clients. Thus, client data is not shipped to a server at any time.”
In other words, in the federated learning model, the data and the learning process are distributed across clients. Those models are then combined by a central coordinator into a single model that is then distributed back out to the individual clients.
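The round-trip described above can be sketched in a few lines. This is an illustrative toy, not any framework's actual API: the function names, the linear model, and the synthetic client data are all assumptions made for the example. Each client trains locally on its own data, and a coordinator averages the resulting models, weighted by how much data each client holds.

```python
import numpy as np

# Hypothetical federated-averaging round. The toy linear model and all
# names here are illustrative assumptions, not a real framework's API.

def local_update(weights, X, y, lr=0.01, epochs=5):
    """Client refines the shared model on its own data via gradient
    descent on squared error; the raw data never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Coordinator averages client models, weighted by data size."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three clients, each holding private data
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(50):  # repeated rounds move the global model toward true_w
    w = federated_round(w, clients)
```

Only the model parameters `w` travel between clients and coordinator; the `(X, y)` pairs stay where they were collected.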
The benefit of this type of learning is that it enables a model to be derived from a limitlessly large data pool without having to move that data across borders, however you might define them. As Goh puts it, “local training, global insight.”
There are challenges, however.
“The networking and communication bottleneck is still one of the key issues in federated learning due to frequent interactions between the central server and the clients,” says Mohri.
Because the AI training in swarm learning is done at the edge, using the compute available on the clients, the back and forth to a central control is removed. Blockchain is used in its place; it tracks the interactions in any swarm and makes the swarm more secure.
Edge, AI, blockchain
The three main elements of swarm learning are edge computing, AI, and blockchain.
A Wired article, “The Sensor-Based Economy,” notes National Science Foundation research that predicted 6.4 billion connected objects in 2017, a 30 percent increase over 2015, and 20.8 billion by 2020, with 1 trillion connected sensors in operation shortly thereafter.
The result of the radical increase in devices is “data gravity”: the more data there is, the greater the tendency to pull compute to the data rather than send the data to the compute. To meet this demand, providers have created smaller edge devices with more memory and compute that run efficiently and independently, and that are physically tough enough to survive outside a climate-controlled data center. The intelligent edge, where people and the edge meet, is one integral element of swarm learning.
Another element of swarm learning is AI. The proliferation of AI is real, all across the spectrum, from reactive/instructional AI to machine learning to deep learning and branching out to outliers like associative memory.
With swarm learning, the compute-intensive training in machine learning can be done in situ by yoking edge devices together.
Finally, blockchain is distributed in the same way that edge computing and swarm-based AI are. Blockchain is simply a distributed ledger, each of whose blocks produces a distinct number, or hash. Any change to the input changes the hash, so the ledger provides a forensic record of contents and changes that is both transparent and difficult to hack, or, as Christian Reichenbach, worldwide digital adviser for HPE Pointnext, puts it, a fingerprint.
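The fingerprint property is easy to demonstrate with a minimal hash chain. This is an assumption-level sketch of the ledger idea, not a production blockchain: each block's hash covers both its own payload and the previous block's hash, so altering any earlier entry changes every hash after it.

```python
import hashlib

def block_hash(prev_hash, payload):
    """Hash the payload together with the previous block's hash,
    chaining each block to everything that came before it."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(entries):
    """Build a list of (payload, hash) blocks from a genesis hash."""
    chain, h = [], "0" * 64  # genesis value
    for payload in entries:
        h = block_hash(h, payload)
        chain.append((payload, h))
    return chain

original = build_chain(["tx: A->B 5", "tx: B->C 2"])
tampered = build_chain(["tx: A->B 9", "tx: B->C 2"])  # one edited entry...
# ...and every hash from that point onward differs, exposing the change.
```

Because every peer can recompute the hashes, manipulation of an earlier entry is immediately visible to the whole swarm.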
“There is no central server or a central instance responsible for taking care of the transactions,” explains Reichenbach. “It’s distributed across a lot of peers, and therefore, by itself, it’s hard to manipulate.”
Swarm learning also produces a blockchain to establish confidence that the results of the shared learning at the edge can be distributed, exchanged, and secured. Each member can add data as long as it’s legitimate.
Swarm learning in practice
In day-to-day life, bias can be damaging to the spirit as well as simply inaccurate. In healthcare, it can kill you. Goh, who is co-inventor of some swarm learning applications, provides an example of two hospitals. One hospital sees many tuberculosis cases but few pneumonia cases. The other sees many pneumonia patients but few with tuberculosis. If each hospital uses AI to understand its X-ray sets better, it will create a bias in the direction of the most common illness. Personal health data is some of the best protected in the world, with legal barriers to the exchange of such information. Swarm learning offers a solution.
Via blockchain, the neural network weights of each AI can be shared, averaged, and distributed, allowing for a more accurate picture of pulmonary disease in the area in which the two hospitals operate. Both hospitals will be able to detect tuberculosis and pneumonia equally well thanks to this secure, private, iterative exchange.
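A stripped-down version of that exchange looks like the following. The weight vectors are hypothetical placeholders for trained model parameters (say, a detector's sensitivity to tuberculosis and pneumonia), and no patient data appears anywhere; only the parameters are averaged, and every peer can compute the same merge locally.

```python
import numpy as np

# Hedged sketch of the two-hospital example. The weights are invented
# placeholders for model parameters; no patient data is exchanged.

hospital_a = np.array([0.9, 0.1])  # e.g., strong on TB, weak on pneumonia
hospital_b = np.array([0.2, 0.8])  # strong on pneumonia, weak on TB

def swarm_merge(peer_weights):
    """Peer-to-peer merge: every node computes the same average of the
    shared weights locally, so no central server holds models or data."""
    return np.mean(peer_weights, axis=0)

merged = swarm_merge([hospital_a, hospital_b])
# Each hospital resumes training from `merged` and repeats the exchange,
# offsetting the bias that its own case mix would otherwise create.
```

In the real system, the blockchain layer would govern who may contribute weights and record each exchange; here the merge itself is the point.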
The process allows a user to sell or exchange information without giving away the underlying data. Monetizing information processed at the edge is one of the promises that technologies like swarm learning offer.
A case in point: One of the world’s largest automotive parts manufacturers will soon launch a data monetization platform that exchanges individual car information with the safety and privacy of blockchain. Although this platform is separate from swarm learning, it provides an additional means to test the validity of blockchain in the data market.
“If we want to increase safety, accelerate the autonomous car, and drive more efficiently to reduce pollution, this information should be exchanged between car manufacturers or even beyond that, to the city, to other drivers, to insurance companies,” says Reichenbach.
The future of swarm learning
Swarm learning is still a research project, but its development today reflects a set of trends that make a new way of thinking vital.
First, data is distributed. The days of banks of humming machinery grinding away under a nuke-proof dome are done. Because sensors and devices are everywhere, data is everywhere. Second, because those devices are whirring away like clockwork under the skin of the world, we are inches away from being subsumed by a data flood. This flood makes understanding a heavier lift even while it sweeps our money off the table. We’ve added compute to our edge devices to attempt to stem the tide. But we need AI to happen out at the edge as well. Finally, we need a safe, private way of exchanging what we’ve learned with each other.
Edge computing, AI, and blockchain create a process by which we move from a data flood to a kind of hydroelectric power, in which the vast amounts of data are channeled and put to work, adding to, rather than subtracting from, our lives and what we can do with them.
Swarm intelligence: Lessons for leaders
- Every technological solution to a problem produces new problems of its own. Swarm learning obviates one of the main issues of AI: potential loss of privacy. Instead of moving data, it moves weights.
- The more sensors that fill our world, the more data increases. The more data increases, the more we need AI to deal with it.
- AI on the edge with blockchain will enable companies to share insights without giving away proprietary tech or competitive data.
Related links:
“Agnostic Federated Learning,” Mehryar Mohri, Gary Sivek, and Ananda Theertha Suresh
Explained in 60 seconds: The basics of distributed ledgers and security by Dr. Goh
Improving machine learning models with swarm learning with Krishnaprasad Shastry
Blockchain, AI combine to make an Internet of smarter things
Evaluating the role of IoT and blockchain in transportation
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.