Decentralized Computation Power to train Machine Learning Model – Part 4

Artificial Intelligence, Blockchain, Decentralized Model, Machine Learning, Raven Protocol

In the last blog, we discussed how Federated Learning is used to train machine learning models in a decentralized manner. We will now talk about the computational-power challenges in training machine learning models and how we can overcome them using decentralization and blockchain.

Centralized machine learning depends heavily on the hardware, mainly the GPU, on which the model is trained. Limited hardware can result in poor model performance. We can use blockchain and decentralization to ease this hardware constraint and improve the model's performance.

The main idea is to create an incentive-based system that uses the computational power provided by other participants. While many such projects are working on the concept of decentralized computing power, we will only discuss the two most prominent ones: DeepBrain Chain and Raven Protocol.

DeepBrain Chain

DeepBrain Chain is the first blockchain-based platform to provide computational power for AI. Launched in May 2017, it has over 100 organizations using the platform, including Microsoft, Samsung, Siemens, and Lenovo, with more than 100 AI applications deployed and more than 20,000 AI practitioners on the platform.

Its vision is to build an infinitely scalable, distributed, high-performance computing power network based on blockchain technology and become the most crucial computing power infrastructure in the AI+ metaverse era. DeepBrain Chain consists of three essential parts: high-performance computing network, blockchain main network, and GPU computing main network.

System Framework

  • Customer-end Node – Provides HTTP interfaces for querying node machine configuration information, creating GPU virtual machines, querying virtual machine configuration information, and so on. Users operate the virtual machines on a node through this client end (a hedged sketch of such a call follows this list).
  • Seed Node – Pre-defined network nodes to provide alternate access portals for computing power network nodes.
  • GPU Function Nodes – Responsible for managing GPU VMs/containers (creation, configuration queries, etc.) and for interacting with authentication servers (user identity verification, lease-term checks, etc.).
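
To make the customer-end node's role concrete, here is a minimal sketch of what calling its HTTP interfaces might look like. The host, endpoint paths, and JSON fields are assumptions made for illustration only; they are not DeepBrain Chain's documented API.

```python
# Illustrative only: the host, endpoint paths, and JSON fields below are
# hypothetical stand-ins, not DeepBrain Chain's documented API.
import requests

NODE_URL = "http://customer-end-node.example:8080"  # hypothetical node address

def query_node_config() -> dict:
    """Query a node machine's configuration information (hypothetical endpoint)."""
    resp = requests.get(f"{NODE_URL}/api/node/config", timeout=10)
    resp.raise_for_status()
    return resp.json()

def create_gpu_vm(gpu_count: int, lease_hours: int) -> dict:
    """Ask the node to create a GPU virtual machine (hypothetical endpoint)."""
    payload = {"gpu_count": gpu_count, "lease_hours": lease_hours}
    resp = requests.post(f"{NODE_URL}/api/vm/create", json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # e.g. VM id and connection details

if __name__ == "__main__":
    print(query_node_config())
    print(create_gpu_vm(gpu_count=1, lease_hours=24))
```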

Advantages

  • Flexible and accessible scalability
  • A massive number of machine models
  • Ultra-low cost
  • Flexible and easy to use

Raven Protocol

Another exciting project is Raven Protocol, which provides distributed computational power for ML/DL model training. Its solution to the compute challenge is to utilize idle computing power on desktops, laptops, and mobile devices, allowing anyone worldwide to contribute to the Raven Network.

Raven Protocol has its own decentralized, community-built ML and DL framework called the Ravenverse.

The Ravenverse consists of three main actors (a minimal sketch of these roles follows the list):

  • Requesters are the ones who request some computing power for their AI applications.
  • Providers are the ones who wish to provide their idle computing resources to support the requester’s AI apps.
  • Facilitators are platforms, websites, and applications that use tools such as ravop, ravjs, and ravpy to give requesters and providers no-code ways to participate in the Ravenverse network.
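
To see how these three roles fit together, here is a minimal, purely illustrative Python sketch. The class names, the matching logic, and the token accounting are assumptions made for illustration; they are not part of ravop, ravpy, or ravjs.

```python
# A toy model of the three Ravenverse roles. The class and method names here
# are illustrative assumptions, not part of ravop/ravpy/ravjs.
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    idle_flops: float          # spare compute the provider is willing to share
    earned_tokens: float = 0.0

@dataclass
class Requester:
    name: str
    required_flops: float      # compute needed for the AI workload

@dataclass
class Facilitator:
    """Matches requesters with providers and settles rewards."""
    providers: list = field(default_factory=list)

    def register(self, provider: Provider):
        self.providers.append(provider)

    def fulfil(self, request: Requester, token_rate: float = 1e-12):
        remaining = request.required_flops
        for p in sorted(self.providers, key=lambda p: -p.idle_flops):
            if remaining <= 0:
                break
            used = min(p.idle_flops, remaining)
            p.earned_tokens += used * token_rate   # reward for contributed compute
            remaining -= used
        return remaining <= 0   # True if the request could be fully served

facilitator = Facilitator()
facilitator.register(Provider("laptop-1", idle_flops=2e12))
facilitator.register(Provider("phone-7", idle_flops=5e11))
print(facilitator.fulfil(Requester("cnn-training", required_flops=1.5e12)))
```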

Its libraries are RavML (machine-learning based) and RavDL (deep-learning based), both built on top of RavOp, the core operations library for distributed computing.

Raven Protocol connects diverse computational resources (nodes) such as laptops, PCs, and smartphones, sending each node chunks of data along with tasks to perform on that data. These nodes coordinate their operations to quickly and effectively meet large-scale, complex computational requirements. Some of the machine learning algorithms supported by RavML are K-Nearest Neighbours, Naive Bayes Classifier, K-Means, Linear Regression, Logistic Regression, Multi-Layer Perceptron, and Decision Trees.
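
To illustrate the chunk-and-aggregate idea behind this kind of distributed training, here is a minimal sketch using linear regression, one of the algorithms listed above. It shows the pattern only (each node computes partial statistics on its chunk of data, and the requester combines them); it does not use RavML's actual API.

```python
# Minimal sketch of the chunk-and-aggregate idea behind distributed training,
# shown for ordinary least-squares linear regression. This illustrates the
# pattern only; it does not use RavML's actual API.
import numpy as np

def node_task(X_chunk, y_chunk):
    """Work done by a single provider node on its chunk of data:
    return the sufficient statistics X^T X and X^T y."""
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk

def requester_aggregate(partials):
    """Work done on the requester side: combine the partial statistics
    from all nodes and solve for the regression weights."""
    XtX = sum(p[0] for p in partials)
    Xty = sum(p[1] for p in partials)
    return np.linalg.solve(XtX, Xty)

# Simulate a dataset and split it into chunks, one per participating node.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(scale=0.1, size=10_000)

chunks = zip(np.array_split(X, 4), np.array_split(y, 4))  # 4 "nodes"
weights = requester_aggregate([node_task(Xc, yc) for Xc, yc in chunks])
print(weights)  # close to the true coefficients
```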

RavDL provides computational power for more complex tasks such as deep learning. It divides the model graph into optimized subgraphs, which are then assigned to participant nodes for computation in a secure manner. Once all subgraphs are computed, the saved model is returned to the requester. The requester gets the job done without using much of its own computational power, and the providers get rewarded for the computational power they contribute.
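
The subgraph idea can be sketched as follows. This toy example splits a sequential "model graph" into contiguous subgraphs and hands each to a provider node in turn; RavDL's real partitioning, scheduling, and security mechanisms are more involved, and every name in the snippet is an assumption made for illustration.

```python
# A toy illustration of splitting a sequential model "graph" into subgraphs and
# letting different provider nodes execute them in order. RavDL's actual
# partitioning, scheduling, and security mechanisms are more involved; every
# name below is an assumption made for illustration only.
import numpy as np

def dense(in_dim, out_dim):
    """One layer: a fixed random weight matrix followed by a ReLU, as a closure."""
    W = np.random.default_rng(in_dim).normal(size=(in_dim, out_dim)) * 0.1
    return lambda x: np.maximum(x @ W, 0.0)

# A "model graph" represented as an ordered list of layers.
model = [dense(8, 16), dense(16, 16), dense(16, 4)]

def split_into_subgraphs(layers, n_providers):
    """Partition the layer list into contiguous subgraphs, one per provider."""
    step = -(-len(layers) // n_providers)          # ceiling division
    return [layers[i:i + step] for i in range(0, len(layers), step)]

def provider_compute(subgraph, activations):
    """Executed on a provider node: run its assigned subgraph of the model."""
    for layer in subgraph:
        activations = layer(activations)
    return activations

subgraphs = split_into_subgraphs(model, n_providers=3)
x = np.random.default_rng(0).normal(size=(2, 8))
for subgraph in subgraphs:         # intermediate activations flow from node to node
    x = provider_compute(subgraph, x)
print(x.shape)                     # (2, 4): the final output returned to the requester
```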

Conclusion

Artificial Intelligence, Machine Learning, Mobile Edge Computing, and IoT are creating waves in the technology world.

Traditional ML approaches require training models on a centralized server. With the rise of new decentralized machine learning approaches and mobile edge computing, training on distributed data has become far more feasible. To realize AI at the network's edge, IoT devices can offload training tasks to mobile edge computing servers. However, these distributed frameworks introduce new challenges such as user privacy and data security.

Blockchain is a promising solution to these challenges. It is decentralized, respects privacy, and is designed to scale across many participants. It is also known for automated script execution (smart contracts) and immutable data records.

In this series of blogs, we discussed the available open-source projects that help train machine learning models and how we can overcome existing challenges using decentralization and blockchain.