
AI Should Be Decentralized, But How?

The intersection of Web3 and artificial intelligence (AI), specifically in the form of generative AI, has become one of the hottest topics of debate within the crypto community. After all, generative AI is revolutionizing all areas of traditional software stacks, and Web3 is no exception. Given that decentralization is the core value proposition of Web3, many of the emerging Web3-generative-AI projects promise some form of decentralized generative AI value proposition.

Jesus Rodriguez is the CEO of IntoTheBlock.

In Web3, we have a long history of looking at every domain through a decentralization lens, but the reality is that not all domains benefit from decentralization, and for every domain there is a spectrum of decentralization scenarios. Breaking that idea down from a first-principles standpoint leads us to three key questions:

Does generative AI deserve to be decentralized?

Why hasn’t decentralized AI worked at scale before, and what’s different with generative AI?

What are the different dimensions of decentralization in generative AI?

These questions are far from trivial, and each one can spark passionate debates. However, I believe that thinking through these questions is essential to develop a comprehensive thesis about the opportunities and challenges at the intersection of Web3 and generative AI.

Does AI Deserve to Be Decentralized?

The philosophical case for decentralizing AI is simple. AI is digital knowledge, and knowledge might be the number one construct of the digital world that deserves to be decentralized. Throughout the history of Web3, we have made many attempts to decentralize things that work extremely well in a centralized architecture and where decentralization didn't provide obvious benefits. Knowledge is not one of those cases: it is a natural candidate for decentralization from both a technical and an economic standpoint.

The level of control being accumulated by the big AI providers is creating a massive gap with the rest of the field, to the point that it is becoming scary. AI does not evolve linearly or even exponentially; it follows a multi-exponential curve.


GPT-4 represents a massive improvement over GPT-3.5 across many dimensions, and that trajectory is likely to continue. At some point, it becomes infeasible to compete with centralized AI providers. A well-designed decentralized network could enable an ecosystem in which different parties collaborate to improve the quality of models, democratizing access to knowledge and sharing its benefits.

Transparency is the second factor to consider when evaluating the merits of decentralization in AI. Foundation model architectures involve millions of interconnected neurons across many layers, making them impractical to understand using traditional monitoring practices. Nobody really understands what happens inside GPT-4, and OpenAI has no incentive to be more transparent in that area. Decentralized AI networks could enable open testing benchmarks and guardrails that provide visibility into the functioning of foundation models without requiring trust in a specific provider.

Why Hasn’t Decentralized AI Worked Until Now?

If the case for decentralized AI is so clear, then why haven’t we seen any successful attempts in this area? After all, decentralized AI is not a new idea, and many of its principles date back to the early 1990s. Without getting into technicalities, the main reason for the lack of success of decentralized AI approaches is that the value proposition was questionable at best.

Before large foundation models came onto the scene, the dominant architectural paradigm was supervised learning in its different forms, which required highly curated, labeled datasets that resided mostly within corporate boundaries. Additionally, the models were small enough to be interpretable using mainstream tools. Finally, the case for control was also very weak, as no models were strong enough to cause any real concern.

In a somewhat paradoxical twist, the rise of large-scale generative AI and foundation models built in a centralized manner has, for the first time in history, made the case for decentralized AI viable.

Now that we understand that AI deserves to be decentralized and that this time is somewhat different from previous attempts, we can start thinking about which specific elements require decentralization.

The Dimensions of Decentralization in AI

When it comes to generative AI, there is no single approach to decentralization. Instead, decentralization should be considered in the context of the different phases of the lifecycle of foundation models. Here are three main stages in the operational lifespan of foundation models that are relevant to decentralization:

Pre-training is the stage in which a model is trained on large volumes of unlabeled and labeled datasets.

Fine-tuning, which is typically optional, is the phase in which a model is “retrained” on domain-specific datasets to optimize its performance on different tasks.

Inference is the stage in which a model outputs predictions based on specific inputs.
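To make these stages concrete, here is a minimal, purely hypothetical sketch in Python. The class and function names (FoundationModel, train, generate) are illustrative placeholders rather than any real framework's API; the point is only to show where pre-training, fine-tuning and inference sit relative to each other.

    # A minimal, hypothetical sketch of the foundation-model lifecycle.
    # All names (FoundationModel, train, generate) are illustrative
    # placeholders, not a real framework API.

    class FoundationModel:
        def __init__(self):
            self.weights = {}

        def train(self, dataset, epochs=1):
            # Pre-training / fine-tuning: update internal state from data.
            for _ in range(epochs):
                for example in dataset:
                    self.weights[len(self.weights)] = example  # stand-in for gradient updates

        def generate(self, prompt):
            # Inference: produce an output conditioned on a specific input.
            return f"completion for: {prompt}"

    # 1. Pre-training on a large, mostly unlabeled corpus.
    model = FoundationModel()
    model.train(dataset=["web text", "source code", "books"], epochs=1)

    # 2. Optional fine-tuning on a smaller, domain-specific dataset.
    model.train(dataset=["DeFi protocol docs", "audited smart contracts"], epochs=3)

    # 3. Inference: the model answers specific prompts.
    print(model.generate("Summarize this lending protocol"))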

Throughout these three phases, there are different dimensions that are good candidates for decentralization.

The Compute Decentralization Dimension

Decentralized computing can be incredibly relevant during pre-training and fine-tuning and may be less relevant during inference. Foundation models notoriously require large cycles of GPU compute, which are typically executed in centralized data centers. A decentralized GPU compute network, in which different parties can supply compute for the pre-training and fine-tuning of models, could help remove the control that large cloud providers have over the creation of foundation models.
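As a rough sketch of the idea, the following hypothetical Python snippet splits a training workload into shards and assigns them to independent GPU providers. The provider names, prices and scheduling rule are all assumptions made for illustration; a real network would also need verification of the work and on-chain settlement.

    # Hypothetical sketch: splitting a pre-training workload across
    # independent GPU providers in a decentralized compute network.
    # Provider names, prices and the scheduling rule are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class ComputeProvider:
        name: str
        gpus: int
        price_per_gpu_hour: float

    def schedule(shards, providers):
        # Assign training shards round-robin, starting with the cheapest provider.
        ranked = sorted(providers, key=lambda p: p.price_per_gpu_hour)
        return [(shard, ranked[i % len(ranked)].name) for i, shard in enumerate(shards)]

    providers = [
        ComputeProvider("node-a", gpus=8, price_per_gpu_hour=1.20),
        ComputeProvider("node-b", gpus=4, price_per_gpu_hour=0.90),
    ]
    shards = [f"data-shard-{i}" for i in range(4)]

    for shard, node in schedule(shards, providers):
        print(f"{shard} -> {node}")  # an on-chain record could attest to each assignment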

The Data Decentralization Dimension

Data decentralization could play an incredibly important role during the pre-training and fine-tuning phases. Currently, there is very little transparency around the concrete composition of the datasets used to pre-train and fine-tune foundation models. A decentralized data network could incentivize different parties to supply datasets with appropriate disclosures and track their usage in the pre-training and fine-tuning of foundation models.
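A minimal sketch of what such a registry could look like is shown below, assuming a simple content-hash identifier and an append-only usage log. All structures and field names are hypothetical; in practice the disclosures and usage records would likely live on-chain or in verifiable storage.

    # Hypothetical sketch: a registry where data providers publish datasets with
    # disclosures, and every use in pre-training or fine-tuning is logged.
    # The identifiers, fields and hashing scheme are illustrative only.

    import hashlib
    import json
    import time

    registry = {}   # dataset_id -> disclosure metadata
    usage_log = []  # append-only record of which model used which dataset

    def register_dataset(name, provider, license_terms, contents):
        dataset_id = hashlib.sha256(contents.encode()).hexdigest()[:16]
        registry[dataset_id] = {"name": name, "provider": provider, "license": license_terms}
        return dataset_id

    def record_usage(dataset_id, model_name, phase):
        # phase is "pre-training" or "fine-tuning"
        usage_log.append({
            "dataset": dataset_id,
            "model": model_name,
            "phase": phase,
            "timestamp": time.time(),
        })

    ds = register_dataset("defi-docs-v1", "dao-xyz", "CC-BY-4.0", "sample corpus text")
    record_usage(ds, "open-model-7b", "fine-tuning")
    print(json.dumps(usage_log, indent=2))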

The Optimization Decentralization Dimension

Many phases in the lifecycle of foundation models require validation, often in the form of human intervention. Notably, techniques such as reinforcement learning from human feedback (RLHF) enabled the transition from GPT-3 to ChatGPT by having humans validate the outputs of the model to better align it with human interests. This level of validation is particularly relevant during the fine-tuning phase, and currently there is very little transparency around it. A decentralized network of human and AI validators that perform specific validation tasks, with immediately traceable results, could be a significant improvement in this area.
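The snippet below sketches, under heavy simplification, how a network of validators could record preferences over pairs of model outputs, the raw material of RLHF, while keeping each vote auditable. Validator identities and the majority-vote rule are assumptions for illustration only.

    # Hypothetical sketch: a network of validators ranks pairs of model outputs,
    # the raw material of RLHF preference collection, and the aggregated verdict
    # is kept alongside every individual vote so the result stays auditable.

    from collections import Counter

    def collect_preferences(prompt, output_a, output_b, validators):
        # validators: list of (validator_id, prefers_a) pairs; the rule is a simple majority.
        votes = {vid: ("A" if prefers_a else "B") for vid, prefers_a in validators}
        winner = Counter(votes.values()).most_common(1)[0][0]
        return {
            "prompt": prompt,
            "preferred": output_a if winner == "A" else output_b,
            "votes": votes,  # individual votes remain traceable
        }

    record = collect_preferences(
        prompt="Explain impermanent loss",
        output_a="A clear, accurate explanation.",
        output_b="A vague, partially wrong answer.",
        validators=[("val-1", True), ("val-2", True), ("val-3", False)],
    )
    print(record["preferred"], record["votes"])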


The Evaluation Decentralization Dimension

If I were to ask you to select the best language model for a specific task, you would have to guess the answer. AI benchmarks are fundamentally broken: there is very little transparency around them, and they require quite a bit of trust in the parties that created them. Decentralizing the evaluation of foundation models across different tasks is an incredibly important step toward increasing transparency in the space. This dimension is particularly relevant during the inference phase.
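As an illustration of the idea, the hypothetical sketch below aggregates benchmark scores reported by several independent evaluators and takes the median per task, so no single evaluator has to be trusted. The task names and scores are made up; a real system would also need to verify that evaluators actually ran the benchmark.

    # Hypothetical sketch: several independent evaluators score a model on an open
    # benchmark, and the per-task median is reported so that no single evaluator
    # has to be trusted. Task names and scores are made up for illustration.

    import statistics

    def aggregate_scores(reports):
        # reports: {evaluator_id: {task: score}} -> {task: median score}
        tasks = {task for scores in reports.values() for task in scores}
        return {
            task: statistics.median(
                scores[task] for scores in reports.values() if task in scores
            )
            for task in tasks
        }

    reports = {
        "evaluator-1": {"summarization": 0.82, "code": 0.61},
        "evaluator-2": {"summarization": 0.79, "code": 0.64},
        "evaluator-3": {"summarization": 0.85, "code": 0.58},
    }
    print(aggregate_scores(reports))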

The Model Execution Decentralization Dimension

Finally, we come to the most obvious area of decentralization. Using foundation models today requires trusting infrastructure controlled by a centralized party. Providing a network in which inference workloads can be distributed across different parties is quite an interesting challenge, and one that could bring a tremendous amount of value to the adoption of foundation models.
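A toy sketch of the routing side of that challenge is shown below: inference requests are spread across independent nodes that serve the same open model, rather than a single provider's endpoint. Node names and the least-loaded routing rule are assumptions; real networks would also need output verification and payments.

    # Hypothetical sketch: inference requests are routed across independent nodes
    # that all serve the same open model, instead of a single provider's endpoint.
    # Node names and the least-loaded routing rule are illustrative only.

    import random

    class InferenceNode:
        def __init__(self, name):
            self.name = name
            self.load = 0

        def infer(self, prompt):
            self.load += 1
            return f"[{self.name}] completion for: {prompt}"

    def route(prompt, nodes):
        # Pick the least-loaded node; break ties at random.
        min_load = min(node.load for node in nodes)
        candidates = [node for node in nodes if node.load == min_load]
        return random.choice(candidates).infer(prompt)

    nodes = [InferenceNode("node-a"), InferenceNode("node-b"), InferenceNode("node-c")]
    for prompt in ["What is restaking?", "Explain AMMs", "Define RLHF"]:
        print(route(prompt, nodes))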

The Right Way to Do AI

Foundation models propelled AI to mainstream adoption and also accelerated all the challenges that come with the rapidly increasing capabilities of these models. Among these challenges, the case for decentralization has never been stronger.

Digital knowledge deserves to be decentralized across all of its dimensions: compute, data, optimization, evaluation and execution. No centralized entity deserves to have that much power over the future of intelligence. The case for decentralized AI is clear, but the technical challenges are tremendous. Decentralizing AI is going to require more than one technical breakthrough, but the goal is certainly achievable. In the era of foundation models, decentralized AI is the right way to approach AI.
