Centralised AI is dangerous: how can we stop it?

The intelligence displayed by generative AI chatbots like OpenAI’s ChatGPT has captured the imagination of individuals and corporations, and artificial intelligence has suddenly become the most exciting area of technology innovation.

AI has been recognised as a game changer, with potential to transform many aspects of our lives. From personalised medicine to autonomous vehicles, automated investments to digital assets, the possibilities enabled by AI seem endless.

But as transformational as AI will be, it also carries serious risks. While fears about a malicious, Skynet-style AI system going rogue are misplaced, the dangers of AI centralisation are not. As companies like Microsoft, Google and Nvidia forge ahead in their pursuit of AI, concerns about the concentration of power in the hands of just a few players are becoming more pronounced.

Why should we worry about centralised AI?

Monopoly power

The most pressing issue arising from centralised AI is the prospect of a few tech giants achieving monopolistic control over the industry. These companies have already accumulated a significant share of the AI market, along with vast amounts of data. They also control the infrastructure that AI systems run on, enabling them to stifle competitors, hobble innovation, and perpetuate economic inequality.

By achieving a monopoly over the development of AI, these companies are more likely to gain unfair influence over regulatory frameworks, which they can shape to their advantage. Smaller startups, lacking the enormous resources of the incumbents, will struggle to keep up with the pace of innovation. Those that do survive and look like they might thrive will almost certainly end up being acquired, further concentrating power in the hands of the few. The result will be less diversity in AI development, fewer choices for consumers, and less favourable terms, limiting the use-cases and economic opportunities promised by AI.

Bias and Discrimination

Aside from monopolistic control, there are genuine fears around the bias of AI systems, and these concerns will take on more importance as society increasingly relies on AI.

The risk stems from the fact that organisations are becoming more reliant on automated systems to make decisions in many areas. It’s not unusual for a company to employ AI algorithms to filter job applicants, for example, and the risk is that a biased system could unfairly exclude a subset of candidates based on their ethnicity, age or location. AI is also used by insurance companies to set policy rates, by financial services firms to determine if someone qualifies for a loan and the amount of interest they’ll need to pay, and by law enforcement to determine which areas are more likely to see higher crime. In all of these use-cases, the potential implications of biased AI systems are extremely worrying.
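To make the risk concrete, here is a minimal sketch, in Python, of how an automated screening model might be audited for this kind of bias. The applicants, groups, decision function and thresholds are all illustrative assumptions, not a reference to any real vendor’s system; the 0.8 cut-off reflects the widely used ‘four-fifths rule’ of thumb.

```python
# A minimal, hypothetical bias audit. The applicants, groups and decision
# function are illustrative stand-ins, not any real vendor's system.
from collections import defaultdict

def selection_rates(applicants, decide):
    """Fraction of applicants accepted, broken down by group."""
    accepted, total = defaultdict(int), defaultdict(int)
    for applicant in applicants:
        group = applicant["group"]
        total[group] += 1
        if decide(applicant):
            accepted[group] += 1
    return {g: accepted[g] / total[g] for g in total}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate. Values well
    below 0.8 are a common red flag (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Toy data: a screening model that quietly penalises one postcode area.
applicants = [
    {"group": "area_a", "score": 0.9}, {"group": "area_a", "score": 0.7},
    {"group": "area_b", "score": 0.9}, {"group": "area_b", "score": 0.7},
]
decide = lambda a: a["score"] > (0.8 if a["group"] == "area_b" else 0.6)

rates = selection_rates(applicants, decide)
print(rates)                    # area_a: 1.0, area_b: 0.5
print(disparate_impact(rates))  # 0.5 -- well below the 0.8 warning level
```

The point of the sketch is that the bias is invisible in any single decision; it only shows up in aggregate, which is why auditing access matters so much when a handful of companies control these systems.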

Whether it’s law enforcement targeting minority communities, discriminatory lending practices or something else, centralised AI can potentially exacerbate social inequality and enable systemic discrimination.

Privacy and surveillance

Another risk posed by centralised AI systems is the lack of privacy protections. When just a few big companies control the vast majority of data generated by AI, they gain the ability to carry out unprecedented surveillance on their users. The data accumulated by the most dominant AI platforms can be used to monitor, analyse and predict an individual’s behaviour with incredible accuracy, eroding privacy and increasing the potential for the information to be misused.

This is of particular concern in countries with authoritarian governments, where data can be weaponised to create more sophisticated tools for monitoring citizens. But even in democratic societies, increased surveillance poses a threat, as exemplified by Edward Snowden’s revelations about the US National Security Agency’s PRISM program.

Corporations can also misuse consumers’ data to increase their profits. In addition, the vast amounts of sensitive data accumulated by centralised entities make them lucrative targets for hackers, increasing the risk of data leaks.

Security risks

Issues of national security can also arise due to centralised AI. For instance, there are justified fears that AI systems can be weaponised by nations, used to conduct cyberwarfare, engage in espionage, and develop new weapons systems. AI could become a key tool in future wars, raising the stakes in geopolitical conflicts.

AI systems themselves can also be targeted. As nations increase their reliance on AI, such systems will make for enticing targets, as they are obvious single points of failure. Take out an AI system and you could disrupt the entire traffic flow of cities, take down electrical grids, and more.

Ethics

Another major concern around centralised AI is ethics. The handful of companies that control AI systems would gain substantial influence over society’s cultural norms and values, and may well prioritise profit over the public interest, raising further ethical concerns.

For example, AI algorithms are already being used widely by social media platforms to moderate content, in an attempt to identify and filter out offensive posts. The worry is that algorithms, either by accident or design, might end up suppressing free speech. 

There is already controversy about the effectiveness of AI-powered moderation systems, with numerous seemingly innocuous posts being blocked or taken down by automated algorithms. This has fuelled speculation that such systems are not simply broken, but are being tuned behind the scenes to fit the political narrative a platform wants to promote.
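As a rough illustration of how such false positives arise, consider a deliberately naive filter. Real platforms use machine-learned classifiers rather than keyword lists, but they face the same threshold trade-off sketched below; everything in this snippet is hypothetical.

```python
# A deliberately naive moderation filter, for illustration only. Real
# platforms use machine-learned classifiers, but the threshold trade-off
# is the same: lowering the bar catches more abuse and more innocent
# posts alike.
FLAGGED_TERMS = {"attack": 0.6, "kill": 0.8, "bomb": 0.9}

def toxicity_score(post: str) -> float:
    """Score a post by its worst matching term (naive substring match)."""
    text = post.lower()
    return max((s for term, s in FLAGGED_TERMS.items() if term in text),
               default=0.0)

THRESHOLD = 0.5  # tuned low to catch more abuse, at the cost of accuracy

posts = [
    "I'll attack this problem first thing tomorrow",   # innocuous
    "That film absolutely killed at the box office",   # innocuous
    "Have a great day everyone",
]
for post in posts:
    verdict = "REMOVED" if toxicity_score(post) >= THRESHOLD else "kept"
    print(f"{verdict:7} | {post}")
# Both harmless posts are removed; only the third survives.
```

When a single platform controls both the classifier and the threshold, users have no way to tell an honest false positive from deliberate suppression, which is exactly the opacity problem that fuels the speculation above.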

The alternative? Decentralised AI

The only logical counterweight to centralised AI is the development of decentralised AI systems that keep control of the technology in the hands of the many, rather than the few. By doing this, we can ensure that no single company or entity gains outsized influence over the direction of AI’s development.

When the development and governance of AI is shared by thousands or millions of entities, its progress will be more equitable, with greater alignment to the needs of the individual. The result will be more diverse AI applications, with an almost endless selection of models used by different systems, instead of a few models that dominate the industry.

Decentralised AI systems will also provide checks and balances against the risk of mass surveillance and data manipulation. Whereas centralised AI can be weaponised and used in ways that run contrary to the interests of the many, decentralised AI hedges against this kind of oppression.

The main advantage of decentralised AI is that control over the technology’s evolution is shared by everyone, preventing any single entity from gaining an outsized influence over its development.

How to decentralise AI

Decentralised AI involves a rethink of the layers that make up the AI technology stack, including elements like the infrastructure (compute and networking resources), the data, models, training, inference, and fine-tuning processes.

We can’t simply pin our hopes on open-source models if the underlying infrastructure remains concentrated in the hands of cloud computing giants like Amazon, Microsoft and Google. We need to ensure that every aspect of AI is decentralised.

The best way to decentralise the AI stack is to break it down into modular components and create markets around them based on supply and demand. One example of how this can work is Spheron, which has created a Decentralised Physical Infrastructure Network (DePIN) that anyone can participate in.

With Spheron’s DePIN, anyone is free to share their underutilised computing resources, essentially renting them out to those who need infrastructure to host their AI applications. A graphic designer with a powerful GPU-equipped laptop, for example, can contribute its processing power to the DePIN when they’re not using it for their own work, and be rewarded with token incentives.

What this means is that the AI infrastructure layer becomes widely distributed and decentralised, with no single provider in control. It’s enabled by blockchain technology and smart contracts, which provide transparency, immutability and automation.
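As a rough sketch of the mechanics involved, the snippet below models a toy compute marketplace: providers list idle GPU hours, consumers are matched to the cheapest available capacity, and every trade is appended to a ledger. The names and flow are illustrative assumptions, not Spheron’s actual contracts or API.

```python
# Hypothetical sketch of DePIN-style compute sharing. The names and flow
# are illustrative assumptions, not Spheron's actual contracts or API.
from dataclasses import dataclass, field

@dataclass
class Listing:
    provider: str
    gpu_hours: float
    price_per_hour: float  # denominated in the network's token

@dataclass
class ComputeMarket:
    listings: list = field(default_factory=list)
    balances: dict = field(default_factory=dict)   # provider -> tokens earned
    ledger: list = field(default_factory=list)     # append-only trade record

    def offer(self, provider, gpu_hours, price_per_hour):
        self.listings.append(Listing(provider, gpu_hours, price_per_hour))

    def rent(self, consumer, hours_needed):
        # Match demand to the cheapest listing with enough spare capacity.
        for listing in sorted(self.listings, key=lambda l: l.price_per_hour):
            if listing.gpu_hours >= hours_needed:
                cost = hours_needed * listing.price_per_hour
                listing.gpu_hours -= hours_needed
                self.balances[listing.provider] = (
                    self.balances.get(listing.provider, 0) + cost)
                self.ledger.append((consumer, listing.provider,
                                    hours_needed, cost))
                return listing.provider
        raise RuntimeError("no capacity available")

market = ComputeMarket()
market.offer("designer_laptop", gpu_hours=8, price_per_hour=2.0)  # idle GPU
market.offer("home_server", gpu_hours=24, price_per_hour=3.5)
market.rent("ai_startup", hours_needed=4)  # matched to the cheaper offer
print(market.balances)  # {'designer_laptop': 8.0} tokens earned
```

On a live network, the balances and ledger would sit on-chain, with smart contracts enforcing the matching and payouts automatically; that is where the transparency, immutability and automation come from.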

DePIN can also work for open-source models and underlying data. For instance, it’s possible to share training datasets on a decentralised network like Qubic, which will make sure the provider of that data is rewarded each time their information is accessed by an AI system.
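A similarly simplified sketch shows how per-access rewards for data providers might be tracked. Again, the registry below is a hypothetical illustration with invented names; it does not represent Qubic’s actual protocol.

```python
# Hypothetical sketch of per-access rewards for shared training data.
# Illustrative only; it does not represent Qubic's actual protocol.
class DataRegistry:
    def __init__(self, reward_per_access: float):
        self.datasets = {}   # dataset_id -> provider
        self.earnings = {}   # provider -> accrued tokens
        self.reward = reward_per_access

    def register(self, dataset_id, provider):
        self.datasets[dataset_id] = provider

    def access(self, dataset_id, consumer):
        # Each read credits the data's provider. On a real network this
        # would be recorded as a signed on-chain transfer.
        provider = self.datasets[dataset_id]
        self.earnings[provider] = self.earnings.get(provider, 0) + self.reward
        return f"{consumer} granted access to {dataset_id}"

registry = DataRegistry(reward_per_access=0.5)
registry.register("medical_scans_v1", provider="hospital_consortium")
registry.access("medical_scans_v1", consumer="training_run_42")
registry.access("medical_scans_v1", consumer="training_run_43")
print(registry.earnings)  # {'hospital_consortium': 1.0}
```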

Ideally, every part of the technology stack would be distributed in this way, ensuring that access and permissions remain decentralised. However, the AI industry currently struggles to provide that level of decentralisation. Although open-source models have become extremely popular among AI developers, most people continue to rely on proprietary cloud networks, meaning the training and inference processes remain heavily centralised.

But there are strong incentives for decentralisation to win out. One of the primary advantages of DePIN networks, for example, is that they help to reduce overheads. Because networks like Spheron don’t rely on intermediaries, participants don’t need to make payments to, or share revenue with, third parties. Moreover, they can afford to be more competitive on pricing than corporations under pressure to grow profits.

Decentralisation must win

The future of AI holds enormous potential, but it is also perilous. While the capabilities of AI systems have improved dramatically in the last few years, most of the advances have been made by a handful of powerful companies, increasing their influence over the industry. There’s a price to pay for this, and not just in monetary terms.

The only reasonable alternative is to promote greater adoption of decentralised AI, which can enhance both the accessibility and the flexibility of the technology. By allowing everyone to participate in AI’s development on an equal footing, we’ll see more diverse, interesting, and useful applications that put their users first and benefit everyone equally.

Building a decentralised AI future will involve a great deal of coordination and collaboration across every layer of the AI stack. Fortunately, there are strong incentives for participants to do just that. And again, the incentives are not just monetary.
