There is something off about the recent wave of AI ethics, AI safety, and AI regulation initiatives. The presented risks seem blown out of proportion, and there is widespread suspicion that the people behind these initiatives are not being honest about their motives and are driving moral panic in pursuit of hidden agendas. Here I want to summarize the situation and offer my interpretation.
AI regulation initiatives are proliferating around the world; prominent examples include the EU's AI legislation and the CAIS Statement on AI Risk, both discussed below.
Examination of the purported risks
Despite all the expressed concern, the presented AI risks look rather underwhelming on closer inspection.
- People talk about artificial superintelligence as if it were the year 2100 in a sci-fi movie. Meanwhile, the AIs actually available are severely limited, and computer hardware is orders of magnitude less efficient than the human brain.
- There's an atmosphere of panic because AI development is reportedly progressing fast. But AI, especially large models, is constrained by hardware, and progress in hardware is slow.
- Many of the presented risks are trivialities like "AI can spread misinformation and influence elections". But you don't need AI to do that; the media do it already. AI wouldn't change the situation much.
- The potential for AI to "conquer the world" is comparable to that of a mad genius. Throughout history, there have been numerous exceptionally intelligent individuals, but none have taken over the world. Achieving global domination isn't solely about intelligence; it also requires substantial physical force, something AI currently lacks.
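The hardware-efficiency point above can be made concrete with a back-of-envelope comparison. The figures below are rough, commonly cited ballpark numbers, not measurements:

```python
# Ballpark power-draw comparison; all figures are rough assumptions.
brain_watts = 20             # commonly cited estimate for the human brain
gpu_watts = 700              # TDP of a single modern datacenter GPU
gpus_per_node = 8            # typical node serving a large model

node_watts = gpu_watts * gpus_per_node
ratio = node_watts / brain_watts
print(f"One inference node draws roughly {ratio:.0f}x the power of a brain")
```

Even this crude estimate puts a single serving node over two orders of magnitude above the brain's power budget, before counting cooling or training costs.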
Examination of motives and agendas
While the claimed AI risks are put front and center and are thus easy to examine, the composition and motivations of the AI regulation lobby are much harder to read. Here are some of the more interesting observations others have surfaced:
- It is very unusual to see businesses lobby for regulation, and yet the CEOs of OpenAI, Anthropic, and Google's DeepMind are among the top 5 signatories of CAIS' Statement on AI Risk. It turns out that regulation is likely to kill off their smaller competitors and prevent their customers from running open-source AIs. With the competition out of business, the remaining AI companies will capture a future AI market worth trillions of dollars.
- Legislation usually responds to current social issues. AI regulation is different in that it responds to fears of potential issues in the distant future. The EU is rushing to be the first government to regulate dangerous thoughts. Of course, since nobody has any idea what the future issues with AI are going to be, the legislation is unlikely to solve any real problems.
- AI safety and regulation initiatives respond to public concern over artificial intelligence, but this public concern is fueled by the AI safety initiatives themselves, as they feed the media doomsday scenarios and images of AI abuse.
- OpenAI, one of the main proponents of AI regulation, gains publicity and sales by presenting itself as the developer of a revolutionary AI technology. One way to create this image is to call for AI regulation, insinuating that its technology is so powerful as to be dangerous. The actual product OpenAI offers, however, is a harmless evolution of prior models that still needs a lot of work.
- Although copyright holders are the loudest, there are many other groups that feel threatened by competition from AI. For these Luddites, AI regulation is a way to throw a wrench into AI development.
Although the stated goal of AI regulation is the regulation of artificial intelligence, it is inevitably going to control humans indirectly as well.
- A very concerning angle on the issue is that AI regulation is an example of a digital exception to human rights. Whereas freedom of thought is unquestionable and its infringement unacceptable, once there is a computer component to the thinking, for example when a user brainstorms ideas with an AI, it suddenly becomes acceptable and even desirable for government to insert itself into the thought process and regulate which thoughts are permissible.
- Regulation of AI applications effectively regulates human behavior. If restrictions are built into the general-purpose AI used in an application, the AI effectively plays the role of automated law enforcement. This is why we have AIs talking back to humans instead of doing what they are told. AI regulation becomes a platform for defining new laws, which are then implemented by private businesses with little oversight and minimal concern for fairness or proportionality.
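The "automated law enforcement" mechanism described above can be sketched in a few lines. This is a hypothetical illustration; the function names and the blocked-topic list are invented for the example and do not correspond to any real vendor's API or policy:

```python
# Hypothetical sketch: restrictions baked into a general-purpose AI service.
# The policy is chosen by the vendor and applied before the model ever runs.

BLOCKED_TOPICS = {"lock picking", "tax avoidance"}  # illustrative policy list

def policy_gate(prompt: str) -> bool:
    """Return True if the prompt passes the vendor's built-in policy."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Stand-in for a model call; refuses before the model is consulted."""
    if not policy_gate(prompt):
        return "I can't help with that."  # automated enforcement, no appeal
    return f"(model output for: {prompt})"

print(generate("explain lock picking"))  # refused by the gate
print(generate("explain gardening"))     # passes through to the model
```

The point of the sketch is that the rule lives in private code: whoever writes `BLOCKED_TOPICS` is effectively legislating user behavior, with no court and no appeal process in the loop.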
If you ask me, this looks like a scramble for power rather than genuine concern for public safety. AI is going to be a massive source of income for businesses and of power for governments. No wonder AI legislation is being pushed through at a record pace.
How to do it right
The average citizen would probably be better off without any AI regulation for now, so that the current poor performance of AIs can improve at the maximum pace and so that there are no obstacles to the development of open-source AIs, which are likely to empower ordinary citizens the most. Legislation would be better introduced incrementally, once everyone gets a feel for how this new technology behaves in practice and what the actual problems are.
I believe the most real immediate risk is concentration of power. Human intelligence is a unique natural resource, one of the most valuable in the world. Everyone gets roughly the same brain, and that brain cannot be taken from them. This supports equality among humans and gives people the bargaining power that produced democracy and fair working conditions. AI disrupts the current balance of power by reducing the value of human intelligence while allowing the rich to hoard artificial intelligence. AI regulation actually exacerbates this problem by further restricting access to the technology. To alleviate it, it is better to encourage the development of open-source AIs and their unrestricted, widespread distribution.