Singaraja33

Posted on • Originally published at luisyanguas22.Medium

The truth about AI regulation, power, and the real winners behind all of it

Hi Dev.to friends, here is our Medium article on AI Regulation and related matters!

Today everyone keeps talking about the “global AI race”, and we saw it very clearly at Davos this year: governments, tech CEOs, think tanks, investors of all kinds. It sounds exciting, futuristic, heroic… something like a science-fiction competition to build the smartest machines on earth. But the truth behind it all is that most people aren’t actually part of that race. They are simply not running; they are standing on the track while massive institutions sprint past them, carrying algorithms instead of flags.

At the center of this race is a question that sounds boring but is definitely not: regulation. Should AI be regulated strongly? Lightly? Nationally? Globally? Or barely at all?

Some governments, especially in America, are pushing for AI policies they call “minimally burdensome”: laws that must not slow innovation, not restrict companies too much, and not create regulatory friction. The argument is simple and can actually make sense: too many rules will slow progress, fragment the market, and let other countries win.

Sounds logical. Sounds modern. Sounds pro-innovation. But the deeper question nobody asks out loud is who really benefits from this innovation. Because AI doesn’t distribute power equally; it never has, and it actually tends to concentrate it.

If you remove regulation completely, AI probably doesn’t become free and open. Instead, it might become a tool controlled by whoever owns the systems, the data, the infrastructure, the models, the platforms, and the capital. And that is definitely not freedom but consolidation. That’s basically digital feudalism, just with better branding.

On the other side, some regions like Europe are building strict, risk-based regulatory systems theoretically focused on human rights, transparency, and accountability, and they are heavily criticised for it. Critics say this will slow innovation and make Europe less competitive. Supporters say it protects citizens from abuse, manipulation, and surveillance capitalism. Both sides are partly right but also partly missing the point, because regulation isn’t really about speed. It’s about direction.

AI is not neutral. It amplifies whatever system it’s embedded in. If you place AI inside systems driven purely by profit, competition, and power, it will mainly amplify exploitation, inequality, and concentration of control. If you place AI inside systems with strong frameworks, accountability, and public interest safeguards, it amplifies opportunity, access, and empowerment.
This is why global AI governance matters more than most people realize. Not because of robots or superintelligence, but because of power architecture.

Right now, the world is quietly splitting into different AI philosophies. The US model emphasizes speed, market dominance, and innovation leadership. The EU model emphasizes rights, risk control, and social protection, at the cost of slowness and perhaps a degree of overregulation. China’s model emphasizes state control, surveillance, and centralized governance, and as always makes its decisions fast. Other regions try to balance innovation with protection, often without the infrastructure or leverage to compete equally.
But no one has a perfect model yet because we’re basically building the airplane while flying it.

In our opinion, the truth of all this is that bad regulation is dangerous, but so is full deregulation. Overregulation can freeze innovation, make places like Europe less competitive, and block access for small creators and startups. Deregulation creates monopolies, power concentration, and digital empires that no democracy can control.
People often think regulation is only about restriction, and sometimes they are right, but good regulation is also about protection from asymmetry and about fostering healthy competition. AI creates asymmetries of power between systems and individuals, platforms and users, institutions and citizens, and without guardrails, the gap grows exponentially.

AI systems will soon decide who gets loans, jobs, education access, medical prioritization, insurance pricing, security classification, visibility, credibility, and influence. If those systems are unregulated, opaque, and unaccountable, society doesn’t become more advanced, but less free. So the real danger isn’t AI becoming intelligent. It’s AI becoming invisible.
Invisible systems making invisible decisions that shape human lives without explanation, appeal, or accountability.
That’s not innovation. That’s algorithmic authoritarianism, whether it comes from governments or from big corporations.

But regulation doesn’t have to be heavy or bureaucratic to be effective. The smartest model isn’t “control everything” or “let everything go”. Strong rules are needed for high-risk AI systems like surveillance, biometric tracking, autonomous weapons, political manipulation, social scoring, deepfake infrastructure, and mass data extraction. Lighter rules are more suitable for creativity tools, education platforms, research systems, accessibility tech, productivity AI, and personal assistants. Not all AI carries the same risk, and treating it as one category is a policy mistake.

At a global level, what we’re really watching is not a tech race but a governance race: a race to define who AI serves, whether people, markets, or power structures.
The countries that get this right won’t just dominate economically; they will shape the moral architecture of the digital future. Because in the end, the question is not whether AI will change the world; that’s already happening.
The question is whether the world will still belong to humans when it does.
And that’s not a technical problem, it’s a political one, a cultural one, a philosophical one and a human one.

In the end, AI regulation isn’t about slowing the future but about deciding who gets to own it.