Algorithms decide who lives and dies in Gaza. AI-powered surveillance tracks journalists in Serbia. Autonomous weapons are paraded through Beijing’s streets in displays of technological might. This isn’t dystopian fiction – it’s today’s reality. As AI reshapes the world, the question of who controls this technology and how it’s governed has become an urgent priority.
AI’s reach extends into surveillance systems that can track protesters, disinformation campaigns that destabilise democracies, and military applications that dehumanise conflict by removing human agency from life-and-death decisions – all enabled by the absence of adequate safeguards.
Last month, the UN General Assembly adopted a resolution establishing the first international AI governance mechanisms – an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance – both agreed as part of the Global Digital Compact at the Summit of the Future in September. The non-binding resolution marked a positive first step towards stronger regulation, but its negotiation process revealed deep geopolitical fractures.
Through its Global AI Governance Initiative, China champions a state-led approach that excludes civil society from governance discussions, while positioning itself as a leader of the global south. It frames AI development as a means of advancing economic and social goals, presenting this vision as an alternative to Western technological dominance.
Meanwhile, the USA under Donald Trump has embraced techno-nationalism, treating AI as a tool for economic and geopolitical leverage. Recent decisions, including a 100 per cent tariff on imported AI chips and the purchase of a 10 per cent stake in chipmaker Intel, signal a retreat from multilateral cooperation in favour of transactional bilateral arrangements.
The European Union (EU) has taken a different path, implementing the AI Act, the world’s first comprehensive AI law, most of whose provisions take effect in August 2026. Its risk-based regulatory framework represents progress, banning AI systems deemed to present “unacceptable” risks while requiring transparency measures for others. Yet the legislation contains troubling gaps.
Although early drafts proposed banning live facial recognition unconditionally, the final version of the AI Act permits limited use with safeguards that human rights groups deem inadequate. Emotion recognition technologies are banned in schools and workplaces but remain allowed for law enforcement and immigration control – a particularly concerning decision given documented racial bias in such systems. The ProtectNotSurveil coalition has warned that migrants and racial minorities in Europe are serving as testing grounds for AI-powered surveillance tools. Most critically, the AI Act exempts systems used for national security purposes and autonomous drones employed in warfare.
The growing climate and environmental impacts of AI development add another layer of urgency. Interactions with AI chatbots consume roughly 10 times more electricity than standard internet searches. The International Energy Agency projects that global data centre electricity consumption will more than double by 2030, with AI driving most of this increase. Microsoft’s emissions have risen by 29 per cent since 2020 due to AI-related infrastructure, while Google quietly removed its net-zero pledge after its carbon footprint grew 48 per cent between 2019 and 2023. AI expansion is fuelling construction of new gas-powered plants and delaying coal phase-outs, directly contradicting climate goals.
The current patchwork of regional regulations, non-binding international resolutions and lax industry self-regulation falls far short of what’s needed to govern a technology with such profound global implications. State self-interest continues to prevail over collective human needs and universal rights, while the companies that own AI systems accumulate immense power largely unchecked.
The path forward requires recognising that AI governance isn’t merely a technical or economic issue – it’s about power distribution and accountability. Any regulatory framework that fails to address the concentration of AI capabilities in the hands of a few tech giants will inevitably fall short. Approaches that exclude civil society voices or prioritise national advantage over human rights protections will also prove inadequate.
The international community must urgently strengthen AI governance mechanisms, starting with a binding agreement on lethal autonomous weapons systems – a goal that has stalled in UN discussions for over a decade. The EU should close the loopholes in its AI Act, particularly on military applications and surveillance. Governments worldwide need to establish coordination mechanisms that can counter tech giants’ control over AI development and deployment.
Civil society cannot stand alone in this fight. A shift towards human rights-centred AI governance depends on champions emerging within the international system to prioritise rights over narrow national interests and corporate profits. With AI development accelerating rapidly, there is no time to waste.