Battle over EU regulation of artificial intelligence enters pivotal moment
Negotiations over the EU’s AI Act have moved into a third, possibly final, day after a marathon 24-hour session addressed concerns raised by the bloc’s three largest economies, France, Germany, and Italy.
Last November, these countries argued that the most powerful AI models, such as so-called foundational models (the technology underlying ChatGPT, the best-known example), should not be subject to the strict rules of the AI Act. They want the companies behind these models to regulate themselves through a code of conduct. This position deviates from the European Commission’s original proposals, which were intended precisely to cover a wide range of AI applications.
All fingers point toward the tech lobby as the ‘culprit’. MEP Van Sparrentak highlighted the impact of the tech lobby on public debate and negotiations. Recent statements by political leaders such as German Economy Minister Robert Habeck, who advocates for “innovation-friendly regulation,” show that national interests and economic visions play a prominent role. Commissioner Thierry Breton also indicated that there is a lot of lobbying around the AI Act.
Sceptics might note the official launch of the AI Alliance, a group consisting of “leading organisations across industry, start-ups, academia, research and government coming together to support open innovation and open science in AI”. Its 50 founding members include IBM, Intel, Oracle, Red Hat and Meta. If ever there was a body showing a willingness to develop AI in a responsible and transparent manner, this would be it. Notable by their absence, however, are Microsoft, Google and OpenAI (backed to the tune of $13 billion by Redmond).
The importance of regulation
This tension between economic interests and European values is becoming increasingly difficult to resolve. On the one hand, there is the desire to keep pace with the US and China, which are investing heavily in AI. On the other, there is fear that, without regulation, AI systems can be used for purposes that undermine the privacy and rights of citizens. The AI Act would also mean the EU sets an example for the rest of the world. AI distortions, such as deepfakes and voice clones, are already a reality. They have the potential to harm democratic processes and individual freedoms.
Experts and MEPs therefore call for mandatory security testing and independent oversight. Such strict regulation should ensure the integrity of AI technologies and prevent misuse. The European Parliament has previously voted for proposals imposing strict requirements on the creators of so-called “foundational models”. Still, the change of direction by major member states is putting pressure on this progress.
Today, negotiations are expected to conclude, with the use of AI in biometric surveillance the final hurdle: in particular, how to balance the right to privacy with security concerns.
If successful, the AI Act could become a new global standard akin to the General Data Protection Regulation, aspects of which have been adopted worldwide. The big question is: Does the EU again opt for strict regulation that protects citizens’ rights? Or will it give in to pressure from the tech lobby and let AI companies regulate themselves in the name of supporting ‘innovation’?