Artificial intelligence European style

A preliminary deal has been struck on what will be the world’s first attempt to regulate AI. Jason Walsh asks if it addresses the basic questions raised by the technology
Ursula von der Leyen Image: EU

11 December 2023

And then there was law: the Council of the European Union, which represents member states, and the European Parliament last week finally reached a provisional agreement on the long-proposed Artificial Intelligence Act (AI Act), marking the creation of the world’s first comprehensive set of rules governing artificial intelligence.

Though the fine details remain to be worked out, the EU has agreed on the terms of AI regulation, which will come into force in 2025. Broadly speaking, the Act seeks to regulate AI without entirely stifling its development. Central to this is a series of safeguards, including a ban on social scoring and a complaints process.

European Commission President Ursula von der Leyen (pictured) welcomed the agreement, saying that it would guarantee “the safety and fundamental rights of people and businesses” and “support the human-centric, transparent and responsible development, deployment and take-up of AI in the EU”.

Posting on X (the platform formerly known as Twitter), EU Commissioner for the Internal Market Thierry Breton described the deal as historic, writing that the Act was “much more than a rulebook – it’s a launchpad for EU start-ups and researchers to lead the global AI race”. He also added a ‘thumbs-up’ emoji, as this is how politicians now communicate, apparently.

Risk profiling

Delaying the Act was a dispute over whether law enforcement agencies should be allowed to use AI-based biometric systems to identify or categorise people based on sensitive characteristics such as gender, race, ethnicity, religion or political affiliation. The parliament said nay, whereas member states – notably including France, which hopes to use biometrics for security during the 2024 Olympics in Paris – felt national security trumped fundamental rights. In the end, the parliament buckled and accepted a proposal for the use of AI in combating certain serious crimes including terrorism and abductions.

Leaving aside the horror of predictive policing (which is hard to do, given AI’s obvious failings and the tendency of its users to view it as a form of magic), the Act is interesting because it reveals another faultline in society, one that the tech sector has proven expert at exploiting. AI is perhaps the ne plus ultra of the sector’s desire to wring profit from everything, from other people’s work to the temporary blind spots created in the wake of a technology’s deployment. It is increasingly obvious that today’s ‘platform capitalism’ is asset light and seeks to gain profits from rent-seeking activities rather than anything as dull, or risky, as being economically productive.

The question the Act will have to answer in the long term, then, is will it stop tech companies from scooping up user data (which is to say, people’s labour, and even traces of their personal lives and identities) and using it to squeeze more juice? The jury is very much out on this. Europe is clearly hoping it can use the power of its 450 million-strong market to tame the tech companies’ worst excesses: if you want access to these consumers, you have to play by the EU’s rules.

Unsurprisingly, however, other countries are taking a different approach. For their part, both the US and Britain are advocating lighter-touch regulation, giving companies greater freedom to consume and process data.

So far, it seems to me that the US-UK axis is winning. While some critics say the EU AI Act is too weak, the dominant discourse is that AI should be unleashed at any cost and the chips left to fall where they may: AI is transformative, and disruptive, and a range of other adjectives that can be translated as meaning ‘easy to milk money out of’, so any attempt to regulate it is, at best, folly and, at worst, counterproductive. Europe, we are told, risks being left behind in the race to destroy both jobs and capital and make the world an uglier and stupider place.

And some of those voices are coming from within the EU. According to Germany’s bosses’ union, the Federation of German Industries, for example, Europe is now at risk of falling behind when it comes to the key technology of AI. According to a report from news agency Deutsche Presse-Agentur, managing director Iris Plöger said: “With the comprehensive regulation of basic AI models and AI applications, the AI Act is jeopardising the ability to compete and innovate on both the manufacturer and user side”.

We’ll see. It is true that Europe has lagged terribly in technology, but the problem is the failure of businesses to invest, not regulation. One thing is for sure, though: the coming scandal over the sources of AI training data will not be boring.
