
EU bares its fangs to tech giants

The EU Digital Services Act is another sign that Europe wants to rein in data gobblers, says Jason Walsh
Image: Shutterstock via Dennis

25 August 2023

Coming into effect today, the European Union’s Digital Services Act (DSA) will, the bloc hopes, protect consumers from a range of online hazards including illegal content.

First tabled in 2020 and adopted in 2022, the DSA aims to “tackle the spread of illegal content, online disinformation and other societal risks”. What is most interesting, however, is that the Act is another sign that countries outside the US and China are tiring of how Internet companies gorge themselves on personal data.

The Act bans certain types of targeted advertising on online platforms, notably, though not exclusively, when it targets children. For example, the use of “special categories of personal data”, such as ethnicity, political views and sexual orientation, will be prohibited.

While small tech firms don’t have to comply until next year, the 800-pound gorillas face, as of today, potential fines of up to 6% of global annual turnover, and even being blocked in Europe, if they are found to be in breach.

Those services designated as “very large” by dint of having more than 45 million monthly active users in the EU also face additional scrutiny. Popular online platforms and search engines will have to perform and publish risk assessments of their impact on fundamental rights and their potential for interference in electoral processes.

They must also share details of how their algorithms work, something Facebook and Instagram owner Meta Platforms did this week.

Like GDPR but more expensive

The complete list of platforms classified as “very large” is: Alibaba AliExpress, Amazon, Apple’s App Store, Bing, Facebook, Google Play, Google Maps, Google Search, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, X (formerly Twitter), Wikipedia, YouTube and, er, shoe retailer Zalando.

The Act is only the latest in a long line of EU regulations aimed directly at the Internet and communications sector, beginning with the General Data Protection Regulation (GDPR). Next up will be the Artificial Intelligence Act (AI Act), which classifies AI systems according to the level of risk they pose.

Obviously, the EU does not have extraterritorial jurisdiction, but the Act is already having an effect outside the bloc’s borders – and it is safe to presume that it is intended to. One of its provisions is that targeted advertising based on profiling children is no longer permitted. Lo and behold, Meta’s Nick Clegg has said the company has “made changes” to how it pushes adverts to kids.

“Since February, teens aged 13-17 globally no longer see advertising based on their activity on our apps – like following certain Instagram posts or Facebook pages. Age and location is now the only information about teens that advertisers can use to show them ads,” he wrote on 22 August.

Snapchat is doing pretty much the same in the EU and UK, while TikTok is applying the restrictions only in the EU.

The big question is whether other businesses will follow suit – and, if so, whether they will emulate Meta by applying the rules universally for the sake of efficiency, or, like TikTok, carve out a special regime for European users.

