The European Union’s AI Act came into force yesterday. The law introduces a risk-based framework for AI applications, with stricter rules for higher-risk uses. It prohibits practices such as social scoring and manipulative AI, and is meant to ensure that AI systems respect fundamental rights and safety. Clear obligations for developers and a focus on transparency form its basis. Here’s what you need to know about the regulation.
At the heart of the AI Act is a risk-based classification system for AI applications. AI systems are classified into four levels of risk: unacceptable, high, limited, and minimal or no risk. Systems classified as unacceptable risk, such as social scoring by governments and toys that use voice assistance to encourage dangerous behaviour, will be banned entirely. The law focuses on high-risk AI systems, which are now subject to strict obligations, including risk assessments, high-quality datasets, human oversight and robustness requirements.
Developers must ensure that their AI is transparent about how it works and where it comes from. High-risk applications include AI used in education, employment, credit scoring and law enforcement. Roughly 15% of all AI systems are expected to fall under these strict rules.
Developers of high-risk AI systems will be most affected. They must conduct thorough risk assessments, use high-quality data and ensure that their systems are robust and secure. These systems will be registered in an EU database, and before bringing their AI to market, developers must demonstrate that it meets the law’s requirements.
Users of high-risk AI systems, while having fewer obligations than providers, still have important responsibilities. For example, they must deploy AI systems in accordance with the provider’s instructions.
Any AI system that poses a clear threat to people’s safety, livelihoods or rights will be banned. This includes AI that manipulates behaviour or exploits vulnerabilities based on age or physical or mental ability. Real-time biometric identification technologies are also considered high-risk and subject to strict requirements, especially in law enforcement.
Oversight
The European AI Office, established earlier this year, will work with member states to oversee the enforcement and implementation of the law.
Although the AI Act is now in force, its provisions will be phased in. Bans on AI systems posing unacceptable risk take effect after six months, rules for general-purpose AI models apply after 12 months, and high-risk AI systems embedded in regulated products get a 36-month compliance period.
The AI Office will also draw up guidelines and foster dialogue between AI providers, national authorities and other stakeholders so that the rules are applied consistently across Europe.
Fines for non-compliance with the AI Act are significant: up to €35 million or 7% of a company’s global annual turnover, whichever is higher. Such penalties underscore the EU’s commitment to ethical AI development.
Internationally, the EU legislation is being closely watched as a potential blueprint for AI regulation around the world. Reggie Townsend, a member of the US National AI Advisory Committee, which advises the president, emphasises the importance of AI technology and the need to educate people about its consequences. The groundbreaking European legislation may well inspire other countries to follow suit.
Reaction so far has been mixed. While the clarity the act brings to AI governance has been welcomed, its risk-based approach has not silenced all critics.
“Even though the new act will encourage trustworthy and safe AI practices, it neither technically enforces it nor eliminates all the security risks the technology brings. Yes, the new guidelines offer valuable ethical guidance, but unfortunately vendors won’t always follow those, and criminals will always operate beyond both ethical and legal constraints. AI users can’t rely solely on regulatory frameworks for protection against cyberattacks, nor can they be too careful about the trust they place in the AI systems they use,” said Shaked Reiner, principal security researcher at CyberArk Labs.
Ashley Casovan, managing director of the International Association of Privacy Professionals’ AI Governance Center, said: “…we now have the first set of official rules for the governance and oversight of AI. These comprehensive rules and guidance provide a clearer path forward for those who are building or using AI to do so in a safe and responsible manner. However, given the changing nature of AI paired with limitless possibilities for its application, many implementation questions remain. As a result, we are seeing many organizations, who build and use AI, establish AI governance teams with qualified and experienced AI governance professionals to unpack and implement these rules for their context.”