By Alberto Martín Casado (ERNI Spain)
At a time marked by rapid change and technological advances, the emergence of Artificial Intelligence (AI) raises many questions, especially in relation to its regulation. Is it necessary to place limits on AI? The European Union (EU) believes it is, and its Artificial Intelligence Act (AI Act), the first of its kind in the world, is proof of this.
With AI promising substantial benefits, from improvements in healthcare to support for education, there is a debate about the need to balance these advances with the protection of users – their safety and fundamental rights. Transparency, weighing risks and benefits, and the regulation of specific applications such as biometric identification are key aspects of this ground-breaking legislation.
It is also worth noting that, given the rapid pace at which Artificial Intelligence is advancing, regulators will face the difficult task of adapting to the environment and amending the law so as not to fall behind or leave loopholes. This risk, together with the legal complexity of sensitive issues such as cybersecurity in the handling of personal data, highlights the urgency of an agile, adaptive approach to the challenges AI presents, particularly in the field of health.
Turning to the substance, the new European Union law on Artificial Intelligence represents a historic milestone. This provisional agreement on the proposal for harmonised rules aims to ensure safety and respect for fundamental rights in the use of AI systems placed on the European market and used in the EU.
A global pioneer, the AI Act aims to regulate the use of AI according to its capacity to cause harm, adopting a “risk-based” approach: the higher the risk, the stricter the rules. In parallel, the law aims to facilitate investment and innovation in AI in Europe, positioning the EU as a global leader in technology regulation, just as the General Data Protection Regulation (GDPR) did.
Key elements of the provisional agreement include rules for high-impact AI models and high-risk systems, a new governance architecture, extended prohibitions and enhanced rights protection. It also establishes a classification of AI systems, with requirements and obligations for access to the EU market, and prohibits certain uses, such as cognitive-behavioural manipulation and biometric categorisation to infer sensitive data.
The new governance architecture includes an AI Office to oversee advanced models, a scientific panel of experts and an AI Board with representatives from the Member States. The law also provides for penalties for violations, with more proportionate limits for SMEs and start-ups.
The law is expected to become fully applicable two years after its entry into force, with exceptions for specific provisions. The provisional agreement will be subject to review and formal approval by both institutions (the Council and the European Parliament), marking the beginning of a new era in the regulation of artificial intelligence in the EU.
Positives and negatives of the AI Act
Despite all these developments, the new Artificial Intelligence Law presents a thought-provoking duality. On the one hand, the regulation acts as a ‘quality filter’ for AI projects intended for commercial use, providing a layer of reliability that can build consumer confidence. However, this positive aspect is counterbalanced by operational barriers that could become obstacles to innovation. The need for concrete standards without loopholes is evident, as ambiguity could slow down progress and limit the transformative potential of artificial intelligence.
Furthermore, the lack of a specific section on the application of AI in the healthcare sector raises questions about how this important and sensitive area will be regulated. Although healthcare is mentioned in certain sections, the absence of clear guidelines risks leaving a gap in the safety and ethics of our healthcare systems – an issue that will undoubtedly need to be addressed sooner rather than later.
Another factor to be taken into account is the scale of the fines provided for in the law, which range from 7.5 million euros or 1.5% of global turnover up to 35 million euros or 7% of turnover. This raises the question of whether they are proportionate and fair, especially for SMEs and start-ups. In short, in seeking to establish necessary legal and ethical boundaries, the EU’s new AI Act opens the door to a crucial dialogue: how to balance quality and safety with innovation, and how to calibrate sanctions to ensure fair and effective enforcement, allowing for the prosperous and, above all, safe development of a technology destined to accompany us in our daily lives.