EU Artificial Intelligence Act: a milestone in regulating future technology


On March 13, 2024, the European Parliament marked a historic milestone by approving the Artificial Intelligence (AI) Act, making the EU the world’s first regulator to establish a comprehensive legal framework for this disruptive technology, one that categorises AI systems according to their potential risks and impact.

This law, whose final text will be published in April 2024 and whose implementation will take place over the course of the next two years, aims to ensure that AI is developed and used in a responsible, safe and ethical manner within the European Union.

Background: a path to regulation 

Recent years have witnessed rapid advancement in the field of Artificial Intelligence, with applications ranging from facial recognition to autonomous driving, and of course, generative AI. However, this rapid progress has also raised concerns about potential risks associated with AI, such as discrimination, lack of transparency, and misuse. 

One of these risks, perhaps the most notable, is the deepfake: an artificial intelligence technique used to create fake videos or images that appear authentic, usually by replacing one person’s face in audiovisual content with that of another, using machine learning and image processing algorithms.

TrustCloud applies AI to detect deepfakes both in real-time facial identification and recognition processes and in retrospective data analysis. It covers both images and videos and is supported by a team of AI and cybersecurity experts who are constantly working to stay up-to-date with the latest trends and deepfake generation techniques. 

TrustCloud’s automatic deepfake detection protects people’s identities and secures video identification processes, seamlessly integrating into complex transactions. 
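To make the underlying technique concrete, here is a minimal, illustrative sketch of one common detection approach: a frame-level binary classifier built on a pretrained convolutional backbone. It is a generic example under assumed choices (a ResNet-18 backbone, a 0.5 decision threshold, standard ImageNet preprocessing), not a description of TrustCloud’s proprietary system, and a real detector would need to be trained on labelled genuine and synthetic footage.

```python
# Illustrative frame-level deepfake classifier: an ImageNet-pretrained
# CNN backbone whose head is replaced by a single "synthetic vs. genuine"
# logit. Backbone, threshold and preprocessing are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

class DeepfakeFrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights="IMAGENET1K_V1")
        # Replace the 1000-class ImageNet head with a single logit.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (N, 3, 224, 224) -> per-frame probability of being fake
        return torch.sigmoid(self.backbone(frames)).squeeze(-1)

# Standard ImageNet preprocessing, applied to each extracted video frame
# or still image before batching.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def flag_synthetic(frames: torch.Tensor,
                   model: nn.Module,
                   threshold: float = 0.5) -> torch.Tensor:
    """Boolean mask over a preprocessed batch; True = likely synthetic."""
    model.eval()
    with torch.no_grad():
        return model(frames) > threshold
```

In practice, frame-level scores are typically aggregated across a whole video and combined with other signals (liveness checks, artefact analysis), which is why a single fixed threshold like the one above is only a starting point.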

In response to these concerns, the European Commission presented its proposal for an AI Act in April 2021, which was subsequently debated and amended by the European Parliament and the Council of the European Union. The final approval of the law in March 2024 represents the culmination of a lengthy deliberation process aimed at establishing a robust legal framework in this field.

But what do we mean by AI? One of the key points during the negotiation of the text was finding the most appropriate definition of Artificial Intelligence. In the end, the co-legislators settled on the following:

“A machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”.

Objectives and purpose: towards beneficial AI for society 

The European Act’s main objective is to ensure that AI systems are developed and used in a way that benefits society as a whole. To that end, the law pursues the following goals:

  • Protecting human rights and fundamental freedoms: AI must not be used in a way that discriminates against or harms individuals. 
  • Ensuring security and protection: AI systems must be designed to be secure against cyberattacks and other threats. 
  • Promoting transparency and accountability: Individuals should have the right to know how AI is being used and who is responsible for its operation. 
  • Encouraging innovation and responsible development: The law should create an environment conducive to investment and responsible innovation. 

Risk levels: a proportional approach 

One of the most debated points surrounding the law is the criterion for deciding when an AI system becomes “dangerous”, its risks outweighing its utility and turning it into a tool of control. In this regard, the law adopts a risk-based approach that classifies Artificial Intelligence systems into four risk levels and defines corresponding obligations for each; the short code sketch after the list below makes this tiering concrete.

  • Unacceptable Risk: Applications using subliminal techniques, systems that exploit the vulnerabilities of specific groups, or social scoring systems used by authorities are strictly prohibited. This means that systems that track and analyse personal data on a large scale to evaluate behaviour, generate profiles, or make decisions affecting people’s lives are banned. The aim is to prevent the creation of “surveillance states” where AI is used to monitor and manipulate the population. Remote real-time biometric identification systems used by law enforcement in public spaces for mass surveillance of citizens are also prohibited. 
  • High Risk: Applications related to transportation, education, employment, and social welfare, among others, fall into this category, as do biometric identification systems. These AI systems can adversely affect an individual’s health, safety or fundamental rights. Companies intending to market or deploy a high-risk AI system within the EU must undergo a prior “conformity assessment” and meet a series of requirements to ensure system safety. As a pragmatic measure, the regulation also requires the European Commission to create and maintain a public database in which providers are obliged to record information about their high-risk AI systems, ensuring transparency for all stakeholders involved. 
  • Limited Risk: This covers AI systems subject to specific transparency obligations. For example, an individual interacting with a chatbot must be informed that they are speaking with a machine so they can decide whether to continue (or request to speak with a human instead). AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception, irrespective of whether they qualify as high-risk AI systems or not. 
  • Minimal Risk: These applications are already widely deployed and constitute the majority of AI systems we interact with today. Some examples include spam filters, AI-powered video games, and inventory management systems. 
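As a purely illustrative aid, the sketch below shows how a compliance team might model the four tiers and their headline obligations in code. The example use cases and obligation summaries are assumptions made for readability, not an official mapping from the regulation; classifying a real system requires legal analysis of the final text.

```python
from enum import Enum

class RiskLevel(Enum):
    # Headline obligation per tier, paraphrased for illustration.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "prior conformity assessment + registration in the EU database"
    LIMITED = "transparency duties (e.g. disclose the user talks to a machine)"
    MINIMAL = "no specific obligations under the Act"

# Hypothetical mapping of example use cases to tiers, for illustration
# only; real classification needs case-by-case legal analysis.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "real-time remote biometric ID for mass surveillance": RiskLevel.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "email spam filter": RiskLevel.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    level = EXAMPLE_USE_CASES.get(use_case)
    if level is None:
        return f"{use_case}: unknown -> needs case-by-case legal assessment"
    return f"{use_case}: {level.name} risk -> {level.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```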

Prohibitions of the Artificial Intelligence Act 

The regulation establishes a series of robust prohibitions aimed at protecting the fundamental values of the European Union and preventing the misuse of this powerful technology. These prohibitions address specific practices that could lead to serious negative consequences for society. 

Automated decision-making based on AI algorithms must be free from any bias or discrimination. The law prohibits the use of AI systems that make decisions discriminating against people based on race, ethnicity, gender, sexual orientation, religion, disability, or any other characteristic protected by law. The aim is to ensure equal treatment and non-discrimination in all areas where AI is implemented. 

The law prohibits the use of AI to manipulate people’s emotions or behavior. This includes techniques such as microtargeting, personalized advertising based on psychological profiles, or the use of bots to spread false information or propaganda. The goal is to protect individuals’ autonomy and freedom from algorithmic manipulation and ensure that AI is not used to unduly influence their decisions or behaviors. 

The law requires that AI systems be transparent and auditable. This means that it must be possible to understand how these systems work, what data they use, what decisions they make, and on what criteria. 
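To hint at what “transparent and auditable” can look like in engineering practice, here is a minimal sketch that records, for each automated decision, the model version, the inputs actually used, the output, and the criteria applied, so a reviewer can later reconstruct why a decision was made. The record fields and append-only JSON-lines format are assumptions for illustration; the Act does not prescribe a specific logging schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit record for one automated decision. Field names and
# format are illustrative assumptions, not a schema mandated by the Act.
@dataclass
class DecisionRecord:
    timestamp: str
    model_version: str
    inputs: dict      # the data the system actually used
    decision: str     # the output that affected the person
    criteria: str     # human-readable rationale / rule applied

def log_decision(model_version: str, inputs: dict, decision: str,
                 criteria: str, path: str = "decision_audit.jsonl") -> None:
    record = DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        inputs=inputs,
        decision=decision,
        criteria=criteria,
    )
    # Append-only JSON lines: each decision stays independently reviewable.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(
    model_version="credit-scorer-1.4.2",          # hypothetical system
    inputs={"income_band": "B", "employment_years": 3},
    decision="application referred to human review",
    criteria="score below auto-approve threshold",
)
```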

AI in the future: responsibility and social service 

The text represents a fundamental step in ensuring that AI is developed and used responsibly, safely, and ethically in the European Union. The law has the potential to drive innovation, foster trust in technology, and contribute to creating a more beneficial future for society as a whole. However, no regulatory framework of this kind can be exempt from public scrutiny. Every step of its rollout must be closely followed to ensure that Artificial Intelligence remains a tool of humanism and progress. 

The framework aims to preserve opportunities for innovation while minimizing risks to the greatest extent possible. Europe has strived to make it flexible enough to accommodate future technologies and functions, so that it can adapt to whatever changes may arise. However, some Member States are considering “national” rules to amend certain aspects or safeguard internal security. This could have the opposite effect: legal uncertainty for providers and users of services due to contradictions in implementation. 

The implementation and success of the law will largely depend on cooperation among different actors, such as governments, businesses, researchers, and civil society. It is essential to work together to ensure that AI is used in a way that benefits everyone and respects the fundamental values of the European Union. 

We can help you implement tailor-made processes for your secure digital transactions and meet the latest compliance standards.  

Contact our specialists now and ask for personalised advice
