EU takes the lead in regulating AI

06.05.2021 Reading time: 8 minutes

Alexa, Siri and the Google Assistant, self-driving cars, speech technology and facial recognition, image and text analysis software, … Artificial Intelligence has been booming in recent years, and each of our lives is affected by AI on a daily basis, consciously or unconsciously: from the advertisements we see to the control of the traffic lights we wait at while swiping away that very advertisement on our smartphone.

The EU therefore rightly regards artificial intelligence as one of the essential building blocks of the digital society of the future. And given the speed at which computing power is evolving, that future did not start today or tomorrow; it already started yesterday.

However, this lightning-fast evolution has also raised awareness in recent years that a regulatory framework is urgently needed. At its best, AI makes our lives more pleasant, but in the wrong hands that same technology could have dire consequences, for example for your privacy and mine.

Precisely for this reason, work has been underway behind the scenes within the EU for two years on a regulatory framework to ensure the safe and ethically responsible use of AI within the EU. The first draft of the resulting “AI Regulation” was made public on April 21, after being unintentionally leaked a few days earlier, as has become customary.

Let’s have a look at that draft together, shall we…?

 

First attempt in the world to regulate AI

The proposal for a regulation that the European Commission made public on April 21, 2021 is no less than the very first regulatory framework for AI in the world. The rules are part of the European Commission’s strategy to make the EU a global hub for new technology and digitization. To achieve this goal, the EU wants to provide legal safeguards for the privacy and fundamental rights of European citizens, while strengthening support for AI, investment and innovation across the EU. 

(Cynics are already wondering whether the draft before us today, precisely because it is so strict, might have the exact opposite effect, but more on that later.)

 

Risk-based approach that is reminiscent of GDPR

A first reading of the new rules immediately brings the GDPR to mind, which has over recent years shaped the way we look at data protection in the EU and far beyond its borders. In a similar way, the EU now wants to regulate how companies within the EU, and even from outside it, may use data (personal or not) within AI algorithms.

One of the striking similarities with the approach under the GDPR is the so-called “risk-based approach”: AI development will require a risk analysis, and AI systems that pose a clear threat to the safety and rights of European citizens will be banned outright. These are AI applications that manipulate human behavior in order to circumvent the free will of users; the European Commission itself gives the example of “smart” toys with speech technology that could encourage children to engage in dangerous behavior. Below that category of prohibited applications, the Commission distinguishes between “high risk” AI, “limited risk” AI and “minimal risk” AI.

High risk AI systems are those used in public infrastructure (e.g. traffic), in medical environments, in the context of vocational training or access to education (e.g. the scoring of exams), in employment and personnel policy (e.g. screening during job applications), in essential services such as banking and credit services (creditworthiness checks), and by police services, customs services, courts and other authorities (including, for example, all manner of biometric systems for facial recognition, voice recognition, fingerprint recognition, etc.).

These applications will be subject to strict obligations before they can be placed on the market:

  • Adequate risk assessment and risk mitigation systems
  • High quality of the datasets that feed the system, in order to exclude risks and discriminatory outcomes as much as possible
  • Registration(!)
  • Detailed documentation and transparency towards authorities about the operation of the algorithms
  • Transparent information for users
  • An obligation to ensure appropriate human oversight of the system’s operation
  • Robustness, (cyber)security and accuracy

All systems for remote biometric identification in particular are considered high risk and must meet strict requirements. Their real-time use in publicly accessible places for law enforcement purposes is in principle prohibited. Narrow exceptions are strictly defined and regulated (e.g. to search for a missing child, avert a specific and imminent terrorist threat, or detect, identify or prosecute the perpetrator of, or a suspect in, a serious criminal offense). Such use requires prior authorization from a judicial or other independent authority, valid only for a limited period and area and for specific databases.

By AI applications with limited risk, the Regulation means, among others, chatbot applications. The main requirement here will be that users are informed transparently and correctly about the use of AI, so that they can decide for themselves whether or not to engage with a software application.

The last category is by far the largest: the thousands of AI applications for daily use that involve only “minimal risk”. Examples include “smart” spam filters, self-learning video games, predictive marketing tools, smart kitchen appliances, … The draft regulation leaves those systems untouched, as the risk to the rights or safety of citizens is minimal or nonexistent (which does not mean, of course, that other rules such as the GDPR might not apply to these applications!).

Some AI applications will be prohibited by their very nature. This is the case, for example, for the use of real-time automated facial recognition systems by government agencies in publicly accessible places, as well as for AI applications that “use subliminal techniques beyond a person’s consciousness” or that exploit the vulnerabilities of people due to their age or a physical or mental disability, in both cases to materially distort their behavior in a way that could cause physical or psychological harm.

 

Overreaching limitations?

Incidentally, there is also a lot of criticism, rightly or wrongly, of the draft regulation. Early opponents point out that the European Union is in danger of shooting itself in the foot. After all, it is the first political bloc in the world to impose a legislative framework on AI that entails far-reaching restrictions, and – just as with the GDPR – it immediately imposes the same rules on non-European companies that want to offer their software in the EU. In particular, critics say, the restrictions on the use of AI for credit scoring, for example, and the far-reaching limitations on the use of biometric data threaten to put European players at a serious competitive disadvantage and thus to drive innovation from the EU to other parts of the world.

It seems, however, that the EC has given more than sufficient thought to the innovation aspect, as witnessed by the fact that the draft contains measures to support innovation: a regulatory sandboxing scheme for AI, measures to shield SMEs and start-ups from excessive regulatory pressure, and the creation of digital hubs and testing facilities for experiments.

 

Fines and penalties

The draft moreover provides for a strong sanctioning mechanism, and the proposed fines are again reminiscent of what we already know under the GDPR, with administrative fines of up to 30 million euros or 6% of total worldwide annual turnover for the most serious infringements (and up to 20 million euros or 4% for others). As under the GDPR, the national supervisory authorities are empowered to enforce the rules, and a “European Artificial Intelligence Board” (EAIB) is being established, analogous to the EDPB under the GDPR, to ensure uniform application throughout the EU.

 

Additional “Machine Regulation” to follow soon

In addition to the future AI Regulation, the EU is also working on a “Machinery Regulation”, which should in due course replace the current Machinery Directive. While the AI Regulation will address the safety risks of AI systems themselves, the Machinery Regulation aims to guarantee the safe integration of AI systems into the machine as a whole. The current Machinery Directive already sets health and safety requirements for machines today, and those rules will therefore be updated in the foreseeable future. This concerns the safety of a wide range of products for consumers and professionals, from robots and lawn mowers to 3D printers, construction machinery and industrial production lines.

 

Next steps?

For now, only a draft from the European Commission is on the table. That draft must be discussed and amended by both the Council of the EU (the member state governments) and the European Parliament before a final text can be expected. The analogy with the GDPR teaches us that this is an exercise that takes at least 24 months. The final version will therefore be a while in coming, but it is clear that the EU is serious about regulating AI. We at Sirius Legal will be following every development and will certainly report on it in due time on our office blog.

 

Questions about AI or the legal aspects of new technology in general?

We are happy to make time for you. Feel free to call or email Bart Van den Brande at bart@siriuslegal.be or +32 492 249 516 or book a no-obligation online introductory meeting with Bart via Google Meet or Zoom.
