AI Act Implementation: All You Need to Know about AIA Draft by the EU
On April 21, 2021, the European Commission issued a proposal for a Regulation laying down harmonised rules on artificial intelligence (the AI Act), which is expected to be adopted in the second half of 2022. This act will significantly affect the AI industry, at least in the European Union. Our experts have analyzed the draft and prepared a digest with answers to basic questions that will help you figure out whether these regulations will impact your business, to what extent, and in what manner.
The draft of the AI regulation introduces a holistic legal framework aimed at ensuring that AI-powered technologies are secure and respect fundamental rights. The EU AI regulation is a robust and serious initiative that will have a substantial impact on the European IT landscape. No wonder there have been many discussions around this topic.
On the one hand, AI technology's penetration into our daily lives grows deeper with each year and some form of regulation is needed to bring more order to the process. On the other hand, there are lots of concerns that such initiatives could lead to over-regulation of the AI market hindering further development of AI-related technologies.
Let’s take a closer look at the AIA provisions, the AI Act compliance requirements, and the AI Act fines to see what the new regulations actually are and what adjustments to existing processes are needed to stay compliant.
This is just the tip of the iceberg. Here's an in-depth overview with answers to the most important questions about the draft AI law to help you prepare for its implementation.
Who will be affected by AI legislation?
According to the current draft, the AIA will apply extraterritorially to any AI supplier or distributor whose services or products are placed on the market or used inside the EU. The law will impose AI Act obligations on participants at different stages of the value chain, from manufacturers of AI systems to importers, distributors, and users.
All systems that meet the European Commission’s definition of an AI system will fall under these regulations. Under Article 3(1) of the draft, an AI system is software that is developed with one or more of the techniques and approaches listed in Annex I (machine learning approaches, logic- and knowledge-based approaches, and statistical approaches) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
As you can see, the definition is quite broad and includes not only machine learning but also expert systems and statistical models that have been around for a long time. This has drawn much criticism. However, it is important to understand that the severity of legal requirements does not depend on the technologies used, but on the degree of risk they pose to consumers.
Therefore, the fact that you are using some technologies that fall under the definition described above does not necessarily mean that all the regulations will apply to you. You need to carefully analyze and evaluate which risk group your system falls into.
What countries are subject to AI regulations?
In the current draft, the initiative will apply not only to those organizations that operate in the EU, but also to those companies that have at least part of their AI-related activities carried out within the EU.
This means that the following companies will fall under the regulations:
- suppliers/importers/distributors who develop artificial intelligence systems that will be placed on the market or put into service within the EU, regardless of whether these suppliers are registered in the EU or in another country;
- users of AI systems that are based (located) in the EU;
- suppliers and users of AI systems who are located in a third country, but whose products are used in the EU.
Similar initiatives have already been introduced in other countries. For example, the US Guidance for Regulation of Artificial Intelligence Applications, the Pan-Canadian Artificial Intelligence Strategy, China’s New Generation Artificial Intelligence Development Plan, and the UK’s Digital Economy Strategy. There is a high probability that over time, similar laws regulating the AI domain will appear in other countries. So, if your business is connected to AI but does not fall under the current AI Act, it still makes sense to start looking towards AI compliance as part of your long-term strategy.
How long will it take to enforce the AI Act procedures?
The adoption of the law is planned for the second half of 2022. The date is quite preliminary, as numerous legal and tech-related discussions have revealed certain gaps and inconsistencies in the current version. The AI law is highly comprehensive and will have a significant impact on the entire ecosystem, so there are still many aspects that need to be defined and included in the regulation.
Moreover, the implementation of the AI Act practices will also take time. The European Commission will need to establish several new institutions, including a European Artificial Intelligence Board composed of representatives of the Member States, the European Data Protection Supervisor, and the Commission. Member States will need to come up with AI Act penalties that are proportional to a company’s size and take into account the interests of small and medium-sized companies.
Currently, the law provides for a transition period of two years, but given the complexity of the transition process and the likelihood of changes and additional clarifications, there is a chance that the AI Act enforcement deadline will also be extended.
AI act compliance requirements
The AI Act implies a horizontal framework that divides all AI systems into four groups according to the degree of risk a system poses to humans and sets different requirements for each of those groups.
What are AI levels of risk according to the AI act?
The regulation divides AI-based systems into the following categories:
- Unacceptable: this group includes AI systems that present a clear threat to the safety and rights of people. For example, social scoring by governments, AI-driven systems used for social opinion manipulations, real-time remote biometric identification systems used in public spaces, etc. Such systems will be forbidden.
- High risk: these AI systems can have potentially devastating effects on the personal interests of people and should be thoroughly evaluated before being released or used. Examples include CV-sorting software for recruitment procedures, AI applications in robot-assisted surgery, etc.
- Low risk: certain transparency obligations apply to these AI systems so that users know they are interacting with AI-based software, can make informed decisions, and can opt out of using them. Examples include chatbots, digital assistants, etc.
- Minimal/no risk: the new regulations will not restrict these AI systems as they present minimal or no risk to the rights or safety of citizens. Examples include spam filters and video games.
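The four-tier model above can be expressed as a simple lookup. The sketch below is purely illustrative: the tier names and obligation summaries paraphrase this article, not the Act's legal text.

```python
# Illustrative sketch of the AI Act's four-tier risk model.
# Tier names and obligation summaries are paraphrased, not legal wording.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g., government social scoring)",
    "high": "conformity assessment, documentation, CE marking required",
    "low": "transparency obligations (e.g., disclose that users talk to a bot)",
    "minimal": "no new obligations (e.g., spam filters, video games)",
}

def obligations(tier: str) -> str:
    """Look up the (paraphrased) obligations for a given risk tier."""
    normalized = tier.strip().lower()
    if normalized not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[normalized]
```

In practice, of course, the hard part is not the lookup but deciding which tier a given system falls into, which requires legal analysis against Annex III.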
What AI practices will be prohibited?
In accordance with the law, AI solutions that are associated with unacceptable risk will be prohibited. Accordingly, systems that meet the following criteria will no longer be developed, imported, or used in the EU:
- use hidden methods to significantly manipulate human behavior;
- take advantage of any vulnerability of certain groups of people (due to their age, physical or mental disabilities) in order to harm the individuals belonging to this group or other people;
- are used for social scoring, in particular to classify the trustworthiness of people based on their social behavior or personality traits.
However, there will be some exceptions to this rule that are important for the safety of society.
For example, the use of remote real-time biometric identification systems (such as facial recognition) in public places for law enforcement purposes will, as a rule, be prohibited. Nevertheless, when the use of such a system can be justified (e.g., when it is necessary to find a missing child or to prevent a specific terrorist threat), the use case can be treated as an exception and permitted (with prior authorization by a corresponding judicial or independent body).
What are the high-risk AI systems and how are they impacted?
The biggest and most regulated category is “high-risk” AI systems. Those will not be prohibited, but are subject to additional obligations for AI providers and users.
A “high-risk” AI solution is one that has the potential to pose a threat to health, safety, or fundamental human rights, taking into account both the severity of the potential harm and the likelihood that it will occur.
The categories of high-risk AI applications include:
- biometric identification (e.g., AI systems used for facial recognition);
- management and operation of critical infrastructure (e.g., AI systems used in road traffic, the supply of water, gas, heating, and electricity);
- education and vocational training (e.g., AI systems used in evaluating students on tests required for university admission);
- migration, asylum, and border control management (e.g., polygraphs and similar tools used to detect the emotional state of a person);
- administration of justice and democratic processes (e.g., AI systems intended to assist judicial authorities in researching and interpreting facts and the law);
- employment, worker management, and access to self-employment (e.g., AI systems designed to be used to recruit, promote, or dismiss people);
- law enforcement (e.g., AI systems used for detection, investigation, or prosecution of criminal offenses);
- essential private services and public services (e.g., AI systems intended to be used to evaluate the creditworthiness of individuals).
According to the regulations, suppliers of high-risk AI systems will need to draw up detailed technical documentation to demonstrate that the system complies with the rules. This documentation should contain a general description of the AI system, its main elements, including clear instructions on its use and information about its operation, including accuracy indicators. In addition, all high-risk AI systems will need to be registered in the EU public database (which will be created by the European Commission in accordance with the Regulation).
If the AI system you develop and sell falls into the high-risk category, you will need to take the following steps:
- establish a risk management system that runs throughout the system’s lifecycle;
- ensure appropriate data governance and quality of the training, validation, and testing datasets;
- draw up the technical documentation and enable automatic logging of events;
- provide for transparency towards users and effective human oversight;
- pass the relevant conformity assessment procedure and affix the CE marking;
- register the system in the EU database for high-risk AI systems.
If you distribute or use a high-risk system, you will need to check whether it has CE marking. If not, it makes sense to contact the vendor and make sure they take all the steps described above so that the system is AI Act compliant and is authorized for use inside the EU.
Low-risk AI systems
When it comes to low-risk AI systems (e.g., email spam filters, chatbots, predictive maintenance systems, etc.), the requirements are less strict because such systems pose minimal risk to the rights and safety of citizens. However, certain transparency obligations still apply to these systems, so the vendor will need to:
- let users know they are interacting with an AI system and give them an option to opt out of using it;
- notify users if emotion recognition or biometric categorization systems are applied;
- apply labels to deep fakes or other manipulated content.
Penalties for non-compliance with the EU AI act
Penalties for non-compliance are determined by the Member States and will depend on each specific situation considering the following indicators:
- the nature, severity, and duration of the violation and its consequences;
- whether administrative fines have already been applied by other market authorities against the same provider for the same violation;
- the size and market share of those providers that violate the rules.
The strictest penalties apply to violations related to the use of AI systems with unacceptable risk (Articles 5-7) and amount to €30,000,000 or up to 6% of the total global annual turnover for the previous financial year, whichever is higher.
For violations other than those set out in Articles 5-7, the fines will amount to €20,000,000 or up to 4% of the total global annual turnover for the previous financial year, whichever is higher.
If a violation involves incorrect, incomplete, or misleading information provided to the competent authorities in response to a request, the fine will amount to €10,000,000 or up to 2% of the total global annual turnover for the previous financial year, whichever is higher.
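The "whichever is higher" rule in each tier is simple arithmetic: the applicable maximum is the greater of the fixed cap and the percentage of worldwide annual turnover. A minimal sketch, using the three tiers named above (the tier labels and function name are our own, not terms from the draft):

```python
def max_fine(fine_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Maximum administrative fine: the fixed cap or a percentage of global
    annual turnover for the previous financial year, whichever is higher."""
    return max(fine_cap_eur, turnover_pct * annual_turnover_eur)

# The three tiers described in the draft: (fixed cap in EUR, turnover share).
TIERS = {
    "prohibited_practices": (30_000_000, 0.06),   # Articles 5-7
    "other_violations": (20_000_000, 0.04),
    "misleading_information": (10_000_000, 0.02),
}

# E.g., a provider with EUR 1 billion global turnover breaching a prohibition:
cap, pct = TIERS["prohibited_practices"]
print(max_fine(cap, pct, 1_000_000_000))  # prints 60000000.0 (6% of turnover exceeds the 30M cap)
```

For a smaller company, say with €100 million turnover, 6% is only €6 million, so the fixed €30 million cap becomes the applicable maximum instead.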
As you can see, the penalties will be quite high, so it definitely makes sense to invest in compliance-related activities and double-check that the systems you use conform to the regulations.
What does it mean to be AI-act compliant?
It is important to remember that the AI Act is still at the approval stage, so additional changes and clarifications will certainly be made. Therefore, it is crucial to track these changes and the evolving wording, instructions, and interpretations. At the same time, we recommend starting the corresponding preparations in advance so that you can go through the compliance procedure as early as possible.
You will need to take the following steps as part of the preparations:
- Analyze the AI projects you are involved in and understand which of your projects fall within the AI definition given in the act.
- Determine the risk level of your AI projects (unacceptable, high, low, or minimal); Annex III of the Regulation lists the high-risk application areas.
- Define your obligations and assess how and to what extent you will need to revise your development procedures.
- Thoroughly plan the changes remembering that you will have a two-year transition period.
The EU's draft AI regulation is a necessary step that fits naturally into the evolution of AI technologies. As these technologies penetrate deeper into our daily lives, some form of regulation is inevitable and essential for safeguarding fundamental rights.
The draft regulation looks rather sketchy so far and many details still need additional elaboration. Nevertheless, the proposed framework looks quite solid and despite much discussion about the impact it will have on the AI market, we believe the changes needed are not as drastic as they may seem.
There are very few AI systems of unacceptable risk, and questions about the ethical side of such applications are highly relevant. As for high-risk systems, the additional regulations and requirements may complicate the development process. However, if we look at the requirements presented in the document, they are nothing new and correlate strongly with standard software development best practices that companies should follow anyway to ensure the quality and transparency of the software they develop.
We believe that instead of cutting down on AI development, organizations should develop a risk and compliance framework that will allow their companies to innovate and deploy AI efficiently and quickly.
Our tech-savvy team has been working with AI technologies for 5 years. So, if you need to create smart AI-enabled solutions that solve business problems and are well-documented, do not hesitate to check our ML and AI services. If you have any additional questions or would like to receive support regarding AI act implementation, we will be happy to help, just contact us.