As a result, the EU is proposing its first ever four-tiered legal framework on AI. Each tier aims to set proportionate requirements and obligations for providers and users of AI systems, according to the range of potential risks to health, safety and fundamental rights. The Draft Regulation will apply to providers, users, importers and distributors of AI systems (whether the providers are established within the EU or in a third country), as well as to providers and users of AI systems located in a third country where the output produced by the system is used in the EU.
the draft regulation
The European Commission is to identify and categorise four levels of risk in AI, before moving on to set obligations and recommendations for AI systems that fall under specific risk categories, as outlined below:
Minimal or no risk: refers to AI-enabled video games or spam filters; the use of these AI systems will be allowed in the EU.
Limited risk: refers to AI systems subject to specific transparency obligations, such as chatbots, where users must be made aware that they are interacting with a machine.
High risk: refers to employment, management of workers and access to self-employment (CV sorting software), law enforcement (evaluation of the reliability of evidence), migration, asylum, and border control management (verification of authenticity of travel documents).
Unacceptable risk: refers to AI systems considered a threat to society and livelihoods, e.g. social scoring by governments and toys using voice assistance that encourages dangerous behaviour. These AI systems will be prohibited in the EU.
High risk AI systems will be subject to strict new obligations including, amongst other things: adequate risk assessment and mitigation systems; high quality datasets feeding the system, to minimise risks and discriminatory outcomes; logging of activity to ensure traceability of results; detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance; clear and adequate information to the user; appropriate human oversight measures to minimise risk; and a high level of robustness, security and accuracy.
For providers of high risk AI systems, the obligations work in practice through a four-step process: (1) the high-risk AI system is developed; (2) the high-risk AI system undergoes a conformity assessment (an algorithmic impact assessment that analyses data sets, biases, how users interact with the system, and the overall design and monitoring of system outputs); (3) the AI system is registered in the EU; and (4) the system is placed on the market. If substantial changes occur during the AI system's lifecycle, step (2) is repeated.
important considerations
The Draft Regulation, once effective, will apply to providers, users, importers and distributors of AI systems. Those that make AI available within the EU, use AI within the EU, or whose AI outputs affect people in the EU will become subject to the regulations, wherever they are based. Such organisations must therefore consider the extraterritorial reach of the regulations and how they will affect the way business is carried out, both to avoid substantial fines and penalties and to ensure the safety and fundamental rights of their clients. C&Co works with a variety of investment management platforms, including technology providers servicing the industry, and is in a unique position to help in this space.