Friday, December 8, 2023, may go down in history as the day the European Union answered the question: how do we regulate AI?
The three EU institutions, the Parliament, the Council, and the Commission, have reached a provisional agreement on the AI Act, setting the stage to regulate specific applications of the technology and to mandate transparency from businesses. However, despite concerns voiced by some global leaders, it remains uncertain exactly which changes AI companies will be required to make.
The EU AI Act aims to guarantee the safety of AI systems and to establish legal clarity for investment and innovation in AI, while minimizing risks to consumers and reducing compliance costs for businesses. Central to the Act is a risk-based approach that sorts AI systems into four risk classes covering different use cases. Certain AI systems are outright prohibited, with limited exceptions, while the legislation imposes specific responsibilities on providers and users of high-risk AI systems, including testing, documentation, transparency, and notification requirements.
What Does the Act Entail?
The EU AI Act adopts a risk-based approach, in which the level of risk determines the stringency of the regulation. The legislation imposes obligations on AI systems according to their potential risks and their impact on individuals and society, distinguishing between systems posing limited risk and those posing high risk, and prohibiting certain AI systems outright.
AI systems with limited risk will be subject to transparency requirements to ensure users are informed about their interaction with such systems.
AI systems deemed high-risk because of their significant potential to harm health, safety, fundamental rights, the environment, or democracy will be subject to:
- Mandatory fundamental rights impact assessments.
- Conformity assessments.
- Data governance requirements.
- Registration in an EU database.
- Adherence to risk management and quality management systems.
High-risk AI systems include specific medical devices, recruitment tools, HR and worker management tools, and critical infrastructure management (e.g., water, gas, electricity).
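The tiered structure described above can be sketched as a simple lookup from risk class to obligations. This is purely a hypothetical illustration of the Act's risk-based logic; the tier names and obligation strings are simplified from the text above, not the Act's actual legal criteria.

```python
# Hypothetical sketch of the AI Act's risk-based approach.
# Tier names and obligation lists are simplified illustrations,
# not legal definitions from the regulation.

RISK_TIERS = {
    "prohibited": ["banned outright, with limited exceptions"],
    "high": [
        "fundamental rights impact assessment",
        "conformity assessment",
        "data governance requirements",
        "registration in an EU database",
        "risk management and quality management systems",
    ],
    "limited": ["transparency: inform users they are interacting with AI"],
    "minimal": [],  # no specific obligations under this sketch
}


def obligations_for(tier: str) -> list[str]:
    """Return the (illustrative) obligations for a given risk tier."""
    return RISK_TIERS[tier]


# Example: a recruitment tool would likely fall in the high-risk tier.
print(obligations_for("high"))
```

The point of the sketch is that obligations attach to the tier, not to the individual system: classifying a use case (e.g., worker management) is the hard legal question, after which the compliance workload follows mechanically.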
What Does This Mean for You?
While awaiting the formal adoption and full applicability of the AI Act, organizations that use AI systems should proactively address its potential impact. This involves mapping your processes and evaluating how well your AI systems comply with the newly established rules. The AI Act sets out ethical and regulatory principles you must adhere to when deploying AI, a crucial step toward closing gaps in existing regulation.
Implement a policy framework to ensure that only compliant developers are onboarded and that model deployments align with the regulations. Properly identifying and mitigating risks requires thorough monitoring and supervision across the entire AI system lifecycle. Essential measures, such as internal training and market surveillance, should be put in place; these can build on existing risk management processes, particularly data protection risk assessments, vendor due diligence, and audits.
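One way to operationalize lifecycle-wide monitoring is a per-stage compliance checklist. The sketch below is a hypothetical structure, not anything prescribed by the Act; the stage and measure names are assumptions drawn from the paragraph above.

```python
from dataclasses import dataclass, field

# Hypothetical compliance checklist across the AI system lifecycle.
# Stage and measure names are illustrative assumptions, not the Act's terms.


@dataclass
class LifecycleStage:
    name: str
    measures: list[str]                       # required measures for this stage
    completed: set[str] = field(default_factory=set)

    def complete(self, measure: str) -> None:
        """Mark a required measure as done; reject unknown measures."""
        if measure not in self.measures:
            raise ValueError(f"unknown measure: {measure}")
        self.completed.add(measure)

    def outstanding(self) -> list[str]:
        """List measures not yet completed, in their original order."""
        return [m for m in self.measures if m not in self.completed]


# Example stage reusing existing risk management processes.
onboarding = LifecycleStage(
    "vendor onboarding",
    ["vendor due diligence", "data protection risk assessment"],
)
onboarding.complete("vendor due diligence")
print(onboarding.outstanding())
```

Tracking completion per stage rather than per system makes gaps visible early, which mirrors the article's point that supervision must span the whole lifecycle, not just deployment.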
Lastly, take a global perspective. While the EU AI Act takes a leading role, it will not stand alone as the only international legislation addressing AI risks and fostering trust. Your global strategy must incorporate the fundamental principles outlined in the EU AI Act while anticipating the evolution of other regulatory regimes.
Conclusion
The EU AI Act holds the promise of shaping a better tomorrow by establishing comprehensive rules to govern the deployment of AI. In the broader global context, however, it is a pivotal step rather than the sole solution: a truly effective strategy must remain adaptable to evolving regulatory landscapes worldwide. The Act represents a significant stride toward a more responsible and trustworthy AI future, laying the groundwork for ethical innovation and for safeguarding societal values.