AI Act Set to Come into Force on 1 August 2024
The countdown to compliance with the Artificial Intelligence Act (the “AI Act”) has begun. Signed on June 13, 2024, the AI Act was published in the Official Journal of the European Union on July 12, 2024, and will enter into force on August 1, 2024.
Background
The AI Act establishes a legal framework aimed at achieving human-centric AI, protecting health, safety, and fundamental rights from the harmful effects of AI, while promoting innovation.
Scope of the AI Act
The AI Act applies to all stakeholders in the AI value chain, including providers of AI systems (such as providers of general-purpose AI, or “GPAI”), users (referred to in the Act as “deployers”), importers, distributors, product manufacturers, and authorized representatives. Exemptions apply to AI systems used exclusively for military, defense, or national security purposes, to AI developed solely for scientific research, and to certain international law enforcement cooperation, provided adequate fundamental rights safeguards are in place.
Extra-Territorial Scope
The AI Act has extra-territorial reach, affecting organizations both inside and outside the EU. It applies to entities placing AI systems or GPAI models on the EU market, regardless of where they are established, and to providers and users located outside the EU where the output produced by the AI system is used within the EU. Providers established outside the EU must appoint an EU-based authorized representative.
Risk Categories
The AI Act adopts a risk-based approach, with regulations varying based on the severity and likelihood of harm:
- Prohibited: AI systems for social scoring, cognitive behavioral manipulation, and biometric categorization inferring sensitive characteristics.
- High: AI in employment, credit decisions, health/life insurance risk assessment.
- GPAI: General-purpose AI models, such as the large language models underlying ChatGPT.
- Limited: Chatbots.
- Minimal: Spam filters, video games.
High Risk Providers
High-risk AI system providers must comply with a range of obligations, including:
- Risk management systems
- Data governance
- Technical documentation
- Record-keeping
- Transparency
- Human oversight
- Accuracy, robustness, and cybersecurity
- Quality management systems
- Documentation and log generation
- Cooperation with authorities
- Affixing the CE marking
- Registering in the EU database
GPAI Providers
GPAI providers must prepare technical documentation, put in place copyright policies, and publish a sufficiently detailed summary of the content used to train their models. They may adhere to voluntary codes of practice to demonstrate compliance. Providers of GPAI models posing systemic risk must additionally carry out model evaluations, ongoing risk assessment and mitigation, and incident reporting.
User Obligations
AI users have fewer obligations but must ensure a sufficient level of AI literacy among their staff. Users of high-risk AI systems must implement appropriate technical and organizational measures, human oversight, and monitoring, and carry out data protection impact assessments where required. Transparency rules apply to AI systems that generate deep fakes or involve emotion recognition.
Enforcement
The EU AI Office will oversee the AI Act’s implementation and enforce the rules for GPAI models, supported by the AI Board. National supervisory authorities will oversee enforcement for AI systems in each Member State, and each Member State must designate a public authority to supervise the protection of fundamental rights.
Fines
The AI Act imposes significant fines:
- Up to €35 million or 7% of annual worldwide turnover, whichever is higher, for breaches of the prohibited AI provisions.
- Up to €15 million or 3% of annual worldwide turnover, whichever is higher, for most other breaches.
- For SMEs, fines will take economic viability into account, applying the lower of the applicable percentage or fixed amount.
SME Support
Special provisions help SMEs boost innovation:
- Priority access to AI regulatory sandboxes free of charge.
- Tailored training on the AI Act.
- Information and templates for documentation.
- Simplified technical documentation for high-risk AI system providers.
Timeline
Key dates for compliance:
- November 2, 2024: Member States must identify and notify the Commission of the national public authorities supervising fundamental rights.
- February 2, 2025: The provisions on scope, definitions, and prohibited AI systems apply.
- August 2, 2025: The provisions on GPAI, penalties, and EU-level governance apply.
- August 2, 2026: The majority of the remaining provisions, including those for high-risk AI systems listed in Annex III, apply.
- August 2, 2027: The provisions for high-risk AI systems that are safety components of products regulated under Annex I apply.
Future Developments
The AI Act is part of the EU’s broader legal approach to AI, alongside the proposed AI Liability Directive, which addresses procedural rules for civil claims involving AI, and the revised Product Liability Directive, which covers compensation for damage caused by defective products, including AI systems.
What to Do Now
Organizations should proactively:
- Identify AI used in the business and the applicable risk category.
- Implement an AI governance framework with policies, staff training, and vendor due diligence.
- Communicate compliance measures to stakeholders.
Developing an AI compliance program is time-consuming, and businesses must start early to meet the deadlines. Detailed guidance will take months to emerge, so a risk-based approach and benchmarking against industry practices are essential in the meantime.
If you have any questions, please do not hesitate to contact us for further professional assistance.
Disclaimer: The information contained in this article is provided for informational purposes only, and should not be construed as legal advice on any matter. Andria Papageorgiou Law Firm is not responsible for any actions (or lack thereof) taken as a result of relying on or in any way using information contained in this article and in no event shall be liable for any damages resulting from reliance on or use of this information.