EU AI Act: New challenges for U.S. tech giants

The European Union’s groundbreaking AI Act, the world’s first major law regulating artificial intelligence, has officially come into force. This law introduces a comprehensive regulatory framework for AI across the EU, significantly impacting major U.S. technology companies.

What is the AI Act?

The AI Act is the EU's legislative framework for regulating the development, deployment, and use of AI technologies. It aims to address AI's potential harms, particularly to privacy, security, and fairness. The law takes a risk-based approach, sorting AI applications into tiers that range from minimal to unacceptable risk. The two most consequential tiers are:

  • High-Risk AI: Covers systems such as those used in autonomous vehicles and medical devices, as well as remote biometric identification. These require stringent safeguards, including risk assessments, high-quality data sets to reduce bias, and regular compliance checks.
  • Unacceptable-Risk AI: Bans certain applications outright, such as social scoring systems and predictive policing.

Implications for U.S. Tech Firms

The AI Act's requirements will fall most heavily on large U.S. tech companies, including Microsoft, Google, Amazon, Apple, and Meta, all of which are heavily invested in AI development and have substantial operations in the EU.

  • Compliance Requirements: Obligations include ensuring transparency in how AI systems operate, adhering to EU copyright law, and maintaining robust cybersecurity measures.
  • Fines for Non-Compliance: Violations can draw fines of up to 35 million euros or 7% of global annual revenue, whichever is higher. That ceiling exceeds even the one set by the EU's General Data Protection Regulation (GDPR), which tops out at 20 million euros or 4% of global annual turnover; the sketch below illustrates the gap.
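To put those ceilings in perspective, here is a minimal Python sketch comparing the two caps for a hypothetical company. Only the 35 million / 7% and 20 million / 4% figures come from the two laws; the revenue number and the max_fine helper are purely illustrative.

```python
def max_fine(revenue_eur: float, flat_cap: float, revenue_share: float) -> float:
    """Maximum fine: a flat cap or a share of global annual revenue,
    whichever is higher (the structure both laws use)."""
    return max(flat_cap, revenue_share * revenue_eur)

# Hypothetical company with 200 billion euros in global annual revenue.
revenue = 200e9

ai_act_cap = max_fine(revenue, flat_cap=35e6, revenue_share=0.07)  # 14 billion euros
gdpr_cap = max_fine(revenue, flat_cap=20e6, revenue_share=0.04)    # 8 billion euros

print(f"AI Act maximum fine: EUR {ai_act_cap:,.0f}")
print(f"GDPR maximum fine:   EUR {gdpr_cap:,.0f}")
```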

Meta has already moved to limit the availability of some of its AI models in Europe, citing regulatory concerns, an early sign of the new law's practical impact.

Special Provisions for Generative AI

Generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, fall under the Act’s rules for “general-purpose” AI. These systems must comply with specific obligations, including:

  • Transparency Requirements: Providers must document how their models are trained, including publishing summaries of the content used for training.
  • Open-Source Considerations: Open-source AI models, which are freely available for public use, are treated more leniently. To qualify for regulatory exemptions, a model must be genuinely open, with its parameters, architecture, and usage information made publicly available.

Even then, these exemptions do not apply to open-source models deemed to pose systemic risks.

Looking Ahead

While the AI Act has entered into force, most of its provisions will not apply until August 2026, though the bans on unacceptable-risk systems and the rules for general-purpose models phase in earlier. This gives companies a transitional period to align their practices with the new rules. The European AI Office, a newly established EU body, will oversee compliance, ensuring that AI technologies used in the EU adhere to the law’s stringent standards.

As with the GDPR, the EU aims to set a global standard for AI regulation, influencing how companies worldwide approach AI ethics and governance. The next few years will be crucial as businesses adapt to this new regulatory landscape.