World’s First Major AI Law Takes Effect: Implications for U.S. Tech Giants
Overview
The European Union’s landmark AI Act, a pioneering regulation governing the development and use of artificial intelligence, has received final approval from EU member states, lawmakers, and the European Commission. Four years in the making, the legislation officially enters into force on Thursday. CNBC explores the key aspects of the AI Act and its impact on major technology companies globally.
The AI Act, first proposed by the European Commission in 2020, seeks to address the potential harms of AI by establishing a comprehensive, harmonized regulatory framework for the technology across the EU. Although the Act primarily targets large U.S. tech firms, currently the key players in AI development, it will also affect many other businesses, including those outside the tech sector.
Tanguy Van Overstraeten, head of the technology, media, and telecommunications practice at Linklaters in Brussels, calls the AI Act “the first of its kind in the world.” He emphasizes that it will impact many businesses, especially those developing AI systems, but also those deploying or, in certain circumstances, merely using them.
The Act adopts a risk-based approach, categorizing AI applications based on their potential societal risks. High-risk AI systems—such as autonomous vehicles, medical devices, and biometric identification technologies—will face rigorous requirements, including risk assessments, high-quality training datasets, routine activity logging, and mandatory documentation sharing with authorities.
Additionally, the Act bans AI applications deemed to pose “unacceptable” risk, including social scoring systems, predictive policing, and emotion recognition technologies in workplaces or educational institutions.
Major U.S. tech companies like Microsoft, Google, Amazon, Apple, and Meta, which have heavily invested in AI, are expected to be significantly impacted by the new regulations. Their cloud platforms—such as Microsoft Azure, Amazon Web Services, and Google Cloud—are crucial for AI development due to their substantial computing resources.
Given their extensive involvement in AI, these tech giants will face intense scrutiny under the new rules. Charlie Thompson, Senior VP of EMEA and LATAM at enterprise software firm Appian, notes that the AI Act’s influence extends beyond the EU, impacting any organization with operations or effects within the region. This means U.S. tech firms will encounter heightened scrutiny regarding their EU operations and the handling of EU citizen data.
Meta has already restricted the availability of its AI models in Europe due to regulatory uncertainty. Earlier this month, the company announced it would not offer its LLaMa models in the EU, citing doubts about whether doing so would comply with the EU’s General Data Protection Regulation (GDPR). The move followed an earlier order from EU regulators for Meta to stop training its models on Facebook and Instagram posts in the region over GDPR concerns.
Eric Loeb, EVP of Government Affairs at Salesforce, suggests that other governments might adopt the EU’s AI Act as a model for their own regulations. He highlights that Europe’s risk-based framework fosters innovation while ensuring safe technology development and deployment.
Generative AI is categorized as “general-purpose” AI under the AI Act, referring to systems designed to perform a wide range of tasks at or above human capability. This includes models such as OpenAI’s GPT, Google’s Gemini, and Anthropic’s Claude. The Act imposes strict requirements on these systems, including compliance with EU copyright laws, transparency in training processes, and robust cybersecurity measures.
Open-source generative AI models, such as Meta’s LLaMa and Stability AI’s Stable Diffusion, are eligible for certain exemptions. To qualify, a model’s parameters and architecture must be made publicly available, with open access, use, modification, and distribution permitted. However, open-source models that pose “systemic” risks do not qualify.
Violations of the AI Act can draw fines ranging from €7.5 million or 1.5% of global annual revenue to €35 million ($41 million) or 7% of global annual revenue, in each case whichever amount is higher, depending on the infringement. These ceilings exceed those under the GDPR, which caps fines at €20 million or 4% of annual global turnover.
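To make the “whichever is higher” rule concrete, the following is a minimal illustrative sketch, not anything drawn from the legislation itself; the tier labels and function name are invented for clarity, and the two tiers are simply the figures quoted above.

    # Illustrative sketch only: the two penalty tiers quoted above,
    # with the cap set to whichever figure is higher.
    def max_fine_eur(annual_revenue_eur, tier="prohibited_practices"):
        # Tier labels are informal shorthand, not official AI Act terms.
        tiers = {
            "prohibited_practices": (35_000_000, 0.07),   # €35M or 7%
            "lesser_infringements": (7_500_000, 0.015),   # €7.5M or 1.5%
        }
        flat_cap, pct = tiers[tier]
        return max(flat_cap, pct * annual_revenue_eur)

    # A firm with €1 billion in turnover: 7% (€70M) exceeds the €35M floor.
    print(max_fine_eur(1_000_000_000))                          # 70000000.0
    # A smaller firm: the €7.5M floor exceeds 1.5% of €100M (€1.5M).
    print(max_fine_eur(100_000_000, "lesser_infringements"))    # 7500000.0

As the example shows, the percentage cap dominates for large firms, while the flat floor keeps penalties meaningful for smaller ones.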
The European AI Office, a regulatory body established in February 2024, will oversee AI model compliance. Jamil Jiva, global head of asset management at fintech firm Linedata, notes that substantial fines are intended to enforce adherence, similar to how GDPR sets global standards for data privacy.
Although the AI Act is now in force, most of its provisions will not take effect until at least 2026. Restrictions on general-purpose AI systems will kick in 12 months after entry into force, while generative AI systems already on the market, such as OpenAI’s ChatGPT and Google’s Gemini, get a 36-month transition period to achieve compliance.