Amazon, Tech Interests Concerned With EU AI Act as Compliance Looms
With Europe's AI Act now law, all companies that provide or use AI systems and do business in the EU must begin considering compliance. The measure is rankling the U.S. business sector and government. One major compliance sticking point is a data-quality requirement aimed at rooting out systemic bias, said a European IT attorney.
The EU AI law went into effect Aug. 1 and will fully apply in two years, according to a European Commission Q&A, with some exceptions: Some provisions become effective after six months; rules for general-purpose AI models apply after 12 months; and rules for AI systems embedded in regulated products come into force after 36 months. Noncompliance carries a top fine of up to $39 million (35 million euros) or 7% of a company's total worldwide revenue from the preceding financial year, whichever is higher.
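To make the penalty ceiling concrete, here is a minimal Python sketch of the "whichever is higher" rule; the function name and the revenue figure in the example are hypothetical, while the 35 million euro floor and 7% share are the figures reported above.

```python
def max_ai_act_fine(worldwide_revenue_eur: float) -> float:
    """Illustrative sketch: the Act's top fine is the higher of a flat
    35 million euros or 7% of prior-year worldwide revenue."""
    FLAT_CEILING_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FLAT_CEILING_EUR, REVENUE_SHARE * worldwide_revenue_eur)

# Hypothetical example: a firm with 1 billion euros in annual revenue
# faces a ceiling of max(35M, 70M) = 70 million euros.
print(max_ai_act_fine(1_000_000_000))  # 70000000.0
```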
The measure takes a risk-based approach to compliance, dividing AI systems into four buckets: unacceptable risk, high risk, specific transparency risk and minimal risk.
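As rough shorthand for that taxonomy, the enum below is a hypothetical illustration, not anything defined by the Act itself; the parenthetical notes draw on the descriptions elsewhere in this article.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """Hypothetical labels for the Act's four risk buckets."""
    UNACCEPTABLE = "unacceptable risk"           # banned outright (e.g., social scoring)
    HIGH = "high risk"                           # strictest compliance requirements
    TRANSPARENCY = "specific transparency risk"  # disclosure obligations
    MINIMAL = "minimal risk"                     # where most applications fall
```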
Before becoming law, the act encountered strong pushback from many businesses, especially AI startups, and subsequently was watered down, said Simmons & Simmons IT attorney Christopher Goetz in an interview.
An Amazon executive recently criticized what he called the EU's blunt, vague and burdensome approach to tech regulation. Given uncertainty surrounding the benefits and risks of the technology, regulators should take a more flexible approach, Amazon Vice President Andrew DeVore said during the Technology Policy Institute Aspen Forum last week. Any “benefits” from regulation like the AI Act are “at best uncertain,” and previous regulations have already created negative consequences for regional tech investment, he added.
The head of the U.S. Copyright Office spoke separately about the risks of “regulatory arbitrage” associated with the EU AI Act, in which companies will seek favorable oversight regimes. Register of Copyrights Shira Perlmutter highlighted the Act’s requirement that companies train AI tools in ways consistent with EU law. The EU is trying to put its "stamp on how AI is developed and trained in other countries of the world,” she said.
The AI Act includes intellectual property protections and transparency requirements for AI model training data. Perlmutter said the Copyright Office and jurisdictions around the world are grappling with questions about fair use when companies employ copyrighted material to train AI models. Copyright owners say it is unfair for AI developers to take original, labor-intensive work and use it as “raw material” for machine learning, she said. But tech companies and public interest groups say the technology can work only if it has access to large amounts of raw material, she said: It’s a difficult balance.
Companies using or planning to use AI must first determine if they're subject to the act, Goetz noted. Many companies' AI applications will fall into the minimal risk bucket, while those that want to use AI for such things as social scoring are barred from doing so. It's the high-risk category, with its extremely rigid requirements, that is trickiest, he said.
Businesses already subject to other regulations, such as safety rules for the medical sector, must add an AI Act compliance layer on top, Goetz said. In less clear-cut situations, the use case will determine whether an AI application is high risk.
Existing AI systems are already regulated, Goetz said: Because they use massive amounts of data, they must comply with the EU's General Data Protection Regulation (GDPR). In addition, certain copyright laws and other requirements apply even if such systems are not considered high risk under the EU classification. Companies must now determine what their systems are and what they do, and whether the company is a provider or a user (deployer) of AI; the burden is higher on the former, he said.
Firms that provide AI are "going nuts" because the AI Act requires them to monitor the quality of their data for the entire life cycle of a product to avoid systemic bias, Goetz said. Data quality could be an obstacle to compliance, especially for startups, he said. But because most companies globally want to do business in the EU, they want to get this right, he added.
Some U.S. companies claim the AI Act stifles competition, the same argument they raised against the GDPR, Goetz said. But "everyone will adjust," and compliance will likely come to be seen as a marketing tool.
Some U.S. companies are monitoring the act's implementation, but others aren't very interested, German data protection attorney Axel Spies wrote in an email. That could change when the actual rules become available and real enforcement and fines kick in. Since the GDPR already fully applies to AI, those now taking an interest in the AI Act are focusing on the positions European data protection agencies will adopt on AI applications, Spies added.