Tech companies are buying small AI startups without antitrust scrutiny, which could have long-term negative impacts on consumers, Public Knowledge said Monday in comments to the FTC and DOJ. Tech associations argued that empirical evidence shows no competition concerns in the sector and said antitrust enforcers should rely on statistics, not conjecture. DOJ and the FTC on Friday closed public comment on their inquiry into “serial acquisitions and roll-up strategies” that they believe harm competition.

Public Knowledge, in joint comments with the Responsible Online Commerce Coalition, cited strategic investments by companies like Microsoft, Google and Amazon. In recent years, these companies have purchased hundreds of small tech startups, including those offering AI services, and the deals are often so small they don’t trigger antitrust review. “This has allowed Big Tech to shape numerous digital markets and expand their dominance unchallenged,” said PK. Tech companies already enjoy dominant positions in their respective markets, but purchasing AI companies further entrenches that dominance, said PK: “The lack of competition in technology ecosystems can lead to stagnation in innovation and service improvement and presents significant hurdles for consumers seeking to explore different products.”

The Computer & Communications Industry Association said in comments that enforcers failed to show “how and why these business strategies raise particular competitive concerns.” The agencies’ annual Hart-Scott-Rodino report for fiscal 2022 showed enforcers don’t identify competition concerns in “most notified mergers.” The agencies requested additional information on 47 of the 3,029 notified merger transactions in the report, or fewer than 2% of the deals, said CCIA.
NetChoice urged enforcers to keep their focus on “demonstrable consumer harm rather than abstract structural concerns or protection of competitors.” The association recommended the agencies rely on “grounded analysis in rigorous economic evidence rather than anecdotes or political considerations.”
FCC Chairwoman Jessica Rosenworcel will visit the University of California, Berkeley Law School Sept. 27 to address the Berkeley Law AI Institute, the agency said. The FCC has made AI a top focus under Rosenworcel (see 2404040040). The event starts at noon.
The Irish Data Protection Commission is investigating whether Google performed a required assessment before it started processing personal data of EU and European Economic Area (EEA) citizens for its AI model Pathways Language Model 2 (PaLM 2). Under the country's data protection act, assessments can be required to ensure that people's rights are protected when data processing will likely result in a high risk, the DPC said. The cross-border inquiry is part of a wider effort by the DPC and its EU counterparts to regulate personal data processing as AI models and systems develop, it said. A Google spokesperson, in an email, said the company takes "seriously our obligations under the [EU general data protection regulation] and will work constructively with the DPC to answer their questions." Earlier this month, the privacy watchdog announced that X permanently agreed to stop using personal data in public posts of EU/EEA users to train its AI tool Grok (see 2409040001).
The U.S. is among the first 10 signers of a Council of Europe treaty on AI, said the 46-member organization that promotes democracy, the rule of law and human rights. The agreement is a legal framework covering the entire lifecycle of AI systems and applies to public authorities and private actors, the CoE said. Among other things, it requires signers to ensure that AI systems comply with fundamental principles such as respect for privacy and personal data protection. It requires risk and impact management assessments to ensure that AI systems protect rights, along with prevention and mitigation measures. Moreover, it gives authorities power to introduce bans on some AI applications. Signers must also ensure that remedies, safeguards and procedures are in place for challenging AI systems. The treaty will take effect three months after the date on which five signatories, including at least three CoE members, have ratified it. Signers so far include seven CoE members, among them the U.K., two nonmembers (the U.S. and Israel) and one international organization (the EU).
The Cybersecurity and Infrastructure Security Agency named its first chief artificial intelligence officer Thursday. The agency promoted Lisa Einstein, a senior adviser on AI for the past year. In addition, Einstein served as CISA Cybersecurity Advisory Committee executive director in 2022.
The federal government shouldn’t impose immediate restrictions on the “wide availability of open model weights in the largest AI systems,” the NTIA said Tuesday (see 2402210041 and 2404010067). Model weights are the core numerical parameters an AI system learns during training. Open-weight models make those weights publicly available, while closed models keep them private. NTIA gathered public comment on the benefits and risks of open and closed models in response to President Joe Biden’s executive order on AI.

Current evidence isn’t “sufficient to definitively determine either that restrictions on such open-weight models are warranted, or that restrictions will never be appropriate in the future,” NTIA said in its Report on Dual-Use Foundation Models with Widely Available Model Weights. The agency recommended the federal government “actively monitor a portfolio of risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.” However, NTIA laid out possible restrictions for the technology, including a ban on “wide distribution” of model weights, “controls on the exports of widely available model weights,” a licensing framework for access to models, and limits on access to application programming interfaces and web interfaces.

NTIA noted that restrictions on open public model weights “would impede transparency into advanced AI models.” Model weight restrictions could limit “collaborative efforts to understand and improve AI systems and slow progress in critical areas of research,” the agency said. Open Technology Institute Policy Director Prem Trivedi said in a statement Tuesday that NTIA is correct in recommending the rigorous collection and evaluation of empirical evidence, calling it the right starting point for policymaking on the issue.
The private sector is largely responsible for the U.S. maintaining a lead over China in R&D investment, particularly in AI technology, White House Office of Science and Technology Policy Director Arati Prabhakar said Tuesday. Speaking at a Brookings Institution event, she said China is seeing unprecedented increases in R&D spending, but the U.S. remains ahead. She cited the most recent statistics, from 2021, a year in which the U.S. spent $800 billion on R&D across the public and private sectors. The U.S. is spending 3.5% of its gross domestic product on R&D, “which is terrific,” Prabhakar said. She noted the federal government spends about $3 billion to $4 billion on AI R&D annually, which is “pretty modest” compared with the private sector.
Apple signed President Joe Biden’s voluntary commitment to ensure AI develops safely and securely, the White House announced Friday. Apple joins Amazon, Google, Meta, Microsoft, OpenAI, Adobe, IBM, Nvidia and several other companies in supporting the plan. Companies initially signed in July 2023 (see 2307210043). They agreed to internal and external security testing and sharing information with industry, government and researchers to ensure products are safe before they’re released to the public.
AI poses potential competition challenges and countries must work together to address them, DOJ, the FTC, the European Commission and the U.K. Competition and Markets Authority said in a joint statement Tuesday. “We are working to share an understanding of the issues as appropriate and are committed to using our respective powers where appropriate,” they said. There are risks companies “may attempt to restrict key inputs for the development of AI technologies” and those “with existing market power in digital markets could entrench or extend that power in adjacent AI markets or across ecosystems,” the entities said. Lack of choice for content creators among buyers “could enable the exercise of monopsony power,” they said: AI also “may be developed or wielded in ways that harm consumers, entrepreneurs, or other market participants.” FCC commissioners are expected to vote at their Aug. 7 open meeting on an NPRM examining consumer protections against AI-generated robocalls (see 2407170055).
The generative AI marketplace is “diverse and vibrant,” and there are no “immediate signs” of competition issues related to market entry, the Computer & Communications Industry Association told DOJ in comments that were due Monday. DOJ requested comments on AI marketplace competition. Previously, the department declined to release AI-related comments publicly (see 2405310039). “There are several new entrants present with diversified business models and products, with more entering the market every week, showing how there are no evident signs of competitive problems,” said CCIA. If competition concerns fall outside the scope of antitrust law in the future, it “would then be appropriate to consider new laws or regulations that focus on addressing real problems that the current framework cannot reach,” the association said.