Artificial intelligence, and the way it affects the law and legal enforcement, is top of mind for state attorneys general. Simply put, AI is not limited to the tech industry. AI is likely to alter, and perhaps revolutionize, every sector of industry, and within the last few months it has scaled rapidly in many sectors. Firms continue to implement software strategies and hire additional employees specializing in software development. The rise of generative AI is already creating advantages and economies of scale for larger firms, but alongside those advantages, enforcers such as attorneys general offices appear likely to raise concerns. State attorneys general are mindful of the advantageous uses that both firms and their own offices may employ, and cautious of consumer protection issues and potential unfair or deceptive business practices that may develop, whether accidentally or intentionally.
To stay abreast of the thinking of state attorneys general offices, Crowell & Moring attended the National Association of Attorneys General Eastern Region Meeting earlier this month. The conference was hosted by Connecticut Attorney General William Tong, Eastern Regional Chair, and the Attorney General Alliance. Overall, the conference focused on gaming and antitrust, with sessions discussing online gaming and sports betting, as well as multistate antitrust and consumer protection enforcement. But the conference would not have been complete without the attorneys general’s discussion of AI, a topic on the minds of top enforcement officers across the United States. The artificial intelligence session, titled “AI – The Challenge of the Intelligent Computer,” explored the use and sophistication of AI and the potential competitive harms that may affect antitrust, consumer protection, and privacy enforcement.
State attorneys general offices remain cognizant of the many uses of AI in all aspects of the economy, many quite positive. For example, artificial intelligence may provide additional avenues of enforcement to state attorneys general offices. New York Attorney General James commented that advances in AI will allow attorneys general to analyze markets from a social and labor issues perspective. Further, state attorneys general offices discussed that they may use AI to complete in-depth research, monitor social media posts, or launch antitrust-related investigations. The panel participants acknowledged that many current uses are neither anti-consumer nor anticompetitive, and that the extent of their effects is unknown to state attorneys general offices at the moment. In fact, such uses could be both pro-consumer and procompetitive, to the extent firms use AI to share data, create or provide access to open source models or modeling, and even to sell or license patents.
However, the attorneys general noted that advances in software have at times had negative results for consumers. The panel discussed how complex software, including generative AI, provides opportunities for businesses to deceive consumers and evade regulators. Complexities in software and software design may also leave regulators unable to effectively monitor products or firms.
The potential for negative consumer outcomes from uses of generative AI keeps attorneys general positioned to monitor developments. Thus, we expect state attorneys general to closely monitor whether generative AI leads to unfair or deceptive business practices. As firms take advantage of the benefits of AI, they should ensure that they do not engage in unfair methods of competition. The panel discussed collusion as an example: if generative AI systems are communicating independently across firms, generative AI may allow for price monitoring and algorithmic matching. This use of AI could be procompetitive and lower prices for consumers, but if the AI is communicating with other firms’ technologies or taking into consideration other firms’ data, regulators may question whether the communications constitute an anticompetitive collusive practice. The panel discussed how these types of technologies thus allow for potential anticompetitive results based on software capabilities and use of generative AI. Although market power is not necessary to prove an unlawful agreement in restraint of trade, we expect state attorneys general offices to focus on firms with perceived market power in the technology sector, as well as in other sectors as the use of AI grows across industry.
Further, attorneys general and the Federal Trade Commission are considering how AI may affect merger and conduct analysis. Although antitrust law has yet to fully address these issues, firms should begin to consider how their use of AI may be perceived as reducing competition. Generative AI requires access to expensive and expansive inputs, namely data. An aggressive attorney general or private plaintiff might claim that a firm with large amounts of data, or a trained AI model, could obtain, preserve, or exercise market power, though the viability of such a claim is yet to be tested. Firms developing generative AI strategies, likely intended to produce better results for their consumers, should be aware of novel consumer harm theories that both state and federal enforcers may rely upon to challenge perceived market power or unlawful conduct.
The White House and Congress are considering the implications of AI alongside state enforcers. Many firms, including the leaders of major AI firms, are asking for federal guidance. However, thorough guidance on artificial intelligence is likely to suffer from regulatory lag. Firms should expect state-level enforcers to move forward without coordinated federal guidance on AI, or to push their own state legislatures for the guidance they desire. In many states, attorneys general may attempt to use state unfair or deceptive acts or practices statutes to pursue claims against firms using AI in perceived anti-consumer or anticompetitive ways.
As generative AI, and the ways firms implement the technology, continue to develop, firms should be cognizant of consumer protection principles and monitor state enforcement of those principles. Firms also should consider implementing audits, impact assessments, and/or internal guidelines for their uses of generative AI to ensure those uses are documented, impact-neutral, and reviewed.
For further information, please contact:
Toni Michelle Jackson, Partner, Crowell & Moring
tjackson@crowell.com