On December 4, 2025, an advisory committee of the U.S. Securities and Exchange Commission (SEC or Commission) voted to advance a recommendation that the agency issue guidance requiring issuers to disclose information about the impact of artificial intelligence (AI) on their companies.
In its recommendation, the Investor Advisory Committee (IAC) cited a “lack of consistency” in contemporary AI disclosures, which “can be problematic for investors seeking clear and comparable information,” and called for a rule that would require issuers to define AI, disclose board oversight mechanisms, and report on material AI deployments in both internal operations and consumer-facing products.
The IAC recommendations are neither formal guidance nor a rule, and the SEC—which earlier this year withdrew Biden-era proposed rules related to AI (including on conflicts of interest in predictive data analytics)—has responded tepidly. But, as firms of all types rush to develop and deploy AI systems, the IAC recommendation can serve as a model for bringing material information to market. Companies are already facing litigation for allegedly incomplete AI-related disclosures, and the SEC has begun to scrutinize entities that misrepresent their use of AI in a practice known as “AI washing.”
IAC Reports AI Disclosures are “Uneven and Inconsistent”
The IAC, which advises the Commission on regulatory priorities and the effectiveness of disclosure, reviewed recent filings and academic analysis and concluded that, despite years of spending on AI, “markets are still looking for guidance” on the AI that firms are developing and deploying. According to a source cited by the IAC, only 40% of S&P 500 companies provide AI-related disclosures, and just 15% disclose information about board oversight of AI, even though 60% view AI as a material risk (with concerns spanning cybersecurity, competition, and regulation, among others). “AI-related information can be material and of interest to investors, but the issue is how to sort the relevant information into operational categories that help inform investment decisions,” the IAC wrote. “[T]he disclosures currently remain uneven and inconsistent.”
The Committee attributes these inconsistencies to several factors, including the absence of a single accepted definition of AI, a lack of comprehensive SEC guidance, an uncertain and rapidly evolving regulatory environment, differences in how industries deploy AI, difficulties in measuring its operational impact, a lack of internal training and adoption for AI metrics, and a tendency for companies to either overstate or underreport their AI activities due to unclear materiality standards.
IAC Proposes Initial Framework for Standardized Disclosures
The IAC endorsed a recommendation to establish the “initial scaffolding” of a disclosure framework.
1. Require that issuers define what they mean when they use the term “Artificial Intelligence”
Recognizing that there are multiple definitions and ongoing debate about what qualifies as AI, the Committee recommends that the Commission solicit public input as it develops any rule-making on this topic.
Suggested approaches include allowing issuers to provide their own definition of AI or permitting them to adopt existing definitions such as those in the National Artificial Intelligence Initiative Act of 2020 or from the National Institute of Standards and Technology.
2. Disclose board oversight mechanisms, if any, for overseeing the deployment of AI at the firm
Issuers should disclose whether the Board of Directors or a board committee is responsible for overseeing “aspects of AI deployment,” given the significance, capital-intensive nature, and risks of AI. “Investors have an interest in understanding whether there are clear lines of authority regarding the deployment of technology on internal business operations as well as product lines,” the IAC wrote, so the SEC should require issuers to disclose the board’s oversight, if any, “of the implementation of AI into an issuer’s operations.”
3. If material, issuers should report on how they are deploying AI and the effects of AI deployment on (a) internal business operations, and (b) consumer facing matters
Issuers should separately disclose the material effects of AI deployment on internal business operations and on consumer-facing products.
For internal operations, disclosures would include the impact of AI on human capital such as workforce reductions or upskilling, financial reporting, governance, and cybersecurity risks.
For product lines/consumer-facing matters, the Committee encourages disclosure of the investment into AI and its integration within products. Examples include medical firms reporting on regulatory impacts from AI use, financial firms disclosing R&D spending on AI-driven platforms for investment advice, and airlines explaining how AI influences pricing strategies and business benchmarks.
The IAC suggests integrating this guidance into existing Regulation S-K disclosure items (such as Items 101, 103, 106 and 303) on a materiality-informed basis, rather than establishing a new subchapter.
The recommendation suggests a transition period of up to one year between the issuance of new guidance and its effective date to provide issuers adequate time to implement the disclosure requirements.
SEC Unlikely to Embrace Recommendation
In line with its withdrawal earlier this year of a proposed rule on conflicts of interest associated with the use of AI, the SEC responded coolly to the IAC recommendation. Chair Paul S. Atkins urged the Commission to “resist the temptation to adopt prescriptive disclosure requirements for every ‘new thing’ that affects a business,” and Commissioner Hester Peirce questioned whether AI disclosures need to “force conformity” where industry adoption varies. Given the Republican majority on the Commission (by year’s end, it will comprise three Republican members, two vacancies, and no Democratic appointees), the Commission is unlikely to act on the IAC recommendation.
Insights
Nevertheless, issuers may wish to consider the IAC’s recommendations.
By focusing on standardized definitions, the board’s role in managing risks from AI-related investments and deployments, and the operational and consumer impacts of these technologies, the IAC offers a blueprint for what best practices are likely to look like as issuers mature their own internal risk management and disclosure approaches to AI.
With respect to definitions, it may be useful for organizations to draw strategic distinctions between the routine use of traditional algorithmic approaches and machine learning techniques to enhance business outcomes, and the broader deployment of generative AI applications.
Boards may leverage existing risk management and governance capabilities used to oversee and disclose cyber and data protection risks in order to formalize AI oversight and descriptions of the approach taken to govern AI risk. As part of this effort, the responsible board committee may need to gain a deeper understanding of the company’s dependencies on integrations with major AI developers in the United States, the European Union, or China, and to disclose the relative materiality of prospective geopolitical disruptions to the AI stack or supply chains that could impact those integrations.
Operational impacts of AI deployment could involve disclosures of financing, the status of integrations in the context of M&A, and efficiencies that may be gained. With respect to consumer impact, taking a note from both cyber and data protection methodologies, disclosures would likely be most useful if they describe the mechanisms used to assess risks to individuals, the best practices used to mitigate those risks, and the steps taken to ensure compliance with applicable data protection and consumer protection laws. This is particularly so given that companies are already facing litigation for allegedly incomplete or misleading AI-related disclosures, including claims of AI washing where capabilities are overstated.

For further information, please contact:
Matthew F. Ferraro, Partner, Crowell & Moring
mferraro@crowell.com
