On September 14, 2023, the Senate Committee on Homeland Security & Governmental Affairs held a hearing titled “Governing AI Through Acquisition and Procurement” (the Hearing). Senator Gary Peters (D-MI) opened the Hearing, explaining that its purpose was to explore how the U.S. government would purchase AI technologies and establish guidelines and standards for the development and use of those technologies. Sen. Peters noted that over half of the AI tools used by the federal government are purchased from commercial vendors and that the government should be careful in its procurement and use of these tools.
The Hearing covered a wide range of topics, including: (i) how the U.S. government can establish standards for AI data to better support AI vendors/contractors when it procures AI technology; (ii) potential changes to procurement training; (iii) how the acquisition process can be changed to adapt to the unique demands of AI procurement; (iv) how the procurement process can be used to influence the development of AI technologies; and (v) other topics, including potential censorship, AI bias, and government overreach, as discussed below.
Establishing Standards for Data Sets and AI Model Training and Testing
Witnesses expressed the belief that the government needs to set itself up for success in AI procurement by establishing standards for data sets and AI model training and testing. More specifically, one witness explained that commercial “off-the-shelf” AI systems need government-curated data in order to build models that fit the unique government missions to which the systems are being applied, which runs counter to the idea that such systems are truly “ready to use.” The witness recommended that government agencies compile and furnish data sets to allow for faster transitions during contract execution. Another witness recommended that the government conduct rigorous market analysis and model testing based on government-wide standards in order to keep pace with the rapid growth of open-source AI technologies. Finally, a witness expanded on these ideas by explaining that a holistic procurement process should follow standards that require designers to explain how an AI system was designed, what it was designed to do, whether that performance was validated, and what processes exist to continually monitor the system’s performance.
Proposed Changes to Contracting Officer Training
To ensure that AI procurements are successful, the Committee members and witnesses agreed that some level of AI-specific training for procurement officials may be necessary. This is especially true because the limited U.S. government involvement in AI to date has left the government without a federal workforce equipped with the skills necessary to create meaningful AI regulations.
Congress already has made some progress addressing these issues, enacting a law last year that requires training for procurement officials on the risks and capabilities of AI tools. This year, Sen. Peters also introduced S.1564, the AI Leadership Training Act, to extend that training to all federal managers and supervisors, as well as S.2293, the AI LEAD Act, to designate a Chief AI Officer at every federal agency. While there was an overall understanding at the Hearing that training is needed for the federal workforce, there was some disagreement among the members of the Committee over how to approach training and responsibility for AI within federal agencies.
Proposed Changes to the Contracting Process
The Committee also discussed ways to update the contracting process to fit the needs of AI procurement. A few of the witnesses discussed the need to avoid getting bogged down in overly technical requirements in AI procurements. For example, one witness explained that the current procurement process is overly focused on the mechanics of the AI system being procured, which diverts attention from “truly functional requirements.” There was also discussion of the need for agile contracting (contract fast, iterate often, fail early) and a recommendation to grant additional contract authorities (i.e., Other Transaction Authorities) across the federal government to give agencies more tools to purchase AI. Finally, the Committee contemplated adjustments in contracting to account for the infrastructure needed for continuous AI model training. To that end, one of the witnesses recommended treating AI procurements like software procurements with continuous updates, or treating AI contracts as service-level agreements rather than as one-time software purchases.
Using the Acquisition Process to Change the AI Industry
In addition to outlining goals and recommendations for changes to the procurement process itself, participants in the Hearing also touched on the potential use of the federal procurement process to encourage the development of AI-system standards that have yet to emerge from the private sector.
Sen. Peters noted that practices for managing AI risk have not been standardized in the private sector. He viewed the federal procurement process as offering an opportunity to influence standards/frameworks for AI development/deployment. One witness discussed how the government can encourage the responsible use of AI by making it (i) a discriminator in source selection and (ii) a testing metric for successful performance. But the witness also noted that it is difficult to define the requirements to do this successfully.
Community Involvement and Concerns About Bias and Censorship
Beyond ensuring that the federal government procures AI systems with appropriate standards, testing, and values in mind, the Hearing also addressed how the rights and interests of citizens outside of government would be considered during AI procurements.
One witness advocated for the early involvement of the communities that a given AI system would impact. Senator Richard Blumenthal (D-CT) also asked several witnesses whether disclosure should be required when a person is interacting with AI or when AI makes a decision about a person. The witnesses all agreed that such disclosures should be required.
Sen. Rand Paul (R-KY) expressed fear that AI, left unchecked in the hands of the government, could lead to suppression of First Amendment rights. Similarly, a witness discussed the lack of oversight surrounding government use of AI and the fear that AI could be used to inappropriately censor citizens. The witness recommended that Congress require that users be allowed to moderate their own content, that users have greater visibility into content moderation filters and algorithms, and that DARPA contractors be required to share their research more transparently.
Finally, Senator Josh Hawley (R-MO) discussed concerns about the government making direct demands that social media companies censor content. Sen. Paul also touched on this topic, noting that the decision to take content down is often made by AI.
***
This Hearing highlighted a litany of issues facing government procurement officials, policymakers, contractors, and other stakeholders impacted by AI. It also produced concrete suggestions for how the government might provide data sets, oversee AI model training and testing, train contracting officials to better understand the intricacies of AI systems, and partner with experts in academia and industry. While there is clearly some disagreement about how the federal government should procure AI systems and engage in policymaking, the Hearing made clear that most stakeholders believe there is much work to be done if AI technology is to be developed, procured, and deployed as effectively as possible. We expect to see rules and regulations regarding the procurement and use of AI in the federal government in the future, although, given that these discussions have just begun, such rules and regulations likely will take time. Crowell continues to monitor developments in the federal government’s procurement of AI technologies, as these undoubtedly will impact the purchase and sale of AI across the public and private sectors.
For further information, please contact:
Michelle D. Coleman, Crowell & Moring
mcoleman@crowell.com