Congressional Hearing Asks Tough & Important Questions About Artificial Intelligence


[Editor’s Note: This post was written by CRA’s new Tisdale Policy Fellow for Summer 2023, Fatima Morera Lohr.]

On Thursday, June 22, the House Committee on Science, Space, and Technology held a hearing, “Artificial Intelligence: Advancing Innovation Towards the National Interest,” to discuss the different ways the federal government can utilize artificial intelligence (AI) in a “trustworthy and beneficial manner for all Americans.” The committee heard from several witnesses from government, academia, and industry about the risks and benefits of the technology and “how to promote innovation, establish proper standards, and build the domestic AI workforce.”

In his opening statement, Science Committee Chairman Frank Lucas (R-OK) highlighted that, even though the United States leads in AI research, the gap with other countries is narrowing. In particular, he called attention to a Stanford University report that “ranked universities by the number of A.I. papers they published” and “found that nine of the top ten universities were based in China;” only one U.S. school, MIT, made the list, at number ten. At the same time, Chairman Lucas made clear that advances in AI do not have to come at the expense of “safety, security, fairness, or transparency.” At several points during the hearing, Lucas, other committee members, and the witnesses discussed how “embedding our (i.e., American) values in the technology will be key and have long last[ing] impacts.” Finally, Chairman Lucas spoke about the national interest in ensuring the country has a “robust innovation pipeline that supports fundamental research, all the way through to real-world applications,” and that “the academic community, the private sector, and the open-source community” need “to help us figure out how to shape the future of this technology.”

Following the chairman’s remarks, Ranking Member Zoe Lofgren (D-CA) used her opening statement to support Lucas’ view that the federal government needs to “strike a balance that allows for innovation and ensures the U.S. maintains leadership.” She continued that, “at a minimum, we need to be investing in the research and workforce to help us develop the tools we will need going forward.” Rep. Lofgren finished her opening statement by listing several challenges she wanted the hearing to tackle: the intersection of AI and intellectual property, research infrastructure and workforce challenges, and what the Science Committee should focus on in this field.

The witnesses represented views from government, industry, and the academic research community, and each panelist shared how their area is tackling its unique challenges in adapting to and adopting AI. Dr. Jason Matheny, President and CEO of the RAND Corporation, who previously served at OSTP and the National Security Council in the Biden Administration, provided a view on government actions. Dr. Shahin Farshchi, General Partner at Lux Capital, gave an investor’s perspective from industry. Clement Delangue, Co-founder and CEO of Hugging Face, offered a different industry outlook as an immigrant entrepreneur in the United States. Dr. Rumman Chowdhury, Responsible AI Fellow at Harvard University, presented the researcher’s perspective. Finally, Dr. Dewey Murdick, Executive Director of the Center for Security and Emerging Technology, provided a think tank’s view on the matter. All reiterated the need for the federal government to continue supporting AI research so the country can remain the world leader, while simultaneously reaping the benefits and mitigating the risks of the technology.

All witnesses shared the view that the country needs to become more comfortable with the idea that AI is here to stay. There was also discussion of the positive impact the technology can have on the country when used correctly. Matheny spoke about the role the federal government can play in “advanc[ing] AI in a beneficial and trustworthy manner for all Americans” and outlined the different actions it could take to make AI as trustworthy as possible. The need to provide researchers with resources was another common theme, echoed by both Farshchi and Matheny, particularly if the U.S. wants to stay ahead of China.

Delangue commented on the need for open systems, since “open systems foster democratic governance and increased access, especially to researchers, and can help to solve critical security concerns by enabling and empowering safety research.” He also noted that not all data is made available, even by open research organizations. Chowdhury, for her part, spoke about the duality of AI and how it can be both useful and harmful: “while it has immense capability, like many other high-potential technologies, it can also be used for harm by both malicious and well-intentioned actors.” Murdick agreed with Chowdhury on the need to recognize both sides of the technology, saying, “we need to learn when to trust our AI teammates and when to question or ignore them.”

During the hearing, Chairman Lucas returned to the point that “these advances do not have to come at the expense of safety, security, fairness, or transparency,” and that no one, including the nation as a whole, should have to compromise their values to reap the benefits of AI technology. This hearing is likely to be just one of many the Science Committee will hold on artificial intelligence. And, at present, Congress is full of ideas, proposals, and draft legislation on how to handle AI. Case in point: the day before the hearing, Senate Majority Leader Schumer (D-NY) announced his “SAFE Innovation Framework” for potentially regulating the technology. This is far from the last word on the matter from Congress, so the computing research community will need to stay engaged and informed. CRA will continue to monitor this issue and report on any new developments.