Senate Committee Examines the “Dawn of Artificial Intelligence”
Computer scientists from industry, government, and academia today told a Senate panel that artificial intelligence (AI) has passed an inflection point — a confluence of the enormous increase in the availability of data, the ability of computers to perceive the world, and the ability to search over a wide range of possibilities — that promises to spawn new waves of innovation in applications of the technology. Those applications are likely to reshape our world in beneficial but potentially disruptive ways, panel members explained, and so the Federal government ought to ramp up its investments in fundamental research, encourage more students to pursue computing careers, and buttress efforts to evaluate the security, safety, and ethics of the technology.
The scientists presented their testimony at a hearing of the Senate Commerce, Science and Transportation Subcommittee on Space, Science and Competitiveness. Appearing at the hearing were:
- Eric Horvitz, head of Microsoft Research Redmond and co-Chair of the new Partnership on AI (and a former member of CRA’s Computing Community Consortium Council);
- Andrew Moore, Dean of the School of Computer Science at Carnegie Mellon (and member of the CRA Government Affairs Committee);
- Greg Brockman, Cofounder and Chief Technology Officer at OpenAI;
- Steve Chien, Senior Research Scientist in Autonomous Space Systems at NASA’s Jet Propulsion Laboratory.
Sen. Ted Cruz (R-TX) chaired the hearing and opened by comparing the disruptive potential of AI to the Industrial Revolution, Henry Ford’s assembly line, the invention of flight, and the Internet, noting that “many believe that there may not be a single technology that will shape our world more in the next 50 years than artificial intelligence.” Cruz, echoed by other members of the subcommittee, cited concerns about whether the U.S. would continue to lead development in the area, or whether competition from an increasingly capable “China, Russia, or other foreign government” would put the U.S. at a technological disadvantage, with implications for both economic competitiveness and our national security.
The panel agreed that the U.S. couldn’t afford to cede leadership in the area. But while there are enormous opportunities in AI, they noted there are also some important challenges that need continued focus. For Moore, the key problem “keeping [him] up at night” is a workforce challenge. “We need to be training a million of the nation’s high schoolers to be ready to join this industry,” Moore testified. “And we must retrain existing technologists who are not up to speed on AI.” Asked by full committee Chair John Thune (R-SD) how we win this “talent war” to keep leadership in the U.S., Moore responded that he thought it began in middle school, by encouraging more students to learn mathematics and science.
For Brockman, the key challenge is continuing to develop the fundamental building blocks of AI. Likening the development of AI to the development of the integrated circuit, he explained that we’re at the “vacuum-tube level,” but that the Federal government is well-positioned to help push innovation forward. In particular, Brockman recommended the government focus on basic research in the fields enabling AI, increase the use of public contests and measurements for gauging our AI capabilities (and spurring competition to improve them), and coordinate work on the security, safety, and ethics of AI.
Horvitz noted the huge number of areas that stood to benefit from AI — health care, transportation, education, critical infrastructure, and national defense, to name just a few — but cautioned that other challenges stand in the way of innovation there. We must continue to focus on designing systems that complement human abilities and intellect rather than replace them, he said. AI systems will also need more transparency in their reasoning if they are going to be trusted by users, who will want to understand why a system is making a particular recommendation rather than trusting something that emerges from a black box. And the challenges of cybersecurity, he added, are heightened in the often very modular world of AI systems, especially in high-stakes, safety-critical applications.
There was some discussion of the dangers of AI systems — Cruz at one point invoked Skynet and asked whether Elon Musk’s concern that we might be “summoning the demon” was justified. The panel agreed that there isn’t an imminent threat given the current state of the art — Moore explained that AI systems are at best “idiot savants” that can only be focused on very specific ranges of data. But all also agreed that now is the time to start thinking about those issues. Horvitz noted that it’s useful to really push our vision of what bad things could be possible in order to thwart them. “The things we do today are really important,” Horvitz said, so a focus on ethics and security is justified. Moore noted that the academic community felt the same way and explained that ethics and responsibility are a key part of the CMU AI curriculum.
But the recurring theme of the hearing was enabling the positive waves of innovation likely to flow from AI, and the primary recommendation the committee heard many times was the importance of Federal investment in basic research in the area. Open, fundamental research is the fuel for the innovation ecosystem — an ecosystem that’s likely to return trillions of dollars on that investment, and likely on a shorter timescale than we think. Members seemed particularly interested in removing barriers to innovation and the adoption of the technologies, noting that technology moves much faster than policy, and so perhaps the time to study the policies is now. Cruz noted that he thought this was the first congressional hearing on AI, but it certainly wouldn’t be the last.
Copies of the hearing charter, the Chairman’s opening statement, and witness testimony are all available at the committee website. If a video archive of the hearing becomes available, we’ll link to it here, too.