Current State of Artificial Intelligence Legislation & Where Research Fits into It All


Unless you’ve had your head in the sand, you’ve likely heard that artificial intelligence is a big deal right now. Nowhere is this more evident than in Congress, where there has been an almost constant drumbeat to do something legislatively about AI. But what exactly is going on?

This article will review several notable AI efforts underway in Congress and how they could impact the computing and IT research communities. We’ll also assess each proposal’s prospects of moving forward in the legislative process. However, this won’t be a comprehensive review of all proposed AI legislation; such an all-encompassing review is nearly impossible, given the large number of Members interested in weighing in on the topic and the pace at which new ideas are floated. It also won’t cover proposals in the Executive Branch and the research agencies, such as DOE’s proposal on AI research. Those may be the topic of future Policy Blog posts.

Senate Majority Leader Schumer’s AI Framework

The proposal with the best chance of producing results comes from a familiar source: Senate Majority Leader Schumer (D-NY). Regular readers will remember that he was one of the original sponsors of the Endless Frontier Act, one of the legislative forerunners of the Chips and Science Act.

In a speech in June, Senator Schumer released his proposed “SAFE Innovation Framework” (for “Security, Accountability, Foundations, Explainability”) for regulating AI. The acronym covers the specifics of the proposal; in short, and you’ll see this repeated often, it offers a general outline for harnessing AI while mitigating its risks. The framework is, unfortunately, big on ideas and light on details. But it’s also only one part of the senator’s plans.

Schumer is also proposing an approach for translating this framework into legislative action. To that end, he announced the Senate will “convene the top minds in artificial intelligence here in Congress for a series of AI Insight Forums to lay down a new foundation for AI policy.” The first forum was held the second week of September, bringing together technology leaders in the AI industry with advocates from the labor and civil rights fields. But it was also invitation-only and closed to the media, which did not endear it to many Senators. As originally proposed, each forum will focus on specific topics surrounding AI; the list, as mentioned in Schumer’s speech, is:

  • Asking the right questions
  • AI innovation
  • Copyright & IP
  • Use-cases & risk management
  • Workforce
  • National security
  • Guarding against doomsday scenarios
  • AI’s role in our social world
  • Transparency, explainability & alignment, and
  • Privacy & liability

This is likely not a final list, and it could grow or shrink depending on what Schumer wants to cover. It has been implied that these forums would replace traditional Congressional hearings in crafting legislation. However, that is unlikely, as it would violate tenets of good, open government, and other members of Congress are unlikely to give up a chance to say something publicly during the legislative process.

In addition to this framework and these forums, Schumer has convened a bipartisan group of Senators to take the lead on the subject of AI and craft any possible legislation. They are Senators Young (R-IN), Rounds (R-SD), and Heinrich (D-NM).

Why is this the most likely effort to produce results? First, since it is being led by the leader of the Senate, it has legislative legs. Also, since it has bipartisan backing, it’s more likely to represent a consensus approach that can clear the Senate and, potentially, the House. If this all looks familiar, that’s because we went through it with the Chips and Science Act. Schumer learned that a slow, deliberate, and bipartisan approach will produce results in a closely divided and partisan Congress. The drawback is that it will take time; remember that the Chips Act was the end product of over two years of legislative work. In fact, Schumer is not expecting to release any concrete proposals for several more months, and that timeline could slip into next year.

Senators Blumenthal and Hawley’s AI Framework

Senators Blumenthal (D-CT) and Hawley (R-MO) released their own legislative framework, the “Bipartisan Framework for US AI Act,” on September 8th. Unlike Schumer’s SAFE Innovation Framework, this one contains more concrete proposals, while still being fairly light on specifics. Its main thrust is to establish an independent oversight body that would require companies developing “sophisticated general-purpose A.I. models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition)” to register their products and services. What that oversight body would be, whether new or already established, is not stated; nor is it said where such a body would be located within the Federal Government.

Additionally, the Blumenthal/Hawley framework has two novel parts. The first is that it would exempt AI products from Section 230 protections. Reforming Section 230, which provides legal protections to internet service providers and websites for user-generated content, has been a major policy argument in Congress for the past several years. It’s not surprising to see it come up in a related, though distinct, technology field.

The second is that it would encourage the use of “export control, sanctions, and other legal restrictions to limit the transfer of advanced A.I. models, hardware, and related equipment, and other technologies to China, Russia, and other adversary nations, as well as countries engaged in gross human rights violations.” While using the export control regime to maintain the country’s standing in a specific technology isn’t new, this is the first time it has been proposed specifically for AI, and so broadly. Should this proposal become law, how it is implemented and how broadly it covers the field will determine its impact. This could be a serious impediment to the research community, or no different than in other cutting-edge technology sectors. But it’s worth keeping in mind that this idea is being considered.

While this framework proposal is bipartisan, and arguably more substantial than Schumer’s, it’s unlikely to move on its own. What’s more likely is that its ideas will be included as amendments to other proposed legislation. It could also be the forerunner of a more substantial legislative proposal in the near future. We’ll have to wait and see.

House Science Committee’s AI Efforts

The House Science, Space, and Technology Committee has been taking a methodical, bipartisan, and traditional legislative approach to the subject of artificial intelligence. Back in June, a day after Schumer released his framework, the committee held a hearing on AI in the national interest. Calling several witnesses from the government, industry, and academic research communities, the committee asked tough and important questions about artificial intelligence. Many of those questions were about the potential impacts of AI and what actions the government can take to both harness its potential and mitigate its adverse impacts.

That hearing is expected to be one of many that the Science Committee will hold on the subject of artificial intelligence over the coming months. Much like Schumer’s forums, the committee is taking a long-view, deliberative approach to crafting any legislation. But, like many of the other efforts discussed here, it’s currently light on specific proposals.

House AI Caucus CREATE AI Act

Unlike the other efforts mentioned above, the “Creating Resources for Every American To Experiment with Artificial Intelligence Act of 2023” (CREATE AI Act) is a specific legislative proposal and could impact the research community directly. The bill would establish the National Artificial Intelligence Research Resource (NAIRR), a cyberinfrastructure resource proposed by a Congressionally established task force of the same name. The NAIRR, run by NSF and overseen by an interagency steering committee, would provide “free or low-cost access to datasets and computing resources for development of AI workflows,” helping to democratize the development and use of artificial intelligence.

The legislation is sponsored by Representatives Eshoo (D-CA), McCaul (R-TX), Beyer (D-VA), and Obernolte (R-CA), the co-chairs and vice-chairs of the House Artificial Intelligence Caucus. It also has a Senate counterpart sponsored by Senators Heinrich (D-NM), Young (R-IN), Rounds (R-SD), and Booker (D-NJ); note that three of the Senate sponsors are heavily involved in Schumer’s AI efforts.

While this may appear to be an easy add-on to the federal budget, keep in mind the current budget environment. To say the least, this is not the best time to propose a new $200 million research program, and opposition to new spending is likely where any public pushback to this legislation will come from. There is also the fact that no one in House Leadership, in either party, is involved in this effort; typically, a piece of legislation needs some buy-in from leadership to move in the House. While that doesn’t sink the bill’s prospects, it does make it more difficult for the bill to move.

Of all the proposals discussed in this article, the CREATE AI Act is the most likely to be passed into law. In theory, it could move as part of a larger, must-pass piece of legislation, such as an amendment to a funding bill. But the prospects for that right now aren’t great.

Final Analysis

While it can seem that Congress could pass major legislation covering artificial intelligence at any moment, the reality is that legislators are still trying to understand the problem. Everyone knows what they want (all the benefits of AI) and what they don’t want (all the problems of AI), but they don’t have a solid plan for how to get there. Any major action is still months away, possibly longer. There are small pieces, such as the CREATE AI Act, that could move this year, but they will likely be the exception. CRA will continue to follow this issue and represent the computing and IT research community in these discussions. We will continue to make the case to policymakers that research is an important part of any national AI policy and that the computing research community needs to be involved.