CCC Quadrennial Papers: Artificial Intelligence
This post was originally published in the CCC Blog.
As part of the rollout of the 2020 Computing Research Association's (CRA) Quadrennial Papers, the Computing Community Consortium (CCC) is pleased to publish the final group of papers around the “Artificial Intelligence (AI)” theme, including papers on AI deployed at the edge of the network, cooperation between AI and humans, new approaches to understanding AI’s impact on society, AI-driven simulators, and the next generation of AI. The Quadrennial Papers are intended to help inform the computing research community and those who craft science policy about opportunities in computing research to help address national priorities. This group of papers is the final installment of the CCC’s contribution, following the previous themes of Broad Computer Science, Core Computer Science, and Socio-Technical Computing.
AI is being utilized across disciplines and industries and is impacting more aspects of our lives than ever before. How the technology is conceived, designed, and deployed, how it cooperates with humans, and how its simulations can best be utilized are all areas that require greater research. This next wave of AI research needs to be adaptable and robust in order to continually grow and benefit society. Investment in this area of research is critical if the U.S. is to maintain its leadership role in AI. Brief descriptions, author details, and links to the Quadrennial Papers released today are included below.
Artificial Intelligence at the Edge
Authors: Sujata Banerjee (VMware Research) and Elisa Bertino (Purdue University)
Could AI better support societal needs if it were deployed at the edge of the network, close to application end-points, rather than in a centralized cloud? This white paper examines those potential uses and identifies the requirements and research areas that need to be explored before AI systems can be deployed at the edge.
Artificial Intelligence and Cooperation
Authors: Elisa Bertino (Purdue University), Finale Doshi-Velez (Harvard University), Maria Gini (University of Minnesota), Daniel Lopresti (Lehigh University), and David Parkes (Harvard University)
This paper argues for further research in AI and human cooperation in order to understand the ways in which systems of AIs and people, working together, can engender cooperative behavior. Through a set of illustrative examples, a broad research agenda for this goal is laid out incorporating aspects of AI architectures, collaborative human-AI systems, economic viewpoints, and human preferences and control.
Interdisciplinary Approaches to Understanding Artificial Intelligence’s Impact on Society
Authors: Suresh Venkatasubramanian (University of Utah), Nadya Bliss (Arizona State University), Helen Nissenbaum (Cornell University), and Melanie Moses (University of New Mexico)
Alongside the convenience and opportunities that AI brings to the table, these systems also produce a multitude of problems, including seemingly racially or gender-biased algorithms, infringements on citizens’ privacy or freedom, and deepening inequalities among different groups. This paper calls for an interdisciplinary approach that incorporates expertise from a broad set of disciplines and application domains to gain a deeper understanding of how technology and society interact, in order to avoid these negative impacts of AI technologies.
The Rise of AI-Driven Simulators: Building a New Crystal Ball
Authors: Ian Foster (University of Chicago), David Parkes (Harvard University), and Stephan Zheng (Salesforce AI Research)
Simulations are now pervasive throughout human society and the economy, providing decision makers with a remarkable crystal ball—not just for next week’s weather but also for the spread of a disease through a population. This paper lays out the importance of AI-driven simulators, describing challenges, accomplishments, and a potential research agenda in order to realize the full potential of simulation predictions.
Next Wave Artificial Intelligence: Robust, Explainable, Adaptable, Ethical, and Accountable
Authors: Odest Chadwicke Jenkins (University of Michigan), Daniel Lopresti (Lehigh University), and Melanie Mitchell (Portland State University and Santa Fe Institute)
This paper describes the history and limitations of today’s AI systems, such as brittleness, vulnerability to adversarial attacks, and the difficulties of system training. It offers a series of recommendations and focus areas for the research necessary to catalyze this new wave of AI.
For a complete list and brief descriptions of upcoming and past releases, check the CRA Quadrennial Papers page. All the CCC-contributed papers can also be found on the CCC-led White Papers page.