CCC at AAAS 2024

The Computing Community Consortium (CCC) has attended and hosted sessions at the American Association for the Advancement of Science (AAAS) Annual Meeting since 2013. Below you can find the CCC-sponsored sessions for the 2024 AAAS Annual Meeting. To learn more about the event, visit the webpage.

Generative AI in Science: Promises and Pitfalls

Friday, February 16, 2024, 2:30 PM – 3:30 PM MST

Synopsis: Large generative artificial intelligence (AI) models have progressed rapidly and have surged in popularity with new technologies such as ChatGPT, DALL-E 2, Stable Diffusion, and Midjourney. Only recently accessible to the public, these generative systems have quickly become widely used to produce impressive text, imagery, speech, computer programs, art, designs, and much more. Scientists have begun to explore generative AI within their fields, using both general and field-specific models in areas such as weather prediction, genomics, molecular design, and materials discovery. While these technologies open up enormous potential for generating complex and innovative ideas, they can also bring about dire consequences such as misinformation, confident failures, biases, and other problems that come with a wealth of knowledge and no common sense. A further challenge is the relative paucity of data from scientific experiments. This panel will discuss how to harness the power of generative models to advance science while sidestepping their serious limitations.

Panelists:

Rebecca Willett

University of Chicago

Generative Models for Scientific Discovery

Generative models are poised to play an important role in the scientific discovery process. However, their transformative power cannot be fully harnessed through off-the-shelf tools alone. To unlock their potential, novel methods are needed to integrate physical models and constraints into the training of generative models, to design sequences of experiments or simulations for creating training data, and to account for rare and extreme events relevant to science.
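
As a concrete illustration of what integrating physical constraints into generative-model training can look like, here is a minimal Python sketch. The conservation constraint, the penalty weight, and the loss structure are illustrative assumptions on our part, not Prof. Willett's method.

```python
import numpy as np

# Illustrative sketch only: one common pattern for physics-constrained
# generative modeling is to add a penalty term that grows when a
# generated sample violates a known physical law.

def physics_residual(sample: np.ndarray) -> float:
    # Hypothetical constraint: the generated field must conserve a total
    # "mass" budget of 1.0 (e.g., a normalized density field).
    return float((sample.sum() - 1.0) ** 2)

def constrained_loss(data_fit: float, sample: np.ndarray, lam: float = 10.0) -> float:
    # Usual generative objective (likelihood/reconstruction term) plus a
    # weighted physics penalty; lam trades data fit against physical validity.
    return data_fit + lam * physics_residual(sample)

rng = np.random.default_rng(0)
sample = rng.dirichlet(np.ones(8))           # sums to 1, so penalty is ~0
print(constrained_loss(0.42, sample))        # ~0.42
print(constrained_loss(0.42, sample * 2.0))  # violates the constraint; penalty kicks in
```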

Markus Buehler

Massachusetts Institute of Technology

Generative AI in Mechanobiology: Forward, Inverse, and Degenerate Problems

Deep learning can solve complex forward and inverse design problems in a range of scenarios, including molecular modeling, protein analysis, and bioinspired architected materials. Using attention-based transformer models, the speaker will demonstrate applications in de novo protein design, nonlinear architected material synthesis, and the development of new bioinspired technologies such as synthetic meat. The broader impacts of this technology on the conduct of science will be discussed.

Duncan Watson-Parris

University of California, San Diego

Generative Models for Climate Science

Generative models are poised to transform the physical sciences. Generating projections of future climate change currently requires extremely computationally expensive simulations, limiting which future scenarios can be explored and who can explore them. The ability to quickly generate realistic and statistically representative samples of weather from arbitrary climates would open this capability to a much broader range of stakeholders, but key challenges remain in this rapidly evolving field.

Moderator:

Matthew Turk

Toyota Technological Institute at Chicago

Related Resources:

Check back soon

Large Language Models: Helpful Assistants, Romantic Partners, or Con Artists?

Friday, February 16, 2024, 4:00 PM – 5:00 PM MST

Synopsis: Large language models (LLMs) have dominated the news for the astonishing speed at which they have been developed and their unexpected power. They also create controversy when used improperly, such as to write college assignments or generate fake news. The panel will discuss whether these models address real needs and how to use them to support human activities such as providing information, cleaning up written text, and generating programs, drawing on real-world examples and case studies. The panel will also discuss ethical and fairness issues related to the use of these models and what society needs to do to ensure they are used to benefit humanity.

Panelists:

Ece Kamar

Microsoft Research, Redmond

Phase Transition in AI: Risks and Opportunities

Recent advances in large language models represent a phase transition in the capabilities of artificial intelligence (AI) systems, in their usefulness as well as in the risks they pose under adversarial use. In this talk, the speaker will make the case for this phase transition by presenting real-world examples and use cases, and will discuss efforts to use large language models as building blocks for responsible AI systems.

Jonathan May

University of Southern California Information Sciences Institute

Large Language Models Won’t Destroy Society, But Economic Inequality Might

Large language models (LLMs) aren’t scary; they are tools that help people communicate in fun and effective ways. What is scary is that people may create and use this technology in elitist and exclusive ways. LLMs should work for all languages, not just the profitable ones. They should avoid bias and intersectional harms. They should use fewer resources and be available to wider swaths of society. The speaker will discuss efforts in these areas, working with researchers in social work, linguistics, and religious studies.

Hal Daumé III

Computer Science, University of Maryland

Enough With Automation, Let’s Augment

Artificial intelligence has a history of attempting complete automation, a tradition that unfortunately continues today. Instead of automating tasks that people can do and enjoy doing, the field should focus on augmenting people with tools that serve their real needs. The speaker will discuss why this is a challenge for traditional machine learning and what sorts of new methods are needed.

Moderator:

Maria Gini

University of Minnesota

Related Resources:

Check back soon

How Big Trends in Computing Are Shaping Science

Saturday, February 17, 2024, 2:30 PM – 3:30 PM MST

Synopsis: Computing has become an essential tool across all fields of science, and major trends in computing, such as the end of Dennard scaling and Moore’s Law and the successes of deep learning, are shaping the future of computing technologies. This session will discuss some of the possibilities and consequences of these trends.

With the end of Dennard scaling in the mid-2000s, processor clock speeds stopped increasing, and multi-core computers took off as a major way to continue scaling computing performance. However, not all computations can take advantage of parallelization. With Moore’s Law slowing down, other sources of performance gains, such as specialized hardware accelerators and algorithm development, will impact some problems and fields more than others. How will these changes affect scientific computing?

The success of deep learning has led to a huge boom in artificial intelligence and machine learning research. These techniques have found many applications in science, including protein folding, environmental monitoring, classifying organisms, robotic exploration, and mathematics. How can researchers rely on these machine learning techniques and predict which research tasks they can aid? Further, state-of-the-art deep learning models are very computationally intensive, changing the landscape of who can do research in this field. Model growth is one of the major factors in improved capability and is far outpacing the rate of computer hardware improvements. What could this mean for future developments?
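
The point that not all computations can take advantage of parallelization is captured by Amdahl’s Law. The short Python sketch below is our illustration, not part of the session, and shows how even a small serial fraction caps the speedup that multi-core scaling can deliver.

```python
# Amdahl's Law: if a fraction p of a program parallelizes perfectly and
# the rest is serial, n cores give a speedup of 1 / ((1 - p) + p / n).

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    for n in (4, 64, 1024):
        print(f"{p:.0%} parallel, {n:>4} cores -> {amdahl_speedup(p, n):7.1f}x")
# Even with 1024 cores, a program that is 90% parallel tops out near 10x,
# which is why the shift to multi-core did not benefit every computation.
```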

Panelists:

Jayson Lynch

EECS, Massachusetts Institute of Technology

How Fast Do Algorithms Improve?

Algorithms have improved significantly in their efficiency at solving problems, and in some domains algorithmic progress has outpaced hardware developments in improving computer performance. Based on a large survey of algorithmic advances, the speaker examines some of the trends in the field. Further, comparisons with known lower bounds show that many important problems in computer science already have asymptotically optimal algorithms, limiting the potential for the kinds of gains seen in the past.
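
For a sense of scale, the back-of-the-envelope comparison below (our hypothetical numbers, not the speaker’s) contrasts a single asymptotic improvement with two decades of hardware doubling.

```python
import math

# Hypothetical example: replace an O(n^2) algorithm with an O(n log n)
# one and compare the gain to ~20 years of hardware doubling every 2 years.
n = 10_000_000

algorithmic_gain = (n ** 2) / (n * math.log2(n))
hardware_gain = 2 ** 10  # ten doublings over ~20 years

print(f"O(n^2) -> O(n log n) at n = {n:,}: {algorithmic_gain:,.0f}x")
print(f"20 years of hardware doubling: {hardware_gain:,}x")
# For large inputs a single algorithmic advance can dwarf decades of
# hardware progress, but once an algorithm is asymptotically optimal,
# that source of improvement is largely exhausted.
```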

Mehmet Belviranli

Colorado School of Mines

The Decline of Computers As a General Purpose Technology

After decades of processors becoming ever more capable and general purpose, technological and economic forces are now pushing computing in the opposite direction, making processors less general purpose and more specialized. This trend toward specialization threatens to fragment computing into ‘fast lane’ applications that get powerful customized chips and ‘slow lane’ applications that are stuck with general purpose chips whose progress fades.

Gabriel Manso

Electrical Engineering and Computer Science, Massachusetts Institute of Technology

The Computational Limits of Deep Learning

Deep learning has achieved remarkable feats in recent years, surpassing human performance in domains like Go and excelling in image classification, voice recognition, translation, and beyond. These accomplishments, however, have required substantial computational resources. This talk delves into that dependency, shedding light on the pivotal role of escalating computing power in advancing a wide range of applications. Looking forward, it becomes apparent that persisting on this path is economically, technically, and environmentally unsustainable.

Moderator:

Neil Thompson

Massachusetts Institute of Technology

Related Resources:

Check back soon