Synopsis: Critical decisions are increasingly made by machine-learning algorithms trained on the massive data trails that people leave behind. Such decisions affect everything from college admissions and bank loans to sentencing and police deployment. Concerns have been raised about the interpretability, transparency, and fairness of these algorithms. In response, an exciting mathematical theory of fairness is emerging that addresses topics such as defining fairness, designing decision-making algorithms that incorporate fairness requirements, and incentivizing decision makers to be fair. The topics discussed in this session will provide a precise understanding of obstacles to fairness, such as mislabeled training data, use of inappropriate features, insufficient data, feedback loops, and the computational difficulty of being fair. This understanding will further inform attendees on how to achieve fair outcomes by avoiding obvious pitfalls and providing appropriate incentives.
You can find a recap of the session here on the CCC blog.
What does it mean for decisions to be made fairly? This question has become especially urgent as crucial decisions about our lives are being made by algorithmic procedures using data collected about each of us. This talk will describe different goals of fairness and how algorithms should be designed to meet these goals.
This talk will introduce machine learning. Specifically, it will describe machine-learning approaches to classification and resource allocation, two tasks where fairness is an important goal.
This talk will develop a deeper understanding of the potential pitfalls of using machine-learning algorithms for tasks where fairness is a goal. Is the data used to train these algorithms appropriate for the desired goals? Even if sensitive attributes such as race and gender are explicitly excluded from the data, are they implicitly present in other features, and can we tell if the decisions made by these algorithms are unfair?
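To make these questions concrete, below is a minimal, hypothetical sketch (not taken from the talks) using synthetic data. It illustrates two common diagnostics: checking whether a sensitive attribute can be predicted from the remaining features (i.e., whether it is implicitly present even when excluded), and measuring how a trained classifier's positive-decision rates differ across groups (the demographic parity difference). All variable names and data are invented for illustration.

```python
# Hypothetical sketch: detecting implicit sensitive attributes and
# measuring group disparities in a classifier's decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: 'group' is a sensitive attribute correlated with one
# ordinary feature (a proxy), even though it is never given to the model.
n = 5000
group = rng.integers(0, 2, size=n)              # sensitive attribute (0/1)
proxy = group + rng.normal(0, 0.5, size=n)      # feature correlated with group
other = rng.normal(0, 1, size=n)                # unrelated feature
X = np.column_stack([proxy, other])
y = (0.8 * proxy + 0.5 * other + rng.normal(0, 1, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

# (1) Can the sensitive attribute be recovered from the other features?
# High accuracy means it is implicitly present despite being excluded.
proxy_clf = LogisticRegression().fit(X_tr, g_tr)
print("group predictable from features, accuracy:",
      round(proxy_clf.score(X_te, g_te), 3))

# (2) Train a decision model without the sensitive attribute, then compare
# positive-decision rates across groups (demographic parity difference).
clf = LogisticRegression().fit(X_tr, y_tr)
pred = clf.predict(X_te)
rate_0 = pred[g_te == 0].mean()
rate_1 = pred[g_te == 1].mean()
print("positive rate, group 0:", round(rate_0, 3))
print("positive rate, group 1:", round(rate_1, 3))
print("demographic parity difference:", round(abs(rate_1 - rate_0), 3))
```

Demographic parity is only one of several competing fairness criteria; the talks discuss how the appropriate definition depends on the decision being made.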
Synopsis: In the coming decades, the world population is projected to grow significantly, increasing the demand for food in the face of climate change, an aging and shrinking agricultural workforce, and environmental degradation. To ensure long-term food security, it is imperative to explore emerging computing innovations such as big data, artificial intelligence, the internet of things, and cloud computing in working toward the next agricultural revolution.
Computing has already transformed agriculture. Precision agriculture uses cyber-physical systems and data science to increase yield while reducing fertilizer and pesticide runoff. Global Agricultural Monitoring uses satellite imagery to monitor major crops for stress or failure, enabling timely interventions that reduce disruptions in the global food supply. This is only a start, and compelling new opportunities lie ahead. For example, big data may help synthesize new agricultural knowledge, support predictive decision making, and foster data-driven innovation.
This panel will feature the most promising computing advances for sustainably increasing food production, drawing on the recent U.S. Department of Agriculture Food and Agriculture Cyberinformatics and Tools Initiative; the Congressional Research Service report on Big Data in U.S. Agriculture; and workshops such as the National Science Foundation Midwest Big Data Hub's Machine Learning from Farm to Table and the Innovations at the Nexus of Food, Energy and Water Systems Data Science Workshop.
You can find a recap of the session here on the CCC blog.
SmartFarm investigates a novel, unifying, open-source approach to agricultural analytics and precision farming. It integrates disparate environmental sensor technologies into an on-premises, private-cloud software infrastructure that provides farmers with a secure, easy-to-use, low-cost data analysis system. SmartFarm enables farmers to extract actionable insights from their data, to quantify the impact of their decisions, and to identify opportunities for increasing productivity.
Dr. Chandra will describe Microsoft FarmBeats, a system that provides an end-to-end approach to data-driven farming. We believe that data, coupled with the farmer's knowledge, can help increase farm productivity and reduce costs. With FarmBeats we are building several unique solutions using low-cost sensors, drones, and vision and machine-learning algorithms. The goal is to overcome technology adoption challenges such as limited power and Internet connectivity on farms.
Prof. Schnable’s goal is to develop models that predict crop performance in diverse environments. Crop phenotypes such as yield and drought tolerance are controlled by genotype, environment, and their interactions. However, the volumes of phenotypic data needed to better understand these genotype-by-environment interactions remain a limiting factor. To address this limitation, we are building new sensors and robots to automatically collect large volumes of phenotypic data.
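As a rough illustration of the modeling task described above (not Prof. Schnable's actual models), the sketch below fits a simple genotype-by-environment regression on synthetic phenotyping data: crop performance is predicted from genotype features, environment features, and their pairwise interactions. All data, feature names, and model choices here are assumptions made for the example.

```python
# Hypothetical sketch of a genotype-by-environment (G x E) predictive model
# on synthetic data; real inputs would be genomic markers and field sensors.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Each row is one plot: a genotype grown in one environment.
n = 2000
genotype = rng.integers(0, 3, size=(n, 4))       # e.g. marker allele counts
environment = rng.normal(0, 1, size=(n, 3))      # e.g. rainfall, temperature, soil
X = np.hstack([genotype, environment])

# Simulated yield with a genuine genotype-by-environment interaction term.
yield_ = (genotype[:, 0] + 0.5 * environment[:, 0]
          + 0.7 * genotype[:, 1] * environment[:, 1]
          + rng.normal(0, 0.5, size=n))

# Degree-2 polynomial features include the pairwise G x E interaction terms;
# ridge regression keeps the expanded feature set well conditioned.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Ridge(alpha=1.0))
scores = cross_val_score(model, X, yield_, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(3))
```

The point of the example is the structure of the problem, not the particular model: capturing interactions requires phenotypic observations of many genotypes across many environments, which is exactly the data bottleneck the new sensors and robots aim to relieve.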
Intelligent Infrastructure for Smart Agriculture: An Integrated Food, Energy and Water System, A whitepaper from Computing Community Consortium, arXiv preprint arXiv:1705.01993, 2017. https://arxiv.org/abs/1705.01993.
“A scalable system for executing and scoring K-means clustering techniques and its impact on applications in agriculture,” International Journal of Big Data Intelligence, Vol. 6, Nos. 3/4, 2019 https://sites.cs.ucsb.edu/~ckrintz/papers/centaurus-journal18.pdf.
“CCC Symposium (2017): Intelligent Infrastructure for our Cities and Communities Panel,” Computing Research Association via Youtube https://youtu.be/g3vVClxTVn4
Synopsis: Decades of artificial intelligence research have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, converse about order placement, and control cars. The ubiquitous deployment of AI systems has created a trillion-dollar industry that is projected to quadruple in three years, while also exposing the need to make AI systems fair and trustworthy, as well as more competent about the world in which they and we operate. Future AI systems have the potential for transformative impact on society and will be rightfully expected to handle complex tasks and responsibilities, engage in meaningful communication, and improve awareness through experience. There are also concerns about the future of work in light of AI advancements; addressing them will require improved public communication and adjustments to the education and training of the workforce so that it can take advantage of the new types of jobs being created by AI technologies.
A recent study by leading AI experts, carried out by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence, concluded that achieving the full potential of AI technologies poses research challenges that will require significant, sustained investment and a radical transformation of the AI research enterprise. This session presents a roadmap for AI research and development over the next twenty years.
You can find a recap of the session here on the CCC blog.
Artificial intelligence is at a critical point: we see and use AI systems regularly in daily life, yet this is only the tip of the iceberg. To realize the full potential of AI systems, we need not only to continue but to increase AI research across all disciplines, discovering new ways for AI systems to be incorporated in the future. This talk will discuss the need for continued research and some of the possibilities for AI in the future.
AI research has been under way since the early 1950s. Recent changes in the ecosystem indicate that we are at a crucial point in time, where new advances built on years of research are arriving regularly. As the many disciplines that touch on AI continue to advance, the possibilities for progress in AI are ever increasing. This talk will cover the new paradigms affecting the AI research ecosystem and how they may affect the broader research ecosystem going forward.
Advances in AI are already having a tremendous impact on science and on all aspects of our society. This talk will present an overview of the activity that led to the 20-Year Community Roadmap for AI Research, along with its major conclusions and recommendations. It will highlight some of the significant technical challenges we face and the visions that drive them.
The National Artificial Intelligence Research and Development Strategic Plan: 2019 Update, A Report by the Select Committee on Artificial Intelligence of the National Science & Technology Council: https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf
Synopsis: The democratization of information and broad interconnectivity have had a wide range of positive, transformative impacts on society. Through social networks, individuals can stay connected and share information, medical professionals can reach patients, and access to news and scholarly publications, from both the consumer and the producer perspective, has increased significantly. Concurrently, the manipulation of information is on the rise, leading to the spread of disinformation across a broad range of media modalities, including text, imagery, and video. This session brings together experts from social science, computer science, and journalism. Panelists will discuss computational training for journalists, the development of new technology to better identify and detect disinformation before it spreads, automated fact-checking systems, and methods for propagating corrections to misinformation. The session is structured specifically to address the need for an interdisciplinary approach, and attendees will gain an understanding of the latest technologies that can be leveraged to detect deepfakes and reveal the truth.
You can find a recap of the session here on the CCC blog.
Journalism sometimes amplifies deceit, but journalists have come to realize that they must help their audiences be more savvy about what they watch, listen to, and read. Journalists need help from the public, from computer science, and from other fields to improve news and media literacy.
What happens when the propagators of fake news become more sophisticated? The goal is not to get rid of all fake news; reading irrelevant material can be informative, and from an analyst's standpoint it is important not to narrow the aperture of what people can see.
Information Disorder: an Interdisciplinary Framework, Claire Wardle and Hossein Derakhshan via First Draft: https://firstdraftnews.org/latest/coe-report/
“Introducing the Transparency Project,” Nancy Shute, Science News: https://www.sciencenews.org/blog/transparency-project/introducing-transparency-project
The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators Online, Whitney Phillips, Data and Society: https://datasociety.net/output/oxygen-of-amplification/
The Computing Community Consortium’s (CCC) official podcast, Catalyzing Computing, features interviews with researchers and policymakers about their background and experiences in the computing community. The podcast also offers recaps of visioning workshops and other events hosted by the Consortium. If you want to learn about some of the computing community’s most influential members or keep tabs on the latest areas of interest, then this is the podcast for you.
This episode of the podcast was recorded live at the “This Study Shows” Sci-Mic stage at the 2020 AAAS Annual Meeting in Seattle, Washington. Khari Douglas interviews Dr. John Beieler, a former program manager at IARPA and currently the Director of Science and Technology in the Office of the Director of National Intelligence. In this episode they discuss working in national security and the technical challenges the intelligence community is facing.
Synopsis: It is undeniable that powerful computing has led to fundamental advances in science and engineering. Rapid, powerful computing in small-scale devices such as phones and laptops has also revolutionized the global economy and offers the promise of AI assistants, smart health systems, and augmented reality. Unfortunately, this progress is soon to come to a screeching halt. The CMOS-based computers that enabled the growth of computing through the 20th century have reached their limits: Moore’s law and Dennard scaling, the observed doubling of transistors on microchips and the roughly constant power density of shrinking transistors, are ending. To continue progress in science and engineering research, it is essential to find novel computers capable of meeting the community’s future needs.
A new way of designing computation is emerging: thermodynamic computing. Borrowing from the natural world and the proposition that thermodynamics drives the self-organization and evolution of natural systems, thermodynamic computing could lead to powerful, highly efficient analog computational systems that use self-organization to perform computation. Leaders in physics, computational biology, and computer science came together in a recent Computing Community Consortium workshop to outline a research agenda for building such systems. Reversible computing, which is related to the theory of thermodynamic computing, also offers the possibility of increased energy efficiency while preserving traditional digital computing systems.
You can find a recap of the session here on the CCC blog.
The end of Moore’s law, coupled with the increasing compute demands of science research, has brought current computer designs to their end times. But there is hope in novel ways to compute. I will review emerging “fringe” techniques, including leveraging open-system thermodynamics, physical processes that “optimize” naturally, and computing with quantum physics, and discuss their applicability to science research.
What could be more relevant to the future of computing than thermodynamics, the science of energy and change? After all, it is thermodynamics that explains energy efficiency and change, and that dominates the development of machine learning, electron devices, and system architectures today. In this talk I review the foundations, articulate a vision, and present a model neural network that illustrates the potential of thermodynamic models of computation.
Chaotic dynamics in nanoscale NbO2 Mott memristors for analogue computing by Suhas Kumar, John Paul Strachan, & R. Stanley Williams https://www.nature.com/articles/nature23307
In January 2019, the CCC hosted a visioning workshop on Thermodynamic Computing in Honolulu, Hawaii. This episode of the Catalyzing Computing podcast features an interview with workshop organizers Tom Conte (Georgia Tech) and Todd Hylton (UC San Diego) about their reasons for proposing the workshop, what thermodynamic computing is, and the potential impact that thermodynamic computing could have on future technology. Workshop participant Christof Teuscher (Portland State University) also shares his thoughts on the workshop and his work with new models of computation, including computing with DNA. Stream in the embedded player below or find the podcast on iTunes | Spotify | Stitcher | Google Play | Blubrry | iHeartRadio | YouTube.
A report summarizing the discussions and conclusions from the workshop is now available here.
In January 2019, the CCC hosted a visioning workshop on Thermodynamic Computing in Honolulu, Hawaii. This episode of the Catalyzing Computing podcast features an interview with workshop organizer Natesh Ganesh, a PhD student at the University of Massachusetts Amherst who is interested in the physical limits of computing, brain-inspired hardware, non-equilibrium thermodynamics, and the emergence of intelligence in self-organized systems. He received the Best Paper Award at IEEE ICRC ’17 for the paper “A Thermodynamic Treatment of Intelligent Systems.” I also speak with workshop participant Gavin Crooks, formerly a Senior Scientist at Rigetti Quantum Computing, where he developed algorithms for near-term quantum computers. Gavin is a world expert on non-equilibrium thermodynamics and the physics of information. Stream in the embedded player below or find the podcast on iTunes | Spotify | Stitcher | Google Play | Blubrry | iHeartRadio | YouTube.
A report summarizing the discussions and conclusions from the workshop is now available here.
Synopsis: “The Debrief” is a twenty-minute public interview of a scientific session’s speakers by an emerging journalist, held for both a physical and a virtual audience. Interviews will take place on our Expo Stage. Nadya Bliss (Arizona State) and Dan Gillmor (Arizona State) will present the key takeaways from the Detecting, Combating, and Identifying Dis- and Misinformation session.
Watch the full video of the debrief on YouTube here.