Great Innovative Ideas

Great Innovative Ideas are a way to showcase the exciting new research and ideas generated by the computing community.

Building Ethically Bounded AI

Francesca Rossi of IBM Research and Nicholas Mattei of Tulane University

The following Great Innovative Idea is from Francesca Rossi of IBM Research and Nicholas Mattei of Tulane University. Their paper, Building Ethically Bounded AI, was one of the Blue Sky Award winners at AAAI 2019.

The Idea

The more AI agents are deployed in scenarios with possibly unexpected situations, the more they need to be flexible, adaptive, and creative in achieving the goals we have given them. Thus, a certain level of freedom to choose the best path to the goal is inherent in making AI robust and flexible. At the same time, however, the pervasive deployment of AI in our lives, whether AI is autonomous or collaborating with humans, raises several ethical challenges. AI agents should be aware of and follow appropriate ethical principles, and should thus exhibit properties such as fairness or other virtues. These ethical principles should define the boundaries of AI’s freedom and creativity. However, it is still a challenge to understand how to specify and reason with ethical boundaries in AI agents and how to combine them appropriately with subjective preferences and goal specifications. Some initial attempts employ either a data-driven, example-based approach for both or a symbolic, rule-based approach for both. We envision a modular approach where any AI technique can be used for any of these essential ingredients in decision making or decision support systems, paired with a contextual approach to define their combination and relative weight. In a world where neither humans nor AI systems work in isolation but are tightly interconnected, e.g., the Internet of Things, we also envision a compositional approach to building ethically bounded AI, where the ethical properties of each component can be fruitfully exploited to derive those of the overall system.
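
One way to picture this modular view is a minimal veto-then-optimize sketch: ethical boundary modules (each of which could be rule-based or data-driven) veto actions, and the agent then maximizes its own preferences over the actions that remain. The code below is an illustrative assumption, not the authors' implementation; the names (EthicalModule, choose_action, the toy routing scenario) are invented for this example.

```python
from typing import Callable, Iterable, List


class EthicalModule:
    """Wraps any technique (hand-written rules, a learned classifier, ...) behind one interface."""

    def __init__(self, is_permitted: Callable[[str], bool], name: str = "module"):
        self.is_permitted = is_permitted
        self.name = name


def choose_action(
    candidate_actions: Iterable[str],
    preference_score: Callable[[str], float],
    ethical_modules: List[EthicalModule],
) -> str:
    """Pick the most preferred action among those that every ethical module permits."""
    permitted = [
        a for a in candidate_actions
        if all(m.is_permitted(a) for m in ethical_modules)
    ]
    if not permitted:
        raise RuntimeError("No action satisfies the ethical boundaries")
    return max(permitted, key=preference_score)


if __name__ == "__main__":
    # Toy example: a delivery agent prefers the fastest route, but a rule-based
    # module forbids cutting through a school zone.
    actions = ["highway", "school_zone_shortcut", "back_roads"]
    speed = {"highway": 0.8, "school_zone_shortcut": 1.0, "back_roads": 0.5}
    no_school_zone = EthicalModule(lambda a: a != "school_zone_shortcut", "safety_rule")
    print(choose_action(actions, lambda a: speed[a], [no_school_zone]))  # -> highway
```

Because every module answers the same question (is this action permitted?), rule-based and learned boundaries can be mixed freely, which is the point of the modular approach.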

In this paper, we define and motivate the notion of ethically bounded AI and survey the two predominant approaches in the literature: the data-driven, bottom-up approach and the rule-driven, top-down approach. We give examples of how these approaches can fail in the real world, e.g., computer game players that exploit particular rules rather than play the game, and we provide two concrete examples of work we have been involved in to build ethically bounded AI. In the final part we outline future challenges and research directions for the AI community, including: how to study and work with groups of agents that have conflicting opinions about what is ethical; the challenges that will arise as more intelligent agents enter our lives through, e.g., the Internet of Things; and the role that organizations such as ACM, IEEE, AAAI, and CCC should play in leading the discussion and research directions within AI, ethics, and society.
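
To make the contrast between the two surveyed approaches concrete, here is a small sketch (the toy data, feature names, and threshold rule are assumptions for this post, not from the paper): a top-down ethical rule written by hand next to a bottom-up boundary learned from labeled examples, both exposing the same permitted/forbidden interface so that either could plug into the modular scheme above.

```python
from typing import Callable, Dict, List, Tuple


def top_down_rule(action: Dict[str, float]) -> bool:
    """Explicit, hand-written ethical rule: never accept expected harm above 0.2."""
    return action["expected_harm"] <= 0.2


def fit_bottom_up_rule(
    examples: List[Tuple[Dict[str, float], bool]]
) -> Callable[[Dict[str, float]], bool]:
    """Learn a harm threshold from labeled examples (a toy one-feature 'classifier').

    Assumes the labeled examples are separable on the expected_harm feature.
    """
    ok_harms = [a["expected_harm"] for a, label in examples if label]
    bad_harms = [a["expected_harm"] for a, label in examples if not label]
    threshold = (max(ok_harms) + min(bad_harms)) / 2  # midpoint split
    return lambda action: action["expected_harm"] <= threshold


if __name__ == "__main__":
    labeled = [({"expected_harm": 0.05}, True),
               ({"expected_harm": 0.15}, True),
               ({"expected_harm": 0.40}, False),
               ({"expected_harm": 0.90}, False)]
    learned_rule = fit_bottom_up_rule(labeled)
    new_action = {"expected_harm": 0.30}
    # Both boundaries reject this action, but for different reasons:
    # the first by an explicit rule, the second by a threshold learned from examples.
    print(top_down_rule(new_action), learned_rule(new_action))  # -> False False
```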

Impact

The role of AI in our daily lives is only expanding. With this fact comes the realization that we must both pursue research into how to most effectively build AI technologies that align with our values and invite as many communities as possible to join the multi-stakeholder conversation around the best principles and practices for building these systems. This paper draws attention to these issues in a research context, asking concrete research questions centered on how we align the decisions of AI systems with the preferences and ethical priorities of a (set of) users.

This paper is just one aspect of our overall efforts to bring visibility to issues in the ethics and artificial intelligence space. Francesca sits on the board of the Partnership on AI and Nicholas serves as the AI, Ethics, and Society officer for ACM SIGAI. We were both involved in organizing the first ACM/AAAI Conference on AI, Ethics, and Society and have written about ethics education for AI practitioners (here) and (here).

Other Research

We conduct other research on a variety of topics including preference handling, computational social choice, constraint reasoning, multi-agent systems, data science, and AI ethics.

Researcher’s Background

Francesca Rossi is the IBM AI Ethics Global Leader and a Distinguished Research Staff Member at IBM Research. Previously, she was a Professor of Computer Science at the University of Padova, Italy, for 20 years.
Her research interests focus on artificial intelligence, including constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behavior of AI systems, in particular decision support systems for group decision making. She has published over 190 scientific articles in journals, conference proceedings, and book chapters. She has co-authored a book and edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and a handbook.
She is a fellow of both the worldwide AI association (AAAI) and the European one (EurAI). She has been president of IJCAI (the International Joint Conference on AI), an executive councilor of AAAI, and Editor in Chief of the Journal of AI Research. She is a member of the scientific advisory board of the Future of Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence (Cambridge, UK). She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners. She is a member of the European Commission High-Level Expert Group on AI.

Nicholas Mattei is an Assistant Professor of Computer Science at Tulane University. His research focuses on the theory and practice of artificial intelligence, largely motivated by problems that require a blend of techniques to develop systems and algorithms that support decision making for autonomous agents and/or humans. He is the founder and maintainer of PrefLib.org, a library of preference data, and most of his projects leverage theory, data, and experiment. He is the AI, Ethics, and Society officer for the ACM Special Interest Group on Artificial Intelligence (ACM SIGAI). He was previously a Research Staff Member at IBM Research AI in Yorktown Heights, NY. Before that he spent 4 years as a Senior Researcher at Data61/NICTA and UNSW in Sydney, Australia, and 2 years as a programmer and embedded electronics designer for nano-satellites at NASA Ames Research Center.

Links

IBM Research Blog post on this work: https://www.ibm.com/blogs/research/2019/01/ethically-aligned-ai/

IBM Research Blog post on similar work: https://www.ibm.com/blogs/research/2018/10/ai-agent-societal-values/

Popular press coverage of this work: https://www.fastcompany.com/90255740/ibm-explores-the-intersection-of-ai-ethics-and-pac-man

Popular press coverage of related work: https://venturebeat.com/2018/07/16/ibm-researchers-train-ai-to-follow-code-of-ethics/

 

Archive of Great Innovative Ideas >