Embedding Ethical Principles in Collective Decision Support Systems
The following Great Innovative Idea is from Francesca Rossi of the University of Padova. Rossi and her colleagues Joshua Greene (Harvard University), John Tasioulas (King’s College London), Kristen Brent Venable (Tulane University), and Brian Williams (Massachusetts Institute of Technology) published a paper called Embedding Ethical Principles in Collective Decision Support Systems, which was one of the winners of the Computing Community Consortium (CCC)-sponsored Blue Sky Ideas Track Competition at the 30th Association for the Advancement of Artificial Intelligence (AAAI) Conference on Artificial Intelligence (AAAI-16), held February 12-17, 2016 in Phoenix, Arizona.
The Innovative Idea
Many AI systems are designed to work in real-life scenarios where ethical considerations are an important issue. Think of self-driving cars, elder care assistive technology, and social robots. Designing and building ethics-compliant systems could impact all of these application domains.
I work on symbiotic environments for group decision making, where the environment (such as the meeting room) is essential in providing support for the group of people who need to make a decision. I also work on computational social choice, designing innovative frameworks to aggregate preferences coming from different sources in order to obtain a collective decision. Finally, I am interested in giving AI systems based on statistical or machine learning approaches, which are typically opaque, the capability of explaining what they do.
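As a minimal illustration of the preference-aggregation idea, one standard voting rule from computational social choice is the Borda count, sketched below (this is a generic textbook rule, not the specific frameworks from the paper; the function name and tie-breaking choice are my own):

```python
def borda_aggregate(rankings):
    """Aggregate ranked preferences from several agents via the Borda count.

    Each ranking is a list of candidates, best first. A candidate in
    position i of an n-candidate ranking earns n - 1 - i points.
    """
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - i)
    # Winner: highest total score; ties broken alphabetically for determinism
    winner = max(sorted(scores), key=lambda c: scores[c])
    return winner, scores

# Three agents rank three options for a group decision
rankings = [
    ["a", "b", "c"],
    ["b", "a", "c"],
    ["a", "c", "b"],
]
winner, scores = borda_aggregate(rankings)  # winner is "a" with 5 points
```

A rule like this could serve as the aggregation step in a collective decision support system, with ethical constraints acting as filters or priorities over the candidates before or after aggregation.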
I am a computer scientist. Since my PhD in CS, I have always worked in academia, teaching and doing research. My research interests have evolved over time, from logic programming to constraint solving, concurrency theory, programming languages, soft constraints, preferences, multi-agent systems, and finally preference aggregation.