
Great Innovative Ideas are a way to showcase the exciting new research and ideas generated by the computing community.

Using Human Cognitive Limitations to Enable New Systems

Vincent Conitzer

The following Great Innovative Idea is from Vincent Conitzer, Kimberly J. Jenkins University Distinguished Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University.  Conitzer was one of the winners of the Computing Community Consortium (CCC) sponsored Blue Sky Ideas Track Competition at AAAI HCOMP 2020.  His winning paper is called Using Human Cognitive Limitations to Enable New Systems.

Motivation

My original interest in this line of thinking came from problems associated with a single person being able to create multiple accounts.  This can allow them to vote on the same content multiple times, making online votes meaningless; take advantage of a free trial period indefinitely, with the result that full-featured free trials are not offered at all; repeatedly misbehave on, for example, social media; place shill bids on items they are selling in an auction; collude with themselves in an online game; etc.

Assuming we want to maintain some degree of anonymity, is there anything we can do?  Here is an idea: perhaps it is possible to create a test that anybody can pass once, but nobody can pass twice [1].  If so, the problem is solved: simply require users to pass the test before getting an account.  But such a test may appear impossible.  How could taking a test that you pass make you unable to pass the test again in the future?

Approach

The approach to this problem (and other problems discussed in the Blue Sky paper [2]) is to take advantage of human cognitive limitations to design these apparently impossible systems.  For the above problem, the idea is to take advantage of the fact that people cannot simply wipe their memories.  Thus, taking the test once may result in some memories in the user that interfere with taking the test again later.  This makes more sense if the test is in fact a memory test that is not the exact same test every time.  Specifically, I tried, on human subjects, a design in which the subject was presented with some pictures of people’s faces, and then later had to pick these faces out of a larger set of faces.  Each time the test is run, the larger set remains the same, but the subset presented initially varies.  Thus, you might expect the second time you take the test to be harder than the first: now you have to remember which faces you saw this time, ignoring the fact that you have already seen all of the faces at some point.
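
As a rough sketch of this structure (the pool size, study-set size, and scoring rule here are my own illustrative assumptions, not the parameters used in the actual experiments):

```python
import random

# Illustrative sketch of the memory test described above: a fixed pool of
# faces is shared across all runs; each run shows the subject a fresh random
# subset, which the subject must later pick out of the full pool.

POOL = [f"face_{i:02d}" for i in range(20)]  # fixed larger set of faces
STUDY_SIZE = 6                               # faces shown at study time

def make_trial(rng):
    """Draw the subset of the pool shown to the subject in this run."""
    return set(rng.sample(POOL, STUDY_SIZE))

def score(shown, selected):
    """Hits minus false alarms, normalized so a perfect answer scores 1.0."""
    hits = len(shown & selected)
    false_alarms = len(selected - shown)
    return (hits - false_alarms) / STUDY_SIZE

rng = random.Random(0)
first_run = make_trial(rng)
second_run = make_trial(rng)  # same pool, different subset this time

# A subject who remembers the first run perfectly scores 1.0 on it; answering
# the second run with the faces from the first run scores poorly.  The hope is
# that familiarity with already-seen faces interferes even with honest answers.
print(score(first_run, first_run))  # 1.0
print(score(second_run, first_run))
```

The interference hypothesis is the crucial (and empirical) part: the scoring code only formalizes what "passing" would mean under some threshold.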

This particular design did not perform so well.  Subjects’ performance did not degrade significantly the second time they took the test.  Also, there is too much variance across people in how well they perform on this test, making it impossible to set a score threshold that anyone can reach once but not twice.  Another design performed better, but still not well enough for practical use.  I am hopeful that there is another design that works much better, but there are easier versions of the problem as well.  For example, a less ambitious goal is to design a test that nobody can pass twice at the same time — which would be useful when voting online for something within a very brief window of time (say, for the player of the game at the end of an online game).  In joint work with Garrett Andersen [3], we show that such a test is actually quite easy to design, by requiring the subject to keep track of a box that is moving among other boxes.  Trying to do two of these tests at once seems effectively impossible for a human being, since we are not able to track two things in different places at once.  An experiment confirmed this.
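
One way to see why this suffices for the brief-window voting use case (my own framing of the scheme, not code from [3]): if each vote requires passing a full-attention tracking test that spans the entire voting window, then casting two votes would require running two such tests simultaneously — exactly what a human cannot do.

```python
from dataclasses import dataclass

# Sketch: any two test sessions that each cover the whole voting window must
# overlap in time, so a user who cannot run two tests at once cannot vote twice.

@dataclass(frozen=True)
class TestSession:
    start: float  # seconds
    end: float

    def overlaps(self, other):
        return self.start < other.end and other.start < self.end

VOTE_WINDOW = TestSession(0.0, 30.0)  # say, a 30-second vote at game's end

def valid_for_vote(session):
    """A test session counts only if it covers the whole voting window."""
    return session.start <= VOTE_WINDOW.start and session.end >= VOTE_WINDOW.end

a = TestSession(0.0, 30.0)
b = TestSession(-5.0, 31.0)
# Two sessions that are both valid necessarily overlap in time:
print(valid_for_vote(a), valid_for_vote(b), a.overlaps(b))  # True True True
```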

General Agenda

The key insight in the above examples is that we achieve something that would be impossible without human cognitive limitations such as our inability to forget at will or to track entities in multiple places at once.  This raises the question of what else we can achieve based on human cognitive limitations.  The Blue Sky paper gives another example: authenticating oneself online by playing a video game.  This is potentially useful to prevent users from passing on their login information to someone else — because we can’t simply tell another person how to play a video game and have them be just as good at it.  I suspect that there are many other examples where we can achieve some property of a system by making a reasonable assumption that there is something specific that humans cognitively cannot do.

Other Research

Most of my research is actually on quite different topics.  Much of my work has been on the intersection of AI and economic theory, especially game theory.  For example, how can an AI system act strategically in an environment with other strategic agents that have different objectives?  [E.g., 4]  More recently, I have also become interested in philosophical aspects of AI.  This includes ethical questions: how should we choose the objectives that AI systems pursue?  [E.g., 5]  It also includes foundational questions: while we like to think of AI systems as “agents,” in real systems it can be difficult to assess where one AI agent ends and another begins.  When the boundaries are unclear, how should the system act?  [E.g., 6]

Researcher’s Background

I am the Kimberly J. Jenkins University Distinguished Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University.  I received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.

My website: https://users.cs.duke.edu/~conitzer/

References

[1] Vincent Conitzer. Using a Memory Test to Limit a User to One Account. The 10th International Workshop on Agent Mediated Electronic Commerce (AMEC-08), Estoril, Portugal. Appears in LNBIP 44, Agent-Mediated Electronic Commerce and Trading Agent Design and Analysis, pp. 60-72.
[2] Vincent Conitzer. Using Human Cognitive Limitations to Enable New Systems. In the Eighth AAAI Conference on Human Computation and Crowdsourcing (HCOMP-20), Blue Sky Ideas track, Hilversum, the Netherlands (virtually), 2020.
[3] Garrett Andersen and Vincent Conitzer. ATUCAPTS: Automated Tests That a User Cannot Pass Twice Simultaneously. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 3662-3669, New York City, NY, USA, 2016.
[4] Vincent Conitzer. Computing Game-Theoretic Solutions and Applications to Security. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), pp. 2106-2112, Toronto, ON, Canada, 2012.
[5] Rachel Freedman, Jana Schaich Borg, Walter Sinnott-Armstrong, John Dickerson, and Vincent Conitzer. Adapting a Kidney Exchange Algorithm to Align with Human Values. Artificial Intelligence, accepted 2020. DOI:10.1016/j.artint.2020.103261
[6] Vincent Conitzer. Designing Preferences, Beliefs, and Identities for Artificial Intelligence. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19) Senior Member / Blue Sky Track, pp. 9755-9759, Honolulu, HI, USA, 2019.