Yesterday, Rolling Stone released part one of a special report on the artificial intelligence revolution. The article opens with a quote from Pieter Abbeel, a researcher at UC Berkeley and one of CRA’s 2016 CRA-E Undergraduate Research Faculty Mentoring Awardees. Pieter is one of three winners of the inaugural award, which recognizes individuals for providing exceptional mentorship and undergraduate research experiences and, in parallel, guidance on admission and matriculation of these students to research-focused graduate programs in computing. His successful mentoring of undergraduates focuses on early identification of students, individual encouragement to pursue research, weekly research meetings, discussion of research skills, ongoing advice about graduate school, and help during the graduate application process. He currently advises and mentors 15 undergraduates. In his seven years on the faculty at UC Berkeley, the research opportunities he has provided motivated 33 of his undergraduate mentees to pursue graduate programs in computing, with the majority pursuing or having received a Ph.D.
Pieter specializes in robotics and machine learning, and more specifically in making robots learn from people (apprenticeship learning) and through their own trial and error (reinforcement learning). The article gives the reader a basic history of the origins of artificial intelligence and the directions the field is heading. He welcomes the reporter to his lab, which he refers to as a “robot nursery school” because he is developing techniques inspired by child psychology to teach robots to think intelligently.
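To make "learning through trial and error" concrete, here is a minimal sketch of tabular Q-learning, one of the standard reinforcement-learning methods, in a toy one-dimensional "corridor" world. None of this code is from Abbeel's lab or the article; the world, parameter values, and names are all illustrative. The agent is never told the rule "walk right"; it discovers it purely from rewards.

```python
# Minimal Q-learning sketch (illustrative only): an agent in a 5-cell
# corridor learns by trial and error to walk right to the goal at cell 4.
import random

N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move (clipped to the corridor); reward 1 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the current estimate, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy steps right (+1) from every non-goal cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The same update rule, scaled up with neural networks in place of the Q-table, underlies the kind of learning described in the article.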
Industrial robots have long been programmed with specific tasks… But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognize speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own.
The article uses several analogies and examples to make artificial intelligence relatable for its readers, most of whom don’t have a background in computing.
All this is spooky, Frankenstein-land stuff. The complexity of tasks that smart machines can perform is increasing at an exponential rate. Where will this ultimately take us? If a robot can learn to fold a towel on its own, will it someday be able to cook you dinner, perform surgery, even conduct a war? Artificial intelligence may well help solve the most complex problems humankind faces, like curing cancer and climate change – but in the near term, it is also likely to empower surveillance, erode privacy and turbocharge telemarketers. Beyond that, larger questions loom: Will machines someday be able to think for themselves, reason through problems, display emotions?
Despite how it’s portrayed in books and movies, artificial intelligence is not a synthetic brain floating in a case of blue liquid somewhere. It is an algorithm – a mathematical equation that tells a computer what functions to perform (think of it as a cooking recipe for machines). Algorithms are to the 21st century what coal was to the 19th century: the engine of our economy and the fuel of our modern lives. Without algorithms, your phone wouldn’t work.
What’s new is that scientists have developed algorithms that reverse this process, allowing computers to write their own algorithms.
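A toy example can make that reversal concrete. Below, instead of a programmer hand-coding the rule y = 2x + 1, the machine is given only input/output examples and uses gradient descent to find the rule itself. The rule, parameters, and learning rate are all illustrative choices, not anything from the article.

```python
# Hedged illustration of "computers writing their own algorithms":
# we never code the rule y = 2x + 1; the machine infers it from examples.
examples = [(x, 2 * x + 1) for x in range(10)]  # hidden rule the machine must find

w, b = 0.0, 0.0   # the "learned algorithm" is just these two numbers
lr = 0.01         # learning rate: how big a correction each mistake causes

for _ in range(2000):
    for x, y in examples:
        pred = w * x + b      # the machine's current guess
        err = pred - y        # how wrong it was
        w -= lr * err * x     # nudge the parameters to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # converges close to 2.0 and 1.0
```

After training, the pair (w, b) is effectively a small program the computer wrote for itself from data, which is the core idea behind the machine-learning breakthroughs the article describes.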
The article also highlights thoughts from Eric Horvitz (Microsoft), a former CCC Council member. “The big question for humanity is, is our experience computational? And if so, what will a better understanding of how our minds work tell us about ourselves as beings on the planet? And what might we do with the self-knowledge we gain about this?”
The second part of the series will explore how artificial intelligence will impact the world of self-driving cars and the future of warfare. Look for it on March 9.