Computing, Cognition, and the Future of Knowing: How Humans and Machines are Forging a New Age of Understanding


IBM Research is a Lab and Center member of CRA. This article is the first in a series of our industry member profiles.

It’s not surprising that the public’s imagination has been ignited by artificial intelligence since the term was first coined in 1955. In the ensuing 60 years, we have been alternately captivated by its promise, wary of its potential for abuse, and frustrated by its sometimes slow development.

But like so many advanced technologies that were conceived before their time, artificial intelligence has come to be widely misunderstood—co-opted by Hollywood, mischaracterized by the media, and portrayed as everything from savior to scourge of humanity. Those of us engaged in serious information science and in its application in the real world of business and society understand the enormous potential of intelligent systems.

The future of this technology—which we believe will be cognitive, not “artificial”—has very different characteristics from those generally attributed to AI, spawning different types of technological, scientific, and societal challenges and opportunities, with different requirements for governance, policy, and management.

Cognitive computing refers to systems that learn at scale, reason with purpose, and naturally interact with humans. Rather than being explicitly programmed, these systems learn and reason from their interactions with us and from their experiences with their environment. They are made possible by advances in a number of scientific fields over the past half-century, and are different in important ways from the information systems that preceded them.

Those systems were deterministic; cognitive systems are probabilistic. They generate not just answers to numerical problems, but hypotheses, reasoned arguments, and recommendations about more complex—and meaningful—bodies of data.
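To make that contrast concrete, here is a minimal illustrative sketch in Python using scikit-learn. It is not Watson code, and the tiny clinical notes and labels are invented for illustration only: a hand-coded rule always returns the same single answer, while a model trained on examples returns several ranked hypotheses, each with a confidence score.

```python
# Illustrative contrast (not IBM Watson code): a deterministic lookup returns one
# fixed answer, while a learned probabilistic model returns ranked hypotheses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Deterministic: an explicitly programmed rule always yields the same single answer.
def deterministic_lookup(symptom):
    rules = {"fever": "influenza"}          # hand-coded mapping
    return rules.get(symptom, "unknown")

# Probabilistic: a model learned from a few toy examples yields scored hypotheses.
training_notes = [
    "patient reports fever and cough",
    "persistent cough and wheezing",
    "fever with joint pain and rash",
]
labels = ["influenza", "asthma", "dengue"]   # invented labels, for illustration only

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_notes, labels)

probs = model.predict_proba(["child with fever and a light rash"])[0]
hypotheses = sorted(zip(model.classes_, probs), key=lambda pair: -pair[1])
for diagnosis, confidence in hypotheses:
    print(f"{diagnosis}: {confidence:.2f}")  # ranked hypotheses, not one fixed answer
```

The point of the sketch is the shape of the output, not the toy numbers: a probabilistic system proposes alternatives and attaches evidence-based confidence to each, which is what allows it to offer recommendations rather than single lookup answers.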

What’s more, cognitive systems can make sense of the 80 percent of the world’s data that computer scientists call “unstructured.” This enables them to keep pace with the volume, complexity, and unpredictability of information and systems in today’s world.
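As a rough illustration of what “making sense of unstructured data” involves (a minimal sketch under invented assumptions, not Watson’s actual pipeline), the snippet below pulls a few structured facts out of a free-text note so that it can be queried like a conventional record:

```python
# Illustrative sketch: structured records fit fixed fields and are easy to query,
# while unstructured text must first be interpreted before it can be used.
import re

structured_record = {"patient_id": 123, "temperature_c": 39.1, "age": 7}

unstructured_note = (
    "7-year-old seen today with a temperature of 39.1 C, mild rash on both arms, "
    "no recent travel; parents report two days of intermittent fever."
)

# A minimal extraction step: recover a few structured facts from the free text.
temperature = re.search(r"(\d+(?:\.\d+)?)\s*C\b", unstructured_note)
age = re.search(r"(\d+)-year-old", unstructured_note)
symptoms = [s for s in ("fever", "rash", "cough") if s in unstructured_note.lower()]

extracted = {
    "age": int(age.group(1)) if age else None,
    "temperature_c": float(temperature.group(1)) if temperature else None,
    "symptoms": symptoms,
}
print(extracted)  # {'age': 7, 'temperature_c': 39.1, 'symptoms': ['fever', 'rash']}
```

Real cognitive systems apply far richer language understanding than a few regular expressions, but the task is the same in kind: turning free-form text, images, and sensor streams into representations a system can reason over.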

None of this involves either sentience or autonomy on the part of machines. Rather, it consists of augmenting the human ability to understand—and act upon—the complex systems of our society. This augmented intelligence is the necessary next step in our ability to harness technology in the pursuit of knowledge, to further our expertise, and to improve the human condition. That is why it represents not just a new technology, but the dawn of a new era of technology, business, and society: the Cognitive Era.

The success of cognitive computing will not be measured by a Turing test or a computer’s ability to mimic human behavior. It will be measured in more practical ways: return on investment, new market opportunities discovered, diseases cured, and human lives saved.

Here at IBM, we have been working on the foundations of cognitive computing technology for decades, combining more than a dozen disciplines of advanced computer science with 100 years of business expertise.

We are now seeing firsthand its potential to transform business, government, and society. We have seen it turn big data from an obstacle to an opportunity, help physicians make early diagnoses for childhood diseases, and suggest creative solutions for building smarter cities. And we believe that this technology represents our best—perhaps our only—chance to help tackle some of the most enduring systemic issues confronting our planet, including cancer, climate change, and an increasingly complex global economy.

The World’s First Cognitive System

In February 2011, the world was introduced to Watson, IBM’s cognitive computing system, which defeated Ken Jennings and Brad Rutter at “Jeopardy!”

It was the first widely seen demonstration of cognitive computing, and it marked the end of the so-called AI winter. The programmable systems that had revolutionized life over the previous six decades could not have made sense of the messy, unstructured data required to play “Jeopardy!”

Watson’s ability to answer subtle, complex, and pun-laden questions made it clear that a new era of computing had arrived. Since “Jeopardy!,” Watson has tackled increasingly complex data sets and developed understanding, reasoning, and learning that go far beyond deciphering information. Indeed, the goal of cognitive computing is to illuminate aspects of our world that were previously invisible—in particular, patterns and insights in unstructured data—allowing us to make more informed decisions about more consequential matters.

The true potential of the Cognitive Era will be realized by combining the data analytics and statistical reasoning of machines with uniquely human qualities, such as self-directed goals, common sense, and ethical values.

This is what Watson was built to do and, in fact, is already doing. For example, banks are analyzing customer requests and financial data to surface insights that help them make investment recommendations.

Companies in heavily regulated industries are querying Watson to keep up with ever-changing legislation and standards of compliance. And oncologists are testing ways in which cognitive systems can help interpret cancer patients’ clinical information and identify individualized, evidence-based treatment options that leverage specialists’ knowledge and experience.

Implications and Obligations for the Advance of Cognitive Science

The Cognitive Era is the next step in the application of science to understand nature and improve the human condition. In that sense, it is a new chapter of a familiar story, and the controversy surrounding artificial intelligence is merely the latest example of the age-old debate between those who believe in progress and those who fear it. Within the scientific community—as opposed to the media and popular entertainment—the verdict is in. There is broad agreement on the importance of pursuing a cognitive future, along with recognition of the need to develop the technology responsibly.

Specifically, we must continue to shape the effect of cognitive computing on work and employment. Like all technologies, cognitive computing will change how people work. It will help us perform some tasks faster and more accurately. It will make many processes cheaper and more efficient. It will also do some things better than humans can, as technologies have since the dawn of civilization.

What has always happened is that higher value is found in new skills, and humans and our institutions learn to adapt and evolve. There is no reason to believe it will be different this time. Indeed, given the exponential growth in knowledge, discovery, and opportunity opened up by the Cognitive Era, there is every reason to believe that the work of humans will become ever more interesting, challenging, and valuable.

About the Author
As IBM senior vice president, Cognitive Solutions and IBM Research, Dr. John E. Kelly III is focused on the company’s investments in several of the fastest-growing and most strategic parts of the information technology market. His portfolio includes IBM Analytics, IBM Commerce, IBM Security and IBM Watson, as well as IBM Research and the company’s Intellectual Property team. He also oversees the development of units devoted to serving clients in specific industries, beginning with the April 2015 launch of IBM Watson Health.

About IBM Research
IBM Research comprises more than 3,000 researchers across 13 labs on six continents. Scientists from IBM Research have been recognized with six Nobel Prizes, 10 U.S. National Medals of Technology, five U.S. National Medals of Science, six Turing Awards, 19 inductions into the National Academy of Sciences, and 20 inductions into the U.S. National Inventors Hall of Fame. For more information about IBM Research, visit www.ibm.com/research.