The Washington Post’s Politics Columnist (and resident contrarian) Robert Samuelson has an interesting Op-Ed in yesterday’s edition dealing with the fact that the U.S. is producing “a shrinking share of the world’s technological talent.” After noting that there’s a pay disparity between science and engineering PhDs and other “elites” like MBAs, doctors and lawyers that probably leads to the production disparity, Samuelson rightly points out that the simple fact that other countries are producing more S&E PhDs doesn’t mean that we necessarily lose.
Not every new Chinese or Indian engineer and scientist threatens an American, through outsourcing or some other channel. Actually, most don’t. As countries become richer, they need more scientists and engineers simply to make their societies work: to design bridges and buildings, to maintain communications systems, and to test products. This is a natural process. The U.S. share of the world’s technology workforce has declined for decades and will continue to do so. By itself, this is not dangerous.
The dangers arise when other countries use new technologies to erode America’s advantage in weaponry; that obviously is an issue with China. We are also threatened if other countries skew their economic policies to attract an unnatural share of strategic industries — electronics, biotechnology and aerospace, among others. That is an issue with China, some other Asian countries and Europe (Airbus).
What’s crucial is sustaining our technological vitality. Despite the pay, America seems to have ample scientists and engineers. But half or more of new scientific and engineering PhDs are immigrants; we need to remain open to foreign-born talent. We need to maintain spectacular rewards for companies that succeed in commercializing new products and technologies. The prospect of a big payoff compensates for mediocre pay and fuels ambition. Finally, we must scour the world for good ideas. No country ever had a monopoly on new knowledge, and none ever will.
Putting aside the fact that Samuelson apparently unwittingly puts his finger on the need for producing more US-born and naturalized S&E PhDs — after all, given current agency practices, they are essentially the only ones able to do the defense-related research that will preserve “America’s advantage in weaponry” — he’s generally right on. The simple fact that other countries are producing S&E PhDs at higher rates than the U.S. isn’t the worry. The worry is when America’s global competition uses that newly-developed capacity for innovation and technological achievement to target sectors traditionally important to America’s strategic industries. IT is one such crucial sector.
As Samuelson points out, one way to ensure the U.S. remains dominant, especially in a sector like IT, is to make sure the U.S. continues to attract the best minds in the world to come study and work here. Unfortunately, as we’ve noted frequently over the last couple of years, the environment for foreign students in the U.S. is not nearly as welcoming as it once was.
Another is to nurture and grow our own domestically-produced talent in the discipline. But the challenges here are also tall. The most recent issue of the Communications of the ACM contains a very interesting (and on point) piece (pdf) about whether the computing community in the U.S. needs to do a better job of evangelizing what’s truly exciting about the discipline to combat declining enrollments and waning interest in computing. The piece, by Sanjeev Arora and Bernard Chazelle (thanks to Lance Fortnow for pointing it out on his excellent Computational Complexity blog), identifies the challenge:
Part of the problem is the lack of consensus in the public at large on what computer science actually is. The Advanced Placement test is mostly about Java, which hurts the field by reducing it to programming. High school students know that the wild, exotic beasts of physics (black holes, antimatter, Big Bang) all roam the land of a deep science. But who among them are even aware that the Internet and Google also arose from an underlying science? Their list of computing “Greats” probably begins with Bill Gates and ends with Steve Jobs.
We feel that computer science has a compelling story to tell, which goes far beyond spreadsheets, Java applets, and the joy of mouse clicking (or even Artificial Intelligence and robots). Universality, the duality between program and data, abstraction, recursion, tractability, virtualization, and fault tolerance are among its basic principles. No one would dispute that the very idea of computing is one of the greatest scientific and technological discoveries of the 20th century. Not only has it had huge societal and commercial impact but its conceptual significance is increasingly being felt in other sciences. Computer science is a new way of thinking.
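For readers who haven’t run into those terms, two of the principles Arora and Chazelle name — recursion and the duality between program and data — can be illustrated in a few lines of Python. (This toy example is ours, not from their article.)

```python
# Recursion: a problem solved by reducing it to a smaller instance of itself.
def factorial(n):
    """Compute n! recursively: n! = n * (n-1)!, with 0! = 1 as the base case."""
    return 1 if n == 0 else n * factorial(n - 1)

# Duality between program and data: source code is just text — ordinary data
# that a program can store, pass around, and then execute.
source = "factorial(5)"   # a program fragment held as a plain string
result = eval(source)     # the same string, now run as a program

print(factorial(5))  # 120
print(result)        # 120
```

The point isn’t the arithmetic; it’s that ideas like these sit underneath everything from compilers to Google, which is exactly the “underlying science” the authors argue high school students never see.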
A recent study by the Pew Internet Project demonstrates that American teenagers are tied to computing technology: 89 percent send or read e-mail; 84 percent visit websites about TV, music or sports stars; 81 percent play online games; 76 percent read online news; 75 percent send or receive instant messages. Yet that increasing use of technology doesn’t appear to make them any more interested in studying the science behind the technology. Maybe that’s not surprising — the fact that most teenagers probably have access to and use cars doesn’t appear to be swelling the ranks of automotive engineers. Maybe there’s a perception among bright teenagers that computing is a “solved” problem — or, as John Marburger, the President’s science advisor, put it at a hearing before the House Science Committee early in his tenure, maybe it’s a “mature” discipline now, perhaps not worthy of the priority placed on other more “breakthrough” areas of study like nanotechnology. I think Arora and Chazelle do a good job of debunking that perception, demonstrating that computing is thick with challenges and rich science “indispensable to the nation,” enough to occupy bright minds for years to come.
But the perception persists. Computing has an image problem. Fortunately, the computing community isn’t standing still in trying to address it (though maybe it’s only just stood up). At the Computing Leadership Summit convened by CRA last February, a large and diverse group of stakeholders — including all the major computing societies, representatives from PITAC, NSF and the National Academies, and industry reps from Google, HP, IBM, Lucent, Microsoft, Sun, TechNet and others (complete list and summary here (pdf)) — committed to addressing two key issues facing computing: current concerns about research funding and computing’s “image” problem. Task forces have been formed, chairmen named (Edward Lazowska of the University of Washington heads the research funding task force; Rick Rashid of Microsoft heads the “image” task force), and the work is underway. As the summary of the summit demonstrates, no ideas or possible avenues are off the table. We’ll report more on the effort as it moves forward.
As Arora, Chazelle and Samuelson all point out, the challenges are tall, but the stakes for the country (never mind the discipline) are even higher.