“Computers, Freedom and Privacy 2004” and Privacy R&D


I’m back from the 2004 edition of ACM’s Computers, Freedom and Privacy Conference, held this year at the Claremont Hotel in Berkeley, California. This was the second time I’ve attended, and I’ve enjoyed it both times. The conference’s focus on the intersection of technology and civil rights brings together a fascinating blend of personalities: from EFF founder John Gilmore, to Rachel Brand of the Office of Legal Policy at the Department of Justice, to Bill Scannell of DontSpyOn.us, to Nuala O’Connor Kelly, Chief Privacy Officer at the Department of Homeland Security. The sessions are always lively and thought-provoking.
A few issues got the most attention at this year’s conference: the perils of “Direct Recording Electronic” (DRE) voting systems, government profiling using TIA-like systems, and civil liberties issues surrounding Google’s services. Of these, I was particularly frustrated by the government profiling discussions. A number of speakers made the point (though Doug Tygar probably made it most emphatically) that the government spends its IT privacy and security research funding disproportionately on security rather than privacy. Given the current state of funding for federal cyber security R&D (see previous blog entry), that’s a sobering thought. But the frustrating part for me is that many of the same people at CFP now clamoring for more federal funding for privacy-related research were among the loudest voices calling for cancellation of DARPA’s TIA project (I’m not including Tygar in this, as I don’t know where he stood on TIA). Let me explain.
DARPA’s Total Information Awareness (pdf) project was an attempt to “design a prototype network that integrates innovative information technologies for detecting and preempting foreign terrorist activities against Americans.” To do this, DARPA funded research into a range of technologies, including real-time translation tools, data mining applications, and “privacy enhancing technologies,” among them a “privacy appliance” that would protect the identities of all individuals within any of the databases being searched until the government had the appropriate court order to reveal them. At CFP, Philippe Golle of the Palo Alto Research Center (PARC) described one such project, led by Teresa Lunt, that DARPA had agreed to fund for three years as part of TIA. The plan was to create a “privacy appliance” that owners of commercial databases of interest to the government could deploy to control government access to their data, using inference control (deciding which queries, individually or in aggregate, might divulge identifying information), access control, and an immutable audit trail to protect individual privacy. Really neat stuff.
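Golle’s talk stayed at a higher level than this, but the basic shape of such an appliance is easy to sketch. Here’s a minimal illustration in Python, assuming a simple k-anonymity-style threshold for the inference control and a hash chain for the audit trail; the names here (PrivacyAppliance, aggregate_count, K_THRESHOLD) are my own inventions for illustration, not PARC’s actual design.

```python
import hashlib
import json
import time

K_THRESHOLD = 5  # refuse to answer queries that match fewer than K records

class PrivacyAppliance:
    """Hypothetical sketch of an inference-controlling query gateway."""

    def __init__(self, records, authorized_fields):
        self.records = records                      # the commercial database
        self.authorized_fields = authorized_fields  # the access-control policy
        self.audit_log = []                         # append-only audit trail
        self._last_hash = "0" * 64

    def _audit(self, entry):
        # "Immutable" audit trail: each entry carries a hash chained to the
        # previous entry, so after-the-fact tampering is detectable.
        entry["prev"] = self._last_hash
        entry["time"] = time.time()
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.audit_log.append(entry)

    def aggregate_count(self, field, predicate):
        # Access control: only fields the policy approves may be queried.
        if field not in self.authorized_fields:
            self._audit({"field": field, "result": "denied: unauthorized"})
            raise PermissionError(f"field {field!r} is not authorized")
        matches = [r for r in self.records if predicate(r)]
        # Inference control: a small result set could identify individuals,
        # so refuse to release it.
        if len(matches) < K_THRESHOLD:
            self._audit({"field": field, "result": "denied: too few matches"})
            raise PermissionError("result set too small to release safely")
        self._audit({"field": field, "result": f"ok: {len(matches)} matches"})
        return len(matches)  # release only the aggregate count, not the rows
```

So a query like appliance.aggregate_count("zip", lambda r: r["zip"] == "94720") would be answered only if at least five records match, and every query (answered or denied) would leave a tamper-evident log entry. Of course, counting thresholds like this are easy to defeat (two large queries whose results differ by one record can isolate an individual), which is exactly why inference control was pitched as a hard research problem.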
Anyway, the idea that the government might one day deploy a TIA-like system before all of the privacy and security challenges had been sorted out, thereby imperiling American civil liberties and security, worried a great many people and organizations, including CRA. However, those people and organizations took a number of different approaches to dealing with the concerns. A vocal contingent believed Congress should cancel TIA outright, reasoning that the threat the research posed was greater than any possible good. CFP participant Jim Harper, of Privacilla.org, addressed this approach directly at the conference, saying that groups like his try to kill government programs while they’re still small and in R&D because they’re too hard to kill once they get big.
CRA took a more nuanced view, I believe, arguing that the challenges that needed to be overcome before any TIA-esque system would ever be fit for deployment were large, and that CRA would oppose any deployment until concerns about privacy and security were met. However, we also argued that the research required to address those concerns was worthy of continued support. The problems of privacy and security (as well as the challenge of ever making something like TIA actually work) were truly difficult research problems, “DARPA hard” problems, and so we opposed any research moratorium.
Unsurprisingly, the “nuanced” position failed to carry the day once Congress got involved. At about the same time Congress was deciding TIA’s fate, stories broke in the press about DARPA’s FutureMAP project, which attempted to harness the predictive power of markets to glean information about possible terrorist activities, and about JetBlue’s release of customer data to a Defense Department contractor (in violation of its own privacy policy), which helped cement the opinion that DARPA was out of control. It also didn’t help that the TIA program resided in DARPA’s Information Awareness Office (IAO), headed by the controversial Adm. John Poindexter. TIA’s fate was sealed. Congress voted to cut all funding for the program and to eliminate the IAO.
However, Congress also recognized that some of the technologies under development might have a role to play in the war against terrorism. It included language in the appropriations bill (Sec. 8131(a)) that allowed work on the technologies to continue at unspecified intelligence agencies, provided that work focused on non-US persons. As a result, much of the research that had been funded by DARPA has been taken up by the Advanced Research and Development Activity (ARDA), the research arm of the intelligence community. Because it’s classified, we have no way of knowing how much of TIA has been resurrected under ARDA. We also have no way of overseeing the research, no way of questioning the approach or implementation, and no way of questioning the security or privacy protections (if any) included. In short, those who argued in support of a research moratorium succeeded only in driving the research underground.
Finally, one thing we do know about current TIA-related research efforts is that PARC’s work on privacy-enhancing technologies is no longer being funded.
