Technology and Its Discontents


Technology has been a popular scapegoat in recent years, shouldering the blame for everything from Columbine (all those violent video games) to the economy’s recent nosedive (all those nefarious dotcoms). So it was scant surprise when technology was labeled a minor culprit in the horrors of September 11. When word leaked that Osama bin Laden’s suspected minions likely encrypted their electronic messages, communicated via free e-mail accounts, and even made their fateful airline reservations online, the hand-wringing commenced. If only the networked world were better policed, congressmen and Fox News talking heads contended, then perhaps the twin towers would still stand today.

A few noted cyber-libertarians, such as Electronic Frontier Foundation founder (and ex-Grateful Dead lyricist) John Perry Barlow, were quick to defend the Internet’s wild and woolly character, stressing that an unregulated cyberspace is essential to public discourse. Free-software guru Eric Raymond circulated a long-winded, pro-civil-liberties spiel to like-minded technologists, though his zealous insistence that more lenient handgun laws would have prevented the hijackings made the missive seem kooky and ill-timed.

Patriotic bluster drowned out the online idealists. Wartime means hardship, and electronic privacy suddenly seemed like a needless luxury. Congressmen assured the public that civil liberties would be preserved as the nation prepared for a long, potentially bloody campaign. “There will be . . . inconvenience,” said Republican Representative Dick Armey of Texas, in a comment typical of the post-attack session. “But we will not violate people’s basic rights as we make this nation more secure.” Still, legislators did not hesitate to bolster the cops’ power to monitor electronic communications. On September 13, Congress passed the Combating Terrorism Act of 2001 (CTA), which lowers the legal standards necessary for the FBI to deploy its infamous Carnivore surveillance system. Carnivore, recently renamed DCS1000 in a public-relations maneuver, is a computer that the Feds attach to an Internet service provider; once in place, it scans e-mail traffic for “suspicious” subjects—which, in the current climate, could be something as innocent as a message with the word “Allah” in the header.
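A filter of the sort described above can be sketched in a few lines. This is a toy illustration, not Carnivore’s actual design (which is not public); the watchlist words and the sample message are invented. It shows how crudely a naive subject-line scan behaves, flagging perfectly innocent mail:

```python
# Toy keyword filter over e-mail Subject headers. Not Carnivore's real
# design; the WATCHLIST and sample messages are invented for illustration.
from email import message_from_string

WATCHLIST = {"allah", "jihad"}  # hypothetical "suspicious" words

def is_suspicious(raw_message):
    """Return True if the Subject header contains any watched word."""
    msg = message_from_string(raw_message)
    subject = (msg.get("Subject") or "").lower()
    return any(word in subject.split() for word in WATCHLIST)

# An innocuous greeting gets flagged just the same:
benign = "Subject: Allah be with you, grandmother\n\nSee you at Eid dinner."
print(is_suspicious(benign))  # True -- a false positive
```

The false positive is the point: a filter this blunt sweeps up devotional greetings alongside anything genuinely sinister.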

Next up on the congressional agenda is the far broader Mobilization Against Terrorism Act (MATA), which would empower any U.S. attorney to order a Carnivore installation without first obtaining a court order. It would also let American prosecutors use electronic evidence gathered abroad, even when that evidence was not gathered in accordance with the Fourth Amendment’s guarantees against unreasonable search and seizure. “These are the kinds of things that law enforcement has asked us for,” explained Republican senator Jon Kyl of Arizona, a co-sponsor of the CTA. “This combination is relatively modest in comparison with the kind of terrorist attack we have just suffered.”

Some technologists, however, warn that the CTA and MATA invite misuse, no matter what politicians promise. Among the first to urge caution was Matt Blaze, an AT&T research scientist. In an open letter dated September 12, Blaze fretted, “I fear that we will be seduced into accepting what seem at first blush as nothing more than reasonable inconveniences, small prices to pay for reducing the risk that terrorism happens on our soil again, without assessing fully the hidden costs to our values. . . . Like it or not, computers and networks, as much as our Constitution, are now endowed with the power to either protect us from or make us more vulnerable to evils like unreasonable search and censorship.” Blaze was also one of 42 computer scientists to publicly support In Defense of Freedom, a 10-point declaration that urged lawmakers to protect civil liberties despite the anti-terrorist fervor. (In Defense of Freedom also attracted the support of over 150 organizations, ranging from the left-wing People for the American Way to the ultra-conservative Eagle Forum.)

Yet precious few in the intelligence community sympathize with Blaze’s concerns. Military types have been blasting cyberspace’s feral nature for years, fretting over the cover it can provide for rogue groups like bin Laden’s Al Qaeda. At a 1999 colloquium on information-age conflict, Kenneth Minihan, a retired air force general, warned of the Internet’s openness and referred to privacy concerns as a “side show” that complicates national security. And just six days before the attacks, Ronald Dick, director of the FBI’s National Infrastructure Protection Center, accused civil libertarians of in effect aiding and abetting criminals: “Quite simply, the balance described in the Constitution, which provides the government with the capacity to protect the public, is eroding. In its place, the privacy of criminals and foreign enemies is edging toward the absolute.”

Much of the public, ever fearful of technology, seems to agree. According to one post-attack poll, for example, 72 percent of Americans favor new anti-encryption laws. That sentiment no doubt delights Republican senator Judd Gregg of New Hampshire, who is drafting legislation that could revive the discredited concept of “key escrow.” If key escrow becomes law, vendors of cryptography products will be forced to hand over their keys—the decoding mechanisms—to a trusted third party, like an accounting firm. In an emergency, the government can then use a court order to obtain the keys from the third party and decode whatever Internet traffic it wishes. A real-world analogy would be a law that compels you to entrust your apartment keys to a local bank, which in turn can hand those keys to the police if you are suspected of wrongdoing.
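The escrow arrangement can be sketched as a toy program. Real escrow proposals (the Clipper chip, for instance) involved split keys and tamper-resistant hardware; here a throwaway one-time-pad cipher and an in-memory “vault” stand in for the trusted third party, purely for illustration:

```python
# Toy sketch of key escrow. The XOR one-time-pad "cipher" and in-memory
# vault are stand-ins; real proposals used split keys and certified agents.
import secrets

ESCROW_VAULT = {}  # the trusted third party's key store

def encrypt(user_id, plaintext):
    """Encrypt with a fresh one-time key, depositing a copy in escrow."""
    key = secrets.token_bytes(len(plaintext))
    ESCROW_VAULT[user_id] = key  # the legally mandated key deposit
    return bytes(p ^ k for p, k in zip(plaintext, key))

def court_ordered_decrypt(user_id, ciphertext):
    """With a warrant, retrieve the escrowed key and decode the traffic."""
    key = ESCROW_VAULT[user_id]
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct = encrypt("suspect@example.net", b"meet at noon")
print(court_ordered_decrypt("suspect@example.net", ct))  # b'meet at noon'
```

Note that the vault is a single point of failure: anyone who compromises it can read every escrowed user’s traffic, which is precisely the objection cryptographers raise.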

Gregg may not be content to stop at key escrow. In a Senate speech immediately after the attacks, he hinted that he wanted future cryptography products to be outfitted with “back doors”—secret access points that only the National Security Agency could enter. The suggestion incenses cryptographers, who point out that intentional security holes would be exploited by parties other than the NSA. “Building systems where the ability for third parties to audit is a requirement—particularly third parties without participants knowing that auditing is taking place—is inherently dangerous,” says Matt Curtin, a privacy expert and founder of the security consulting firm Interhack. “A system where the government can read communication is a system where terrorists can read communication.”

This is not the government’s first attempt to rein in cryptography. In 1997, a House committee approved a bill that would have banned the manufacture, distribution, or import of any encryption product that didn’t include a back door. The bill never reached the full House, but the FBI has been keen to revive the matter ever since. Yet given the increasingly international flavor of the software market, Bruce Schneier, author of Applied Cryptography, believes such a crusade is misguided. “There are probably 1000 products that use strong cryptography in 100 countries,” he says. “Banning them in the U.S. won’t affect any of those. And people forget, cryptography also helps the good guys.” It is encryption, after all, that makes secure transactions over the Internet possible, and the back-dooring of those products would mean a golden age for digital criminals. No crypto, no e-commerce.

Pure cryptography is not the only privacy-enhancing tool that might face the wrath of security-minded legislators. “We were probably poised to have much better privacy protections, and I think this is going to create a lot of resistance,” says Jamie Love, executive director of the Consumer Project on Technology. He foresees a backlash against programs that enable anonymous Web browsing, or perhaps an end to anonymous surfing on public-access terminals; the hijackers, after all, made use of library-based PCs in Florida. There is considerable worry in the world of anonymous remailers, which cleanse messages of identifying information before forwarding them to their intended recipients. Immediately following the attacks, Len Sassaman, a prominent remailer operator, posted a message to his fellow operators, voicing a common anxiety: “I don’t want to get caught in the middle of this. I’m sorry. I’m currently unemployed and don’t have the resources to defend myself. At this point in time, a free-speech argument will not gain much sympathy with the Feds, judges, and general public.”
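The cleansing step that remailers perform can be approximated with Python’s standard email library. This is a bare sketch: real remailers such as Mixmaster also add layered encryption, random delays, and message padding, none of which is shown here, and the header list and addresses are invented examples:

```python
# Rough sketch of a remailer's cleansing step: strip the headers that
# identify the sender, then forward under the remailer's own address.
from email import message_from_string

IDENTIFYING = ("From", "Sender", "Reply-To", "Received",
               "Message-ID", "Return-Path", "X-Originating-IP")

def cleanse(raw_message, remailer_addr):
    """Strip sender-identifying headers and resend as the remailer."""
    msg = message_from_string(raw_message)
    for header in IDENTIFYING:
        del msg[header]  # deletes every occurrence; no-op if absent
    msg["From"] = remailer_addr  # the remailer becomes the visible sender
    return msg.as_string()

raw = ("From: alice@example.org\n"
       "Received: from 10.0.0.5 by mail.example.org\n"
       "To: bob@example.net\n"
       "Subject: hello\n"
       "\n"
       "anonymous body\n")
print(cleanse(raw, "nobody@remailer.example"))
```

The recipient sees only the remailer’s address; the operator, meanwhile, is the last identifiable link in the chain, which is exactly why operators like Sassaman fear becoming targets.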

Privacy watchdogs also predict a mainstreaming of biometrics in response to September 11. Already familiar to fans of spy thrillers, biometric technology measures physical characteristics—hand geometry, iris patterns—in order to authenticate a person’s identity. Such a system was recently installed at London’s Heathrow Airport, where selected transatlantic travelers can bypass conventional customs queues by having their eyes scanned. If an iris scanner could be correlated with a database of suspected terrorists, perhaps the hijackers would have been nabbed before carrying out their ghoulish plans. Instead, they were able to evade detection with forged or stolen paper documents, the kind of fake IDs that are within the reach of even the pettiest thieves.
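The matching step such a system performs can be sketched as a nearest-neighbor lookup against a watchlist. Real iris scanners compare binary “iris codes” by Hamming distance; the feature vectors, threshold, and suspect IDs below are made-up stand-ins, purely to show the shape of the computation:

```python
# Toy sketch of biometric watchlist matching. The enrolled "templates"
# and the 0.05 threshold are invented; real iris systems compare binary
# iris codes by Hamming distance, not Euclidean distance over floats.
import math

WATCHLIST = {
    "suspect-001": [0.31, 0.87, 0.44, 0.10],
    "suspect-002": [0.75, 0.12, 0.66, 0.91],
}
THRESHOLD = 0.05  # maximum distance still counted as a match

def match(scan):
    """Return the nearest watchlist ID if within THRESHOLD, else None."""
    best_id, best_dist = None, float("inf")
    for person_id, template in WATCHLIST.items():
        dist = math.dist(scan, template)  # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist <= THRESHOLD else None

print(match([0.31, 0.87, 0.44, 0.11]))  # suspect-001: only 0.01 away
print(match([0.50, 0.50, 0.50, 0.50]))  # None: no template close enough
```

Where the threshold is set decides everything: too tight and the hijackers walk through; too loose and innocent travelers are pulled aside.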

“What our technology can do is it can eliminate badges and PIN numbers and those kinds of devices that are easily lost or stolen,” says Tom Colatosti, president of Viisage Technologies, which makes a facial recognition system. Both the Tampa police and Viisage came under fire this past January, when the company’s face scanner was used to check the Super Bowl crowd for known felons. But in light of the attacks, the criticism has been replaced with a keen interest from safety-conscious corporations. “In a typical day, we would get one or two calls—for the most part, we were calling people, trying to interest them in our technology,” says Colatosti. “But certainly this week it’s been hundreds and hundreds and hundreds, from every corner of the globe and [about] every imaginable application.”

It is the archiving of biometric data that especially troubles privacy advocates. If biometric systems become ubiquitous in airports and office buildings, the government could soon have a database of everyone’s physical markers. “We focus too much on the initial acquisition of information, and too little on what happens to that information after it’s been collected,” says Harold Krent, a professor at the Chicago-Kent School of Law. He believes privacy laws should mandate the destruction of biometric data after a set period, lest that information be abused by overzealous authorities.

But for the moment, such concerns are bound to sound like ivory-tower prattling to many Americans. And most cyber-libertarians understand that they’ll have to compromise, at least in the short run. “Privacy advocates are going to have to say, Things have changed in terms of what’s realistic,” says Love. “It’s not whether or not the government is going to have the right to snoop and things like that. I think it’s going to be things like, What are the accompanying safeguards that minimize the amount of problems that predictably happen?” The geeks accept that they’ll have to contend with some Orwellian flourishes in their techno-paradise, but for how long? When will the terrorist threat finally be declared over, so that things can return to “normal”?

Perhaps never. Laws made in crisis mode seldom vanish once the wartime footing ends. In response to rumors that TWA Flight 800 had been downed by terrorists, for example, Congress passed a law that made it easier to expel legal aliens. But when mechanical failure was revealed to be the culprit, the law remained on the books. Even if Al Qaeda is somehow dismantled in the coming years, one suspects that technology’s carefree days were also a victim of September 11.