Make Robots Not War

Some Scientists Refuse to Get Paid for Killer Ideas

As American warfare has shifted from draftees to drones, science and the military in the United States have become inseparable. But some scientists are refusing to let their robots grow up to be killers.

Clusters of scientists shut the laboratory door on the military half a century ago in reaction to the horrors of atomic bombs, and again decades later in disgust with the Vietnam War. But today such refuseniks are rare and scattered—in large part, they say, because so many of their colleagues doing basic research are addicted to military money.

"I would rather the military run out of reasons to keep existing, and I don't want them to have any credit for something I have accomplished—which they clearly would if they gave me the money," says Steve Potter, a neuroscience researcher in Atlanta whose astonishing robotic creations would make a 21st-century general drool—if the general could get his hands on them.

Imagine a swarm of robots seizing control of the airspace and waters of a besieged port city while amphibious automatons roll up the shoreline to knock out pockets of resistance. The attack is brilliantly coordinated, and each of the robots is an astonishingly effective killer because it learns faster and has more flexible responses than any mere machine. The secret? At its core are real animal neurons—living brain cells—wired into advanced circuitry.

Potter's team at the Laboratory for Neuroengineering, shared by Emory University and Georgia Tech, might be best able to deliver on that wild vision. He's already created the Hybrot, a machine controlled by rat neurons sealed in a patented dish spiked with micro-electrodes. You can actually see those cells growing more complex and hairy with dendrites as they learn and interact with the outside world. The work could spawn an entirely new class of adaptable robot combatants. But there's a hitch: Potter won't take a penny from the military. Sure, the Department of Defense might crib from his published research, but Potter wants to grasp new knowledge without bloody hands.
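In rough terms, a hybrot is a closed loop: spike activity recorded from the cultured neurons through the dish's electrodes is translated into motor commands, and the robot's sensor readings are fed back to the culture as electrical stimulation. A minimal sketch of that loop follows; every class, function, and number here is invented for illustration and is not Potter's actual software.

```python
import random

class DishInterface:
    """Hypothetical stand-in for a multi-electrode array (MEA) driver.
    A real hybrot dish records spike trains from dozens of electrodes
    and can deliver electrical stimulation back to the culture."""

    def read_firing_rates(self):
        # Placeholder: random activity instead of live rat neurons.
        return [random.random() for _ in range(60)]

    def stimulate(self, electrode, strength):
        # A real driver would pulse the chosen electrode here.
        pass

def firing_to_motor(rates):
    """Map activity on two halves of the electrode grid to wheel speeds."""
    left = sum(rates[:30]) / 30
    right = sum(rates[30:]) / 30
    return left, right

def control_step(dish, sensor_distance):
    """One tick of the loop: read the culture, drive the robot,
    and stimulate the culture based on what the robot senses."""
    rates = dish.read_firing_rates()
    left, right = firing_to_motor(rates)
    # Close the loop: encode an obstacle distance as stimulation strength,
    # so the culture "feels" the consequences of its own output.
    dish.stimulate(electrode=0, strength=1.0 / (1.0 + sensor_distance))
    return left, right
```

The feedback step is what lets the dish "learn and interact with the outside world" in the sense described above: without stimulation returning to the neurons, the culture would be driving the robot blind.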

Technological dominance already translates into short-term military victory, and in coming years advanced technologies could also help secure occupations against the kind of guerrilla warfare and terrorism seen in Iraq today. Or at least top brass and congressional leaders alike are betting heavily on that belief.

On August 19, the American Association for the Advancement of Science (AAAS) reported that the U.S. House of Representatives' fiscal 2004 budget would pump $126 billion into federal research, $8.4 billion more than in fiscal 2003; 90 percent of that increase is specifically earmarked for the Defense and Homeland Security departments. With that many dollars chasing (and tempting) researchers in fields like robotics and nanotechnology, the perception is that it's almost impossible to forgo military support and still remain competitive.

"I think because there's so much military funding in robotics, compared to other kinds of computer science or arts and sciences, that you're going to have a reaction. You're going to have people take this attitude," says Illah Nourbakhsh, a well-known roboticist at Carnegie Mellon University who also snubs military financing. "But there are so many more people in robotics who do take the money."

The push comes from George W. Bush himself. "We must build forces that draw upon revolutionary advances in the technology of war," he told navy graduates.

Some of the most visible fruits of this emphasis are forecast in a Pentagon planning paper, "Joint Vision 2020." One-third of U.S. combat aircraft will be unmanned by that year, the report predicts. Ground and sea forces will also rely heavily on robots. Earlier this year the navy and marines held their biennial Kernel Blitz exercise off the California coast, deploying robotic submarines paid for by the Office of Naval Research (ONR).

"Operators will be assisted by decision aids that allow them to focus on the operational art of war, leaving the implementation details to the unmanned element of this synergistic blend of man and machine intelligence," Tony Tether, director of the Defense Advanced Research Projects Agency (DARPA), told the Senate Armed Services Committee's Subcommittee on Emerging Threats and Capabilities. DARPA has been raked over the coals lately for outlandish programs like Total Information Awareness and the terrorism futures market, but it also boasts military technology programs that spun off benefits to daily life, like the Internet and the Global Positioning System. Because the agency can invest in research for the long term while private companies eye the next financial quarter, its programs are among the most ambitious anywhere. (See related story, page 40.)

"DARPA's goal," Tether said, "is to create chips that reason and adapt, enable smarter sensors, and achieve human-like performance." One of the far-out initiatives DARPA funds is called Brain Machine Interfaces, and both DARPA and ONR supported work that led to monkeys being able to control robotic arms by using brain signals coursing through probes implanted in their heads.
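The monkey experiments rest on a simple idea: intended movement can be decoded from the firing rates of a population of recorded neurons, often with little more than a weighted sum per output dimension. The toy decoder below illustrates that principle only; the weights, unit counts, and rates are invented for the example and bear no relation to the actual DARPA- or ONR-funded systems.

```python
def decode_velocity(firing_rates, weights):
    """Linear population decoder: each output dimension (x and y
    velocity of a robotic arm) is a weighted sum of the recorded
    units' firing rates."""
    return [sum(w * r for w, r in zip(row, firing_rates)) for row in weights]

# Toy example: 4 recorded neurons, 2 output dimensions.
weights = [
    [0.5, -0.2, 0.1, 0.0],   # hypothetical x-velocity weights
    [0.0, 0.3, -0.1, 0.4],   # hypothetical y-velocity weights
]
rates = [10.0, 5.0, 8.0, 2.0]  # spikes per second from each unit
vx, vy = decode_velocity(rates, weights)
```

In the real experiments such weights are fit from recordings while the animal moves its own arm; once trained, the same mapping lets brain signals drive the robotic arm directly.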

But the Hybrot's creator, Potter, has a slip of paper stuck to his bulletin board that would prove a major buzz kill. Slightly paraphrased from Australian philosopher John Passmore, by way of Carl Sagan, the note reads: "If a scientist or a philosopher accepts funds from some such body as the ONR, then he's cheating if he knows his work will be useless to them, and must take responsibility for the outcome if he knows it will be useful."
