“Let’s calibrate,” says soft-spoken choreographer Trisha Brown, and the “wired” subset of her nine-member New York-based troupe spills onto the floor of a rehearsal hall at Arizona State University in Tempe. These four dancers each wear 16 “markers” on their heads and upper bodies, silvery-white balls that act as reflectors for 16 infrared cameras positioned around the stage space, near the ceiling. The cameras send data 100 times per second to a team working at a bank of computers at the back of the long, narrow room.
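For readers who want to picture that data stream, here is a minimal sketch in Python of what those 100-per-second marker frames might look like. The frame layout, the names, and the simulated capture loop are illustrative assumptions, not the actual interface of the Motion Analysis system.

```python
import math
from dataclasses import dataclass

# Illustrative constants taken from the article: four wired dancers,
# 16 reflective markers each, cameras sampling 100 times per second.
NUM_DANCERS = 4
MARKERS_PER_DANCER = 16
SAMPLE_RATE_HZ = 100


@dataclass
class MarkerFrame:
    """One snapshot of every marker's 3-D position, in meters."""
    timestamp: float
    positions: list  # positions[dancer][marker] -> (x, y, z)


def simulated_capture(num_frames):
    """Stand-in for the camera system: yields frames at a notional 100 Hz.

    Real hardware triangulates marker positions from the infrared
    cameras; here we fake a gentle circular drift so the stream has
    something to show.
    """
    for frame_idx in range(num_frames):
        t = frame_idx / SAMPLE_RATE_HZ
        positions = [
            [(math.cos(t + m), math.sin(t + m), 1.0 + 0.1 * d)
             for m in range(MARKERS_PER_DANCER)]
            for d in range(NUM_DANCERS)
        ]
        yield MarkerFrame(timestamp=t, positions=positions)


if __name__ == "__main__":
    for frame in simulated_capture(num_frames=300):  # three seconds of "dance"
        # Downstream, artists and musicians transform this data in
        # real time; here we just report one marker's position.
        x, y, z = frame.positions[0][0]
        print(f"t={frame.timestamp:5.2f}s  dancer 0, marker 0: "
              f"({x:+.2f}, {y:+.2f}, {z:+.2f})")
```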
Brown has defined the cutting edge of American dance since her first experiments at the Judson Dance Theater in the ’60s, and has trained generations of the most limpid and intelligent dancers we have. For years she worked without music; now she directs operas and blends her very contemporary, gesture-driven choreography with live jazz and the baroque. The artists she’s collaborating with in Tempe, mostly about half her age, announce that she has “a digital sensibility.”
In this mysterious new piece, how long does the subject linger at the edge of the volume . . . , the movement of the dancers generates digital signals that visual artists, working in real time, transform into cloudlike or striated shapes (imagine the strings of a parachute, or of marionettes, afloat in space) playing on a scrim at the front of the stage. Composer Curtis Bahn, who teaches at Rensselaer Polytechnic Institute in Troy, NY, and lectures on electronically extended instruments, dance, and performance technologies, uses the same data to create the accompanying sound, transforming raw material from acoustic instruments. When the dancers slow down, he reckons, the music will get busy.
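Bahn’s inverse relation between motion and musical density can be sketched in a few lines. The function names, the speed ceiling, and the density ceiling below are invented for illustration, not drawn from his actual software.

```python
SAMPLE_RATE_HZ = 100  # frames per second, as in the sketch above


def average_speed(prev_positions, curr_positions):
    """Mean marker speed (meters/second) between two consecutive frames."""
    total, count = 0.0, 0
    for prev_dancer, curr_dancer in zip(prev_positions, curr_positions):
        for (x0, y0, z0), (x1, y1, z1) in zip(prev_dancer, curr_dancer):
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
            total += dist * SAMPLE_RATE_HZ  # per-frame distance -> m/s
            count += 1
    return total / count if count else 0.0


def events_per_second(speed, max_speed=4.0, max_density=20.0):
    """Invert the relation: the slower the dancing, the busier the music.

    max_speed and max_density are assumed ceilings for illustration.
    At a standstill the music fires max_density events per second; at
    max_speed (a flat-out run) it thins to almost nothing.
    """
    speed = min(max(speed, 0.0), max_speed)
    return max_density * (1.0 - speed / max_speed)
```

Under those assumed numbers, near-stillness (a speed of 0.1 meters per second) yields about 19.5 musical events per second, while a fast run at 3.5 meters per second thins the texture to 2.5.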
The 30-minute piece, which had its premiere April 9 at ASU’s Galvin Playhouse, has been nearly two years in the making. This Thursday and Saturday it becomes one of the first dance events at the new Rose Theater at Jazz at Lincoln Center. It’s the centerpiece of a vast experiment called the motion-e project, requiring 28 people at the technical end and underwritten by the National Science Foundation, the National Endowment for the Arts, Motion Analysis Corporation, and ASU Public Events. The seven-figure budget is intended to develop more than just ethereal visual designs and music for dance; rehab devices for stroke patients and ways to engage children with the creative side of mathematics will also emerge from ASU’s ongoing collaboration between artists and engineers.
The collaborative team, mostly male, hunches over computers at a console resembling the flight deck of the Starship Enterprise, over which looms a large video projector. The motion-e group, led by Colleen Jennings-Roggensack, director of ASU Public Events, and composer Thanassis Rikakis, director of ASU’s Arts, Media and Engineering program, includes motion capture pioneers Paul Kaiser and Shelley Eshkar and their collaborator, artist and artificial intelligence researcher Marc Downie of MIT’s Media Lab, who take the visual data (sketches of human bodies in motion) and vary them. They’re weaving, says Downie, “one thread per dancer. Trisha Brown is a genius at navigating the algorithmic sensibility and turning it into an emotional experience.” Though she’s 68, Brown impresses Downie with her interest in a very contemporary idea: “what happens when an algorithmic idea meets what a human body can do, what a person can remember.” She’s been working with such concepts, strictly in flesh and blood, for decades, notably in her 1979 Accumulation with Talking Plus Watermotor, in which she performs two solo dances simultaneously while telling a story that has nothing to do with either.
The title of Brown’s work, part of her 35th-anniversary season, derives from a remark a technician made early in the complex collaboration between the much-honored choreographer and the behind-the-scenes artists, musicians, and engineers.
“The graphics are living, always changing, and will never be the same,” says Eshkar. “The cameras see; the computers and software extract meaningful data. . . . For the viewer, there should be a powerful sense that the graphics are partnering the choreography. We want the technology to be invisible . . . transparent.”
Oddly, with all the high-tech paraphernalia and programming surrounding the project, Brown’s 30-minute piece still depends for its visual effects on the scrim, a transparent layer of fabric between the audience and the dancers on which the manipulated motion-capture images are projected. And watching dance through a scrim subtly alters the experience, making it more akin to watching video; it dulls sensation and blunts the immediacy of live performance. Rikakis, who went to ASU from Columbia’s Computer Music Center, acknowledges that his team is “10 to 20 years away from embedding digital sensing and projection in a human activity, from art making to everyday living, without encumbering or influencing that activity. Then we’ll be able to sense movement and sound without the mover wearing markers, and to project sound and image dynamically on all types of surfaces and shapes. That will be the point when hybrid physical-digital systems reach their full expressive potential.”