For New York City schools reeling from their dismal scores on the newly toughened state exams, another tectonic shift is on the way.
As a winner in President Obama's "Race to the Top" competition, New York State is directing part of its $700 million windfall toward the Partnership for Assessment of Readiness for College and Careers (PARCC), a 26-state consortium that has set itself the daunting task of completely redefining the way K-12 students are evaluated. On the coalition's agenda, according to the state education officials and think-tank leaders who are leading the effort, are a shift from paper tests to computer-based ones, and long-form essays and research projects in place of fill-in-the-dots multiple choice. The ultimate goal: to create, by 2014, "an assessment system that will ensure students graduate college- and career-ready from high school."
"College readiness" is a holy grail that has long eluded education reformers as they try to repair a system that—as Waiting for Superman is currently reminding moviegoers—falls far behind such educational superpowers as Finland in both high school and college graduation rates. But education experts disagree as to whether the quest to build a better testing mousetrap is long overdue, or a multimillion-dollar boondoggle.
The complaint that today's standardized tests are a poor chisel for carving out college students is a common refrain, even among those who oversee them. Massachusetts education commissioner Mitchell Chester, who heads the PARCC governing board, notes that while his state's current test is considered admirably tough—so much so that Massachusetts schools have found it difficult to compete with those in more lenient states on federal No Child Left Behind rankings—"it is in many ways a very traditional assessment that you sit down and take in a one- or two-hour time frame near the end of the school year. And as such it has limitations."
One is that the skills many college administrators complain are lacking in their incoming students—the ability to conduct a research project, say, or evaluate a problem and figure out the most efficient path to a solution—are not easily gauged in an hour or two of sitting at a desk and coloring in ovals. To remedy this, PARCC plans on incorporating open-ended essays and take-home research projects. "In some cases, it will require students to take several days to complete an assignment," says Chester.
The consortium also plans to spread its assessments out in a series of mini-tests throughout the school year. "We hope it's not viewed as 'the big bad test at the end of the year' that you have to change everything you're doing to prepare kids for," says Matt Gandal, the executive vice president of Achieve, the D.C.-based education think-tank that was hired on as project manager for PARCC. "It's not about test prep—hopefully this is about an assessment system that actually helps them along the way."
PARCC is currently casting a wide net for input, consulting not only K-12 testing experts but college faculty and administrators. Gandal admits this will be "not easy, because the two systems are not used to working together." The hoped-for payoff, though, could be worth it. "At the end of high school, there will be a set of tests that students take that will tell them whether they're ready for college," he vows. "If you score college-ready, you should be able to walk in the door at any two- or four-year institution and not require remedial courses."
Skeptics of standardized testing are, predictably, skeptical. "I wouldn't be surprised if they can create a somewhat better standardized test—I expect they will," says Monty Neill, the director of the Massachusetts-based National Center for Fair and Open Testing. But, he fears, "the overall results are going to be nowhere near the time and the money that's put into it."
The main problem, insists Neill, is that in a high-stakes testing world—and the PARCC test would remain high-stakes, both in determining which students get into college and in setting schools' "accountability" scores—it would remain too tempting for teachers and school administrators to game the system. For example, Neill points to the Massachusetts state test, which includes a single essay question on its language-arts exam. The main achievement so far, he says, is that "there's a huge amount of effort to have kids incessantly practice the so-called five-paragraph essay. So kids learn to write that, but they can't write anything else."
As for PARCC leaders' promises of multi-day research projects and other open-ended assignments, Neill worries that these won't be feasible without making the tests prohibitively expensive to grade. (Chester acknowledges the challenge, but says PARCC hopes that economies of scale and the use of "artificial intelligence" to score tests can keep costs manageable.) And, Neill adds, there's an even bigger inherent obstacle: If assignments vary too much with each test, it's impossible to establish consistent scores that can be compared from year to year; if they don't vary enough, they become just another canned exercise to do test prep for. "If you can establish score consistency, you can rapidly have these things turn into something that you pick the lock on," he says. "It's in the end academically pretty pointless."