The MTA released a website dashboard on Wednesday that offers new statistics aimed at increasing the agency’s transparency and making the subway’s performance easier for laypeople to understand beyond “I was two hours late to work today.” This is a welcome change for frustrated riders, and arguably the biggest visible impact of Joe Lhota’s time as chairman so far. But it also highlights just how far behind other systems the MTA has fallen.
The dashboard is set up so that we immediately see a bunch of pretty graphs. The average subway rider will want to peruse Major Subway Incidents (top left) and Customer Wait Time and Train Travel Time (scroll down a bit). For all three, up is bad and down is good. Making sense of them beyond that is another story.
As I’ve written for this website many times before, the MTA’s old performance reports were only marginally helpful in understanding the subway. As the agency itself acknowledged, the old metrics, such as Wait Assessment and On-Time Performance, measured trains instead of people. By contrast, the new measures show “impacts of actual customers, weighing toward where the customers are in the system and how they’re using the system,” Peter Cafiero, chief of operations planning at NYC Transit, told reporters during a presentation on Monday.
The old metrics for on-time performance were presented as a percentage of trains that met a confusing set of criteria defined by the agency’s antiquated schedule. They revealed nothing about the actual number of customers affected, or how long they had been trapped inside a tunnel or on a sweaty, crowded platform.
So the new statistics don’t have these problems, right? Well, about that.
Consider the new stat Major Subway Incidents, which counts any incident that delays 50 or more trains. It is, without a doubt, a better measure of the kind of catastrophic commutes straphangers have nightmares about than anything the MTA had before. But that’s because the MTA didn’t have anything like it before. So while we now know how many “incidents” delay 50 or more trains — an arbitrary benchmark, it appears — the MTA still isn’t counting passengers. Now, it’s counting “incidents,” which are themselves defined by…the number of trains.
And an incident that delays 625 trains will count exactly the same as one that delays 50. For the average person trying to use the dashboard to determine how bad subway service has been, there’s no way to tell whether July’s 54 Major Incidents — which included the July 17 meltdown on the 1, A, B, C, D, E, F, and L trains — affected more or fewer customers than June’s 81.
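To make that complaint concrete, here is a minimal sketch of how a trains-delayed threshold works. The 50-train cutoff comes from the MTA’s definition, but the sample incidents and the counting code are hypothetical, not the agency’s actual methodology.

```python
# Illustrative only: a threshold-based "Major Incident" tally.
# The 50-train cutoff is the MTA's stated definition; the incidents
# below (and their rider estimates) are invented for this example.

MAJOR_INCIDENT_THRESHOLD = 50  # trains delayed

incidents = [
    {"trains_delayed": 625, "est_riders_affected": 250_000},  # systemwide meltdown
    {"trains_delayed": 52, "est_riders_affected": 18_000},    # single-line hiccup
]

major_incidents = sum(
    1 for incident in incidents
    if incident["trains_delayed"] >= MAJOR_INCIDENT_THRESHOLD
)

print(major_incidents)  # 2: both count once; rider impact never enters the math
```

Both events register as exactly one Major Incident apiece, and nothing about the number of riders affected survives into the published figure.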
For whatever reason, the MTA decided not to adopt a measure like the London Underground’s Lost Customer Hours (LCH), which approximates the number of hours commuters lost because of incidents. Instead, the MTA chose to provide two separate numbers: Additional Platform Time and Train Travel Time. Each measures a different type of delay, one where customers are packed onto a sweltering platform, and the other where they’re crammed into an air-conditioned box. (The MTA says this approach “provides a better idea of the performance of a subway line from a customer’s perspective, since it reflects the amount of time a customer waits for a train to arrive at the station” and uses actual MetroCard swipe data as opposed to computer models.)
Combining the two, we can get an approximation of London’s LCH. And while the listed delays may seem manageable — additional platform time hovers around 1.2 minutes and additional train travel time around 1.5 minutes — multiply those 2.7 extra minutes per trip by two trips a day for every rider in the system and they add up to roughly 522,000 lost customer hours every day, or more than 15 million for the month of July. For comparison, the London Underground averages just over 2 million LCHs per month, something like 13 percent of the MTA’s total. The London Underground carries fewer passengers than New York’s subway, but even so, in March (the last data period available) the average London Underground customer waited just 52 seconds longer than scheduled, a mere 32 percent of the MTA’s combined extra time for July.
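Here is that back-of-envelope math written out. The 1.2- and 1.5-minute figures come from the dashboard; the ridership number is my own assumption, roughly in line with the subway’s weekday ridership and chosen to reproduce the totals above, not an official input.

```python
# Rough reconstruction of the lost-customer-hours estimate above.
# Assumption: ~5.8 million daily riders making two trips each; that
# figure is approximate and chosen to match the totals cited in the text.

additional_platform_min = 1.2   # extra minutes waiting on the platform, per trip
additional_train_min = 1.5      # extra minutes spent on the train, per trip
daily_riders = 5_800_000        # assumed; NYC weekday ridership is in this range
trips_per_rider = 2
days_in_july = 31

extra_min_per_trip = additional_platform_min + additional_train_min  # 2.7 minutes
daily_lost_hours = daily_riders * trips_per_rider * extra_min_per_trip / 60
monthly_lost_hours = daily_lost_hours * days_in_july

print(f"{daily_lost_hours:,.0f} lost customer hours per day")     # ~522,000
print(f"{monthly_lost_hours:,.0f} lost customer hours in July")   # ~16 million
```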
Whether or not this avoidance of more direct performance comparisons is intentional, it isn’t the only place the dashboard tweaks numbers to portray them in a more flattering light. Take Mean Distance Between Failure (MDBF), a measure of how many miles, on average, a subway car runs before it malfunctions and causes delays. Though the average distance a subway car travels without a breakdown has fallen from 172,700 miles in 2011 to 115,000 miles today, the graph is scaled to top out at 800,000 miles — which allows the MTA to use the same scale for the brand-new, less failure-prone R188 cars running on the 7 line, but also flattens the curves for older cars, obscuring how quickly their reliability is declining.
MDBF numbers for the R68As have declined by 25% over 12 months, but again the scale seems gradual thanks to the upper bounds. pic.twitter.com/lg0lr5kJ9c
— Second Ave. Sagas (@2AvSagas) September 27, 2017
Scaling the MDBF chart to a top level of 800K miles obscures the drop from 119K to 115K in 12 months quite effectively. pic.twitter.com/TQl3k1kzF1
— Second Ave. Sagas (@2AvSagas) September 27, 2017
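As a rough illustration of the scaling complaint in those tweets, here is how much of that decline actually shows up on the chart. The 800,000-mile axis ceiling and the 119,000-to-115,000 drop are the figures cited above; the arithmetic itself is purely illustrative.

```python
# How an 800K-mile y-axis ceiling visually flattens an MDBF decline.
# The figures are the ones quoted above; the script is illustrative only.

axis_top_miles = 800_000   # top of the dashboard's MDBF chart
mdbf_year_ago = 119_000    # miles between failures twelve months ago
mdbf_now = 115_000         # miles between failures today

relative_decline = (mdbf_year_ago - mdbf_now) / mdbf_year_ago
share_of_chart_height = (mdbf_year_ago - mdbf_now) / axis_top_miles

print(f"Actual decline: {relative_decline:.1%}")                   # ~3.4% worse in a year
print(f"Visible drop on the chart: {share_of_chart_height:.1%}")   # ~0.5% of the axis
```

A 3.4 percent decline in reliability shows up as about half a percent of the chart’s height, which is why the slide looks so gentle.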
There are very few ways to measure the MTA’s performance that make it look good, and this was always the danger the authority faced in providing customers with more information. While the dashboard is far from perfect — as of now, it doesn’t work so well on mobile, and the graphs cannot be adjusted or scaled differently — at least the MTA did spend time and money to illustrate how service is getting worse. If nothing else, it’s something to occupy you the next time you’re experiencing a Major Incident.