From Forbes, August 27, 2015:
In late August, Consumer Reports announced that the Tesla Model S was such a good car that it had broken the highly influential organization’s ratings scale with a road test score of 103 out of 100, and reaped a PR bonanza for both Tesla and themselves. If you missed the story, you must have been in a coma. Tesla’s stock price received a nice bump. Two months later, Consumer Reports had to retract its recommendation for the high-performance electric car, as the Model S scored worse than average (and very nearly much worse than average) in the latest round of its reliability survey. Tesla’s stock price received an ugly haircut. Most of the data for the survey was actually collected back in April. Consumer Reports could and should have known, well before August, that they’d have to withdraw their recommendation in October. Given this, should they have so enthusiastically publicized the outstanding test performance of the Model S in August? Thinking more broadly, what responsibility do car reviewers have to draw on the full breadth of their knowledge, and not only on information that has been publicly released, when making car recommendations?
First, some details on Consumer Reports’s annual cycle, since few people are aware of it. For years they’ve mailed (or emailed) their car reliability survey to subscribers in April, publicly released the results six months later, in late October, and then repackaged those results (by then based on data nearly a year old) with their latest road test scores for the April annual auto issue (actually published in late February). This year, for the first time, all of their surveys were conducted online: no more paper sent via snail mail. This came at a cost: their sample size declined from well over a million to a still impressive 740,000. But fully electronic data should deliver substantial benefits in the cost and speed of data processing and analysis. From my experience conducting TrueDelta’s survey, I’m aware that they could have started calculating scores from the raw data back in April; a middling server could have accomplished the task in a couple of minutes. Going fully online can also dramatically shrink the lag between when the surveys start and when the results are released. At TrueDelta, this lag is at most two months. CR did not alter their cadence this year, but I’d be surprised if they don’t in future years. Given that the reliability of cars changes as they age, minimizing this lag minimizes the amount of time during which you could be providing accurate recommendations but, in some cases, are not.
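To illustrate why the raw-data computation is so cheap, here is a minimal sketch in Python. It assumes survey responses arrive as simple rows of (model, problem count); the field names and the problems-per-100-vehicles metric are my own assumptions for illustration, not CR’s or TrueDelta’s actual methodology.

```python
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def problems_per_100_vehicles(responses: Iterable[Tuple[str, int]]) -> Dict[str, float]:
    """Aggregate raw survey rows of (model, problem_count) into a per-model rate.

    A single pass over even a million rows takes seconds on modest hardware,
    which is why preliminary scores could be computed almost as soon as
    responses start arriving.
    """
    totals = defaultdict(lambda: [0, 0])  # model -> [vehicles, problems]
    for model, problem_count in responses:
        totals[model][0] += 1
        totals[model][1] += problem_count
    return {m: 100.0 * problems / vehicles for m, (vehicles, problems) in totals.items()}

# Example with made-up rows:
sample = [("Model S", 2), ("Model S", 0), ("Camry", 0), ("Camry", 1)]
print(problems_per_100_vehicles(sample))  # {'Model S': 100.0, 'Camry': 50.0}
```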
While late responses and checking the data for errors would lead to some changes between any preliminary results and the official results released six months later, those changes would not have been enough to move Tesla’s score from within 20 points of the average (necessary for a CR buy recommendation) to the 43 points below average where it ended up. In the great majority of cases, even with sample sizes much smaller than CR’s, the change between initial and final results will be minimal. This is why TrueDelta starts permitting members to preview the next set of results, based on raw data, just a couple of weeks after the surveys start, about a month before the new results are virtually final, and about a month and a half before they are publicly released.
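To make the threshold arithmetic concrete, here is a minimal sketch of the recommendation cutoff described above. The 20-point cutoff and the 43-points-below-average figure come from this article; the function names, the 0-to-100 scale, and the exact numbers in the example are assumptions of mine, since CR’s actual scoring formula is not public.

```python
# Hypothetical illustration of the recommendation cutoff discussed above.
# CR's real scoring formula is not public; names and scale here are assumptions.

RECOMMEND_CUTOFF = -20  # a model must score within 20 points of the fleet average

def relative_score(model_score: float, fleet_average: float) -> float:
    """Score expressed relative to the average of all models (0 = exactly average)."""
    return model_score - fleet_average

def is_recommendable(model_score: float, fleet_average: float) -> bool:
    """True if the model is no more than 20 points below average."""
    return relative_score(model_score, fleet_average) >= RECOMMEND_CUTOFF

# A car that ends up 43 points below average fails the cutoff, so preliminary
# data showing anything near that level would already have signaled that the
# recommendation could not stand.
print(is_recommendable(model_score=57, fleet_average=100))  # False: 43 points below average
```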
I don’t influence millions of car purchases the way Consumer Reports does. I certainly don’t influence stock prices. But through the site and through my personal communications (I get emails…) I do influence more than a few, and I take each one seriously. If I think it likely that a car’s reliability rating will change in the future, should this influence my recommendations? Or should I base these recommendations only on the information TrueDelta has already publicly released? In the Tesla case, what should Consumer Reports have done? Should it have waited?