Tens of millions of people have chosen a car based on a brace of red dots from Consumer Reports. I don't care for the
dots--they hide at least as much information as they convey.
Still, I assumed that the data behind the dots was solid. Then I took their survey.
Of the survey's 19 questions, one is the source of the dots, number 13:
"If you had any problems with your car in the last year (April 1, 2005 through March 31, 2006) that you considered SERIOUS because of cost, failure, safety or downtime, click the appropriate box(es) for each car. INCLUDE problems covered by warranty. DO NOT INCLUDE 1) problems resulting from accident damage; or 2) replacement of normal maintenance items (brake pads, batteries, mufflers) unless they were replaced much sooner or more often than expected."
The various systems and their major components are then listed, with a single checkbox next to each system.
What if someone had multiple problems with a single system, such as "electrical"?
What if they had to return to the shop multiple times for the same problem? There's just one box to check.
TrueDelta's survey measures problem rates as well as trips to the shop.
Respondents must remember issues that happened over a year ago. Worse, they need to recall whether incidents near the cutoff happened in March or April. Most people's memories aren't that sharp.
Respondents who err on the safe side, reporting problems that only might have fallen within the timeframe, will likely report some problems twice if they take the survey year after year.
To place less strain on memories, TrueDelta asks about repairs the following month and describes the last reported repair.
Passing the Buck
Consumer Reports' dots reflect only "serious problems," a term the survey never precisely defines. Instead, Question 13 leaves it to each respondent to decide which problems are serious enough to report, an open invitation to bias.
In contrast, TrueDelta's survey precisely defines what counts as a reportable problem, and what does not.
Without clear guidelines, extraneous influences intrude. First, there's the respondent's general opinion of the car. Things gone right can make things gone wrong seem less serious. Why else would some people keep buying vehicles with "much worse than average" ratings?
Second, the reliability of cars past shapes expectations. If the previous car lost a transmission, then a defunct alternator may not seem serious. Unless the current car is the same brand, in which case even a burned-out turn signal may seem serious.
Third, did the dealer provide excellent service, maybe even a free loaner? If so, even a problem that required a tow might seem less than SERIOUS.
Brake Pads and Batteries
Question 13 says maintenance items don't count "unless they were replaced much sooner or more often than expected."
Not only does this lump maintenance and repair items together, with no way to separate them later, but it again falls to the respondent to decide what counts. To meaningfully include premature replacement of wear items, the survey would have to specify how long each item should last.
How long should brake pads last? Expectations are going to vary. Also, brake pad life is heavily affected by driving style, driving conditions, and other things that have nothing to do with reliability. And batteries? How many times were the lights left on? How much gunk was allowed to build up around the terminals?
TrueDelta's survey has a simple solution for the complexities of brake pad and battery mortality: it excludes wear items.
Time for an Alternative
A much better survey is possible. And TrueDelta is using it. If you want better reliability information, participate in TrueDelta's research and help make it happen.
Thanks for reading.
Michael Karesh, TrueDelta
First posted: June 27, 2006
Last updated: November 16, 2006