I’ve been very critical of the survey question Consumer Reports uses to gather its vehicle reliability data, most notably in this editorial. They’re aware of my critiques, and I’ve been wondering whether they’d improve their survey as a result.
Well, this year’s survey is out, so I have my answer.
Last year’s question:
If you had any problems with your car in the last year (April 1, 2005 through March 31, 2006) that you considered SERIOUS because of cost, failure, safety or downtime, click the appropriate box(es) for each car. INCLUDE problems covered by warranty. DO NOT INCLUDE 1) problems resulting from accident damage; or 2) replacement of normal maintenance items (brake pads, batteries, mufflers) unless they were replaced much sooner or more often than expected.
This year’s question, changes in bold:
If you had any problems with your car in the past 12 months that you considered SERIOUS because of cost, failure, safety or downtime, click the appropriate box(es) for each car. INCLUDE problems covered by warranty. DO NOT INCLUDE 1) problems resulting from accident damage; 2) recalls; or 3) replacement of normal maintenance items (brake pads, batteries, mufflers) unless they were replaced much sooner or more often than expected.
The explicit exclusion of recalls mirrors TrueDelta’s practice, though we have owners report recalls and identify them as such. This way recalls can be excluded from the current analysis but potentially included in future analyses.
The larger change is the elimination of the explicit date range in favor of “in the past 12 months.” My wife, who is studying to be an actuary, feels the new wording is dangerously imprecise. Some people will respond in early April. Others will respond in late June. Their survey period covers nearly three months. So it’s not clear what “year” will be reflected in Consumer Reports’ next set of results.
But I cannot be too critical in this case.
While conducting research for this blog entry, I rediscovered one of my posts on Consumer Reports’ own forum. (Yes, I sometimes post there.) The relevant part of the thread (subscription required) from last November begins:
Michael Karesh: [T]he verdicts for the Fit, Yaris, GM SUVs, and others [i.e. early 2007 models] are based on incorrectly filled in surveys, since the data [sic - meant date] range should have excluded most or even all of those owned at the time of the survey. Isn’t it risky to use such data, since there are two possibilities, neither of them good:
Group A. the respondent didn’t read the directions (that include an instruction to only report repairs up to 3/31/06)
Group B. the respondent read the instructions, so while they reported owning the car they did not report the repairs it required, since these occurred after 3/31
Jerry Josz: We used the 2007 model year information because it was available. As stated in the earlier reply [I'd asked about this issue once before] “Even with the short time on the road and the fact that these vehicles were outside of the survey period, we feel that the information is of interest to our subscribers and therefore we presented it. For some models, such as the Honda Fit, Toyota Yaris, and Toyota FJ Cruiser, we would have predicted the same rating based on the track record of the manufacturer.”
Actually, because these spring intro models are so new, Consumer Reports has a history of reporting overly optimistic reliability ratings for them. Last year the then-new Honda Ridgeline had a reliability rating that was literally off the charts. This year the Ridgeline’s reliability score declined all the way to average.
But that’s another matter which TrueDelta will also have to address: how many months must a car be owned before owner-supplied reliability information is valid?
The issue at hand is that Consumer Reports was using clearly faulty responses as the basis for some reliability verdicts. In response to their reply, I made a suggestion:
Michael Karesh: But if you’re not going to enforce the date range, and want to have data on early intro models, why have it? Why not just say “during the past year?” Sure, it’s much less precise, but this is the amount of precision in the resulting data given the policy of including data that clearly falls outside the date range.
And in a later response:
Michael Karesh: There are always a few early intro vehicles, and CR each year provides reliability predictions for a few of them the next November. Since this was the first year I’d filled out the survey, I’d never realized that this required relying on data from people who didn’t read, misread, or ignored the instructions. Again, I suspect you’d be better off bringing the instructions further in line with how respondents actually use the form.
And this is exactly what they have now done. It’s not the best solution, but it’s better than what they were doing. If you’re going to permit people to break the rules, you might as well toss the rules.
Unfortunately, the primary flaw in the question remains: it still lets each individual respondent decide what counts as SERIOUS. This permits all sorts of individual biases to intrude, and yields messy data. In contrast, TrueDelta’s survey precisely defines what needs to be reported (i.e. nearly everything that isn’t routine maintenance).