I've written before about the limitations of J.D. Power's approach to conducting and reporting reliability research. This piece focuses more narrowly on the 2006 Initial Quality Study (IQS), which adds a new wrinkle.
This is the second time the IQS has been redesigned to encompass a larger number of potential issues. Once again,
the issues included have been expanded beyond defects that can be fixed to designed-in annoyances that must be endured. For example, a few years
ago a dysfunctional cupholder contributed to MINI's poor showing.
While the 2006 IQS asks about an even larger number of potential design issues, it also provides subscores for "design quality" and "production quality." Though a step in the right direction, an even better strategy would separate the two entirely. Currently, they are combined in the overall IQS score, making it unclear what that score represents. Beyond having very different implications, design quality is far more subjective than production quality.
This wouldn't be so bad if the overall IQS didn't receive almost all the attention in press coverage. But it does. And it's the only score that will be mentioned in the winners' ads. Consumers will see or hear these ads and wrongly assume that IQS is a direct indicator of defect rates.
Impact on rankings
How much does including design quality distort the results? Comparing the rankings based on the overall IQS with those based on production quality alone reveals many dramatic changes in relative position:
BMW, up 24 spots to #3
Buick, up 14 spots to #8
MINI, up 13 spots to #16, barely below the average (apparently they still haven't provided a decent cupholder)
GMC, down 13 spots to #22
Nissan, down 10 spots to #22 (there are ties)
Mercedes-Benz, up 9 spots to #16
Subaru, up 9 spots to #19
Dodge, down 8 spots to #27
Eight others change position by at least five spots. So out of 37 brands, the rankings for 16 are heavily affected by the inclusion of design quality. BMW and Mercedes are hit especially hard.
Small absolute differences
Beyond the cloudiness of the revised IQS, the way it is reported and spun continues to put too much emphasis on rankings, which diverts attention from the small size of the absolute differences.
Looking at defect rates alone, it's enlightening to count the brands near the average: 22 of 37 brands fall within one-tenth of a problem per car of the 0.64 PPC average, and 30 of 37 fall within two-tenths. Of the seven beyond this range, only one, Lexus, is on the good side, and even it beats the average by just 0.22 problems per car.
The best brand has 0.42 problems per car, while the worst has 1.10, a difference of only 0.68 problems per car. Even much of this narrow range comes from a few low-scoring brands: the difference between Brand #3 and Brand #32 is a scant 0.27 problems per car. Put another way, a car from Brand #32 has roughly a one-in-four chance of having a single additional problem compared to a car from Brand #3. That problem is likely to be the car's only one, and even with the best brand you still have close to a 50-50 chance of having a problem.
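The arithmetic above can be sketched in a few lines. The figures are the ones quoted in this section; treating an expected-problem gap as rough odds of one extra problem is a simplification, not J.D. Power's methodology:

```python
# Figures quoted above, in problems per car (PPC), from the 2006 IQS
best, worst, average = 0.42, 1.10, 0.64
gap_3_vs_32 = 0.27  # Brand #3 vs Brand #32

# Best-to-worst spread across all 37 brands
spread = worst - best
print(f"best-to-worst spread: {spread:.2f} PPC")  # 0.68

# An extra 0.27 expected problems per car is roughly a one-in-four
# chance that the Brand #32 car has one more problem than the #3 car
print(f"odds of one extra problem, #32 vs #3: about 1 in {1 / gap_3_vs_32:.0f}")

# Even the best brand averages nearly half a problem per car
print(f"best brand: {best:.2f} expected problems per car")
```

The point the numbers make: ranked lists magnify differences that, measured in actual problems per car, amount to fractions of a single problem.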
Pumping up problem counts
It's becoming clearer why J.D. Power lumps design quality into the IQS: without it the absolute differences become small. And the smaller the differences get,
the less people will care about IQS. And the less people care about IQS, the less manufacturers will be willing to pay lavishly for the detailed findings and for the right to advertise top scores.
It's not necessary to rely on inference to come to this conclusion. Speaking with Automotive News about the revised IQS, J.D. Power exec Joe Ivers asserted, "We're not just pumping up problem counts. It's a more precise measurement." (June 12, 2006 p.53)
Nice how that "just" managed to slip out.
Want better vehicle reliability information? Participate in TrueDelta's research and help make it happen.
Thanks for reading.
Michael Karesh, TrueDelta
First posted: June 9, 2006
Last updated: November 16, 2006