Quote:
Originally posted by EdT82SC
I don't think their ratings are skewed or misleading. They're just applying a rating system designed for Camrys and minivans to sports cars, and sports cars don't quite fit. The information is still valid, just not relevant. It's like the joke about the lost guy floating around in a balloon who asks a guy on the ground where he is, and the guy on the ground says, "In a balloon."
http://www.truedelta.com/pieces/shortcomings.php
Seven Serious Problems with Consumer Reports
I admire Consumer Reports as much as anyone. For decades they were the only source of vehicle reliability information. And even today they are the best source.
But even the best is not nearly good enough. In at least seven ways Consumer Reports' data collection methods or modes of presentation mislead or underinform consumers.
1. "Serious problems"
Consumer Reports' ratings are based on the number of "serious problems" reported by its members. I have searched in vain through their annual auto issues for a definition of what counts as a serious problem.
2. Relative ratings
Consumer Reports rates each model relative to the average vehicle. As a result, the absolute number of problems a vehicle will experience remains unclear. Does an "above average" vehicle "never break"? Is a "below average" vehicle "always in the shop"?
In the absence of hard numbers, people tend to assume that the best vehicles are better than they are and that the worst vehicles are worse than they are. I recently had a vigorous discussion with the owner of a Japanese SUV. As proof of his vehicle's superior reliability, he noted that its brand had been the highest rated in Consumer Reports' 2005 auto issue. This rating was based on 2004 vehicles, which were less (usually much less) than a year old at the time. His brand's cars had had eight "serious problems" per hundred vehicles. While this was less than half the eighteen per hundred reported for domestic brands, the absolute difference was just one-tenth of a serious problem per car. Another implication: few (if any) vehicles are likely to have even one serious problem this early in their lives.
This did not--and does not--strike me as anything to get wound up over. The real problem: very few people who glance through the magazine think about the absolute numbers behind the relative ratings.
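To make the arithmetic concrete, here is a minimal sketch in Python; the eight and eighteen problems-per-hundred figures come from the paragraph above, and the variable names are mine:

Code:

# Serious problems per 100 vehicles, as quoted above from the
# 2005 auto issue (vehicles less than a year old at the time).
japanese_brand = 8
domestic_brand = 18

relative_gap = domestic_brand / japanese_brand          # 2.25x -- sounds dramatic
absolute_gap = (domestic_brand - japanese_brand) / 100  # 0.10 problems per car

print(f"Relative: {relative_gap:.2f}x as many serious problems")
print(f"Absolute: {absolute_gap:.2f} serious problems per car")

The same gap sounds alarming as a ratio and trivial as a per-car figure; the relative ratings show readers only the first framing.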
3. Ranges
Consumer Reports rates models on a five-point scale from "much worse than average" to "much better than average" using their well-known red and black dots. More than half of domestic brand vehicle models earn an "average" rating, while many Hondas and Toyotas earn an "above average" rating. (With the average getting ever better, "much better than average" ratings have become increasingly rare.)
"Average" means within twenty percent of the average, so 80 to 120 on an index where 100 is average. "Better than average" ranges from 121 to 140. So if one vehicle is "average" and another is "better than average," the difference between them can be anywhere from a single point--totally insignificant--to 60 points--very significant. The red and black dots appear simple to understand, but they conceal far more than they convey. As a result, many readers of the magazine understand far less than they think they do.
4. Only averages
The reliability of all vehicles has been steadily improving. Currently, even the average eight-year-old domestic brand model is reported (on page 17 of the 2005 auto issue) to have fewer than one-and-a-half "serious problems" per year. Yet most people would not buy such a car because they fear it will have "lots of problems."
While perceptions are undoubtedly distorted by Consumer Reports' emphasis on relative ratings, another factor is likely involved: people are afraid of getting a lemon, an unusually troublesome car or truck. Even if the average is the same for two models, the chances of getting a lemon could be far higher for one than for the other. People might fear that even as the average rate of problems for domestic vehicles comes down, the odds of getting a lemon remain uncomfortably high.
Based on Consumer Reports' reported results, there's no way to know one way or the other, as they only report averages. To my knowledge, they have never discussed the odds of getting an unusually good or bad example of a particular model.
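A quick sketch of why averages alone can't settle the question; every number here is invented for illustration, as Consumer Reports publishes nothing like it:

Code:

# Two hypothetical models with identical fleet averages.
# Model A: uniform quality; every car averages 0.5 problems/year.
# Model B: 5% of cars are lemons averaging 5 problems/year; the
#          rest average about 0.26, so the fleet mean is also ~0.5.
lemon_share, lemon_rate, normal_rate = 0.05, 5.0, 0.263

mean_a = 0.5
mean_b = lemon_share * lemon_rate + (1 - lemon_share) * normal_rate

print(f"Model A: mean {mean_a:.2f} problems/year, lemon odds 0%")
print(f"Model B: mean {mean_b:.2f} problems/year, lemon odds 5%")
# An averages-only rating treats these two models as identical.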
5. Survey (in)frequency
Consumer Reports sends out an annual survey asking people to report problems that occurred during the entire previous year. This is too long a period to expect people to accurately remember what happened.
6. Stale information
Consumer Reports mails out surveys each spring, then first reports the results the following November. As a result, when a new vehicle is introduced in the fall, its reliability isn't reported until over a year later. This is a long time to wait for someone interested in a hot new design; by the time its reliability is known, it will no longer be hot.
A related issue: the vehicles reported on aren't as old as Consumer Reports suggests. For example, while "three-year-old vehicles" are, on average, three years old at the time the auto issue appears, they were only about two years old when the problems were reported, and only about one year old at the beginning of the period being reported on.
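Laid out as an explicit timeline -- the approximate ages come from the paragraph above, while the specific dates assume, purely for illustration, a 2002 model bought in spring 2002:

Code:

# Timeline for one "three-year-old" vehicle in the 2005 auto issue.
# Dates are illustrative; the ages match the text above.
timeline = [
    ("spring 2002", "car purchased",                               "age 0"),
    ("spring 2003", "reporting period begins",                     "~1 year old"),
    ("spring 2004", "survey mailed; owner reports the past year",  "~2 years old"),
    ("spring 2005", "auto issue appears",                          "~3 years old"),
]
for when, event, age in timeline:
    print(f"{when}: {event} ({age})")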
7. Fossilization
The last serious problem at least partially explains the others: Consumer Reports, once an innovator, has ceased to innovate. They have been reporting results in much the same way for decades. The year-long lag between the surveys and the auto issue is likely an artifact of the past, when computers and the Internet were not around to speed the process. The same goes for continuing to rely on an annual survey.