Here’s Why Some Popular Car Models Don’t Make Consumer Reports’ Safety List

One of the best sources of vehicle safety information is Consumer Reports. Between its in-house testing and feedback from actual car owners, the publication gives its stamp of approval to a range of vehicles and brands based on how safe they are. However, if you cross-reference CR with other groups that issue similar safety ratings, you may notice that certain models ranking highly on those other lists are missing from CR's best of the best. That's because Consumer Reports has very specific guidelines for what qualifies a vehicle for that list, guidelines that other publications don't necessarily follow.

For instance, no full-size SUVs or pickup trucks rank among CR's safest vehicles. Some models may be quite safe for their class, but the publication's senior director of auto testing, Jake Fisher, notes that these vehicles "take longer to stop and don't handle as nimbly as smaller vehicles." Because of this, they're more likely to get into a crash, particularly one "that a small vehicle could have avoided." This doesn't stop CR from giving full-size SUVs and pickups high safety ratings; they just can't reach the very top tier.

Consumer Reports also gives far more weight to crash tests performed by the Insurance Institute for Highway Safety (IIHS) than to those of the National Highway Traffic Safety Administration (NHTSA). It believes the IIHS's tests are far more representative of real-world crashes, as the IIHS performs six primary crash tests compared to the NHTSA's four. So a vehicle with a perfect crash rating from the NHTSA may not be rated as well by the IIHS.

How feature standardization and usability influence Consumer Reports’ standings

There are a number of brands that Consumer Reports rates very highly. Take Subaru: CR has ranked the Japanese automaker as the top overall brand on the market, and while its vehicles earn good safety ratings, they miss out on the publication's top safety designation. That isn't to say Subaru's vehicles aren't worthy of that top score, but CR's guidelines put a premium on safety-feature standardization. Subaru's EyeSight suite of advanced safety features is terrific, but it doesn't come standard on every trim of its vehicles. To reach that top level, CR requires that these kinds of features, like backup cameras or pedestrian detection, be available to all drivers. Although Subaru is the most notable example, there are other ostensibly safe models, like the Honda Civic, that fall into this bucket as well.

Safety features aren't the only things Consumer Reports considers important. Convenience features like climate control also contribute to safety: if the driver can use them easily, the car is safer in Consumer Reports' eyes. The publication singles out Volvo as a brand that scores poorly on feature usability, so even if its cars are physically very safe, drivers can be easily distracted trying to figure out the controls for these "conveniences" while behind the wheel, leading to unsafe driving conditions. Mercedes-Benz and Volkswagen are other brands that CR has found to have overly complicated infotainment and climate control systems. Of the 10 safest vehicles spotlighted by the publication, Mazda produces three. Clearly, it has found the sweet spot of usability, safety-feature standardization, and crash-test performance that satisfies the automotive team at Consumer Reports.



A new class-action lawsuit, filed on Monday by three teenage girls and their guardians, alleges that Elon Musk’s xAI created and distributed child sexual abuse material featuring their faces and likenesses with its Grok AI tech.

“Their lives have been shattered by the devastating loss of privacy, dignity, and personal safety that the production and dissemination of this CSAM have caused,” the filing says. “xAI’s financial gain through the increased use of its image- and video-making product came at their expense and well-being.”

From December to early January, Grok allowed users of the AI and of the X social media platform to create AI-generated nonconsensual intimate images, sometimes known as deepfake porn. Reports estimate that Grok users made 4.4 million "undressed" or "nudified" images over a period of nine days, accounting for 41% of all images created.

X and xAI, including their safety and child-safety teams, did not immediately respond to a request for comment.

The wave of "undressed" images stirred outrage around the world. The European Commission quickly launched an investigation, while Malaysia and Indonesia banned X within their borders. Some US government representatives called on Apple and Google to remove the app from their app stores for violating those stores' policies, but no federal investigation into X or xAI has been opened. A similar, separate class-action lawsuit was filed by a South Carolina woman in late January.

The dehumanizing trend highlighted just how capable modern AI image tools are at creating content that seems realistic. The new complaint compares Grok’s self-proclaimed “spicy AI” generation to the “dark arts” with its ease of subjecting children to “any pose, however sick, however fetishized, however unlawful.”

“To the viewer, the resulting video appears entirely real. For the child, her identifying features will now forever be attached to a video depicting her own child sexual abuse,” the complaint reads.


The complaint says xAI is at fault because it did not employ industry-standard guardrails that would prevent abusers from making this content. It says xAI licensed use of its tech to third-party companies abroad, which sold subscriptions that led abusers to make child sexual abuse images featuring the faces and likenesses of the victims. The requests ran through xAI’s servers, which makes the company liable, the complaint argues.

The lawsuit was filed by three Jane Does, pseudonyms given to the teens to protect their identities. Jane Doe 1 was first alerted to the fact that abusive, AI-generated sexual material of her was circulating on the web by an anonymous Instagram message in early December. The filing says she was told about a Discord server by the anonymous Instagram user, where the material was shared. That led Jane Doe 1 and her family, and eventually law enforcement, to find and arrest one perpetrator.

Ongoing investigations led the families of Jane Does 2 and 3 to learn their children’s images had been transformed with xAI tech into abusive material.
