How Tesla Tries To Keep The Media On Autopilot


Today I appeared on Bloomberg television to discuss Tesla’s latest earnings, as I have after the electric car maker’s last few quarterly reports, but this time things were somewhat different. Minutes before we went live, the show’s host Emily Chang told me that she would be asking me about a correction Tesla had requested to my most recent Bloomberg View post about the new Autopilot 2.0 hardware suite announcement. My initial draft had said that “several” people had died in Teslas with Autopilot enabled, and at the request of a Tesla representative my editors and I agreed to clarify that only two deaths have been tied to the controversial driver assist system. I am always happy to make factual corrections to my writing, but because I had limited time on air to explain the complex circumstances around this particular issue, I thought I would write a post laying out the particulars of the case.

When I say that two people have died in Teslas using Autopilot mode, what I mean is that there are two instances in which Tesla does not deny that a death occurred while one of its vehicles was in Autopilot mode. The first is the case of Josh Brown, whose Tesla slammed into a turning semi truck in May of this year, kicking off investigations by the NHTSA and the NTSB. The second is the case of Gao Yaning, who died in January of this year when his Tesla drove into the back of a street sweeper near Handan, China. Tesla acknowledged in a company blog post that Josh Brown’s car was in Autopilot mode during his fatal crash, and that the car’s radar failed to recognize the truck that Brown’s vehicle hit. Tesla has not confirmed that Gao’s vehicle was in Autopilot mode when it crashed, saying that the collision destroyed the antenna that beams vehicle data back to its servers.

What sets the Gao case apart is not that Tesla was unable to remotely retrieve crash data from the vehicle (this has happened in at least one other instance), but that the Gao family has denied Tesla physical access to the vehicle’s onboard data logs. Without access to that data, Tesla has been unable to defend its Autopilot system’s performance and blame driver error for the crash, as it has done in numerous other crashes involving its vehicles in Autopilot mode. The family’s decision to withhold the data logs from the wrecked vehicle matters because it illustrates how heavily Tesla stacks the deck in its own favor when it comes to interpreting the data its vehicles generate, and it explains why we can only say that two deaths have been tied to the Autopilot system.

As Bozi Tatarevic explains in his excellent post “Who Owns Your Vehicle Crash Data” at The Truth About Cars, crash data recorders (known as Event Data Recorders, or EDRs) are governed in the United States by federal regulation 49 CFR 563. A key principle of that regulation is that the data recorded by an EDR in any car sold in the United States must be readable by a commercially available tool, so that an automaker’s claims about the data can be independently verified. Tesla is the only automaker whose vehicles record crash data in a format that does not meet that standard, meaning only Tesla can decipher and interpret the data its vehicles generate. Tesla gets away with this by claiming that its data recording system is not technically an EDR as defined by the regulation, allowing it to comply with the letter of the law while flouting its spirit. Compounding this troubling situation, Tesla also does not comply with the Driver Privacy Act of 2015‘s clarification that all vehicle data is the property of the vehicle owner, not the manufacturer. According to Green Car Reports, Tesla’s privacy policy, which all owners must sign when they purchase a vehicle from the company, states that Tesla may collect and use vehicle data as it sees fit.

I recently noted in a piece at Quartz that Tesla’s collection and use of this data go beyond what any other auto manufacturer has done:

In several instances, the electric car maker has publicly released characterizations of data captured by onboard sensors in order to fight the perception that its driver assistance system is unsafe.

This may surprise some users: Tesla’s Terms of Use (TOU) does not explicitly state that the company will release information captured by its vehicles when there is no clear legal need to do so. That doesn’t mean it can’t; whether Tesla’s public disclosures of vehicle data fall under a reasonable interpretation of its TOU is ultimately a matter of legal interpretation. As a practical matter, however, the company’s failure to clearly disclose that it may publicly release characterizations of owner driving data could lead to a backlash from privacy-minded consumers, especially as Tesla attempts to bring its technology to a mass market.

In other words, Tesla’s actions clarify what its Terms of Use do not: Tesla uses data that should belong to its vehicles’ owners against those very owners in order to defend the safety record of its Autopilot system. Just as importantly, Tesla does not offer owners the opportunity to independently access and interpret the data that should be their property. What’s more, Tesla’s unchallenged and unverified claims about Autopilot’s involvement in crashes stack the deck further against owners through a “heads I win, tails you lose” approach to interpreting that data. The Wall Street Journal captures just how helpless owners are in Autopilot-involved crashes with the following deeply telling anecdote:

Arianna Simpson, a venture capitalist in San Francisco, said the Autopilot in her Model S “did absolutely nothing” when the car she was following on Interstate 5 near Los Angeles changed lanes, revealing another car parked on the highway.

Her Model S rammed into that car, she said, damaging both vehicles but causing no major injuries.

Tesla responded that the April crash was her fault because she hit the brakes right before the collision, disengaging Autopilot. Before that, the car sounded a collision warning as it should have, the car’s data show.

“So if you don’t brake, it’s your fault because you weren’t paying attention,” said Ms. Simpson, 25. “And if you do brake, it’s your fault because you were driving.”

To summarize: Tesla blames owners for Autopilot-involved crashes whether they put their full trust in Autopilot or intervene when they think it is about to fail; it refuses to provide owners with the relevant data or a means of interpreting it; it provides no opportunity for third-party verification of its interpretation of that data; and it aggressively publicizes its unverified interpretations in order to blame owners for crashes. Given these extraordinary circumstances, it’s a minor miracle that Tesla has allowed even two fatal crashes to be blamed on its Autopilot system. The deck is completely stacked against owners who might blame Autopilot for a crash unless regulators happen to move swiftly to investigate the incident, as in the Brown case, or the owners deny Tesla access to vehicle data, as in the Gao case. Moreover, Tesla put this deeply unfair and unaccountable data collection regime in place in spite of US regulations whose purpose appears to be protecting consumers from precisely this kind of tilted playing field.

This situation might be defensible if Tesla had a solid record of good-faith arguments in defense of Autopilot, but alas this is not the case. As I have documented both here and at The Daily Beast, and as others have argued elsewhere, Musk and Tesla have consistently made highly dubious claims about Autopilot’s safety record. Given Tesla’s clear record of misleading representations and arguments about its controversial Autopilot system, the lack of accountability around its claims about individual Autopilot-involved crashes creates a deeply troubling picture. Though there is no hard evidence that Tesla is systematically misrepresenting the data from the various crashes involving its cars, there is undeniable evidence that it is willing to bend the truth on this issue, and it has clearly created a situation in which it is impossible to independently fact-check the claims it makes about Autopilot and Autopilot-involved crashes.

This pattern has continued through the corrections Tesla requested to my most recent piece. Tesla’s representative requested a number of corrections that were easily dismissed, asserting for example that “Tesla has NOT cut prices” and that regulators have not cracked down on Autopilot. Of the two corrections that were ultimately made, the one that has not been discussed in this piece reflects the representative’s assertion that Tesla’s recent self-driving video depicts a trip from Fremont, CA to the company’s headquarters in Palo Alto, an assertion that is patently false. The actual route ran from one side of the Stanford campus to the other (although not through it), a fact that has been reported by Paulo Santos at SeekingAlpha and by a pseudonymous Redditor, and was subsequently confirmed to me by a local who witnessed Tesla filming the video. Anyone who believes that Tesla actually cares about accurate reporting need only look at the corrections it requested to be disabused of that notion.

This in no way absolves me of my responsibility to ensure that my reporting and analysis are 100% factually accurate, but I hope the public does begin to realize how complex the situation is and how hard Tesla has made it to report accurately on this issue. The fact that I have now expended some 1,500 words explaining the context around the removal of a single word from my last column should help demonstrate how difficult it is to write about Autopilot’s safety issues without simply parroting Tesla’s talking points. At the end of the day, the lesson I take away from this experience is that I can’t be too careful about every word in my writing. But the other lesson here is that only Tesla has the power to address the suspicions that swirl around Autopilot, both by choosing its messaging more carefully and by creating more transparency and accountability around its vehicle data. Until it does, or until legislators and regulators make its data collection system subject to the EDR regulations that other automakers comply with, suspicion, misunderstanding and miscommunication will continue to infect the conversation around this important public safety topic.
