The last several years have seen Tesla emerge as a true leader in automotive innovation. In the race to build a fully self-driving car, Tesla and its CEO, Elon Musk, have made bold claims: that the technology would reach its cars far sooner than competitors' comparable models, and that it would be far safer than traditional, human-driven vehicles. Musk's supreme confidence in Tesla's product was on full display when he announced in 2015 that the company's Model S sedan would come equipped with self-driving technology, "Autopilot," that was still in its beta testing phase. He also proudly proclaimed that Autopilot was practically "twice as good" as a human driver and that someone could drive the 800 miles between San Francisco and Seattle without so much as touching the controls.
However, in the wake of two recent crashes involving Tesla vehicles allegedly engaged in Autopilot mode, one of them fatal, it would be wise to step back from Musk's bravado and evaluate the true limits of the current technology and where, realistically, the quest for an autonomous vehicle can go from here.
The first such incident concerns the death of Tesla owner Joshua Brown, who was killed on May 7 when his Autopilot-controlled Model S failed to detect a tractor-trailer turning in front of the vehicle and drove straight into the side of the trailer. It remains unclear whether this crash resulted from a defect in the Autopilot technology or from user error on Mr. Brown's part. Indeed, Mr. Brown has taken a tremendous amount of the blame for failing to keep both hands on the wheel as prescribed by the car's user manual (he was reportedly watching a Harry Potter movie at the time of the crash), but who could fault his confidence in the technology given Musk's prior claims? It is not a stretch to conclude that Brown's reliance on Tesla's promises lulled him into a false sense of security that the Autopilot technology was never meant to provide.
The second crash involved a Tesla SUV equipped with Autopilot that flipped on the Pennsylvania Turnpike after striking barriers on both sides of the road. Tesla is currently disputing the notion that Autopilot is to blame for the crash, but the driver, Albert Scaglione, has stated that the car was in Autopilot mode at the time of the wreck.
Even though these investigations are still ongoing, the legal questions surrounding Tesla's decision to make this technology available to consumers while it was still in beta testing are intriguing, to say the least. Most innovation in the automotive industry undergoes rigorous, long-term testing before it is released to the public. In fact, this is what we've seen from most automakers working on the same kind of technology: research and development takes years, and new features are rolled out gradually as the technology matures (in recent years we have moved step by step from collision warning systems and adaptive cruise control to semi-automated braking and automatic parallel parking). The decision to expose drivers to a dangerous technology that arguably wasn't fully developed could create substantial liability for Tesla. In fact, I would be shocked if Joshua Brown's family hasn't already contacted an attorney to evaluate their legal options. This type of loss certainly appears to have been foreseeable, and Tesla arguably failed to exercise reasonable care both in releasing an undertested, dangerous product onto the roadways and in failing to warn consumers about the true limitations of the Autopilot feature. Furthermore, Tesla did not make the details of the Brown incident public for over two months, suggesting that embarrassment over the failure of its heralded Autopilot technology overrode its concern for customer safety.
Even more curious is the fact that Tesla appears to have doubled down on its commitment to the Autopilot technology and will not be disabling the feature in its Model S sedans. The Wall Street Journal reported this week that, rather than remove the technology entirely, Tesla has chosen to focus on increasing its efforts to "educate customers on how the system works." Tesla may be pursuing this strategy because disabling Autopilot could look like an admission that the company released the technology prematurely, and thus an admission of liability for the harms the technology has caused.
Whatever happens with Tesla's latest misfortunes, it seems clear that these recent events will deter other automakers from racing to be first to roll out a truly autonomous car. The chilling effect of the public response may push the timetable back many years, but hopefully we will see major advances in the technology in the meantime that help prevent tragedies like the death of Joshua Brown. Even if the technology allows us to achieve a genuine self-driving car in the next ten or twenty years, the myriad legal and ethical implications of that development will still have to be sorted out in our legislatures and courtrooms.