"Letting users risk their lives to run data for themselves - Tesla's autopilot's 'core value'?"

Author: Su Qingtao

This article was written in November 2018 but unfortunately could not be published at the time. Recently, watching Tesla's unreliable Autopilot, I could not help revisiting the piece, and I am publishing it now with some revisions.

Part One

In March 2013, Musk went to Google to meet Larry Page.

At the time, poor Model S sales and other problems had sent Tesla's stock price tumbling; supplier payments were delayed and the company could not make payroll. Tesla was one step from bankruptcy. So Musk swallowed his pride and reached out to his old friend Larry Page, hoping Google would acquire Tesla outright for $11 billion, even though the company was then valued at only about $6 billion.

Musk also proposed a condition: Google had to promise not to dissolve Tesla after the acquisition, and let Musk continue to serve as CEO for at least eight years. How could Page, who received such a surprise gift, be willing to miss it? Therefore, Page accepted all the conditions proposed by Musk.

Had Google actually acquired Tesla then, Autopilot might never have had its later moment in the spotlight. But history cannot be rewritten: just as the deal was about to be signed, Model S sales suddenly surged and Tesla's cash-flow crisis eased. Musk got cold feet, the acquisition fell through, and few people remember it today.

Two months after the failed acquisition, in May 2013, Musk told Bloomberg he was "considering the possibility of cooperating with Google on autonomous driving." The cooperation never materialized, mainly because Google had chosen the expensive lidar route, which was unsuitable for mass production, while Tesla needed a lower-cost solution.

From the beginning, Tesla's Autopilot was built around cameras and millimeter-wave radar, with low cost and mass production in mind. Both its rapid progress and the criticism it has drawn are mostly traceable, directly or indirectly, to this choice.

In October 2014, Tesla began working with Mobileye to install the camera-based Autopilot 1.0 hardware on newly produced Model S and Model X vehicles. Users paid $2,500 for this "option." At the time, however, Autopilot 1.0 had not yet been activated; it was only a year later, with the version 7.0 software update, that the Autopilot 1.0 hardware was switched on.

On October 24, 2015, Musk announced on Twitter that customers who had ordered the Autopilot option when purchasing their Tesla would be able to use the feature in all markets except Japan. This was nearly a year ahead of Tesla's original plan: on September 18, 2013, Musk had tweeted that users would be able to experience semi-autonomous driving "within three years."

However, Musk soon encountered setbacks.

In November 2015, Hong Kong's transportation authority demanded that Tesla disable Autopilot on vehicles sold in Hong Kong; the feature could not be re-enabled until the automatic steering and auto-parking functions had passed official verification. Hong Kong Tesla owners suddenly found that the Autopilot they had paid $2,500 for was unusable.

In the first ten months of 2015, 2,221 Model S vehicles were sold in Hong Kong, and Superchargers were densely distributed (roughly one within every 20 minutes' drive), so it is easy to see why Musk regarded Hong Kong as a "beacon" of the electric-vehicle market. With that market in trouble, the big boss naturally had to "attach great importance" to it and "handle it personally."

In January 2016, Musk urgently “sought an audience” with then Hong Kong Chief Executive Leung Chun-ying, hoping to resolve the issue. As expected, Leung gave Musk face, and by March, Hong Kong’s market had “released” Autopilot 1.0. Japan’s market also “released” Autopilot in January 2016.

On January 20, 2016, less than three months after the feature went live, Autopilot was involved in its first fatal accident anywhere in the world, on the Beijing-Hong Kong-Macau (Jinggang'ao) Expressway near Handan, Hebei, China. Before the crash, the owner, Gao Yaning, had engaged Autopilot and was relaxing and listening to music when the car plowed into a road-sweeper truck ahead without any warning.

However, this accident did not attract too much attention at the time, and the “first place” title was taken away by the fatal accident that occurred in Florida, USA in May of that year.

After the fatal Florida accident in May 2016, Tesla upgraded the Autopilot system, but only by promoting the millimeter-wave radar over the camera as the primary ranging sensor; it still did not adopt the more reliable lidar.

In 2017 and 2018, Tesla had multiple further accidents in Autopilot mode, causing at least three deaths in 2018 alone. The technical causes mostly came down to "perception failure," with "millimeter-wave radar cannot recognize static obstacles" being a particularly common problem.
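Why would a radar that measures distance and closing speed so precisely "fail to recognize" a stopped vehicle? Automotive millimeter-wave radars return enormous numbers of stationary echoes (signs, overpasses, guardrails), so a common engineering shortcut is to discard tracks whose absolute speed is near zero rather than brake for every one of them. The sketch below is only a minimal illustration of that filtering logic; the names, fields, and threshold are invented for this example and are not Tesla's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class RadarTrack:
    range_m: float        # distance to the detected object, in meters
    rel_speed_mps: float  # Doppler closing speed; negative means approaching
    rcs_dbsm: float       # radar cross-section estimate

def keep_moving_tracks(tracks, ego_speed_mps, stationary_margin_mps=0.5):
    """Naive clutter filter: keep only tracks that appear to be moving.

    absolute speed = ego speed + relative speed. A track whose absolute speed
    is near zero looks like roadside clutter (sign, overpass, guardrail) and
    is dropped -- which is exactly how a vehicle stopped in the lane can be
    ignored by a radar-only pipeline.
    """
    kept = []
    for t in tracks:
        absolute_speed = ego_speed_mps + t.rel_speed_mps
        if abs(absolute_speed) > stationary_margin_mps:
            kept.append(t)   # moving target: pass on to collision logic
        # else: silently discarded as static background
    return kept

# A car stopped 60 m ahead while we approach at 30 m/s (108 km/h):
# its relative speed is -30 m/s, so its absolute speed is ~0 and it is dropped.
stopped_car = RadarTrack(range_m=60.0, rel_speed_mps=-30.0, rcs_dbsm=12.0)
print(keep_moving_tracks([stopped_car], ego_speed_mps=30.0))  # -> []
```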

According to industry insiders, most of Tesla's accidents in Autopilot mode could have been prevented by adding one or a few lidars. Musk, however, has always insisted that full self-driving can be achieved with cameras and millimeter-wave radar alone, and that lidar is an unnecessary "crutch" as long as the algorithms are mature enough.

Confronted with the safety risks of going without lidar, Musk declared passionately at an internal all-hands meeting: "Instead of waiting ten years to perfect Autopilot and then releasing it to the public, we should prevent accidents and save lives now. It's cowardly to avoid making these contributions to the world just to evade legal responsibility."

Musk said: “We’re not going to do the easy things. We’re going to do the right things.”

However, while this "toxic chicken soup" may work on rank-and-file employees, it has little effect on independent minds. The high turnover among Autopilot's leaders makes that clear.

Let’s list some of Autopilot’s team leaders:

  • Project founder Sterling Anderson
  • Software Vice President Chris Lattner
  • Software Vice President Jinnah Hosein
  • Software Vice President David Nistér
  • Software Manager Sameer Qureshi
  • Hardware Vice President Jim Keller
  • Engineering Vice President Stuart Bowers
  • Autopilot Vision Director/AI Director Andrej Karpathy

Zheng Xiaokang summed it up in an article for "Garage 42": these people have two things in common. They are all top talents in robotics, computer vision, or hardware development, spanning academia and industry, and none of them had worked on autonomous driving before.

Zheng Xiaokang then offered a "wild guess": Musk's insistence on his camera-first strategy for autonomous driving is so extreme and radical that genuine domain experts had little interest in joining the Autopilot team to take on this near-impossible challenge. Musk therefore had to hire talented people with little background in autonomous driving, and he clearly underestimated how hard it is to achieve autonomous driving with cameras as the primary sensors.

As of November 2018, when this article was written, the first six leaders on the list above had all left Tesla. They seem to have understood that the camera-first route to autonomous driving was a serious mistake, both for its technical difficulty and for the harm it could do to innocent lives.

There are two examples that prove this point:

  • Sterling Anderson, the founder and former Autopilot head at Tesla, left the company to partner with the former CTO of Google’s self-driving car project, and founded Aurora, a self-driving company that uses lidar in its technology. Meanwhile, the third Autopilot head, Jinnah Hosein, joined Aurora in March 2018.
  • Sameer Qureshi joined Lyft as its head of L5 autonomous driving in May 2018, and Lyft is taking the lidar route.

It’s clear that whether or not to use lidar is a major consideration at Tesla.

However, the departure of a group of “bookworms” certainly did not put a stop to the ambitions of Elon Musk.

Of course, sticking with the current approach means accepting its safety problems. But weighed against the rapid learning that massive amounts of data make possible, "a few human lives" hardly seem worth worrying about, do they? In fact, accidents can even be "great opportunities" for Tesla to learn and grow: after the May 2016 fatality, Tesla introduced the more advanced Autopilot 2.0 system.

The founders of all autonomous driving companies claim to have a “life-saving” passion (in the ideal situation, autonomous driving can drastically reduce loss of life in comparison to human-driven cars). At least this is what they vocalize. However, before realizing that vision, they may have to experience a painful “period of turmoil.” In the early stages of experimentation, some innocent people may be casualties to this focus on “saving lives”.

Perhaps, as with Uber in its early days, the value actually at work is: "sacrifice the lives of a few now in order to save many more lives later."

Don’t think of this as mere malicious guesswork. Even Musk himself has endearingly referred to Tesla’s drivers as “Expert Trainers” of Autopilot.

What is an "Expert Trainer"? Put solemnly, it means a "training expert"; put more bluntly, it means a "test subject." In Tesla's value system, being called a "test subject" is actually a kind of honor. In the second half of 2018, Tesla even called on employees to become "test subjects" for the full self-driving feature, with a reward of $13,000.

Even Elon Musk himself has been called the “Tesla chief test subject” due to his frequent and bold use of Autopilot while driving and conducting video conferences at the same time.

However, when employees and Musk himself act as "test subjects," they do so knowingly, and they are rewarded handsomely and honored for it. Tesla's customers, by contrast, are often used without being properly informed and receive no compensation at all. Making customers pay the price for Musk's passion is simply not fair.

Worse still, when a "test subject" is unlucky enough to have an accident and the family sues Tesla, the company can always find a thousand excuses to dodge liability. The most common one: "You should have read the owner's manual carefully. This is only an assisted-driving system, and the driver remains primarily responsible. The vehicle data show that the driver's hands were not on the steering wheel before the accident."

Tesla may be justified from a strict liability standpoint, but the practice is ethically questionable from a human one. In its early marketing, Tesla mostly emphasized "autonomous driving" and barely mentioned that the product was "automated assisted driving"; in a demo video on its website, the driver never touches the steering wheel. That is exactly the impression Tesla wanted to leave with potential customers.

Of course, Musk also has a "conservative" side. At the end of 2015, for example, he discovered that some early Autopilot users were posting videos on YouTube of themselves shaving, or sitting in the back seat "enjoying the show," while the car drove. Musk was alarmed, and in the version 7.1 software update Tesla restricted some functions that users could abuse.

However, since Autopilot's selling point is "freeing your hands," Tesla cannot ban everything; otherwise the feature would lose its appeal. Users who trust it will happily text and fiddle with music behind the wheel without hesitation.

To date, almost all of Tesla's Autopilot accidents have been caused by drivers "over-trusting" the system. In the Florida crash, for example, the car was in Autopilot mode for 37.5 of the 41 minutes before the accident, and for 37 of those 37.5 minutes the driver's hands were off the steering wheel.

Yet "over-trusting" is a slippery notion. Shouldn't drivers trust the system's self-driving ability? If they don't trust it, what is the system worth? And if they do, at what point is that trust "appropriate," and at what point does it become "excessive"?

Once a system has shown it performs well and handles problems in most situations, human drivers naturally relax and "over-trust" it. Moreover, keeping your hands on the wheel while the car does the driving is not just boring; it is more tiring than driving yourself. If drivers are bored and tired and already trust the system, why wouldn't they pick up their phones? The requirement that "drivers must keep their hands on the steering wheel at all times" runs against human nature.

Yes, you read that right: it is not only Level 3 automation that runs against human nature. Judging by several Tesla accidents and the ensuing controversy, even Level 2 systems that let drivers take their hands off the wheel may do so as well. The higher the system's level of automation, the more drivers depend on it, and the greater the probability of accidents caused by over-trust.

Of course, Tesla's product design does try to constrain these "weaknesses of human nature." If the system detects that the driver has not retaken control after repeated reminders, it forces Autopilot to disengage and brings the car to a safe stop at the roadside. In practice, though, by the time the system reacts, the accident may already have happened.
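As a rough illustration of how such a safeguard can work (and why it may react too late), here is a toy escalation loop in the spirit of the description above: warnings escalate the longer the driver's hands are off the wheel, ending in a forced disengagement and a controlled stop. The thresholds, sensor interface, and stop routine are all assumptions made up for this sketch, not Tesla's real logic.

```python
import enum
import time

class AlertLevel(enum.Enum):
    NONE = 0
    VISUAL = 1      # e.g. a flashing message in the instrument cluster
    AUDIBLE = 2     # e.g. an escalating chime
    DISENGAGE = 3   # give up: disengage and bring the car to a stop

def escalation_level(hands_off_s: float) -> AlertLevel:
    """Map 'seconds without detected steering torque' to a warning level.
    Thresholds are invented for illustration."""
    if hands_off_s < 15:
        return AlertLevel.NONE
    if hands_off_s < 30:
        return AlertLevel.VISUAL
    if hands_off_s < 45:
        return AlertLevel.AUDIBLE
    return AlertLevel.DISENGAGE

def monitor_driver(hands_detected, safe_stop, tick_s: float = 1.0) -> None:
    """Toy monitoring loop.

    hands_detected(): returns True if steering torque (a proxy for hands on
                      the wheel) is currently detected -- an assumed interface.
    safe_stop():      slows the car and stops at the roadside -- also assumed.
    """
    hands_off_s = 0.0
    while True:
        hands_off_s = 0.0 if hands_detected() else hands_off_s + tick_s
        if escalation_level(hands_off_s) is AlertLevel.DISENGAGE:
            safe_stop()   # forced exit: the step the article says comes too late
            return
        time.sleep(tick_s)
```

Note that even in this toy version the forced stop only arrives some 45 simulated seconds after the driver disengages; a stopped obstacle can appear and be hit long before the escalation runs its course, which is precisely the article's point.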

And while Tesla drivers who might crash in Autopilot mode can at least choose not to use Autopilot, the "other road users" who may be hit by a Tesla have no such choice.

Another issue that gets less attention is that Tesla seems to quietly, even cunningly, allow users to "knowingly break the rules."

For example, everyone knows that Autopilot is only meant for highways; urban roads are its "forbidden zone." The same is true of Cadillac's Super Cruise. The big difference between the two is that Super Cruise simply cannot be activated once the car leaves the highway, whereas Autopilot can still be switched on in situations it was never designed for.

Even though Autopilot is unsuited to urban roads (at the time it could not even recognize traffic lights, bicycles, or pedestrians, and had already rear-ended motorcycles twice), Tesla will not stop you if you insist on using it there.
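The difference between the two approaches boils down to a "hard" versus a "soft" gate on activation. The sketch below contrasts them under assumed interfaces; the field names and messages are invented for illustration and do not represent either company's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class RoadContext:
    is_divided_highway: bool
    is_on_hd_map: bool   # e.g. present in a curated map of approved roads

def hard_gate(ctx: RoadContext) -> bool:
    """Super-Cruise-style gate: outside the designed operating domain,
    activation is simply refused."""
    return ctx.is_divided_highway and ctx.is_on_hd_map

def soft_gate(ctx: RoadContext) -> bool:
    """Autopilot-style gate as described in the article: the system may warn,
    but still lets the driver switch it on off-highway."""
    if not (ctx.is_divided_highway and ctx.is_on_hd_map):
        print("Warning: outside the intended operating domain.")
    return True   # activation is allowed either way

urban_street = RoadContext(is_divided_highway=False, is_on_hd_map=False)
print(hard_gate(urban_street))   # False: cannot be engaged at all
print(soft_gate(urban_street))   # True, after only a warning
```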

Many of Tesla's accidents in China happened on urban roads. One owner complained after a crash: "If Autopilot cannot be used on urban roads, why didn't you disable it when you detected that I was driving on one?"

From Tesla's perspective, that demand is perhaps "too young, too naive." Your off-label use provides valuable data that rule-abiding "honest people" cannot, so why would Tesla stop you? Yet once an accident happens, Tesla will say: "You broke the rules, so you are responsible."

Human nature has its weaknesses and cannot be relied upon. What matters is how those weaknesses are handled. From a safety standpoint, Tesla's product design should do everything possible to constrain owners' "weaknesses of human nature"; from the standpoint of data accumulation, exploiting those weaknesses (implicitly or even explicitly encouraged by Tesla) is the more tempting strategy.

A Tesla owner who rear-ended another vehicle in downtown Beijing because "millimeter-wave radar cannot recognize static obstacles" angrily asked: "Are you asking us to risk our lives to run data for you?"

Remember the famous Anthony Levandowski? After Tesla's notorious accident in May 2016, Levandowski, who had founded Otto by then, reportedly told his team with regret: "It's a pity that the first accident caused by automated driving didn't happen at our company; Tesla took that title from us."

Both Levandowski and Uber founder Kalanick were accused of lacking respect for human life. Looking at Tesla now, a company that "sacrifices the few to save the many" and "lets users risk their lives to run data for it," its values are in no way outdone by Uber's.

Back when Levandowski joined Uber, a number of technical elites from Carnegie Mellon University's computer science department left over conflicting values. By analogy, the high turnover among Autopilot's leaders may reflect not only disagreement over the camera-first technical route, but also differences in values.

Treating users as guinea pigs is unacceptable, and not even US regulators can condone it. On May 23, 2018, the American Consumer Association and the Automotive Safety Center jointly wrote to the Federal Trade Commission, asking it to investigate Tesla's marketing, which they said deceives consumers into overestimating Autopilot's capabilities.

The letter strongly criticized the product name of Autopilot, the descriptive language used by Tesla on their website, videos, and various messages and comments released by Tesla and Musk through the media.

Part Two

More serious than the departure of a few executives or the displeasure of regulators is this: over the past few years, in order to keep users' faith in Tesla's autonomous driving alive, Musk has made promise after promise, and has failed to deliver on most of them.

Today, Musk’s promises or comments on autonomous driving have little credibility, and saying that he is “bankrupt” in terms of credibility is not an exaggeration.

On the road to full self-driving, Musk has made countless promises and has been proven wrong again and again.

In October 2016, for example, Musk announced a "small goal": by the end of 2017, a Tesla would drive from Los Angeles to New York without any human intervention, something to show off to his peers. The two cities are about 4,500 kilometers apart, and such a coast-to-coast run had become a de facto benchmark for full autonomy. Yet in February 2018, when pressed by an eager public, Musk replied, in effect: "Sorry, I was too busy ramping up Model 3 production last year and didn't have time."

In that February, Tesla pushed the coast-to-coast demonstration back by "six months." As of November 2018, when this article was written, nine months had passed with still no news. According to industry insiders, the grand demo may not happen until Autopilot software version 10.0 is released.

Slipped schedules can still be understood and forgiven by users; exaggerated functionality, however, borders on "suspected fraud."

For example, in March 2015, before the software was ready and Autopilot had been officially activated, Musk said the system's application scenarios would be "from on-ramp to off-ramp on highways" and "major roads." After launch, however, the real application was limited to highways, and the promised "major roads" capability did not arrive until 2020.

Tesla has been upgrading Autopilot's hardware since 2016. In October of that year, Tesla stressed that the Autopilot 2.0 hardware was already sufficient to support future full self-driving. Yet in August 2017, Tesla upgraded the hardware to version 2.5, mainly adding an extra SoC (Nvidia's Parker) and some redundant circuitry.

If Autopilot 2.0's hardware was sufficient, why upgrade again, and why offer the upgrade to existing customers for free? The only plausible explanation is that Tesla was not actually confident in Autopilot 2.0's hardware and needed the upgrade for its own peace of mind.

Even after the upgrade, Tesla kept insisting that "Autopilot 2.0 is sufficient, and replacing the hardware is unnecessary," which only confused customers. One is reminded of the saying: "The body has already surrendered, but the mouth keeps acting tough."

Since Tesla did not heavily promote the upgrade, it went mostly unnoticed by the public, resulting in fewer customers upgrading their hardware.

The most widely known "free hardware upgrade" came in early August 2018. On an earnings call, Tesla disclosed progress on its self-designed chip and announced a new hardware platform, Hardware 3.0, whose performance would be "an order of magnitude" better than the Drive PX 2 used in Autopilot 2.0.

Musk also said that once Hardware 3.0 was proven, it would replace the Drive PX 2, not only in new cars but also in cars equipped with Drive PX 2 purchased after October 2016; those owners would get a "free hardware upgrade." The promise was eventually honored, but the rollout was extremely slow.

After this news was released, many people praised Tesla. During this period of rapid technological development, users often face the dilemma of “early purchase, early obsolescence or replacement.” Tesla’s “free hardware upgrade” represents a brand-new concept: early purchase, early enjoyment; and even if new technologies emerge in the future, customers can continue to enjoy the benefits without additional investment.

Tesla's veteran users, however, did not buy this policy that most people saw as a "benefit." One owner, who had earlier accused Tesla of "letting users risk their lives to run data for you," argued that for Autopilot 2.0 owners the "free upgrade" is no benefit at all: they had already paid for full self-driving years earlier, only to be told now that the original hardware could never deliver it, which is plainly deception. Isn't a free chip replacement the very least Tesla could do?

Takata replaced all airbags for free, but Takata is out of business now. Should Tesla owners accept the free automatic driving board replacement? If they refuse the replacement, can they sue Tesla for commercial fraud?

This statement makes perfect sense. In addition, the car owner added:

“Patchwork after the fact is very scary. OTA actually has legal and technical risks. It’s just that the risk is relatively small if the hardware is not changed, and everyone is used to it. However, upgrading hardware on a large scale is a challenge for automotive safety regulation. This red line must be adhered to.”

“All the technical parameters of a car are calibrated when it leaves the factory. If hardware upgrades are allowed at will, then retrofitting will be completely open, and the safety of the vehicle cannot be guaranteed!”

In short, this "free upgrade" not only failed to win customers' appreciation but drew criticism instead. The root of the awkwardness is that Musk's excessive promises had failed to manage customer expectations.

In the first half of 2020, Tesla was reported to be developing its next-generation autonomous-driving chip. Will there be another "free hardware upgrade"? And if there is, does that amount to admitting that the current FSD chip cannot deliver full self-driving?

It is strongly recommended that, before making his next promise, Musk recite this maxim three times: "Do not trust lightly, so people will not let me down; do not promise lightly, so I will not let people down."

This article is a translation by ChatGPT of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.