How Tesla's Self-driving Cars Work
Tesla uses an array of cameras to provide data for its machine-learning algorithms

Last month, Tesla hosted an event for its investors, revealing its progress on Autopilot. Currently, most Tesla vehicles can augment the driver's abilities: the car can take over the cumbersome tasks of holding its lane on the highway and monitoring and matching the speed of surrounding vehicles, and it can even drive itself to you when summoned from a short distance away.

These features are impressive and can even save lives in some cases, but they are still a long way from full self-driving. Drivers are required to stay alert and intervene regularly, ready to take over when needed.

Automakers such as Tesla face three major challenges that must be overcome before human drivers can be replaced.

The first is establishing safety. To replace the human driver, a self-driving car needs to be safer than a human driver.

So how do we measure that? We cannot guarantee that no accidents will ever occur; Murphy's Law is always in play.

We can start by quantifying the safety of human drivers. In the United States, the current fatality rate works out to roughly one death per million hours of driving.

That figure includes all of our human stupidity and frailty, the drunk drivers and the people staring at their phones, so we can reasonably hold autonomous vehicles to an even higher standard.

We don't have enough data to calculate comparable statistics for self-driving cars, but we do know that Uber's autonomous test vehicles required a human to intervene roughly every 21 kilometers, which means they failed about every 13 miles.
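To put that figure in context, here is a rough back-of-the-envelope comparison in Python. It is only a sketch: the 30 mph average test speed is my assumption, not a number from the article, and an intervention is of course far less severe than a fatality.

```python
# Back-of-the-envelope: Uber's intervention rate vs. the human fatality rate.
# ASSUMPTION: a 30 mph average speed for urban test driving (not from the article).

MILES_PER_INTERVENTION = 13        # Uber's reported figure
AVG_SPEED_MPH = 30                 # assumed average test speed

hours_per_intervention = MILES_PER_INTERVENTION / AVG_SPEED_MPH       # ~0.43 h
interventions_per_million_hours = 1_000_000 / hours_per_intervention  # ~2.3 million

# Human drivers: roughly one fatality per million hours of driving.
print(f"{interventions_per_million_hours:,.0f} interventions per million hours")
```

Millions of interventions per million hours, against a human rate of about one fatality per million hours: the two numbers measure different things, but the gap gives a sense of how far the technology still has to go.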

This made Uber's fatal collision with a pedestrian all the more shocking.

Proponents of self-driving vehicles were quick to blame the pedestrian for stepping in front of the car in low-light conditions, but we can't let our eagerness to advance the technology become an excuse for it.

The vehicle was equipped with a lidar sensor, which does not need illumination to work. Yet even after the human safety driver, who had not been paying attention, noticed the imminent impact, the car did not attempt to slow down.

Based on Uber's own data, the car first detected the pedestrian with its radar and lidar sensors 6 seconds before the crash. At that point it was traveling at 70 kilometers per hour, and it continued at that speed.

As the paths of the pedestrian and the vehicle converged, the computer's classification system struggled to identify the object in its view, jumping from an unidentified object to a car and then to a bicycle, with no consistent estimate of the object's trajectory.

1.3 seconds before the collision, the vehicle recognized that emergency braking was needed, but it did not brake, because it was programmed not to if the required deceleration exceeded 6.5 m/s². Instead, the human operator was expected to intervene, yet the vehicle was not designed to warn the driver.
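The kinematics make the failure easy to quantify. Here is a minimal worked example using the standard constant-deceleration formulas and the figures above; the calculation is mine, not taken from the Uber report.

```python
# Constant-deceleration kinematics for the Uber crash timeline.
v = 70 / 3.6                     # 70 km/h in m/s, ~19.4 m/s
t_detect = 6.0                   # seconds before impact when sensors first detected her
a_max = 6.5                      # m/s^2, the braking threshold mentioned above

distance_available = v * t_detect          # ~117 m between first detection and impact
stopping_distance = v**2 / (2 * a_max)     # ~29 m needed to brake to a full stop
time_to_stop = v / a_max                   # ~3.0 s

print(f"available: {distance_available:.0f} m, needed: {stopping_distance:.0f} m")
```

Even braking at exactly the 6.5 m/s² limit, the car needed only about 29 of the roughly 117 meters it had. The problem was never physics; it was the decision logic.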

Taking our earlier statistics into account, that looks like a shocking design decision. The driver finally tried to intervene, grabbing the steering wheel and braking less than a second before impact, by which point the vehicle was still traveling at 62 km/h. It was far too late to save the pedestrian.

Here, the onboard computer apparently had no instructions for handling uncertainty.

A human might slow down on encountering something on the road that they cannot clearly identify, but this program carried on at full speed until it recognized the threat, by which time it was too late.
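One way to express that difference is as a control policy. The sketch below is purely illustrative; the function name, threshold, and actions are my own inventions, not Uber's or Tesla's code.

```python
def choose_action(detection_confidence: float, object_in_path: bool) -> str:
    """Illustrative policy: treat uncertainty itself as a reason to slow down."""
    if not object_in_path:
        return "maintain_speed"
    if detection_confidence > 0.9:
        return "brake"                 # clearly identified threat
    # A cautious human slows for *anything* ambiguous in the travel path;
    # a policy that waits for a confident classification acts too late.
    return "decelerate_gently"
```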

Even with high-resolution lidar, identifying an object and predicting its path is difficult. So how can we improve safety? A large part of the answer lies in the hardware itself and the programming that goes into it.

Tesla introduced a new dedicated computer, a chip optimized for running neural networks, which Elon Musk called the first of its kind. It is designed to be retrofitted into existing vehicles when a customer purchases the full self-driving upgrade.

It is therefore similar in size to the existing self-driving computer and draws the same roughly 100 watts of power.

A huge jump in processing power is needed to analyze the footage from the sensor suite that every new Tesla ships with.

That suite currently includes three front-facing cameras, all mounted behind the windshield. One is a 120-degree wide-angle fisheye that provides situational awareness, capturing traffic lights and objects cutting into the travel path.

The second is a narrow-angle lens that provides longer-range information for high-speed travel, such as on a highway, and the third is the main camera, which covers the range between the other two.

There are also four additional cameras on the sides of the vehicle, which check whether another vehicle is unexpectedly entering the car's lane and provide the information needed to change lanes and enter intersections safely.

The last camera is located at the rear. It doubles as a parking camera, and it has also saved multiple Teslas from being rear-ended.
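Summarized as data, the eight-camera suite described above looks roughly like this. This is just a sketch: the field names are mine, and the layout is taken from the article's description rather than from Tesla's official specifications.

```python
# Tesla's camera suite, as described in the article above.
CAMERA_SUITE = [
    {"name": "front_wide",   "mount": "windshield", "fov_deg": 120,
     "role": "situational awareness, traffic lights, objects cutting in"},
    {"name": "front_narrow", "mount": "windshield",
     "role": "long-range detection for high-speed (highway) travel"},
    {"name": "front_main",   "mount": "windshield",
     "role": "general-purpose coverage between the other two"},
    {"name": "side_cameras", "count": 4,
     "role": "lane intrusions, lane changes, entering intersections"},
    {"name": "rear",
     "role": "parking camera, rear-collision awareness"},
]
```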

Finding the right sensor fusion has been a subject of debate among competing self-driving companies.

Musk recently declared that anyone relying on lidar, a sensor that works much like radar but uses light instead of radio waves, is "doomed," calling the technology "a fool's errand."

Lidar has very high resolution, which means it can provide highly detailed information about the objects it detects. It works in both low-light and bright conditions, can measure speed, has good range, and copes moderately well with inclement weather. However, its biggest weakness is the reason Musk said what he said.

The sensor is expensive and cumbersome, and that points to the second challenge in building a self-driving car:

Building an affordable system that ordinary people are willing to buy.

Lidar units are the large sensors you see mounted on Waymo, Uber, and most other competing autonomous vehicles.

Musk knows lidar's potential well; after all, SpaceX uses it in the DragonEye navigation sensor. But for Tesla, its weaknesses are too great. Tesla focuses on building vehicles that are not only cost-effective but also beautiful.

Waymo, a subsidiary of Google's parent company Alphabet, sells its lidar sensors to any company that does not compete with its self-driving taxi service plans.

When Waymo started in 2009, a lidar unit cost about $75,000, but by manufacturing the units itself it has managed to bring that down to roughly $7,500 over the past decade.

A Waymo vehicle uses four of these lidar sensors, one on each side of the vehicle.

That puts the total cost of the sensors alone at around $30,000 for third parties, which is not far from the base price of an entire Tesla Model 3.
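The arithmetic behind that comparison, using the article's figures; the $35,000 Model 3 launch base price is my addition for reference, not a number from the article:

```python
sensors_per_vehicle = 4
cost_per_lidar = 7_500                                # after Waymo's cost reduction
lidar_total = sensors_per_vehicle * cost_per_lidar    # $30,000

model_3_base_price = 35_000                           # ASSUMPTION: launch base price
print(f"${lidar_total:,} is {lidar_total / model_3_base_price:.0%} of a Model 3")
```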

Such pricing is clearly at odds with Tesla's mission to "accelerate the world's transition to sustainable transportation," and it pushed Tesla toward a cheaper sensor fusion configuration.

[Figure: the pros and cons of the three sensor types]

So how does Tesla do this without lidar?

The Tesla Model 3's radar sensor works well in all conditions. It is small and inexpensive, can measure speed directly, and its range suits both short- and long-distance detection.

Where radar falls short is the low resolution of the data it provides, but combining it with cameras takes care of that weakness. The camera offers excellent range and resolution, provides the color and contrast information needed to read street signs, and is itself very small and inexpensive.

Cameras are still a bit weak at judging distance, but using two cameras in stereo lets them work like our eyes to estimate depth.
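Stereo depth estimation rests on a simple geometric relationship: the closer an object is, the more its position shifts between the two images. A minimal sketch, where the focal length and baseline are illustrative values rather than Tesla's actual camera parameters:

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: 1000-px focal length, 30 cm baseline,
# an object whose image shifts 15 px between the two cameras.
print(stereo_depth(1000, 0.30, 15))   # -> 20.0 meters
```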

For fine-tuning distance measurements at close range, Tesla can use its ultrasonic sensors, the small circular sensors that wrap around the car.

This combination achieves reliable performance without relying on large, expensive sensors, but it leaves Tesla weak on redundancy: there is only one forward radar, and if it fails there is no second radar to fall back on.
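A toy sketch of what graceful degradation might look like in code. This is entirely illustrative; Tesla's actual fusion logic is not public, and the weights here are arbitrary.

```python
from typing import Optional

def fused_distance(radar_m: Optional[float], vision_m: Optional[float]) -> Optional[float]:
    """Prefer both sensors; degrade gracefully when one drops out."""
    if radar_m is not None and vision_m is not None:
        return 0.7 * radar_m + 0.3 * vision_m   # illustrative weights only
    # With a single forward radar, a radar fault leaves vision on its own:
    return radar_m if radar_m is not None else vision_m
```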

Still, it is a cost-effective solution, and according to Tesla the hardware is already sufficient for the vehicles to drive themselves; the company now only needs to keep improving the software.

Training data for the neural networks is key.

Autopilot is engaged during 33% of all miles driven in Teslas, and the data collection doesn't stop when Autopilot is switched off: Tesla also receives data from areas where Autopilot cannot be used, such as city streets.

Given all the unpredictability of driving, machine-learning algorithms need enormous amounts of training, and this is where Tesla's data offers a real advantage.

Tesla's machine vision is genuinely good, but there are still notable gaps in its abilities. The software places bounding boxes around the objects it detects and classifies them as cars, trucks, bicycles, or pedestrians.

It marks each vehicle's relative speed and the lane it occupies. It highlights the drivable area, marks the lane dividers, and plots the expected path between them. These capabilities currently allow Autopilot to run on the highway, but it often struggles in more complex scenes.

Sometimes it struggles to judge whether a skateboarder is a cyclist or a pedestrian, and when there is a gap in the lane divider it may drift to the wrong side. Tesla is well aware of these issues, of course, and gradually improves its software through firmware updates, adding features such as stop-line recognition.
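One common remedy for this kind of classification "flicker" is to smooth labels over time instead of trusting any single frame. The sketch below is a hypothetical illustration of the idea, not Tesla's implementation.

```python
from collections import Counter, deque

class ClassSmoother:
    """Majority vote over the last N frames, so a skateboarder doesn't
    flip between 'pedestrian' and 'bicycle' from one frame to the next."""
    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)

    def update(self, label: str) -> str:
        self.history.append(label)
        return Counter(self.history).most_common(1)[0][0]
```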

In addition, the new self-driving chipset greatly increases the computer's processing power, letting Tesla keep adding functionality without compromising the refresh rate of the information.

But even if Tesla manages to develop perfect computer vision, how the vehicle is programmed to handle each scenario remains another obstacle. This matters not only for building a safe vehicle but also a practical one, and programming for safety often conflicts with programming for usability. That is our third challenge.

Driving is an inherently dangerous activity, and programming for the effectively infinite range of scenarios that can occur on the road is extraordinarily difficult.

It's easy to say "follow the rules of the road and you'll be fine," but the problem is that humans don't strictly follow the rules of the road.

Take a simple four-way stop as an example. The rules of the road make it look easy: the first vehicle to arrive at the intersection has the right of way, and when two cars arrive at the same time, the vehicle on the right goes first. The problem is that nobody actually follows these rules.

When Google started testing its driverless cars in 2009, this was just one of the problems it encountered. At these four-way intersections, people inch forward and try to enter before their turn.

Scenes like this play out everywhere, forcing programmers to bend the letter of the law and make their cars a little more aggressive.
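The textbook rule is trivial to code; the hard part is the human behavior around it. In the sketch below, the creep-forward heuristic mirrors what Google's cars reportedly learned to do, but the function, threshold, and action names are illustrative inventions.

```python
def four_way_stop_action(my_arrival: float, other_arrivals: list,
                         seconds_waiting: float) -> str:
    """Textbook right-of-way, plus a nudge for drivers who never yield."""
    if all(my_arrival < t for t in other_arrivals):
        return "go"                  # we arrived first: the rulebook says go
    if seconds_waiting > 4.0:        # illustrative threshold
        return "creep_forward"       # inch in to signal intent, rulebook or not
    return "yield"
```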

Tesla can roll out gradual software updates as it masters each of these scenarios.

Sometimes the computer has to make difficult decisions, and occasionally those decisions may endanger the lives of the occupants or of people outside the vehicle.

That is a natural by-product of an inherently dangerous task. But if we keep improving the technology, we will begin to see a sharp drop in road deaths, along with dramatically cheaper taxi services that spare many people the economic burden of owning a vehicle at all.