Introduction to Autonomous Driving Technology (II) - Becoming a Self-Driving Engineer Without Coding Skills

In the previous sharing, I introduced the sensors and controllers used in both Baidu Apollo 1.0 and Apollo 1.5, which leads to one conclusion: the more complex the functionality, the more sensors are needed and the higher the performance requirements for the controllers.

In today’s sharing, I will combine the modules that Baidu Apollo 1.0 and Apollo 1.5 have opened up to talk about my understanding of the Baidu Apollo technology framework. I will also tell you how to become a self-driving engineer without writing code.

First, let’s take a look at the Baidu Apollo technical architecture diagram.

Apollo Technical Architecture Diagram

As we can see, except for the modules that work in the cloud, the remaining modules need to run on the vehicle in real-time. Let’s analyze the function of each layer.

Open Software Platform

The most important software layer of the self-driving system.

This layer includes the lowest-level RTOS (Real-Time Operating System), the runtime framework that sits on top of it, and the various submodules in the upper layer: map engine, localization, perception, planning, control, end-to-end, and human-machine interface (HMI).
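To make this layering a little more concrete, here is a minimal, purely illustrative Python sketch of how the upper submodules could hand data to one another during one cycle of the runtime framework. Every class, function, and parameter name below is my own assumption made for this article; none of it is Apollo's actual code or API.

```python
# A minimal, purely illustrative sketch of one cycle through the on-vehicle
# software stack. All names are assumptions made for this article, not
# Apollo's real classes or interfaces.

def run_one_cycle(gps_imu, sensor_frame, hd_map, modules):
    """modules is an (assumed) bundle holding the submodules listed above."""
    pose = modules.localization.update(gps_imu)          # where is the car?
    obstacles = modules.perception.detect(sensor_frame)  # what is around it?
    lane = modules.map_engine.lane_at(pose, hd_map)      # where should it drive?
    trajectory = modules.planning.plan(pose, lane, obstacles)
    commands = modules.control.follow(pose, trajectory)  # steering, throttle, brake
    modules.hmi.show(pose, obstacles, trajectory)        # keep the engineer informed
    return commands
```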

Reference Hardware Platform

The hardware layer that the self-driving system depends on.

This layer includes not only the controller, GPS/IMU, HMI device (actually a display), and LiDAR, but also the camera, radar, and black box (whose function is still unknown), which will be opened up in January 2018.

Drive-by-wire Vehicle

The bottom layer of drive-by-wire control; without it, the self-driving system cannot command the car.

Those who have watched the Baidu Apollo press conferences should know that Apollo 1.0 can achieve closed-site tracing (trajectory-following) autonomous driving, and Apollo 1.5 can achieve fixed-lane day-and-night autonomous driving.

Let’s take a look at the relationship between “functionality implementation” and “module opening” from the perspective of what Apollo 1.0 and Apollo 1.5 have opened up (excluding the cloud, only discussing the on-vehicle side).

Apollo 1.0: Closed-Site Tracing Autonomous Driving

First, let’s take a look at the following diagram, which shows the modules opened up by Apollo 1.0 in red.

Apollo 1.0: Closed-Site Tracing Autonomous Driving

To achieve this function, I first need a drive-by-wire vehicle whose bottom layer is open;

The controller and HMI (Human Machine Interface) are necessary for the automatic driving program to run smoothly;

To achieve the tracing function, it is crucial to solve the problem of knowing my location. Therefore, the GPS/IMU modules are selected for positioning;

After the hardware is in place, the software must also be considered. The operating system (RTOS) and the runtime framework are essential for the software to run properly;

Then, among the submodules of the software layer, a localization module is needed to process the GPS/IMU data.
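As a rough idea of what "processing GPS/IMU data" means, here is a toy fusion step: predict the new position from the IMU, then nudge it toward the latest GPS fix. Real localization stacks use RTK GPS and Kalman-style filtering; the function name, data layout, and blending weight below are assumptions for illustration.

```python
def fuse_gps_imu(prev_position, imu_delta, gps_fix, gps_weight=0.2):
    """Toy GPS/IMU fusion step (real localization uses Kalman-style filtering).

    prev_position, gps_fix: (x, y) in meters; imu_delta: (dx, dy) motion
    integrated from the IMU since the last update. gps_weight is an assumed
    blending factor, not a tuned value.
    """
    # Predict where the car is from the IMU, then nudge toward the GPS fix.
    pred_x = prev_position[0] + imu_delta[0]
    pred_y = prev_position[1] + imu_delta[1]
    x = (1 - gps_weight) * pred_x + gps_weight * gps_fix[0]
    y = (1 - gps_weight) * pred_y + gps_weight * gps_fix[1]
    return (x, y)
```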

Once the localization result and the trajectory to be traced are available, it is time for control.
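Apollo's real control module is far more sophisticated, but a toy sketch can convey the tracing idea: given the current pose and the recorded trajectory, steer toward a waypoint a few meters ahead. The lookahead distance, sign convention, and function name below are assumptions.

```python
import math

def steer_toward_trajectory(pose, trajectory, lookahead_m=5.0):
    """Toy trajectory-tracing step (not Apollo's actual control algorithm).

    pose: (x, y, heading_rad) from the localization module.
    trajectory: list of (x, y) waypoints to be traced.
    Returns a steering angle toward a waypoint roughly lookahead_m ahead.
    """
    if not trajectory:
        return 0.0  # nothing to trace
    x, y, heading = pose
    # Pick the first waypoint that is at least lookahead_m away from the car.
    target = trajectory[-1]
    for wx, wy in trajectory:
        if math.hypot(wx - x, wy - y) >= lookahead_m:
            target = (wx, wy)
            break
    # The command is the angle between the car's heading and the target,
    # wrapped to [-pi, pi); positive means steer left (sign convention assumed).
    desired = math.atan2(target[1] - y, target[0] - x)
    return (desired - heading + math.pi) % (2 * math.pi) - math.pi
```

For example, steer_toward_trajectory((0.0, 0.0, 0.0), [(1.0, 0.0), (6.0, 2.0)]) returns about 0.32 rad, i.e., steer slightly left toward the waypoint (6, 2).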

Engineers need to operate and monitor the autonomous driving system during the tracing process, so the HMI plays an essential role here.

That completes the selection of the necessary modules from an engineer's perspective.

Apollo 1.5: Fixed-Lane Day-and-Night Autonomous Driving

Let's analyze the "fixed-lane day-and-night autonomous driving" function realized by Apollo 1.5 in the same way. In the following figure, the newly added modules are shown in blue.

Apollo 1.5: Fixed-Lane Day-and-Night Autonomous Driving

Fixed lanes mean that lane information is necessary.

Apollo uses the lane information provided by high-precision maps. With a high-precision position (longitude and latitude) and the longitude/latitude range of the lane to travel in (this description may not be very rigorous; it is mainly for ease of illustration, and the relationship between localization and maps will be explained in detail later), the autonomous vehicle knows where it should drive.

Therefore, a module for handling high-precision map data, the Map Engine, is necessary.
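Sticking with the simplified "longitude/latitude range" description above, here is a toy sketch of the question the map engine conceptually answers: which mapped lane contains my current position? Real HD maps describe lanes with boundary geometry rather than bounding boxes, and the function name and data layout below are assumptions.

```python
def lane_containing(position, lanes):
    """Toy lane lookup (not the real map-engine API).

    position: (lon, lat) from localization.
    lanes: list of dicts such as {"id": "lane_1", "min_lon": ..., "max_lon": ...,
           "min_lat": ..., "max_lat": ...} -- an assumed, simplified layout.
    Returns the id of the lane whose longitude/latitude range contains the
    position, or None if the car is off the mapped lanes.
    """
    lon, lat = position
    for lane in lanes:
        if (lane["min_lon"] <= lon <= lane["max_lon"]
                and lane["min_lat"] <= lat <= lane["max_lat"]):
            return lane["id"]
    return None  # off the map -- the planner has to handle this case
```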

Now that the car knows which lane to drive in, what should it do if there are obstacles in that lane?

At this point, sensors are required to perceive these obstacles. Cameras, LiDARs, and radars can all detect obstacles; Apollo 1.5 opens up the LiDAR module.

With the sensors available, corresponding perception software is essential.
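Apollo's actual perception module segments and classifies the LiDAR point cloud with learned models; the toy sketch below only conveys the simpler idea of turning raw points into a "how far away is the nearest thing in my corridor" answer. The coordinate convention, parameter names, and thresholds are assumptions.

```python
def nearest_obstacle_in_corridor(lidar_points, half_width_m=1.5,
                                 max_range_m=30.0, min_height_m=0.3):
    """Toy obstacle check (not Apollo's perception pipeline).

    lidar_points: iterable of (x, y, z) in the vehicle frame, with x forward,
    y to the left, and z up (an assumed convention). Returns the distance to
    the nearest point that rises above the ground inside a corridor ahead of
    the car, or None if the corridor looks clear.
    """
    nearest = None
    for x, y, z in lidar_points:
        in_corridor = 0.0 < x < max_range_m and abs(y) < half_width_m
        above_ground = z > min_height_m  # crude stand-in for ground removal
        if in_corridor and above_ground and (nearest is None or x < nearest):
            nearest = x
    return nearest
```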

Once obstacles are detected, the vehicle will brake or accelerate accordingly, and change lanes if necessary. Therefore, planning for the behavior of the autonomous vehicle is necessary.
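To illustrate what "planning for the behavior of the vehicle" can mean at its simplest, here is a toy decision rule based on the time gap to the obstacle ahead. Apollo's planning module is much richer than this; the threshold and the neighbor_lane_clear flag are assumptions made for illustration.

```python
def behavior_decision(obstacle_distance_m, speed_mps, neighbor_lane_clear,
                      safe_gap_s=3.0):
    """Toy behavior planner (not Apollo's planning module).

    Chooses among KEEP_LANE / CHANGE_LANE / BRAKE using a simple time-gap
    rule; the threshold and the neighbor_lane_clear flag are assumptions.
    """
    if obstacle_distance_m is None:
        return "KEEP_LANE"                     # nothing detected in the lane ahead
    time_gap_s = obstacle_distance_m / max(speed_mps, 0.1)
    if time_gap_s >= safe_gap_s:
        return "KEEP_LANE"                     # the obstacle is still far away
    if neighbor_lane_clear:
        return "CHANGE_LANE"                   # overtake rather than slow down
    return "BRAKE"                             # stay in lane and slow down
```

With an obstacle 20 m ahead at 10 m/s and a clear neighboring lane, the time gap is 2 s, below the assumed 3 s threshold, so this sketch would return "CHANGE_LANE".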

The Apollo 1.5 version also introduced End-to-End, another method that uses deep learning to achieve “fixed-scenario autonomous driving,” but we won’t discuss it here.

Apollo 2.0: Simple Urban Autonomous Driving

The following image shows Apollo 2.0, which Baidu announced in January 2018. I marked the newly added modules with a purple background.

Apollo 2.0: Simple Urban Autonomous Driving

Can you use the ideas I provided earlier to think about why cameras and radar are necessary for “simple urban autonomous driving”?

Conclusion

The analysis above is actually the work that a "self-driving system engineer" needs to do: deduce the "architecture" from the "requirements," and then decide what hardware to install on the car.

If you are not good at coding but want to work in the field of autonomous driving, it is worthwhile to learn more about sensors and system architecture.

Well, this sharing should basically give everyone an understanding of Baidu Apollo's technical architecture. In the future, I will provide a more detailed analysis of the sensors used in the Apollo program in this column, and I look forward to your reading~

This article is a translation by ChatGPT of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.