In-depth analysis: Can QCraft cure the automotive industry's anxiety about intelligent driving? And how?

Author: Chen Nianhang

Recently, two interesting events have happened in the autonomous driving industry:

The first happened across the ocean. Argo AI, the L4 autonomous driving unicorn once backed by two global automotive giants, Ford and Volkswagen, abruptly announced it was shutting down. Ford CEO Jim Farley explained that the commercial prospects of L4 autonomous driving (chiefly Robotaxi) were too uncertain: the investment was huge and the returns limited, so Ford decided to stop burning money, concentrate resources on L2 and L2+ systems, and push forward with front-loaded (pre-installed) mass production. The news drew wide attention from autonomous driving practitioners in China.

Should L4 keep burning money? Can driverless operation be achieved, and when? Observers have raised a series of soul-searching questions.

The second event occurred in China. On the first day of November, the autonomous driving company QCraft officially renamed its solution “Driven-by-QCraft” to “Qingzhou Chengfeng” (“Light Boat Riding the Wind”). The solution spans autonomous driving software, vehicle hardware, and an automated data loop, and delivers QCraft’s latest perception, PNC (planning and control), and other capabilities. It is a highly competitive front-loaded mass-production plan whose headline feature is urban navigation-assisted driving (city NOA).

# Automotive Industry Embracing Automated Driving

Ford, a representative OEM, has formally turned to front-loaded mass production of autonomous driving technology; QCraft, a representative supplier, has embraced it as well. In a sense, automakers and autonomous driving companies are making a “two-way rush” toward each other.

Setting aside the debate over “soul” and “body”, large-scale adoption of autonomous driving features is only just beginning, and growing the pie requires OEMs and suppliers to work together; how to divide it can be discussed later.

Nowadays, OEMs badly need intelligent-driving suppliers: Jidu, BYD, and Dongfeng’s Voyah need Baidu Apollo; SAIC’s IM Motors (Zhiji) and GM need Momenta; ARCFOX and Avatr need Huawei ADS; Volkswagen needs Horizon Robotics. And these automakers will certainly not rely on a single supplier; a multi-vendor strategy is the norm.

# L2 or L4? The trend is getting clearer

In the past few years, the commercialization of autonomous driving has split into two paths: one goes straight at L4 technology for Robotaxi and other driverless scenarios; the other pursues front-loaded mass production of intelligent driving for passenger cars, i.e., L2+ systems.

In the market, several Robotaxi fleets have removed the safety driver and begun charging for rides. Meanwhile, the pilot, navigation-assisted driving, and automatic parking systems on production cars are gaining wider acceptance and becoming an important factor for buyers.

Recently in particular, navigation-assisted driving has begun entering cities, bringing this group of L2+ technologies onto Robotaxi’s home turf. The two forces are converging.

The computing platforms and sensors used for city NOA are in fact gradually converging with those of Robotaxi. Costs are somewhat higher but still controllable, and capabilities are similar: stable lane keeping, traffic-light recognition, automatic left and right turns, automatic detours. The biggest remaining difference is the human in the driver’s seat, and that is largely a formality.

Seen this way, companies that started from front-loaded mass production have taken the lead in commercializing autonomous driving, and they are not behind the Robotaxi-first companies in technology either.

In the past two years, many L4 companies have also moved, some faster than others, into front-loaded mass production of autonomous driving, both to secure cash flow and to gather more road data: Baidu Apollo, Momenta, QCraft, DeepRoute.ai, WeRide, Haomo.ai, and others. They have not given up their Robotaxi fleets either, walking on two legs rather than betting on Robotaxi alone. Some have shifted their business focus decisively toward front-loading; Momenta is the most typical, having won cooperation orders from many OEMs.

Ford’s hard call to shut down Argo AI in fact confirms that front-loaded mass production of autonomous driving systems is where the focus belongs at this stage: OEMs currently need L2 and L2+ systems, not L4. In that respect, the Chinese L4 companies that transformed early look quite far-sighted.

However, for a software company, system-level mass production on vehicles is anything but easy. Algorithm stability and maturity, adaptability, automotive-grade computing hardware, algorithm-hardware compatibility, cost compression, and team and toolchain support are all challenges. Add extreme cost control and compressed development cycles, and today’s ADAS mass production is brutal. Even with “hardware pre-embedding” buying time for many OEMs and suppliers, consumers will not pay for features they cannot use until one or two years after delivery, so suppliers must deliver systems fast and deliver them well.

Whoever can do all of this well, building on strengths and making up for weaknesses, will have a better chance of standing out.

Taking the release of “Qingzhou Chengfeng” as the occasion, let us look at what qualities an autonomous driving supplier needs in order to do L2+ systems, city NOA, and front-loaded mass-production services well, using QCraft’s blueprint as the sample.

# “Dual-engine” drive: the release of “Qingzhou Chengfeng”

In the past year, QCraft completed an adjustment of its business strategy, expanding from L4 alone to a “dual-engine” of L4 plus front-loaded mass production.

QCraft was founded in 2019 and initially aimed its R&D at L4-level autonomous driving. Its deployed vehicles are mainly the Robobus driverless minibus, plus a Robotaxi fleet tested and operated in cooperation with T3. From the start it also set its sights high: accumulating its own autonomous driving software and hardware algorithms, building its own database, scenario library, and model library, and assembling a complete autonomous driving “algorithm factory”.

In 2020, QCraft launched its first-generation autonomous driving solution, “Driven-by-QCraft”, focused on complex urban traffic. It could adapt to a wide range of public urban road conditions and be deployed efficiently on many vehicle types.

After nearly three years of accumulation, once these technologies matured and proved highly general and portable, QCraft decided to open a new front and enter the mass-production market for advanced intelligent driving systems.

In May 2022, QCraft launched the latest “Driven-by-QCraft” automotive-grade mass-production solution, helping automakers put city NOA fully on the road. The solution adapts flexibly across software and hardware and scales from L2+ up to L4.

In essence, QCraft’s path starts from L4, accumulates technical capability (perception, PNC, data-driven development, algorithmic modeling, and more), expands into front-loaded mass production, and lets the technology feed both the L4 and front-loading businesses. This is the “dual-engine” model of autonomous driving development, whose goal is to give OEMs genuinely useful autonomous driving capabilities and ultimately put such products in consumers’ hands.

On November 1, 2022, QCraft officially named the solution “Qingzhou Chengfeng”. It is a complete software and hardware package, adapted to computing platforms such as Horizon Journey 5 and NVIDIA Orin and to different combinations of lidar and camera perception hardware, with core technologies including fused perception, prediction-planning-control, data-driven development, and algorithmic modeling.

The arrival of “Qingzhou Chengfeng” makes QCraft’s positioning clearer: an autonomous driving technology company acting as a new Tier 1 for intelligent vehicles.

QCraft believes city NOA is the ceiling of assisted driving and the threshold of autonomous driving. Its current goal, therefore, is to let more car owners enjoy the convenience that city NOA brings.

# QCraft cures the algorithm anxiety of automakers

How to solve algorithmic problems?

Since city NOA is the ceiling of assisted driving, the technical difficulty is also the ceiling.

How difficult is it?

XPeng Motors’ Vice President of Autonomous Driving, Wu Xinzhou, quantified the difficulty with a set of numbers: XPeng’s urban NGP has a code base six times the size of highway NGP’s, four times as many perception models, and 88 times the code volume in prediction/planning/control (the PNC module). By these numbers, the PNC module is the hardest part of urban autonomous driving.

Why is it difficult?

  • First, the complexity of traffic participants: trucks, cars, tricycles, engineering vehicles, municipal vehicles, pedestrians, bicycles, electric scooters, and more, all mixed together.
  • Second, the complexity of traffic rules and behavior. City roads mix permanent and temporary traffic lights of every local variety, plus roundabouts, complex interchanges, and so on, while large numbers of pedestrians, cyclists, and e-bike riders in urban areas often ignore the rules.
  • Third, high-definition maps cover limited mileage in urban areas, so the system must process intersections, road signs, lane lines, and other static traffic infrastructure largely from visual information.
  • Finally, from the vehicle itself: sensors and computing platforms must meet automotive-grade requirements, which imposes cost and power-consumption limits, so stronger algorithm-framework optimization and engineering capability are needed.

Because of these challenges, some automakers have been promoting intelligent driving for years yet still offer only highway navigation-assisted driving, or have not even polished adaptive cruise control (ACC) and lane centering control (LCC) across most models. The newer car companies have launched more usable L2+ assisted driving features, but the scenarios are narrow: most systems work only in the roughly 10% of driving that happens on highways, and once in urban areas the experience falls far short of consumer expectations. On top of that, these features are priced dearly, so few buyers opt in.

Take the front-runners: Tesla’s FSD Beta has only about 160,000 users so far, while XPeng has sold more than 200,000 cars and the share of buyers opting for NGP is under 30%.

Today, the automakers or suppliers that can truly mass-produce city NOA are few and far between; even XPeng and Huawei are only piloting in single cities, a long way from large-scale rollout. And with so many traditional OEMs in the Chinese market, how will their future models get assisted driving that is actually useful and pleasant to use? That is exactly what they are anxious about.

We should also see the positive side: after years of development, sensor performance and cost, and the compute, power draw, and cost of computing platforms, have reached a good point for mass production. Many smart EVs are converging on hardware; the differences come down to how many chips, how many lidars, and how many 8-megapixel cameras are used. With XPeng, NIO, Li Auto, and others mass-producing these modules first, the automotive-grade validation hurdles have evidently been cleared.

Since hardware is no longer the bottleneck, the competition now centers on software. Software sets the upper limit of an autonomous driving system’s capability and determines whether consumers will actually use and enjoy the features. Pile high-performance hardware into a car, and without good software it is still just decoration: good ingredients still need a chef who can cook.

As a head chef cooking up city NOA, QCraft is well aware of how much autonomous driving software algorithms matter, and it has spent the past few years building capability there.

What is the secret of QCraft’s autonomous driving algorithms?

At its technology workshop, QCraft laid out its development philosophy for autonomous driving algorithms: data as the foundation, precise perception, and strong PNC (planning and control).

As is well known, the main modules of autonomous driving are perception, localization, planning, and control. Against that backdrop, QCraft’s philosophy can be read simply as:

  • The progress of autonomous driving algorithms depends strongly on data volume and must be fed large amounts of high-quality data.
  • Perception demands extremely high accuracy: lidar or vision, 2D or 3D, it must be both precise and fast. Get perception right and overall capability will not be far behind.
  • Finally, PNC, which ties directly to user experience. When to change lanes, when to overtake, and at what speed all depend on PNC capability, which directly determines whether a feature feels good or bad. Doing PNC well is harder still.

So, in essence, QCraft builds its autonomous driving algorithms around three core strengths: perception, PNC, and large-scale data-driven development.

(1) Fused perception: temporally interleaved fusion and a unified large model

On perception technology, QCraft takes the fused-perception route: lidar, cameras, and millimeter-wave radar, with high-definition maps for localization. QCraft’s view is that pure vision struggles to meet the demands of China’s city NOA, and corner cases still need lidar to solve.

QCraft’s solution uses just one solid-state lidar, combined with 11 cameras and 5 millimeter-wave radars, balancing performance and cost. Of course, if an OEM insists on two lidars, QCraft will not refuse: more hardware means stronger system redundancy.

Some may ask: how can a single lidar handle blind spots? Compared with the two- or three-lidar setups currently used by XPeng Motors and others, won’t something be missing?

QCraft has already solved this technically.

The single lidar naturally faces forward, with a known field of view of 120 degrees, and it works together with QCraft’s “temporal-spatial fusion algorithm”. As the vehicle moves, the system memorizes the areas the forward lidar has swept, and fuses that remembered point cloud with the pure-vision information from the sides and rear as the vehicle advances, keeping awareness of the surrounding areas sufficient.
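The forward-scan memory described above can be sketched in a few lines. This is a toy illustration, not QCraft’s implementation: past lidar points are kept in a fixed world frame using the ego pose, so regions the forward lidar swept earlier remain queryable once the vehicle has driven past them.

```python
import numpy as np

class PointCloudMemory:
    """Rolling memory of lidar scans, kept in a fixed world frame.

    Each scan arrives in the ego frame; we transform it into the world
    frame using the ego pose (x, y, yaw) and append it, so areas the
    forward lidar swept earlier stay available beside/behind the car.
    """

    def __init__(self, horizon_scans=10):
        self.horizon = horizon_scans
        self.scans = []  # list of (N, 2) point arrays in world frame

    def add_scan(self, points_ego, pose):
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s], [s, c]])          # ego -> world rotation
        points_world = points_ego @ R.T + np.array([x, y])
        self.scans.append(points_world)
        if len(self.scans) > self.horizon:       # forget the oldest scan
            self.scans.pop(0)

    def query_behind(self, pose, max_range=30.0):
        """Return memorized points currently behind the ego vehicle."""
        x, y, yaw = pose
        rel = np.vstack(self.scans) - np.array([x, y])
        c, s = np.cos(yaw), np.sin(yaw)
        rel_ego = rel @ np.array([[c, -s], [s, c]])  # world -> ego rotation
        mask = (rel_ego[:, 0] < 0) & (np.linalg.norm(rel_ego, axis=1) < max_range)
        return rel_ego[mask]
```

For instance, a point seen 10 m ahead at the start ends up 10 m behind after the car drives 20 m, yet is still retrievable from memory.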

As for perception directly behind the vehicle, that mostly matters when reversing, and radar plus cameras handle those functions well without an extra lidar.

In this regard, one core QCraft technology does much of the heavy lifting: the “temporal-spatial fusion algorithm”.

Before introducing the “temporal-spatial fusion algorithm” in detail, let’s look at how other players in the industry approach fusion.

Mainstream fusion schemes come in three types: front fusion (data-level), middle fusion (feature-level), and back fusion (target-level).

Different fusion schemes have their own advantages and disadvantages. Only effective fusion results can provide reliable information for downstream tasks and provide guarantees for making safe predictions and decisions for vehicles.

  • Front fusion can produce higher-quality results, but it places strict demands on time synchronization and spatial calibration across different sensors.
  • Back fusion is more strongly decoupled, but it depends on experienced, skilled engineers writing rules by hand, plus large amounts of simulation and real-world driving tests to broaden coverage; its scalability is limited and the marginal return on engineering effort is low.

In the industry, Feifan Auto (Rising Auto) recently touted so-called “full fusion perception”, but it actually covers only “front fusion” and “back fusion”; “middle fusion” is absent, so it cannot yet be called true full fusion.

QCraft’s complete fusion-perception algorithm spans front, middle, and back fusion, interleaved in time sequence. By fusing lidar, millimeter-wave radar, and vision, the perception model can exploit different sensors at different stages, letting their strengths complement each other across modalities. That avoids losing single-modality information, shares multi-sensor information earlier, produces better fusion results, and suppresses many kinds of false and missed detections, with high accuracy and strong robustness.

The key addition here is “middle fusion”, i.e., feature-level fusion. QCraft can do this because, as BEV (bird’s-eye view) techniques have matured, fusing features at the level of lidar point clouds and camera images has become practical, and QCraft applies exactly this to achieve better fusion results.

On top of that comes “temporal fusion”: QCraft does not rely on single-frame results for perception. Traditional tracking does exploit the time sequence, but loses a great deal of information along the way. With temporal fusion, QCraft uses the full information in the sequence to reach a better final perception result.
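The two ideas just described, feature-level (middle) fusion in a shared BEV grid and temporal fusion across frames, can be sketched minimally. This is an illustrative toy, assuming both modalities have already been projected into the same BEV grid; real systems use learned encoders rather than concatenation and a simple running blend.

```python
import numpy as np

def middle_fuse(lidar_bev, camera_bev):
    """Feature-level ('middle') fusion: concatenate per-cell features
    from two modalities that share one BEV grid of shape (H, W, C_i)."""
    assert lidar_bev.shape[:2] == camera_bev.shape[:2]
    return np.concatenate([lidar_bev, camera_bev], axis=-1)

def temporal_fuse(prev_state, fused_bev, alpha=0.7):
    """Temporal fusion as a recurrent blend of BEV feature maps,
    instead of tracking boxes produced from single-frame detections."""
    if prev_state is None:
        return fused_bev
    return alpha * fused_bev + (1.0 - alpha) * prev_state
```

The point of the sketch is structural: fusion happens on dense feature maps before any objects are extracted, so no single-modality or single-frame information is discarded early.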

To match this “front-middle-back interleaved fusion” perception algorithm, QCraft built its own neural network model, OmniNet. It is comparable to Tesla’s HydraNet and XPeng Motors’ XNet, but OmniNet’s distinguishing features are that it fuses in lidar perception information (HydraNet and XNet are pure-vision models) and that it supports temporal fusion.

OmniNet performs efficient multi-task computation by integrating camera, millimeter-wave radar, and lidar data through front fusion and BEV-space feature fusion. It unifies formerly independent computing tasks by sharing a backbone and a memory network, and outputs the results of different perception tasks simultaneously in both image space and BEV space, giving downstream prediction and planning-control modules richer input.

Compared with traditional schemes, OmniNet also saves about two-thirds of computing resources in practice and can be deployed on mass-production computing platforms. The saving comes from a unified temporal and multi-modal feature-fusion model: the backbone portions of different models are shared, eliminating large amounts of redundant computation and the need to develop a separate network for each task.
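The shared-backbone idea behind that compute saving is easy to see in miniature. The sketch below is a generic multi-task pattern, not OmniNet itself: one backbone pass feeds several lightweight task heads, so with three tasks the heavy feature extraction is paid once instead of three times.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared backbone replaces per-task backbones; only small task
# heads stay separate. All weights here are random placeholders.
W_backbone = rng.standard_normal((24, 64)) * 0.1
heads = {
    "detection": rng.standard_normal((64, 10)) * 0.1,
    "lanes": rng.standard_normal((64, 6)) * 0.1,
    "traffic_lights": rng.standard_normal((64, 4)) * 0.1,
}

def multitask_forward(features):
    shared = np.maximum(features @ W_backbone, 0.0)  # backbone runs once
    return {name: shared @ W for name, W in heads.items()}

# 32 BEV cells, 24 input features each -> all task outputs at once.
outputs = multitask_forward(rng.standard_normal((32, 24)))
```

If each head had its own backbone, the backbone cost would triple; sharing it is exactly where a roughly two-thirds reduction for three tasks would come from.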

As a result, QCraft’s perception stack adapts flexibly to different vehicle sensor configurations without training separate models per hardware setup, keeping porting costs low. This is also why “Qingzhou Chengfeng” supports different sensor schemes and capabilities from L2+ up to L4.

In short, OmniNet delivers more efficient and effective fused perception (notably stable, accurate recognition of extra-long vehicles, irregularly shaped vehicles, and objects truncated across cameras), adapts to different sensor schemes, and stays frugal with compute, making it well suited to in-vehicle deployment.

The large OmniNet model also suits QCraft’s data-driven development mode: a closed data loop drives efficient algorithm iteration, cuts model-maintenance costs, and makes results more reliable, which helps chip away at autonomous driving’s long-tail problems.

Where the traditional development mode only learns what it is explicitly taught, the data-driven mode OmniNet supports lets the system keep learning and growing on its own, so vehicles can recognize and respond to the traffic scenarios they will meet in the future.

Multi-sensor temporally interleaved fusion, combined with the unified OmniNet network, makes QCraft’s fused perception distinctive in the industry, and the overall results are strong. With such a perception system, the challenges of city NOA scenarios hold little fear; judging from QCraft’s published test videos, it handles a wide range of complex and extreme perception demands.

After talking about perception, let us look at the more difficult PNC (Planning and Control) module.

(2) The code volume of the PNC module for urban NOA is dozens of times higher than that for highway NOA.

When it comes to PNC (Planning and Control), everyone knows that it involves planning and control, but there is much more to it.

QCraft’s PNC module consists of on-vehicle core and support modules:

  • The vehicle-side core module includes navigation, prediction, decision-making, planning, and control.
  • The vehicle-side support module includes HMI (human-machine interaction), environmental perception, map positioning, etc.
  • The offline part of PNC collects, queries, and tests data, trains models, analyzes model and algorithm performance through simulation, and feeds the results back to the on-vehicle modules.

Within PNC, “planning” means the system plans a driving trajectory for the vehicle: going straight, turning, overtaking, or detouring. Executing it means adjusting steering and speed, which is the “control” in PNC.

How the vehicle’s longitudinal and lateral motion and speed are controlled directly shapes the passenger experience, which makes PNC a decisive component of user experience.

Of course, everyone also cares about traffic efficiency. If a slow car is crawling ahead, you would certainly overtake it; under autonomous driving, will the system overtake slow cars in time? That too depends on PNC, so how well the module is built affects how efficiently the vehicle moves through traffic.

Safety, good riding experience, and high traffic efficiency are the goals that the PNC module should strive for. Especially in urban NOA, which faces complex traffic environments with many cars and people and complex traffic behavior, it is difficult for the PNC module to perform well. If it is done well, it will be very competitive.

On PNC, QCraft was arguably the first in the industry to adopt a “spatiotemporal joint planning” algorithm.

What is “spatiotemporal joint planning”?

Put simply, the system considers space and time together when planning vehicle trajectories, rather than solving for a path first and then a speed profile along it. Upgrading “lateral-longitudinal decoupling” to “lateral-longitudinal integration”, it solves for the optimal trajectory directly in the three-dimensional x-y-t space (the plane plus time).
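A toy example makes the x-y-t idea concrete. This is an illustrative grid search under invented numbers, not QCraft’s solver: the ego car approaches a spot an obstacle will cross at a known time, and because lateral offset and speed are searched together, the planner can keep its speed and swerve, whereas committing to the straight path first would leave only braking.

```python
import numpy as np
from itertools import product

def obstacle_pos(t):
    # An obstacle crossing toward the lane, near (x=20, y=0) around t=2 s.
    return np.array([20.0, 4.0 - 2.0 * t])

def trajectory(offset, speed, ts):
    xs = speed * ts
    ys = offset * np.sin(np.clip(xs / 40.0, 0.0, 1.0) * np.pi)  # smooth swerve
    return np.stack([xs, ys], axis=1)

def plan_xyt(ts, offsets, speeds, safe_dist=2.0):
    """Search (lateral offset, speed) candidates jointly in x-y-t."""
    best, best_cost = None, np.inf
    for off, v in product(offsets, speeds):
        traj = trajectory(off, v, ts)
        clear = all(np.linalg.norm(p - obstacle_pos(t)) > safe_dist
                    for p, t in zip(traj, ts))
        if not clear:
            continue  # this (path, speed) pair collides somewhere in x-y-t
        cost = abs(off) + (max(speeds) - v)  # prefer speed and little swerving
        if cost < best_cost:
            best, best_cost = (off, v), cost
    return best

ts = np.linspace(0.0, 4.0, 41)
best = plan_xyt(ts, offsets=[0.0, 2.5, -2.5], speeds=[5.0, 7.5, 10.0])
```

Here the straight path at full speed collides, and the straight path at reduced speed wastes time; the jointly searched plan that swerves while holding speed wins, which is exactly the behavior lateral-longitudinal decoupling struggles to find.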

Human drivers use “spatiotemporal joint planning” every day: when overtaking, for instance, they feed in a little throttle during the lane change so the maneuver flows smoothly. QCraft built this algorithm to behave more like a human driver and respond smoothly across traffic scenarios.

In the most typical “ghost probe” scenario, where a pedestrian suddenly darts out from behind an occlusion, spatiotemporal joint planning frees the autonomous vehicle from a fixed path and lets it compute the best avoidance trajectory in real time, better ensuring safety.

The opposite of spatiotemporal joint planning is “spatiotemporal separated planning”, currently the more common PNC strategy in the industry. It splits trajectory planning into two sub-problems: path planning, corresponding to lateral control (the steering wheel), and speed planning, corresponding to longitudinal control (brake and throttle). This decision mechanism is often called “lateral-longitudinal decoupling”.

This approach leans heavily on hand-crafted rules to tune vehicle behavior and on large amounts of road testing to validate the algorithms, so it has inherent flaws. QCraft CTO Hou Cong commented on it: “In the spatiotemporal separated algorithm, the shortcomings of path planning can be patched by adding rule-based predictions of the vehicle’s future speed. But that prediction is theoretically flawed, because at the moment it is made, the vehicle’s optimal speed has not yet been planned.”

QCraft’s choice of “spatio-temporal joint planning” was not a spur-of-the-moment decision. At the beginning of the company’s founding, they conducted surveys in several first-tier cities in China and found that the road environment in Chinese cities was overly complex. Considering long-term development, they decided not to use the industry mainstream “spatio-temporal segregation planning,” but to go a more difficult but more suitable route.

The difficulty of “spatio-temporal joint planning” lies in its large computational complexity and numerous parameters, making implementation and subsequent optimization very difficult. QCraft has invested a significant amount of R&D resources to refine this algorithm.

To meet the algorithm’s heavy compute demands, QCraft’s PNC strategy is adaptive: with ample computing power it evaluates more candidate trajectories, picks the optimal one, and makes full use of multi-core parallelism; with limited computing power it generates fewer candidates while still guaranteeing safe, stable driving.

Of course, real-world performance still has to be verified firsthand. I have ridden in QCraft’s early Robotaxi test cars and found them composed when maneuvering around vehicles parked along the roadside.

Upstream of PNC sits another important module: prediction. The system must anticipate the intentions of other traffic participants in order to plan and control trajectories well.

QCraft has also developed its own prediction model, Prophnet, which has placed highly in international competitions.

A few numbers show how well QCraft’s prediction performs:

  • QCraft’s system provides long-horizon intention and trajectory prediction out to 10 seconds. The main model predicts at least three trajectories with associated probabilities, and the average error between the highest-probability trajectory and ground truth is 3.73 meters. In short, the predictions are highly accurate.
  • The main model can predict 256 targets simultaneously, with total inference time under 20 milliseconds, meeting real-time requirements.
  • The model runs on both Horizon Journey 5 and NVIDIA Orin. QCraft’s prediction-model operators were also the first in China to be adapted to the Horizon Journey 5 BPU.
  • ……
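The 3.73-meter figure reads like an average displacement error (ADE), the standard trajectory-prediction metric; for multi-modal models that output several trajectories, minADE scores the closest mode. A small sketch under that assumption (the article does not name the metric):

```python
import numpy as np

def ade(pred, truth):
    """Average displacement error: mean Euclidean distance between
    predicted and ground-truth trajectory points of shape (T, 2)."""
    return float(np.mean(np.linalg.norm(pred - truth, axis=1)))

def min_ade(pred_modes, truth):
    """For multi-modal prediction, score only the closest trajectory."""
    return min(ade(p, truth) for p in pred_modes)
```

A trajectory offset by (1, 1) m at every step, for instance, has an ADE of sqrt(2) m regardless of its length.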

Together, this prediction-plus-PNC stack lets QCraft’s autonomous vehicles handle the functional demands of China’s city NOA well.

(3) Data drives not only perception algorithms but also PNC algorithms

Everyone knows that high-quality data is very important for the development and optimization of autonomous driving systems, and “data is the new oil”.

Feeding a large amount of data to the system allows it to constantly enrich its “knowledge”, recognizing things that it did not previously understand, particularly at the perception level. Undoubtedly, big data has significantly improved autonomous driving perception.

In the PNC module, big data matters just as much: the system must be fed large amounts of data to graduate from “novice driver” to “experienced driver”.

Every autonomous driving company runs a pipeline of data collection, storage, cleaning, labeling, and simulation training. QCraft is no exception, and it does this thoroughly.

In recent years, QCraft has built a “driving data warehouse” that automatically labels both real driving data and “shadow mode” data (the software running silently in the background while a human drives). The labels are rich, numbering in the hundreds or thousands: road information (road grade, type, lane category, and so on), surrounding-environment information (nearby obstacles, traffic density, pedestrians, whether other vehicles cut in), ego-vehicle information (speed, position), and the human driving data captured in shadow mode. From these, one can tell when the human driver braked, changed lanes, or signaled at any given moment, which is extremely valuable for improving the algorithms.

For example, if a QCraft test vehicle handled a truck cut-in poorly, QCraft can pull every similar scene from the data warehouse, build them into a scenario library, and run simulation tests. Simulation exposes the algorithm’s existing problems; after the algorithm is modified or the model retrained, a model that performs well goes back into the car, and the car drives better. That completes one turn of the data loop.
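The tag-and-retrieve step of that loop can be sketched in miniature. This is a toy illustration, with invented tag names, of how auto-labeled frames let similar scenes be pulled into a scenario library for simulation replay:

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    tags: frozenset            # auto-generated labels for this frame
    data: dict = field(default_factory=dict)  # sensor payload, omitted here

class DrivingDataWarehouse:
    def __init__(self):
        self.frames = []

    def ingest(self, frame):
        # In a real pipeline the tags would come from an auto-labeling stage.
        self.frames.append(frame)

    def query(self, *tags):
        # Return every frame carrying all of the requested tags.
        want = set(tags)
        return [f for f in self.frames if want <= set(f.tags)]

wh = DrivingDataWarehouse()
wh.ingest(Frame(frozenset({"highway", "truck_cut_in", "dense_traffic"})))
wh.ingest(Frame(frozenset({"urban", "pedestrian_crossing"})))
wh.ingest(Frame(frozenset({"highway", "truck_cut_in"})))
scenario_library = wh.query("truck_cut_in")  # candidates for simulation replay
```

Everything matching the problem tag becomes regression material: fix the algorithm, replay the library, and only ship the model if it now clears those scenes.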

This is a typical example of data-driven application in the PNC module.
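The loop above can be sketched in a few lines. This is only an illustration of the idea (every function name, scene field, and threshold here is hypothetical, not QCraft's actual tooling): mine matching scenes into a scenario library, replay them in simulation, and gate deployment on the pass rate.

```python
# Minimal sketch of the data loop: mine similar scenes, build a
# scenario library, run simulation, and gate deployment on pass rate.
# All names and thresholds are illustrative.

def build_scenario_library(warehouse, match):
    """Pull every logged scene matching a predicate (e.g. truck cut-ins)."""
    return [scene for scene in warehouse if match(scene)]

def simulate(policy, scenario):
    """Replay one scenario against a planning policy; True = handled safely."""
    return policy(scenario)

def regression_pass_rate(policy, library):
    passed = sum(simulate(policy, s) for s in library)
    return passed / len(library)

# Toy warehouse: each scene is a dict; 'gap_m' is the cut-in gap in meters.
warehouse = [{"cut_in": True, "gap_m": g} for g in (5, 10, 20)]
warehouse.append({"cut_in": False, "gap_m": 30})
library = build_scenario_library(warehouse, lambda s: s["cut_in"])

old_policy = lambda s: s["gap_m"] >= 15  # only handles generous gaps
new_policy = lambda s: s["gap_m"] >= 5   # retrained: handles tight cut-ins

print(regression_pass_rate(old_policy, library))  # fails the tight cut-ins
print(regression_pass_rate(new_policy, library))  # → 1.0
# Ship the new model only once it clears the bar on the scenario library.
```

The key property is that the library is built from real logged failures, so each regression run measures exactly the behavior that prompted the fix.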

Where does QCraft's data come from?

Currently, it is mainly self-collected. Here we have to mention QCraft's L4 Robotaxi business.

Currently, QCraft has been conducting normalized Robotaxi operations on public roads in 10 cities across China, including Suzhou, Shenzhen, and Wuhan, with over 100 minibuses in service, accumulating abundant driving data.

As of the end of 2021, QCraft had accumulated millions of kilometers of urban road testing, generating a large volume of L4-grade sensor data from lidars, cameras, and millimeter-wave radars, along with long-accumulated driving behavior data. These datasets will continue to grow in scale.

After “Qingzhou Chengfeng” officially goes into mass production, QCraft will gain another data source, joining companies such as Tesla, XPeng Motors, and Mobileye that have large fleets of production cars collecting real-world driving data on the road.

# Following XPeng’s Lead in Mass Production: Two Different Paths to the Same Goal

When it comes to urban NOA, XPeng Motors is unquestionably one of the fastest-moving and most effective players. For QCraft, going up against XPeng's urban NGP in the urban NOA arena is undoubtedly a challenge, but it is also an opportunity to benchmark against the best.

In fact, QCraft and XPeng have taken very different paths in developing autonomous driving technology and features.

  • QCraft started with L4 Robobus and Robotaxi services, and now treats “Qingzhou Chengfeng” as an important business, building urban NOA for automakers.
  • XPeng Motors started with basic driver-assistance features such as automatic parking on the G3, then developed XPilot, Highway NGP, and memory parking on the P7, and dual lidars with urban NGP on the P5. It is now also planning to launch its own Robotaxi.

XPeng hopes to equip its own vehicle models with user-friendly urban NOA, while QCraft hopes to bring user-friendly urban NOA to the models of many OEMs. One is an automaker and the other a supplier, but both ultimately pursue the same vision of autonomous driving.

From the perspective of the automobile industry as a whole, XPeng's self-developed autonomous driving benefits only XPeng itself, whereas QCraft's work can benefit any automaker and spare them from duplicating the same effort.

Going forward, QCraft aims to prove its technical strength through pre-installed mass production and to empower more automakers, so that their vehicles can also offer genuinely useful autonomous driving functions, able to compete with leading players such as Tesla and XPeng at least in the field of assisted driving.

“Qingzhou Chengfeng” could well be a cure for the automobile industry's anxiety about intelligent driving.

# Helping Automakers Break Through with Intelligent Driving

Providing automakers with mass-producible autonomous driving solutions is undoubtedly a sound and potentially profitable strategy; after all, Mobileye and Momenta have already blazed this trail.

But to be a good supplier of autonomous driving technology, comprehensive capabilities are crucial.

There are three important points involved.

1. First, strong technological capability: beyond hard strength in perception, planning and control (PNC), and data-driven development, soft power is also required, and part of that soft power is building an ecosystem of partners.

QCraft has already accumulated a number of upstream and downstream partners in the intelligent automotive field, including Nvidia and Horizon Robotics in computing platforms; Hesai and Surestar Optoelectronics in lidar; Alibaba Cloud, Amazon Web Services, and Volcano Engine in software; and Dongfeng and Jinlong among automakers.

According to QCraft, the “Qingzhou Chengfeng” plan has already attracted interest from potential automaker partners. In addition, insiders revealed that QCraft's simulation technology module has been adopted by domestic new carmakers.

2. Second, efficient development: a scalable autonomous driving algorithm system, rich development toolchains, strong development support, and quick adaptation across entry-level, mid-range, and high-end configurations.

We have mentioned the scalability of the “Qingzhou Chengfeng” solution several times: the algorithm model can seamlessly support different sensor suite configurations, enabling efficient migration across vehicle models. Whether an OEM needs one, two, three, or four lidars, the solution can support the development.
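As a sketch of what such configuration scalability could look like in practice (the tier names, stream counts, and model identifier below are hypothetical, not QCraft's actual setup), a single stack might select its sensor suite from a per-program configuration while the downstream fusion model consumes whatever streams are present:

```python
# Hypothetical sketch: one perception stack, many sensor configurations.
# An OEM program picks a tier; the stack adapts to however many lidars
# and cameras that tier carries. All names and counts are illustrative.

SENSOR_TIERS = {
    "entry": {"lidars": 1, "cameras": 7},
    "mid":   {"lidars": 2, "cameras": 9},
    "high":  {"lidars": 4, "cameras": 11},
}

def build_perception_config(tier: str) -> dict:
    """Expand a tier name into the stream list the stack subscribes to."""
    suite = SENSOR_TIERS[tier]
    return {
        "lidar_streams": [f"lidar_{i}" for i in range(suite["lidars"])],
        "camera_streams": [f"cam_{i}" for i in range(suite["cameras"])],
        # The same fusion model consumes whatever streams are configured.
        "fusion_model": "bev_fusion_generic",
    }

cfg = build_perception_config("entry")
print(len(cfg["lidar_streams"]))  # → 1 (single-lidar program)
```

The design point is that migrating to a new vehicle model becomes a configuration change rather than an algorithm rewrite.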

In terms of toolchains, “Qingzhou Matrix”, QCraft's data toolchain product, includes a data platform for sensor data collection, transmission, management, and mining; an automated labeling platform for generating ground truth at scale; a machine learning training platform for efficient model training and evaluation; and large-scale regression-testing simulation and hardware-in-the-loop (HITL) testing platforms. This fully supports the iterative upgrade of mass-produced autonomous driving systems, making large-scale over-the-air (OTA) updates no longer a problem.

Perhaps its current shortcoming lies in support for OEM development; after all, early-stage development requires considerable manpower. The thousands-strong Huawei ADS team on the Avatr 11 program, for example, illustrates the pressure this puts on a startup like QCraft. In addition, QCraft's team, composed largely of algorithm talent, still has gaps in engineering capability, although the company is actively improving in this area.

3. Third, controllable cost: here, cost means both the research and development cost and the cost of the solution itself.

In terms of research and development, OEMs pay a huge price, in both capital and time, to self-develop autonomous driving systems. As a supplier, QCraft can help OEMs build this capability quickly, avoiding the risk of an OEM assembling its own team and investing heavily yet still falling short of expectations. This is also the greatest industry value of an autonomous driving technology company like QCraft.

As for the cost of the solution itself, the urban NOA solution QCraft currently offers uses a single-lidar configuration, which keeps costs well under control and leaves plenty of room for future exploration.

If QCraft does all three of these well and shores up its remaining weaknesses, new suppliers like it will have great potential in the future.

Among the Robotaxi companies transitioning to mass production, those with the stronger determination to transform may prove the more competitive in the years ahead.

Automakers, for their part, need suppliers like QCraft to relieve their anxiety about intelligent and autonomous driving, lest they fall into a “Nokia moment”.

This article is a translation by ChatGPT of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.