NVIDIA Finds More Partnerships in China's Autonomous Driving Market

Using the Beijing Auto Show as its stage, NVIDIA is further expanding its commercial footprint in China's intelligent driving market.

On April 25, NVIDIA signed a cooperation agreement with Chery, announcing that the two companies will jointly build a new-generation high-end intelligent driving platform based on NVIDIA's DRIVE Thor computing platform, which will debut on EXEED's premium models.

On the same day, JIYUE, the brand jointly founded by Geely and Baidu, also announced that it will adopt the new-generation DRIVE Thor computing platform.

This means that, following partnerships with Li Auto, ZEEKR, BYD, GAC, XPeng, and others, NVIDIA has found more commercial landing opportunities for DRIVE Thor among China's mainstream manufacturers.

On the progress of DRIVE Thor, Xinzhou Wu, NVIDIA vice president and head of its automotive business unit, said that DRIVE Thor is still growing rapidly. He called it an unstoppable trend, because it represents not only the next-generation chip with the highest computing power but also the one with the highest safety level, and it provides the best support for generative AI and large language models (LLMs).

He said DRIVE Thor is expected to reach its first start of production (SOP) in 2025.

Of course, compared with the yet-to-launch DRIVE Thor, NVIDIA's DRIVE Orin computing platform has achieved increasingly broad adoption and was prominently exhibited at the Beijing Auto Show – not only on models from NIO, Xiaomi, ZEEKR, DENZA, Hyper, and other brands, but also on the newly debuted production version of the smart #5 and the WEY Lanshan Intelligent Driving Edition.

Therefore, although NVIDIA did not set up a booth at the Beijing Auto Show, in terms of intelligent driving it had a strong presence there.

Of course, continuing to push the commercialization of intelligent driving is a task NVIDIA must pursue. At the same time, from a technology perspective, the development of large models and other AI technologies is also prompting NVIDIA to think seriously about the next generation of autonomous driving and to plan its strategy accordingly.

During the Beijing Auto Show, NVIDIA held a communication meeting for in-depth exchanges with media outlets including Garage 42. At the meeting, Xinzhou Wu, head of NVIDIA's automotive business unit, discussed NVIDIA's latest thinking on end-to-end technology.

The following is the main content of the meeting, edited without changing the original meaning.

Q:

What will be the future development of end-to-end autonomous driving technology?

A:

End-to-end autonomous driving is the final step of the trilogy, and I believe it will come. Don't take it literally as going straight from pixels to action; there may be other components alongside it. For example, in the control loop there may be optimization to make control better, because control is a mathematical problem, though a technical one. Before an end-to-end model goes online, there must be a "guardrail," because the model needs continuous optimization and growth, and launching an end-to-end model from day one is very difficult. Companies that can successfully develop end-to-end models also need a high-quality second- or even first-generation self-driving stack. Much like a future PhD researcher, an end-to-end model needs time to grow, becoming stronger with the help of its "primary and secondary school teachers" along the way.

In the next few years, we'll see a trend: end-to-end models complementing pre-existing ones. In some scenarios, such as difficult intersections, they can offer more realistic solutions, while the existing models and methods ensure safety. This will enable end-to-end models to be deployed at scale and become mainstream.

Q:

How do you deal with the black-box issue and the challenge of computational power?

A:

The black-box problem can be addressed from several angles. As I mentioned, the first- and second-generation algorithm stacks can ensure the safety of end-to-end models. We can continuously assess the rationality of the decisions made by the end-to-end model and use any discrepancies as inputs. This is somewhat similar to the feedback used in large language model training: looping in the previous-generation models makes the results more reasonable, much like human-in-the-loop training for large language models.

Another critical point is that future large models and end-to-end models will have observable output points, such as BEV (bird's-eye-view) results, with partial supervision applied at these points during training. This effectively opens a few windows into the black box. These windows are output points that may not need to run in final deployment, but they can provide insight when necessary.

Q:

How does NVIDIA meet the automotive industry’s high requirements for AI chips through products and technologies?

A:

NVIDIA's core mission is to empower ecosystems. Our significant advantage is our end-to-end, full-fledged AI empowerment, with automotive as one of our vertical fields. We take the needs of auto manufacturers into account while also bringing the benefits of the broader AI field to automotive applications. As an ecosystem enabler, we hope to lead ecosystem transformation, the emergence of new technologies, and the application of AI in the automotive industry.

We invest heavily in data centers and training tools, which apply to all fields, not just automotive. We also have SoCs and safety platforms. We build robust safety concepts into every layer, from the underlying software to the chips, and we develop end-to-end full-stack software. These four facets form our automotive ecosystem – that is NVIDIA's layout.

Q:

What are NVIDIA’s technical plans for intelligent driving software and cockpits?

A:

Currently, we are undertaking very in-depth collaborations with several clients in the intelligent driving industry. Our work mainly follows a three-step plan:

The first is to upgrade our software on existing L2 and L2+ systems to a market-leading, first-tier level as soon as possible. NVIDIA has considerable talent and training assets, backed by our significant advantage in computational power.

Our second step is to achieve a game-changing breakthrough in L2++ and set the industry benchmark. In our vision, the future software stack will be end-to-end trainable, with upstream modules fully linked – not only bridging different models but also enabling end-to-end training. We have already begun applying generative large models across the stack, and a demo expected later this year should offer a glimpse of what end-to-end models can accomplish.

When I spoke earlier about integrating upstream and downstream models, that did not fully capture the generative AI approach we are truly aiming for. Exciting progress happens daily, and we are eager to further extend the limits of what is possible with NVIDIA using vision-language models (VLMs) and large language models (LLMs) in autonomous driving.

The third step, an ongoing effort, is the mass production of L3 by 2026. This level of autonomy will take the human out of the loop, embodying the true value of autonomous driving. Our core focus has always been on letting people use their phones instead of driving: what people really need is to get from point A to point B while using their phone, not the act of driving itself.

Q:

Against the backdrop of auto and tech companies developing their own intelligent driving chips, how will NVIDIA preserve its unique edge?

A:

Great question.

NVIDIA's advantage is clear. We are an AI ecosystem enabler rather than a car manufacturer. Ideally, every AI breakthrough would emerge within the NVIDIA ecosystem. Our current chips are primarily based on generic GPUs, but we continuously adapt and optimize them to better support the products born within our ecosystem. Automated closed-loop systems are trained continuously in our AI environment, advancing our overall hardware architecture and ultimately shaping our vehicular chips.

In the dynamic age of AI, NVIDIA's chips are designed to efficiently bring cutting-edge AI innovations to vehicular chips. This is one of our significant strengths. Another aspect I'd like to emphasize is safety: it is not achieved overnight and requires enormous investment and experience. End-to-end safety is crucial, covering chip modules, operating system modules, and the security of all cloud-based training tools. Our goal is to generate highly efficient and secure networks and deploy that software effectively to vehicular chips.

We believe that our commitment to safety, coupled with considerable strategic investment, will give NVIDIA a significant advantage in the near future. We hope to continually boost the vehicle and robotics sectors, enabling robust safety measures to be established on our platform with minimal investment. That's what we envision.

This article is a translation by AI of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.