Author: 42HOW Year-end Ceremony
At the first Yanzhi Auto Annual Conference, Dr. Xi Yunxia, Market Director of the Intelligent Sensing Division for Greater China at ON Semiconductor, gave a presentation titled "Trends and Solutions for Automotive-Grade Image Sensors," sharing the basic trends in image perception sensors and the corresponding semiconductor solutions. She pointed out that the clear trend is toward more image sensors per vehicle and higher resolutions, with cameras and lidar remaining the mainstream perception sensors. This inevitably produces a large amount of data, so processing capability must also be considered.
Trends in Autonomous Driving and Sensor Development
Xi Yunxia first introduced the trends in autonomous driving. Last year, China issued a national definition of vehicle automation levels; vehicles at L3 and above carry four types of sensors: image sensors (cameras), millimeter-wave radar, ultrasonic sensors, and lidar. In China, many original equipment manufacturers (OEMs) launched L2+ models last year equipped with all of these except lidar. Overseas, manufacturers such as Tesla still rely mainly on cameras. Vehicle intelligent perception has developed very rapidly in China, and all four sensor types, lidar in particular, have started to appear on production vehicles this year. NIO, XPeng, and traditional OEMs such as SAIC, BAIC, and Great Wall have all begun deploying lidar.
In 2020, China defined automation levels L0 through L5 in line with SAE standards. For perception, the number of image sensors increases from L1 to L5: L1 vehicles mostly carry cameras and ultrasonic sensors, L2 adds millimeter-wave radar, and L3 and above begin to use lidar. Most current L2 to L3 vehicles have up to a dozen cameras and roughly 3 to 5 millimeter-wave radars (a rough tabulation follows below). Although millimeter-wave radar is evolving toward 4D imaging, cameras and lidar remain the main sensors for visual perception.
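As a rough summary of the progression described above, the sketch below tabulates the typical sensor suite per automation level. The groupings are illustrative and taken from the talk, not from any official standard.

```python
# Illustrative sensor suites per automation level, as described in the talk
# (not an official taxonomy).
TYPICAL_SENSOR_SUITES = {
    "L1":  ["camera", "ultrasonic sensor"],
    "L2":  ["camera", "ultrasonic sensor", "millimeter-wave radar"],
    "L3+": ["camera", "ultrasonic sensor", "millimeter-wave radar", "lidar"],
}

if __name__ == "__main__":
    for level, sensors in TYPICAL_SENSOR_SUITES.items():
        print(f"{level}: {', '.join(sensors)}")
```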
Trends in Automotive-Grade Image Sensors
Xi Yunxia then introduced trends specific to image sensors. Originally, automotive image sensors were all VGA-class devices of about 300,000 pixels; these are now being replaced by 1-megapixel sensors. In Europe, the 2020 perception standard settled on 2-megapixel image sensors, and ON Semiconductor began developing 2-megapixel sensors very early to be ready for European vehicles in 2020.
Many Chinese OEMs will adopt 8-megapixel cameras this year. The NIO ET7, for example, will carry eleven 8-megapixel cameras, showing that the arms race extends beyond vehicle systems: in deploying 8-megapixel cameras and lidar, China has already taken the lead.
Why are companies that make image sensors so rare?
Turning to the development of camera-related sensors, Xi Yunxia said it is very difficult. In automotive semiconductors there are many companies making processors, such as Nvidia, TI, and NXP, but far fewer making image sensing chips. Why? Because image sensors are very difficult semiconductor devices to design. Processors that serve as the "brain" are built from digital plus analog circuits and are mostly digital, with relatively few analog blocks such as PLLs and ADCs. An image sensor not only contains digital and analog parts but also a bonding layer and many light-related structures, such as the pixel transistor array, photodiodes, the color filter array (CFA), and the microlens array, roughly four more layers than a general semiconductor device, which makes it hard to manufacture.
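To make the comparison concrete, here is a minimal sketch that simply lists the extra pixel-level layers an image sensor carries on top of the digital and analog circuitry found in a typical processor. The layer names follow the ones mentioned above; the grouping itself is only illustrative.

```python
# Illustrative device-stack comparison based on the layers named above.
PROCESSOR_STACK = [
    "digital logic",
    "analog blocks (e.g. PLL, ADC)",
]

IMAGE_SENSOR_EXTRA_LAYERS = [
    "pixel transistor array",
    "photodiode array",
    "color filter array (CFA)",
    "microlens array",
]

if __name__ == "__main__":
    print("Typical processor:", ", ".join(PROCESSOR_STACK))
    print("Image sensor adds:", ", ".join(IMAGE_SENSOR_EXTRA_LAYERS))
```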
Image sensing is no longer just for human perception
What are the challenges for image sensing? Xi Yunxia believes that, beyond sensor fusion, the key trend is that image sensing is no longer just for human viewing. In the past, a single rear camera showed the driver the road behind when parking; this has since been upgraded to surround-view systems. By 2020, many cars offered Automatic Parking Assistance (APA), which parks the car at the push of a button. This does not rely on the driver's eyes but on machine vision, letting the car itself observe and make judgments.
APA has now been upgraded to Automated Valet Parking (AVP), an L4 application in which the car can park itself from 50, 100, or even 1,000 meters away from the parking garage. Baidu and WM Motor's W6 implemented AVP and were the first to mass-produce it in China, using image sensors from ON Semiconductor. An important trend, therefore, is that as vehicles move up from L0 to L2 and beyond, cameras are no longer mainly for human viewing: most of the ten-plus cameras on a vehicle serve machine perception and decision-making.
When machines make the decisions, a key metric is wide dynamic range: the ratio between the brightest and darkest parts of a scene. How wide is the dynamic range of the real world? In some scenes the contrast between the brightest and darkest areas exceeds 120 dB, which means the image sensor's dynamic range must reach 120 dB or even higher. ON Semiconductor's current image sensors reach 140 dB, and the next generation will reach 160 dB.
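For reference, dynamic range in decibels relates to the brightest-to-darkest intensity ratio by DR(dB) = 20·log10(ratio) under the usual image-sensor convention, so 120 dB corresponds to roughly a 1,000,000:1 contrast. The short sketch below converts the figures quoted above; the helper function is just an illustration, not part of any vendor toolkit.

```python
def db_to_ratio(db: float) -> float:
    """Convert a dynamic range in dB to a brightest/darkest intensity ratio,
    using the common image-sensor convention DR(dB) = 20 * log10(ratio)."""
    return 10 ** (db / 20.0)

if __name__ == "__main__":
    for db in (120, 140, 160):
        # 120 dB -> ~1e6:1, 140 dB -> ~1e7:1, 160 dB -> ~1e8:1
        print(f"{db} dB -> contrast ratio of about {db_to_ratio(db):.0e} : 1")
```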
Automotive qualification and reliability requirements are becoming higher
Xi Yunxia said another challenge is that automotive-grade requirements are getting stricter and reliability must improve, especially on the path to autonomous driving. Most people drive their own cars for at most about four hours a day, but shared vehicles, a future trend, need to run almost constantly, so their useful life may be only two or three years. Beyond reliability, trustworthiness is also required, including functional safety and cybersecurity, especially for L3-and-above vehicles that can drive themselves on highways; would you dare let your vehicle drive itself on a highway otherwise? As vehicles become more automated, there will also be growing requirements for cabin monitoring, including driver and occupant monitoring.
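A rough calculation shows why continuous operation compresses vehicle life so sharply: at the four hours per day of use quoted above, and assuming (for illustration only) a ten-year service life for a private car, a round-the-clock shared vehicle accumulates the same operating hours in under two years.

```python
# Rough operating-hours comparison. The 4 h/day figure is from the talk;
# the 10-year private-car lifespan is an assumption for illustration.
PRIVATE_HOURS_PER_DAY = 4
PRIVATE_YEARS = 10            # assumption, not from the talk
SHARED_HOURS_PER_DAY = 24     # shared vehicle running essentially nonstop

private_total_hours = PRIVATE_HOURS_PER_DAY * 365 * PRIVATE_YEARS
years_for_shared_to_match = private_total_hours / (SHARED_HOURS_PER_DAY * 365)

print(f"Private car over {PRIVATE_YEARS} years: {private_total_hours} h")
print(f"A shared vehicle reaches that in about {years_for_shared_to_match:.1f} years")
```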
Xi Yunxia then spoke about lidar trends as they relate to imaging. Lidar used to be considered out of reach because of its high cost, with an early unit priced as much as a car, and its poor reliability. Automotive image sensor cameras are generally designed for a 10-year lifespan, whereas lidar has only improved to last up to 5 years; only lidar with reliable quality and a longer lifespan can be widely adopted on production cars. So why has lidar been hard to mass-produce? It comes down to semiconductor technology: the transformation of cars and systems actually begins with semiconductors, and only with good semiconductor components can a truly effective system, with the required data volume and bandwidth, be built. Today most lidar uses APD (avalanche photodiode) technology, which is linear and requires a great deal of laser energy for long-range detection. Its unit-to-unit consistency is also poor, so much of the calibration must be done manually, and the cost of manual calibration becomes prohibitive at volumes of thousands of units; consistency is therefore a critical metric for mass production. As for automotive qualification, many lidar systems claim to be qualified, yet many of their internal components have not reached that level. Qualifying the semiconductor components is what ultimately qualifies the whole system and enables mass production of lidar.
She believes there are several criteria for mass-producing lidar: first, the semiconductor components must pass automotive qualification and offer high reliability; second, the photon detection efficiency (PDE) must be high; third, unit-to-unit consistency must be good and power consumption low.
ON Semiconductor, which was spun off from Motorola in 1999, is a well-established semiconductor company offering more than 80,000 types of devices across five major application areas; automotive is its largest business segment, accounting for around 30% of total revenue. In perception, its Intelligent Sensing Division covers sensors ranging from lidar to cameras and holds over 2,000 patents worldwide. Notably, ON Semiconductor began mass-producing automotive CMOS image sensors in 2005, giving cars cameras and true "eyes". In the automotive industry, most image sensors are no longer just for viewing; from advanced driver assistance systems (ADAS) to autonomous driving (AD), image-based perception is becoming increasingly important. ON Semiconductor currently holds about an 80% share of the ADAS image sensor market, and about 90% globally in L2 and L2+. By 2020, its annual shipments of automotive image sensors exceeded 100 million units, and almost every global carmaker is a customer. Several factors explain this: first, ON Semiconductor targets 0 PPM in quality control, which gives OEMs confidence; second, it provides complete system solutions, including not just image sensors but also peripheral devices such as ASIL-D (Automotive Safety Integrity Level D) compliant power chips, so more than half, even two-thirds, of the components in a camera can be supplied by ON Semiconductor.
ON Semiconductor offers the industry's most diverse and comprehensive range of image sensor solutions. For example, the Hayabusa™ platform products, including the AR0147AT, AR0233AT, and AR0323AT, feature a breakthrough 3.0 µm back-side-illuminated pixel design with a super-exposure function, and deliver high-fidelity images with 120 dB HDR and LED flicker mitigation (LFM) even in the most challenging scenes, without sacrificing low-light sensitivity. Another scalable image sensor family, the AR0138AT, AR0220AT, and AR0820AT, uses large 4.2 µm pixels that provide leading low-light performance for increasingly demanding ADAS and autonomous driving applications. Customers can start early development on one sensor, tune their algorithms to the pixel performance and system characteristics, and then scale to other resolutions with further testing. This supports platform development, shortens time to launch, lowers the cost of developing a whole family of camera systems, and makes it easier for customers to upgrade their system platforms while reducing cost.
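The platform idea can be pictured as a camera configuration in which only the sensor model changes while the downstream processing stays fixed. In the minimal sketch below, the sensor part numbers are the ones listed above, but the `CameraConfig` class and the pipeline stages are hypothetical illustrations, not an ON Semiconductor API.

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    """Hypothetical camera configuration: swap the sensor, keep the pipeline."""
    sensor_model: str
    pipeline: tuple = ("hdr_merge", "demosaic", "detection")  # illustrative stages

def build_camera(sensor_model: str) -> CameraConfig:
    """Reuse the same processing pipeline with a different sensor model."""
    return CameraConfig(sensor_model=sensor_model)

if __name__ == "__main__":
    early_dev = build_camera("AR0220AT")   # start development on one sensor
    scaled_up = build_camera("AR0820AT")   # later scale to a higher resolution
    print(early_dev)
    print(scaled_up)
```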
In addition, ON Semiconductor offers a sensor product line for emerging in-cabin monitoring applications, featuring best-in-class infrared (IR) response, the ability to clearly capture the driver's eyes under IR illumination, and excellent global shutter efficiency for well-controlled images in bright environments. Functional safety and cybersecurity are critical requirements for ADAS and autonomous driving. In functional safety, ON Semiconductor has the largest portfolio of related patents and was the first to implement functional safety in CMOS image sensors (CIS). It also provides the world's first cybersecure image sensor, incorporating new data security features.
LiDAR Products and SiPM/SPAD Technology
Xi Yunxia stated that in the lidar field, ON Semiconductor offers five types of devices. Besides photodetectors, it provides integrated hardware such as automotive-qualified driver chips, power chips, amplifiers, and readout chips for lidar systems. In 2018, ON Semiconductor acquired SensL, a company specializing in SiPM (silicon photomultiplier) and SPAD (single-photon avalanche diode) detectors, the core components of lidar systems, and became the market leader in this technology. ON Semiconductor's goal is photodetectors that are automotive-qualified, high-gain, low-cost, and compact, making them well suited to the low-light detection challenge of long-range automotive lidar, and its products offer the industry's highest sensitivity, best consistency, and low noise. In early 2021, ON Semiconductor launched the automotive-qualified SiPM 1×12 linear-array photodetector, a core lidar device that is fully prepared for mass production of automotive-qualified lidar and has become the first choice of leading global lidar system customers. For lidar detectors, ON Semiconductor offers three product types: single-point, linear array, and matrix array.
She emphasized that the technologies ON Semiconductor is most optimistic about are SiPM and SPAD detectors. Compared with existing APD products, SiPM and SPAD offer roughly 2,000 times the sensitivity, about 10,000 times the gain, a low supply voltage (~32 V), and the best consistency. Another clear advantage is that ON Semiconductor designs its photodetectors and packaging for automotive qualification from the start and uses a CMOS process. ON Semiconductor also provides customers with fast, comprehensive support, including a broad network of field application engineers, lidar application notes and video libraries, product demonstration systems, and simulation data for verified lidar models, helping customers master the new technology and bring products to market quickly.
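For quick reference, the sketch below simply tabulates the relative advantages of SiPM/SPAD over APD as quoted in the talk; the multipliers are the speaker's figures, not independent measurements.

```python
# Relative advantages of SiPM/SPAD over APD detectors, as quoted in the talk.
SIPM_VS_APD = {
    "sensitivity":   "about 2,000x higher",
    "gain":          "about 10,000x higher",
    "supply voltage": "~32 V (stated as low)",
    "consistency":   "best in class, easing mass-production calibration",
}

if __name__ == "__main__":
    for metric, advantage in SIPM_VS_APD.items():
        print(f"{metric}: {advantage}")
```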
This article is a translation by ChatGPT of a Chinese report from 42HOW. If you have any questions about it, please email bd@42how.com.