Dilin Anand from EFY in conversation with Haran Thanigasalam of the MIPI Alliance Camera Working Group about the MIPI CSI-2 camera interface and its use in autonomous vehicles.
Q. Can you discuss some applications which give engineers sleepless nights? With autonomous vehicles becoming a new avenue, what more can cameras “see” and “process”?
A. Cameras are key enablers for autonomous vehicles. Developers at many automakers and their suppliers already work with a variety of MIPI Alliance specifications on their traditional vehicles. For example, the MIPI Camera Serial Interface (MIPI CSI-2) is used for a broad range of image sensors in autonomous vehicles.
MIPI CSI-2 is also a good example of how MIPI specifications continually add enhancements and capabilities to meet new market requirements. For example, MIPI CSI-2 v1.3 supports a wide variety of resolutions, including 1080p, 4K and 8K, in both single- and multi-camera implementations. CSI-2 v2.0 adds support for RAW-16 and RAW-20 color depth, which significantly improves intra-scene dynamic range and provides a superior signal-to-noise ratio (SNR). Those features enable advanced driver assistance system (ADAS) and autonomous driving capabilities even when the environment changes suddenly and dramatically, such as when a vehicle emerges from a dimly lit tunnel into bright daylight.
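As a back-of-the-envelope illustration (not part of the specification), the ideal dynamic range of an N-bit raw sample grows by roughly 6 dB per bit, which is why moving from RAW-12 to RAW-16 or RAW-20 matters so much for scenes mixing deep shadow and bright daylight:

```python
import math

def ideal_dynamic_range_db(bits: int) -> float:
    """Ideal dynamic range of an N-bit sample: 20*log10(2^N), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for fmt, bits in [("RAW-12", 12), ("RAW-16", 16), ("RAW-20", 20)]:
    print(f"{fmt}: {ideal_dynamic_range_db(bits):.1f} dB")
# RAW-12 gives about 72 dB, RAW-16 about 96 dB, RAW-20 about 120 dB
```

Real sensor dynamic range is lower (limited by noise floor and well capacity), but the extra bit depth ensures the transport link is not the bottleneck.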
Q. Please talk about how MIPI CSI-2 can help in areas other than mobile applications. What are some factors that give it an edge?
A. MIPI CSI-2 is the world’s most widely used interface for imaging and vision applications in mobile devices, and vehicles are just one example of how that usage has grown beyond smartphones and tablets. Another is drones. They’re similar to smartphones in the sense that they need to be as light, power-efficient and compact as possible, which means an increasing number of components are seated next to one another. With the introduction of the MIPI CSI-2 Power Spectral Density (PSD) reduction provisions, for instance, aggressor components that generate electromagnetic interference may be instantiated in close proximity to sensitive radio receivers.
Drones are also subject to dynamic environmental changes and mission-critical collision avoidance as they fly. And similar to the CSI-2 system advancements targeting ADAS deployments in vehicles, drone platforms benefit from CSI-2 v2.0’s low-latency, high-performance transport conduit for real-time inferencing and decision-making. For instance, the CSI-2 v2.0 Latency Reduction and Transport Efficiency (LRTE) feature uses an innovative signaling mechanism to alleviate legacy packet overheads, vastly reducing latency for time-sensitive applications.
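As a rough, hypothetical illustration of why per-packet overhead matters for transport efficiency (the byte counts below are placeholders for illustration, not figures from the specification), the fraction of link time spent on useful payload shrinks as packets get shorter, which is the kind of overhead LRTE targets:

```python
def link_efficiency(payload_bytes: int, overhead_bytes: int) -> float:
    """Fraction of transmitted bytes that are useful payload for one packet."""
    return payload_bytes / (payload_bytes + overhead_bytes)

# Hypothetical fixed per-packet overhead: short packets pay proportionally more.
print(link_efficiency(payload_bytes=256, overhead_bytes=24))   # many short packets
print(link_efficiency(payload_bytes=4096, overhead_bytes=24))  # fewer long packets
```

Trimming the fixed per-packet cost, as LRTE aims to do, yields the largest relative gains for short, frequent packets such as those in multi-sensor aggregation.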
Q. How does the latest MIPI CSI-2 v2.0 specification contribute to emerging vision needs on automotive and other platforms? Can you provide specific examples?
A. In addition to reducing latency, the CSI-2 v2.0 LRTE provisions also facilitate native support for longer reach without the need for legacy high-voltage signaling, and optimize transport overheads while preserving conduit integrity. Also, the newly introduced Differential Pulse Code Modulation (DPCM) 12-10-12 compression helps preserve edge detection for street-sign and object recognition while reducing bandwidth needs by 20 percent.
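To illustrate the general idea behind DPCM, here is a toy delta-coding sketch: transmit the first sample, then only the differences from the previous sample, which are small in smooth image regions. This is purely conceptual; the actual MIPI 12-10-12 scheme uses predictors and quantization tables defined in the specification, not this code:

```python
def dpcm_encode(samples):
    """Toy DPCM: send the first sample, then differences from the previous one."""
    if not samples:
        return []
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur - prev)  # small deltas in smooth regions need fewer bits
    return out

def dpcm_decode(deltas):
    """Invert the toy encoder by accumulating the differences."""
    if not deltas:
        return []
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

pixels = [2048, 2050, 2049, 2300, 2310]  # hypothetical 12-bit samples
encoded = dpcm_encode(pixels)
assert dpcm_decode(encoded) == pixels    # lossless round trip in this toy version
```

The real 12-10-12 codec additionally quantizes large deltas to fit a 10-bit budget, trading a small, controlled loss for a fixed bandwidth reduction.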
Moreover, the latest CSI-2 v2.0 release has been revised to support up to 32 virtual channels (VCs), up from the prior limit of four. VCs are often used for High Dynamic Range (HDR) exposure compositions, spatial and temporal frame captures with sensor aggregation, and metadata transport mapped to a number of vision applications. For example, ADAS applications require multiple image streams to accommodate fully and semi-autonomous driving experiences such as lane-departure detection, predictive emergency braking, driver-drowsiness monitoring, surround view, and transport from multiple imaging devices on the same CSI-2 link.
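Conceptually, legacy CSI-2 carries a 2-bit VC field in the Data Identifier byte of the packet header, and v2.0 extends it with additional VCX bits to reach 32 channels. The sketch below shows how a 5-bit channel number can be split into the legacy field plus a 3-bit extension; the exact placement of the VCX bits in the packet header is defined by the specification, not by this code:

```python
def split_virtual_channel(vc5: int) -> tuple[int, int]:
    """Split a 5-bit virtual channel (0-31) into the legacy 2-bit VC field
    and a 3-bit VCX extension (VCX bits are the more significant bits)."""
    if not 0 <= vc5 <= 31:
        raise ValueError("CSI-2 v2.0 supports up to 32 virtual channels")
    return vc5 & 0b11, vc5 >> 2

def pack_data_identifier(vc: int, data_type: int) -> int:
    """Legacy Data Identifier byte: 2-bit VC in bits 7:6, 6-bit data type in bits 5:0."""
    assert 0 <= vc <= 3 and 0 <= data_type <= 0x3F
    return (vc << 6) | data_type

vc, vcx = split_virtual_channel(21)   # channel 21 -> legacy VC=1, VCX=5
di = pack_data_identifier(vc, 0x2B)   # 0x2B is the CSI-2 RAW10 data type code
```

Because each stream tags its packets with its own channel number, HDR exposures, metadata, and multiple sensors can be interleaved on one physical link and demultiplexed at the receiver.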
The combination of CSI-2 v2.0 LRTE and VCs enables complex vision systems to support up to 40 percent more sensor aggregation in automotive platforms, and to natively support beyond-mobile form factors without the need for costly bridge solutions. Such provisions are very much targeted at reducing system complexity and engineering development costs for trending vision applications.