In the first part of this article we covered the SENSORIS innovation platform and the steps needed to make a self-driving car possible. In this part we cover lidar and the remaining pieces.
Lidar, which stands for light detection and ranging, consists of a cone- or puck-shaped device that projects lasers that bounce off objects to create a high-resolution map of the environment in real time. In addition to helping driverless cars see, lidar is used to create fast and accurate 3D scans of landscapes, buildings, cultural heritage sites and foliage. It was also used to create Radiohead’s House of Cards music video.
When positioned on top of a vehicle, it can scan up to 60 metres in all directions, generating precise 3D maps of the car’s surroundings and helping the vehicle avoid obstacles and collisions. It is expensive, but it provides visibility where other sensors might fail. Lidar offers the best of both worlds: it sits between cameras and radar, detecting not only the distance to objects but also their shape.
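The ranging part is simple physics: the unit times how long a laser pulse takes to bounce back, and the rotating head's angles turn each range reading into a 3D point. The sketch below illustrates this principle only; the function names and the flat-ground geometry are illustrative assumptions, not any vendor's actual firmware.

```python
# Illustrative sketch of lidar ranging: round-trip pulse time to distance,
# and one (range, angle) reading to a 3D point. Names are hypothetical.
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, half the round-trip path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def to_cartesian(distance_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one range reading plus the head's angles into an x, y, z point."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)
    y = distance_m * math.cos(el) * math.sin(az)
    z = distance_m * math.sin(el)
    return (x, y, z)

# A pulse returning after about 400 nanoseconds corresponds to an object
# roughly 60 m away, the scan radius mentioned above.
print(round(pulse_distance(400e-9), 1))  # prints 60.0
```

Sweeping the head through a full rotation and repeating this conversion for every pulse is what produces the point cloud the car navigates by.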
Radars, on the other hand, are good at detecting objects and how far away they are, but offer no information about an object’s shape or size. The radar in the Tesla Model S likely did detect the truck it collided with, but the system is designed to tune out objects that look like overhead road signs, to prevent unnecessary braking.
The integration Richard Wallace of Tesla referred to is the algorithms and intelligence that control the way different sensors work together. Lidar and vehicle-to-vehicle communication, where each car communicates its location to others nearby, will both play a key role in building safer self-driving fleets.
The lidar units Google uses in its self-driving cars cost up to US$ 70,000 each, though units are now available for as little as US$ 250. This could make the technology accessible to the mass market.
The SENSORIS data standard will enable driverless connected vehicles to prepare for changing conditions and hazards well before the vehicle, be it a truck or a car, can see them. In addition to the sensors required in driverless vehicles, such as lidar systems, cameras and ultrasonics, the map acts as a virtual sensor: it provides vital information about the road and terrain ahead, and about what the vehicle is likely to encounter over the hill or around the corner, where its physical sensors cannot yet see.
But driverless vehicles need more than these sensors to provide a smooth and safe driving experience. They need to communicate with all other vehicles on the road, sending, receiving, interpreting and responding to live route conditions in real time. This sharing of data applies to all modes of transport, including bikes, buses, trams and trains, and not just cars or trucks.
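To make the idea of sharing live route conditions concrete, here is a minimal sketch of the kind of message a connected vehicle might publish and another might decode. The field names and JSON encoding are illustrative assumptions only; the actual SENSORIS specification defines its own standardised message schema.

```python
# Hypothetical hazard-report message for vehicle-to-vehicle or
# vehicle-to-cloud sharing. Field names are illustrative, not the
# SENSORIS schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class HazardReport:
    vehicle_id: str
    latitude: float
    longitude: float
    hazard_type: str   # e.g. "black_ice", "obstacle", "construction"
    timestamp: float   # seconds since the epoch

def encode(report: HazardReport) -> str:
    """Serialise a report for broadcast to nearby vehicles or a cloud service."""
    return json.dumps(asdict(report))

def decode(payload: str) -> HazardReport:
    """Reconstruct a report received from another vehicle."""
    return HazardReport(**json.loads(payload))

report = HazardReport("truck-42", 52.5200, 13.4050, "black_ice", 1_700_000_000.0)
assert decode(encode(report)) == report
```

The point of a shared standard is exactly this round trip: any vehicle, from any manufacturer, can decode what any other vehicle encoded.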
To get this right, there is simply insufficient information available from any one car brand or model. To enjoy the huge benefits of this innovation, including reduced emissions and congestion, we have to think both on a new scale and collaboratively.
First of all, the lidar sensor typically installed on a self-driving vehicle emits eye-safe laser light as its sensor head rotates. Unlike streetlight cameras, these systems do not read licence plates; the laser light simply scatters and reflects off windshields and surrounding objects.
Mobile active and passive sensors.
This approach would place very inexpensive smart RF transceiver systems in millions of new and existing cars, communicating with a stationary lidar system mounted on a light pole. The easy-to-integrate embedded sensor exchanges RF signals and displays in-dash and audible warnings to drivers, while the lidar system maps the local area. Like radar detectors, it could have a huge potential market and allow faster technology disruption on a wider scale. It could also help accumulate millions of hours of real-world usage statistics across hundreds of cities and municipalities at a much faster pace.
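The alert logic such a pole-mounted station might run is straightforward: the lidar maps nearby objects, and any equipped car approaching a detected hazard gets an RF warning. The sketch below assumes flat 2D positions and an illustrative alert radius; everything here is a hypothetical simplification of the idea, not a real deployment.

```python
# Hypothetical alert logic for a stationary, pole-mounted lidar station:
# warn every equipped car within range of a mapped hazard.
import math

def distance(a, b):
    """Straight-line distance between two (x, y) positions in metres."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def warnings_for(cars, hazards, alert_radius_m=60.0):
    """Return (car_id, hazard_kind) pairs for every car near a hazard.

    cars:    mapping of car id to (x, y) position
    hazards: list of (kind, (x, y)) detected by the lidar
    """
    alerts = []
    for car_id, car_pos in cars.items():
        for kind, hazard_pos in hazards:
            if distance(car_pos, hazard_pos) <= alert_radius_m:
                alerts.append((car_id, kind))
    return alerts

cars = {"car-1": (0.0, 0.0), "car-2": (500.0, 0.0)}
hazards = [("pedestrian_crossing", (30.0, 0.0))]
print(warnings_for(cars, hazards))  # [('car-1', 'pedestrian_crossing')]
```

Only the nearby car is warned; the distant one is left alone, which is what keeps such a system from drowning drivers in alerts.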
Widespread embedded safety alert systems.
This approach lets ad-hoc technology disruptors and creative individuals embed these sensors more broadly in smaller, safer stationary use cases, creating smart networks of lidar and embedded sensors.
A public safety system like this could provide a new kind of stationary collision-avoidance and weather-alert system for congested roads, toll ways and on-ramps/off-ramps, warning drivers about pedestrians, school crossings, bikes, narrowed lanes, construction and wildlife. By sensing automobile behaviour, it could also alert drivers to roadway weather conditions (black ice, ice on bridges, snow, slick roads and so on). All modes of traffic and weather conditions could be understood more deeply. In this case, any new technology in these application areas would be better than none, which is the situation we have today.
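Inferring weather hazards from automobile behaviour can be sketched simply: if enough cars on the same road segment report traction loss, that segment is flagged so following drivers can be warned. The report format and the threshold below are illustrative assumptions.

```python
# Hypothetical sketch: flag road segments as likely icy when several
# vehicles report traction loss there. Threshold and field names are
# illustrative assumptions.
from collections import Counter

def icy_segments(reports, min_reports=3):
    """reports: iterable of (segment_id, traction_lost) pairs.

    Returns the sorted segment ids with at least min_reports
    traction-loss reports.
    """
    counts = Counter(seg for seg, lost in reports if lost)
    return sorted(seg for seg, n in counts.items() if n >= min_reports)

reports = [
    ("bridge-7", True), ("bridge-7", True), ("bridge-7", True),
    ("main-st", True), ("main-st", False),
]
print(icy_segments(reports))  # ['bridge-7']
```

Requiring several independent reports before raising an alert is what separates a genuine black-ice patch from a single driver braking hard.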