Deep neural networks and Narrow AI
Existing technology for Tesla’s semi-autonomous cars uses deep neural networks (DNNs). When the human brain recognises objects in images, it does not process individual pixels; it first picks out edges, then assembles these into shapes and whole objects. A DNN tries to recreate this: its early layers are trained to respond to simple features such as edges, which deeper layers combine into more complex patterns.
The program is not told explicitly what each object looks like; it works out the distinguishing features from example data on its own, a process known as machine learning. Given enough training, the machine learns to tell apart whatever categories the DNN was set up to detect. How capable the DNN becomes depends on the processing power available and the time spent learning.
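To make the edge idea concrete, here is a minimal sketch in Python (using NumPy; an illustration of the general technique, not any part of Tesla’s actual software) of the kind of filter an early convolutional layer often ends up learning: a small kernel slid across the image that responds strongly wherever brightness changes sharply.

# Minimal sketch: a hand-written Sobel kernel, the sort of edge detector
# that the first layer of a trained convolutional network often learns.
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel across the image (cross-correlation, which is how
    # deep-learning libraries implement 'convolution').
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Responds to vertical edges: negative weights on the left, positive on the right.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Toy 6x6 'image': dark on the left, bright on the right (a vertical edge).
image = np.array([[0, 0, 0, 9, 9, 9]] * 6, dtype=float)

print(convolve2d(image, sobel_x))  # large values mark where the edge sits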

Companies like Tesla are developing Narrow AI: non-sentient computer intelligence focused on a single, narrow task, which keeps it from attempting anything like taking over the world. Apple’s Siri is often cited as a good example of Narrow AI.
Specialised hardware to run neural networks
Neural networks used for image analysis are typically run on graphics processing units (GPUs), chips originally designed for image processing that happen to suit the highly parallel arithmetic these networks require. But much more can be done with circuits designed specifically to run neural networks efficiently.
Joel Emer, Massachusetts Institute of Technology (MIT) computer science professor and senior distinguished research scientist at Nvidia, has developed Eyeriss, a custom chip designed to run a state-of-the-art convolutional neural network.
In February 2016, at the IEEE International Solid-State Circuits Conference in San Francisco, USA, teams from MIT, Nvidia and Korea Advanced Institute of Science and Technology (KAIST) showed prototypes of chips designed to run artificial neural networks. The KAIST design brings memory and processing closer together, saving energy both by limiting data movement and by lightening the computational load. The KAIST group observed that 99 per cent of the numbers used in a key calculation need only eight bits of precision, so the resources devoted to that calculation could be cut accordingly.
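To see why eight bits can be enough, here is a hedged sketch of generic linear quantisation (the arithmetic idea only, not the KAIST circuit): 32-bit floating-point values are mapped onto 8-bit integers with a single scale factor, and the round-trip error stays small.

# Minimal sketch of 8-bit linear quantisation, assuming NumPy.
import numpy as np

def quantise_int8(x):
    # One symmetric scale factor maps floats onto the int8 range [-127, 127].
    # (Assumes x is not all zeros.)
    scale = np.max(np.abs(x)) / 127.0
    return np.round(x / scale).astype(np.int8), scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=1000).astype(np.float32)

q, scale = quantise_int8(weights)
restored = dequantise(q, scale)

# Storage drops 4x; the worst-case error is half a quantisation step.
print("max abs error:", np.max(np.abs(weights - restored)))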
Nvidia also announced a new chip called Tesla P100, designed for deep learning, the technique used by Google’s AlphaGo software. The company also unveiled a special computer for researchers involved in deep learning. Known as DGX-1, it comes with eight P100 chips, memory and storage, and will sell for US$ 129,000. Units are being given to leading academic research groups, including those at the University of California, Berkeley, Stanford, New York University and MIT, all in the USA.
Solving the problem of skewed measurements
The two eyes humans possess help with depth perception: because each eye sees the scene from a slightly different position, the brain can triangulate how far away objects are. This is helpful in many situations, such as judging the distance to the car ahead.
Machines, in contrast, usually rely on single-aperture systems, which provide no such parallax and therefore raise the problem of skewed measurements. Arvind Lakshmikumar, chief executive officer, Tonbo Imaging, says, “Aperture systems have their own set of limitations that can be overcome with multiple aperture systems.”
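A short sketch shows what the second aperture buys: with two rectified views a known baseline apart, depth follows from simple triangulation, Z = fB/d. The figures below are illustrative only, not drawn from any Tonbo Imaging system.

# Minimal sketch: depth from stereo disparity (Z = f * B / d).
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # focal_px: focal length in pixels; baseline_m: camera separation in metres;
    # disparity_px: how far the same point shifts between the two images.
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# A point seen 20 pixels apart by two cameras 0.3 m apart (f = 800 px)
# sits 12 metres away; a single aperture yields no disparity to work with.
print(depth_from_disparity(800, 0.3, 20))  # 12.0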
Efficient imaging systems make optimum use of their sensors, but the signals those sensors produce must also reach the computing unit safely. Signal transmission after imaging is therefore another part to be looked at. “Intelligence guys often tap signals,” says Parag Naik, co-founder and chief executive officer, Saankhya Labs.
Hence, signal security becomes an issue during transmission as well. Some security measures can be taken by making suitable changes in the system. However, even with the best of systems, “chances are still pretty high for collateral damage,” says Lakshmikumar.
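As one illustration of such a measure (our assumption, not a method attributed to Saankhya Labs or Tonbo Imaging), the payload can be encrypted before it leaves the sensor unit, so that a tapped link yields only ciphertext. The sketch below uses Python’s third-party cryptography package; key distribution between sensor and host is assumed to be handled separately.

# Minimal sketch: encrypt sensor data in transit (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # shared in advance between sensor and host
cipher = Fernet(key)

frame = b"\x00\x10\x20..."     # stand-in for raw bytes from the imaging sensor
token = cipher.encrypt(frame)  # what actually travels over the link

assert cipher.decrypt(token) == frame  # only a key holder recovers the frame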