Artificial Intelligence Getting Real, Local

Janani Gopalakrishnan Vikram is a technically qualified freelance writer, editor and hands-on mom based in Chennai.


“We will see AI transferring to the edge,” Talla told the press, adding that future intelligent applications will use a combination of edge and cloud processing.

This requirement to build intelligence into the device itself is creating a flurry of activity in the semiconductor industry, not to mention a lot of hardware innovation. In last month’s story on smart robotics, we read about a micromote (a chip measuring just one cubic millimetre) developed at the University of Michigan, which incorporates a deep-learning processor capable of operating a neural network using just 288 microwatts.

Nvidia Jetson TX2, a credit-card sized platform for intelligent edge devices like robots, drones, cameras and portable medical devices (Courtesy: Nvidia)

Last year, Nvidia launched Drive PX2, a palm-sized platform for implementing auto-cruise capabilities in automobiles. This open AI car platform features a unified architecture that allows deep neural networks to be trained on a system in the data centre and then deployed in the car. This year, Nvidia launched Jetson TX2, a credit-card sized, plug-in edge-processing platform designed for embedded computing. Teal Drones has used the Jetson module to develop a smart drone that can understand and react to what its cameras are seeing. Since this drone does not rely on the cloud, it can be used on remote farms or even by children playing hide-and-seek! EnRoute, another drone maker, has used on-board AI to help its drones navigate and fly faster, avoiding objects in their path.
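
To make the train-in-the-data-centre, deploy-at-the-edge workflow concrete, here is a minimal sketch in Python. Nvidia's own tooling is not shown in the article, so PyTorch and the portable ONNX format are used here purely as illustrative stand-ins: a tiny network is trained on a server and exported as a self-contained artefact that an edge runtime can load.

```python
import torch
import torch.nn as nn

# A tiny stand-in network; a real deployment would train a full deep
# neural network on data-centre GPUs.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# "Training in the data centre", here on synthetic data.
for _ in range(100):
    x, y = torch.randn(8, 16), torch.randn(8, 4)
    optimiser.zero_grad()
    loss_fn(model(x), y).backward()
    optimiser.step()

# Export a portable artefact; an edge runtime (say, on a Jetson-class
# module) would load this file and run inference locally.
torch.onnx.export(model, torch.randn(1, 16), "edge_model.onnx")
```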

Cisco has developed a collaboration device that uses AI to recognise people in a room and automatically pick a field of view (FOV) that frames the people rather than empty chairs. The FOV is adjusted automatically as people walk in and out or move around, and the system also zooms in on whoever is speaking.
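
The article does not describe Cisco's implementation, but the framing step behind such a device can be sketched simply: given bounding boxes from a person detector, compute the smallest padded rectangle that contains everyone and use it as the crop. The function below is a hypothetical illustration of that step only; detection and speaker tracking are assumed to happen elsewhere.

```python
def framing_fov(person_boxes, frame_w, frame_h, pad=0.1):
    """Pick a crop rectangle (x1, y1, x2, y2) that contains every
    detected person, with a small margin, falling back to the full frame."""
    if not person_boxes:
        return (0, 0, frame_w, frame_h)   # nobody detected: show everything
    x1 = min(b[0] for b in person_boxes)
    y1 = min(b[1] for b in person_boxes)
    x2 = max(b[2] for b in person_boxes)
    y2 = max(b[3] for b in person_boxes)
    dx, dy = pad * (x2 - x1), pad * (y2 - y1)
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(frame_w, x2 + dx), min(frame_h, y2 + dy))

# Two people on the left of a 1920x1080 frame: the crop hugs them
# instead of framing the empty chairs on the right.
print(framing_fov([(100, 300, 300, 700), (350, 320, 550, 720)], 1920, 1080))
```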

Live Planet’s new 360-degree 3D camera for live video streaming uses on-board AI to encode 3D video in real time. Live Planet’s chief strategy officer Khayyam Wakil explains, “The camera produces a stream of 65 gigabytes, which is too much data to transmit to a cloud server. On-board processing has made the live streaming possible.”

Sensing the trend, Intel acquired Movidius in 2016. Movidius produces specialised low-power processor chips for computer vision and deep learning. Its button-sized Myriad 2 platform has many features that support deep learning at the network edge. Myriad’s SHAVE processor engines deliver the hundreds of gigaflops of matrix-multiplication compute that deep-learning networks demand. On-chip RAM keeps large volumes of intermediate data on the chip itself, avoiding bandwidth bottlenecks. The platform also natively supports mixed precision and hardware flexibility: both 16-bit and 32-bit floating-point data types are supported, as well as u8 and unorm8 types.
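
To see why mixed precision matters, the NumPy sketch below contrasts a matrix product kept in 16-bit floats end to end with the same product accumulated at 32-bit precision (the pattern such hardware supports natively), and shows the unorm8 convention of mapping 8-bit integers onto [0, 1]. The sizes and values are arbitrary.

```python
import numpy as np

# Activations and weights stored compactly as 16-bit floats.
A = np.random.rand(64, 128).astype(np.float16)
B = np.random.rand(128, 32).astype(np.float16)

C_half = A @ B                                          # result rounded to fp16
C_mixed = A.astype(np.float32) @ B.astype(np.float32)   # fp32 accumulation

# The gap is the rounding error that the wider accumulator avoids.
print("max abs difference:", np.abs(C_half.astype(np.float32) - C_mixed).max())

# unorm8: an 8-bit integer reinterpreted as a normalised value in [0, 1].
pixels_u8 = np.array([0, 128, 255], dtype=np.uint8)
print(pixels_u8.astype(np.float32) / 255.0)   # [0.0, 0.50196, 1.0]
```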

The company’s literature explains that the existing hardware accelerators can be easily repurposed to provide the flexibility needed for high-performance convolution computation, as the sketch below illustrates. Myriad also comes with a development kit that includes dedicated software libraries for sustained performance on matrix multiplication and multidimensional convolution.
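
The article does not show how a matrix-multiply engine gets repurposed for convolution, but the standard trick is im2col: unroll each input patch into one row of a matrix, so the convolution collapses into a single matrix product. Below is a minimal single-channel sketch of that idea (illustrative only, not Movidius code).

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D input into one row of a matrix."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((oh * ow, kh * kw), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols, (oh, ow)

def conv2d_as_matmul(x, kernel):
    """'Valid' 2-D convolution (cross-correlation, as deep-learning
    libraries define it) expressed as a plain matrix product."""
    cols, (oh, ow) = im2col(x, *kernel.shape)
    return (cols @ kernel.ravel()).reshape(oh, ow)

image = np.random.rand(6, 6).astype(np.float32)
kernel = np.random.rand(3, 3).astype(np.float32)
print(conv2d_as_matmul(image, kernel).shape)   # (4, 4)
```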

Start-up Graphcore proposes to handle deep learning with what it calls an intelligence processing unit (IPU), a graph processor that can manage both training and inference on the same architecture, and eventually across multiple form factors (server and device) too. The chip is expected to be ready for early access by year-end.

According to the company, “This same architecture can be designed to suit both training and inference. In some cases, you can design a piece of hardware that can be used for training, then segment that up or virtualise it in a way to support many different users for inference or even different machine learning model deployments. There will be cases when everything is embedded, for instance, and you need a slightly different implementation, but it’s the same hardware architecture. That’s our thesis—one architecture, the IPU, for training, inference, and different implementations of that machine that can be used in servers, cloud, or at the edge of the network.” That would be the ultimate thing to wish for!

AI seems so real

At one time, the term ‘AI’ was associated only with robots, but now it is everywhere—from security cameras to cars and enterprise applications.


