Artificial Intelligence Getting Real, Local

Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai

Surtrac is an intelligent approach to traffic management, implemented in Pittsburgh, USA

In Pittsburgh, USA, AI is helping solve traffic woes. Speaking at a White House Frontiers Conference, Carnegie Mellon University professor of robotics Stephen Smith said that traffic congestion costs the US economy $121 billion a year, mostly in lost productivity, and produces about 25 billion kilograms of carbon dioxide emissions. The AI-based smart traffic management system piloted in the city has reduced travel time by 25 per cent, idling time by over 40 per cent and emissions by 21 per cent. Unlike conventional traffic lights with pre-programmed timings, the Surtrac system applies AI algorithms to data collected by the radar sensors and cameras of computerised traffic lights to build timing plans dynamically. The system is decentralised: each signal makes its own timing decision, and it also sends its data to intersections downstream so that they can plan ahead.
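Surtrac's actual scheduling algorithms are more sophisticated than this, but the decentralised idea described above, where each signal chooses its own green time from locally sensed queues and then tells the next intersection what to expect, can be sketched roughly as follows (all names and numbers here are illustrative, not Surtrac's real design):

```python
# Rough, illustrative sketch of decentralised signal timing.
# Each intersection picks the phase with the longest sensed queue
# and tells its downstream neighbour what traffic to expect.

class Intersection:
    def __init__(self, name):
        self.name = name
        self.queues = {"north_south": 0, "east_west": 0}  # vehicles per approach
        self.downstream = None       # next Intersection along the corridor
        self.expected_inflow = 0     # advance notice from the upstream signal

    def sense(self, north_south, east_west):
        """Update queue lengths from (hypothetical) radar/camera counts."""
        self.queues["north_south"] = north_south
        self.queues["east_west"] = east_west

    def plan(self):
        """Choose the phase serving the longest queue, give it a green time
        proportional to that queue, and notify downstream so it can plan ahead."""
        phase = max(self.queues, key=self.queues.get)
        green_seconds = min(10 + 2 * self.queues[phase], 60)  # clamp to a maximum
        if self.downstream is not None:
            # Downstream signal learns roughly how many cars are coming.
            self.downstream.expected_inflow = self.queues[phase]
        return phase, green_seconds


a = Intersection("5th & Main")
b = Intersection("6th & Main")
a.downstream = b
a.sense(north_south=12, east_west=3)
phase, green = a.plan()
print(phase, green)           # north_south 34
print(b.expected_inflow)      # 12
```

The key point the sketch captures is that no central controller exists: each signal plans from its own sensors, and coordination emerges from the downstream notifications.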

There are 50 such smart intersections now, with plans for citywide expansion. Following that, Smith's group wants to improve the system so that signals can talk to cars. According to an IEEE news report, the team has already installed short-range radios at 24 intersections, and such radios are expected to be built into some cars this year. Traffic signals could then warn drivers of upcoming traffic conditions or changes in lights, improving safety and relieving congestion. The vehicle-to-infrastructure communication system could also prioritise certain vehicles, such as public transport buses.

AI is helpful on social media too. Facebook, for instance, uses AI to spot and remove offensive content. It is also planning to integrate AI-based suicide prevention tools into Facebook Live and Messenger, in order to recognise and help people with suicidal tendencies.


In April this year, Mark Zuckerberg unveiled a platform that transforms users' smartphone cameras into an engine for augmented reality (AR). The solution relies on implementing AI at the network edge. The platform lets users layer digital effects atop images and videos captured by the camera. One of the fun demos showed digital sharks swimming around a bowl of cereal.

Facebook has more real-world plans for the future. For example, you could pin a virtual note on your fridge, and your roommates would see it when they view the fridge through their cameras.

Bosch-backed robot Kuri (Courtesy: Mayfield Robotics)

Neural networks identify people and track their movements and activities within the camera's field of view (FOV), so that appropriate digital effects can be applied. Facebook's deep neural networks run on the phone itself, because a round trip across the Internet would be too slow for such effects.

While Facebook has optimised its deep learning technology to run on current-day mobile phones, it feels things are bound to get harder as digital effects grow more complex. Still, Facebook expects future hardware enhancements to boost its machine learning models.

CES 2017 was full of AI-powered consumer products. Apart from the expected dominance of AI-enabled smartphones, wearable devices, home appliances and cars, two interesting developments relate to operating systems and home assistants. According to industry experts, more than 40 million homes will have a home assistant by 2021. In November last year, Google launched Google Home for this segment. Amazon's Alexa is too well-known to need an introduction. Samsung has come up with Otto, and Bosch is backing Kuri. Facebook too demonstrated a personal assistant, though there is no information about its availability yet. Apple is also reportedly working on a Siri-powered home assistant.

With so many connected and intelligent devices around us, people are increasingly worried about privacy and data safety. So companies like Google and Norton have come up with solutions to secure your devices. Google is offering Android Things, an operating system that powers smart devices and the Internet of Things (IoT) securely. Norton Core is a mobile-enabled Wi-Fi router that uses machine learning and Symantec's threat intelligence to defend your home network from potential threats.

Lots more on the anvil

Researchers all over the world are still exploring the possibilities of AI. What was fiction a decade ago has become real now, and fiction today is being chiselled into reality at labs across the world—and by start-ups too.

So far, AI has been achieved mainly using complex algorithms running on conventional hardware. Now, imagine a chip that by itself works like a synapse of the human brain: wouldn't it make AI more real and human-like than ever before? Researchers at the joint CNRS/Thales lab have managed to create, directly on a chip, an artificial synapse that is capable of learning. These chips could be used to build intelligent systems comprising networks of synapses, requiring much less time and energy.
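In the reported device, an electrical property of the chip plays the role of a synaptic weight that voltage pulses strengthen or weaken. A software caricature of that behaviour (entirely illustrative, not the device's real physics) might look like this:

```python
# Toy model of a hardware synapse: a conductance value acts as the weight,
# nudged up or down by voltage pulses (all numbers here are made up).

class MemristiveSynapse:
    def __init__(self, conductance=0.5):
        self.g = conductance  # normalised conductance in [0, 1]

    def pulse(self, voltage):
        """A positive pulse potentiates (strengthens) the synapse, a negative
        pulse depresses it; the conductance saturates at 0 and 1."""
        self.g = min(1.0, max(0.0, self.g + 0.1 * voltage))
        return self.g


s = MemristiveSynapse()
for _ in range(3):
    s.pulse(+1.0)         # repeated potentiating pulses
print(round(s.g, 2))      # 0.8
```

The appeal of doing this in hardware, as the researchers note, is that learning happens in the device itself rather than in software, saving time and energy.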

Twenty Two Motors’ smart scooter for Indian roads (Courtesy: Twenty Two Motors)

Another system, developed at Sandia National Laboratories, aims to improve the accuracy with which cybersecurity threats (or bad apples) are detected. The brain-inspired Neuromorphic Cyber Microscope designed at the lab can look for complex patterns that indicate specific bad apples, while consuming less power than a standard 60-watt light bulb. This small processor was found to be more than a hundred times faster and a thousand times more energy-efficient than racks of conventional cybersecurity systems.

Lots of interesting AI research is happening at MIT too. Last month, MIT researchers presented a paper proposing a fast and inexpensive way to achieve speech recognition. Current speech recognition systems require a computer to analyse innumerable audio files and their transcriptions, to understand which acoustic features correspond to which typed words. However, providing these transcripts to the machine learning system is a costly and time-consuming affair. This limits speech recognition to a small number of languages. The new approach proposed by the researchers does not rely on transcripts. Instead, the system analyses the correlation between images and spoken descriptions of those images, as captured in a large collection of audio recordings. It eventually learns which acoustic features of the recordings correlate with which image characteristics. According to the scientists, this is more natural—more like the way humans learn. Plus, it is less expensive and less time-consuming, opening up the possibility of extending speech recognition to a larger number of languages.
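The core idea of the transcript-free approach is to map images and spoken descriptions into a shared vector space and score how well an audio clip matches each image. The tiny sketch below illustrates only the matching step; the hand-made vectors stand in for embeddings that, in the real system, come from trained neural networks:

```python
# Illustrative sketch of matching speech to images without transcripts:
# images and spoken captions live in a shared embedding space, and the
# best-matching image is the one with the highest similarity score.
import numpy as np

# Hand-made toy embeddings (stand-ins for learned neural-network outputs).
image_embeddings = {
    "beach":   np.array([1.0, 0.0, 0.0]),
    "kitchen": np.array([0.0, 1.0, 0.0]),
    "street":  np.array([0.0, 0.0, 1.0]),
}

# Pretend a spoken caption about waves embeds near the "beach" image.
audio_embedding = np.array([0.9, 0.1, 0.2])
audio_embedding = audio_embedding / np.linalg.norm(audio_embedding)

# Dot product measures similarity between the audio clip and each image.
scores = {name: float(vec @ audio_embedding)
          for name, vec in image_embeddings.items()}
print(max(scores, key=scores.get))  # beach
```

During training, the real system adjusts its encoders so that genuine image-caption pairs score higher than mismatched ones, which is how it learns acoustic-visual correlations without any transcripts.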
