Another research project at MIT attempts to make machines predict the future. For humans, it is easy to understand what will happen when a player bowls a ball or when a car skids off the road. However, it is not so easy for machines. A team at MIT is developing a deep-learning algorithm for this. From a still image of a scene, the system can create a short video simulating the future of that scene. The researchers believe that such generative videos can be used to add animation to still images, detect anomalies in security footage, compress data for storing and sending longer videos, and so on.
nuTonomy prefers formal logic to machine learning because the latter is a black box, with no way to find out why the system made a particular decision. This is also one of the reasons why many people in the medical profession shun computerised decisions.
MIT might have the answer in store. They have figured out a way of training neural networks so that they not only provide predictions and classifications but also the rationale behind each decision. In their research, the team divided the neural net into two modules. The first module extracts segments of text from the training data and assigns each segment a score based on its length and coherence. The shorter the segment, and the more of it drawn from strings of consecutive words, the higher the segment score. The segments selected by the first module are then passed on to the second module, which performs the prediction or classification task. The modules are trained together, with the aim of maximising both the score of the extracted segments and the accuracy of prediction or classification. The researchers have successfully validated this technique on many textual data sets such as reviews on a website, free-form questions and answers, and pathology reports.
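The scoring rule described above—shorter segments and longer runs of consecutive words score higher—can be captured in a few lines. The sketch below is only an illustration of that idea; the function name and the penalty weights are invented here, and in the actual research the scorer is a neural module trained jointly with the classifier rather than a fixed formula.

```python
def segment_score(token_positions, alpha=1.0, beta=1.0):
    """Score a candidate rationale segment, given the positions of its
    selected words in the original text. Shorter segments score higher
    (alpha penalises length), and segments drawn from consecutive words
    score higher (beta penalises each break in contiguity)."""
    length = len(token_positions)
    # Count breaks: places where the next selected word is not adjacent.
    breaks = sum(1 for a, b in zip(token_positions, token_positions[1:])
                 if b != a + 1)
    return -alpha * length - beta * breaks

# A short, contiguous segment (words 3-5) beats a scattered one.
contiguous = segment_score([3, 4, 5])      # -3.0
scattered = segment_score([3, 7, 12])      # -5.0
```

In the published system this preference is not hand-coded but emerges from the joint training objective, which trades segment brevity and coherence against the downstream prediction accuracy.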
As confidence in AI increases, we find it being used in critical applications too—aviation being an example. A team of researchers at University College London is developing a new AI-based Intelligent Autopilot System that learns how to manage emergencies by watching how well-trained pilots do so, and then applying this learning to similar situations. The system is trained much like a professional pilot. According to a press release, the team uses a high-fidelity, professional version of the desktop flight simulator X-Plane to teach the autopilot to fly a Boeing 777, subjecting it to severe weather conditions, engine failures, fires and emergency landings or turnarounds.
Closer home, AI-powered scooters may hit the road by 2018. Haryana-based start-up Twenty Two Motors has developed a prototype smart scooter that is powered by AI and connected to the cloud. An app allows the user to remotely control and access the scooter, while the cloud system helps in automatic troubleshooting. The scooter's AI analyses data collected by its on-board sensors to understand the rider's behaviour. The system also enables decisions like the best route to take depending on battery conditions, target location and topography of the route (presence of bridges, flyovers, etc). The start-up raised ₹100 million in April this year and plans to launch the scooter at next year's Auto Expo. Not a self-driving scooter but definitely smart enough to begin with!
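To see what a route decision based on battery condition and topography might look like, here is a minimal sketch. Everything in it is hypothetical—the energy model, the consumption constants and the function names are illustrative assumptions, not Twenty Two Motors' actual implementation.

```python
def route_cost(distance_km, climb_m, battery_pct,
               wh_per_km=15.0, wh_per_climb_m=0.05, capacity_wh=1000.0):
    """Estimate the energy a route needs (assumed linear model: a cost
    per kilometre plus a cost per metre of climb, e.g. over a flyover)
    and check it against the charge remaining in the battery."""
    energy_wh = distance_km * wh_per_km + climb_m * wh_per_climb_m
    available_wh = capacity_wh * battery_pct / 100.0
    return energy_wh <= available_wh, energy_wh

def best_route(routes, battery_pct):
    """Pick the lowest-energy feasible route.
    routes is a list of (name, distance_km, climb_m) tuples;
    returns None if no route fits the remaining charge."""
    feasible = []
    for name, distance_km, climb_m in routes:
        ok, cost = route_cost(distance_km, climb_m, battery_pct)
        if ok:
            feasible.append((cost, name))
    return min(feasible)[1] if feasible else None

# A shorter route over a flyover vs a longer flat detour, at 50% charge.
choice = best_route([("flyover route", 5.0, 40.0),
                     ("flat detour", 7.0, 5.0)], battery_pct=50)
```

A real system would of course learn these trade-offs from ride data rather than use fixed constants, but the shape of the decision—score candidate routes against remaining charge and terrain—is the same.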
There is a lot of software-based AI innovation happening too, like the one by Mumbai-based Arya.ai, which offers deep learning algorithms for developers to build intelligent systems that can learn, adapt and do things with minimal inputs from humans. The DL Studio platform can be used to incorporate intelligence in e-commerce platforms, diagnostic assistants, image processors for drones, security, device management and maintenance, and more. The platform's deep neural networks are also scalable.
Another interesting trend is the availability of different AI capabilities as services that can be quickly deployed in applications. Clarifai’s powerful visual recognition application programming interface (API) is one example. It uses machine learning to automatically tag, organise and search visual content. Similarly, Datalog.ai offers conversational intelligence as a service for virtual assistants, bots, devices and corporate applications. For developers, it is as easy as plug-and-play. No complex infrastructure or development is needed to put AI to work.
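The plug-and-play pattern usually amounts to a single API call: post an image or a piece of text to a hosted endpoint and get structured results back. The sketch below shows the general shape of consuming such a visual-tagging service; the endpoint URL, payload fields and response format are all hypothetical (not Clarifai's or Datalog.ai's actual API), and the network call is stubbed out so the example is self-contained.

```python
def fake_transport(url, payload):
    # Stand-in for an HTTP POST to a hosted recognition service;
    # a real client would make a network request here and return
    # the parsed JSON body. The response shape is invented.
    return {"tags": [{"name": "scooter", "confidence": 0.97},
                     {"name": "road", "confidence": 0.91},
                     {"name": "tree", "confidence": 0.42}]}

def tag_image(image_url, transport=fake_transport,
              endpoint="https://api.example.com/v1/tag"):
    """Send an image URL to a (hypothetical) tagging service and
    keep only the high-confidence labels."""
    response = transport(endpoint, {"image_url": image_url})
    return [t["name"] for t in response["tags"] if t["confidence"] > 0.9]

labels = tag_image("https://example.com/street.jpg")
```

The appeal for developers is exactly this thinness: the model, the training data and the serving infrastructure all live behind the endpoint, so adding recognition to an application is a few lines of client code.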
AI is indeed at an inflection point. What has made it so hot today? While some would credit the availability of powerful computers or advances in statistical machine learning and deep learning techniques, others say that AI has attained this level of focus and investment mainly because of the sheer amount of data that the IoT is churning up. With sensors all around us, networks are awash with data. Somebody sitting on all that data obviously wants to make sense of it. So, there is a greater demand for AI, and demand always drives supply—that is the underlying principle of commerce.
The industry is bustling to meet this demand and the air is rife with partnerships and acquisitions. Still, there is one big challenge in the way of deploying AI in real-world scenarios: you need to win the trust of people before they accept AI as a way of life, and there is a lot of doubt about security and privacy. The industry is coming together to remove this bottleneck. Last year, Amazon, DeepMind/Google, Facebook, IBM and Microsoft announced the formation of a non-profit organisation called Partnership on AI to improve public understanding of AI technologies and formulate best practices for the development and deployment of AI. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems is also an effort towards aligning the development of AI and autonomous systems with the values of its users and society.
As long as such basic ethical requirements are met and we are assured that intelligent devices will not overthrow us, artificial intelligence is definitely hard to resist!