CADDY’s success depends on the reliability of communication between the diver and the robot. However, the underwater environment poses constraints that limit the effectiveness of traditional HMIs. This has inspired many research teams to explore gesture-based ways of communicating with CADDY underwater.
Researchers from the National Research Council – Institute of Studies on Intelligent Systems for Automation, Genova, Italy, propose to overcome this problem by developing a communication language based on consolidated, standardised diver gestures commonly employed during professional and recreational dives. This has led to the definition of a CADDY language called CADDIAN and an accompanying communication protocol. They have published a research paper describing the alphabet, syntax and semantics of the language, and are now working on the practical aspects of gesture recognition. Read all about the proposed solution at www.tinyurl.com/j8td6km
The Jacobs Robotics Group of Jacobs University in Bremen, Germany, is another group working within CADDY, developing solutions for 3D underwater perception to recognise a diver’s gestures. Their self-contained stereo camera system, with its own processing unit to recognise and interpret gestures, was recently tested on Artu, a robot developed by Italian CADDY partner CNR.
Artu is a remotely-operated vehicle that used to be steered via a cable from a ship or from the shore. When Jacobs Robotics Group’s gesture-recognition system was mounted on the underwater robot and integrated into its control system, divers could directly control the robot underwater using sign language.
Gestures are vaguer than 0s and 1s
As useful as gestures are, they are not easy to implement in an HMI. In an ideal system that requires no wearables or controlled environments, and also does away with touchscreens and other supporting devices, the chances of capturing unambiguous input are slim. Developing even a simple system of this kind could demand immense effort, not just to build the hardware but also to define a set of gestures suited to the user group. When a dictionary of accepted gestures is not defined, communication can become so vague that only another human could understand it.
One example of such efforts comes from a group at IIT Guwahati, Assam, which aimed to standardise seven gestures to represent computational commands like Select, Pause, Resume, Help, Activate Menu, Next and Previous. The gestures were to be used by pregnant women watching a televised maternal health information programme in rural Assam. Participants belonged to the low socio-economic strata and most had poor literacy levels.
A participative study yielded 49 different gestures that participants performed to represent the seven computational functions. From these, the team selected seven body gestures based on frequency of use, logical suitability for representing each function, low likelihood of the gesture being performed accidentally (false positives) and ease of detection with the chosen technology (technical limitations).
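The selection process described above can be sketched as a simple weighted scoring exercise. The following Python snippet is purely illustrative: the gesture names, scores and weights are invented, not taken from the IIT Guwahati study, which does not publish a numerical formula.

```python
# Illustrative sketch: shortlist candidate gestures per function by
# scoring them on the four criteria named in the study. All names,
# numbers and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    gesture: str          # name of the body gesture
    function: str         # computational command it represents
    frequency: float      # how often participants performed it (0-1)
    suitability: float    # logical fit to the function (0-1)
    fp_risk: float        # chance of accidental performance (0-1, lower is better)
    detectability: float  # ease of detection for the chosen sensor (0-1)

def score(c: Candidate) -> float:
    """Weighted sum of the four criteria; false-positive risk counts against."""
    return (0.3 * c.frequency + 0.3 * c.suitability
            + 0.2 * (1.0 - c.fp_risk) + 0.2 * c.detectability)

def shortlist(candidates):
    """Keep the highest-scoring candidate gesture for each function."""
    best = {}
    for c in candidates:
        if c.function not in best or score(c) > score(best[c.function]):
            best[c.function] = c
    return best

candidates = [
    Candidate("raise right hand", "Pause", 0.8, 0.7, 0.2, 0.9),
    Candidate("cross arms",       "Pause", 0.4, 0.8, 0.1, 0.6),
    Candidate("point forward",    "Next",  0.9, 0.9, 0.3, 0.8),
]

chosen = shortlist(candidates)
print({f: c.gesture for f, c in chosen.items()})
# → {'Pause': 'raise right hand', 'Next': 'point forward'}
```

In practice such a shortlist would still be validated with users, since a gesture that scores well on paper may prove awkward or culturally inappropriate in the field.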
If such a simple application requires this much research to arrive at the right gestures, imagine how much more complex the situation gets when a large and varied user base is expected, when special audiences such as disabled people or young children are involved, or when the environment is as difficult as a mine or the ocean.
Indeed, gesture-based computing is a difficult dream to achieve, but it could take computing to many people who really need it. In a book totally unrelated to computers, American author Libba Bray wrote an oft-quoted line, “That is how change happens. One gesture. One person. One moment at a time.”
This holds true for intuitive gesture interfaces, too. One gesture could save a man from drowning. One gesture could help a paralysed person live independently. One gesture could help a child learn about planets. One gesture could forge a business deal. One gesture could change the world!
Janani Gopalakrishnan Vikram is a technically-qualified freelance writer, editor and hands-on mom based in Chennai