We hear a lot about real-life versions of Rosie, the autonomous robot maid from The Jetsons, the popular animated sitcom in which humans live alongside robots in the future. Many engineers have been hard at work developing products that could be considered ancestors of a future Rosie. This interview looks at the biggest development challenges these designers face today.
Rajeev Karwal, founder and director, Milagrow Humantech, speaks with Dilin Anand of EFY.
Q. What in your experience has been the most challenging area when it comes to designing autonomous robots?
A. By far the most challenging area to design around for a floor-cleaning robot is that while we can design for a generalised area (say, 1000 square feet) and for targeting dust, we cannot predict the customer’s home layout within that 1000 square feet. This leaves us with an enormous number of possible permutations and combinations that we need to design around.
Q. What technology can be used to get around this particular challenge?
A. In our case, we have built a Z-programming algorithm that uses orthogonal movement, combined with a few more subroutines (dependent on data processing from multiple sensors), to ensure that the robot not only covers all areas of the room and works around each obstacle it detects, but in some cases also goes into edge-to-edge mode if certain conditions in the algorithm are met.
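The orthogonal Z-pattern movement described here can be sketched as a boustrophedon sweep over a grid map: cover one strip, step over, and reverse direction. This is only a minimal illustration of the idea, not Milagrow's actual algorithm; the grid representation and function name are assumptions.

```python
# Minimal sketch of Z-pattern (boustrophedon) coverage on a grid map.
# This is an illustrative assumption, not Milagrow's real implementation.

def z_pattern_coverage(grid):
    """Visit free cells row by row, alternating direction each row.

    grid: list of lists, 0 = free cell, 1 = obstacle.
    Returns the sequence of (row, col) cells the robot would clean.
    """
    path = []
    for row_idx, row in enumerate(grid):
        cols = range(len(row))
        if row_idx % 2 == 1:              # reverse direction on odd rows
            cols = reversed(cols)
        for col in cols:
            if grid[row_idx][col] == 0:   # skip detected obstacles
                path.append((row_idx, col))
    return path

room = [
    [0, 0, 0],
    [0, 1, 0],   # one obstacle in the middle of the room
    [0, 0, 0],
]
print(z_pattern_coverage(room))
```

In a real robot the "grid" is built up incrementally from sensor data, and obstacle cells are marked as the bumper and infrared sensors fire; the sweep logic itself stays the same.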
Q. Could you share how algorithms like these for autonomous systems are typically solved?
A. Theoretical calculations only take you so far, since every mathematical model merely approximates the real world. In practice, the engineering team has to go through a lot of iteration and empirical testing. The best way to improve that part of the algorithm was to test the robot in the real world: create different obstacle courses, see how the algorithm behaves in each scenario, document the places where we felt the robot could have performed better, and tweak the code.
Q. Could you share insights as to how these robots have been improved from a design perspective?
A. We have upgraded the tyre treads to tackle harsher surfaces such as thicker carpets, wires and cords. In addition, many models have a switch that detects when the suspension bottoms out. This switch, in combination with the tyre treads and the wheel encoders, enables us to write what we internally refer to as ‘escape subroutines’, which allow the robot to bring itself out of situations that would normally cause it to get stuck. The maximum force the device experiences is in the case of a collision or a fall.
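One way the suspension switch and wheel encoders could feed an escape subroutine is to flag the robot as stuck when either the suspension bottoms out or the encoders report far less motion than commanded (wheel slip on a cord or thick carpet). The function names, thresholds and manoeuvre below are illustrative assumptions, not the company's actual code.

```python
# Hedged sketch of an "escape subroutine" trigger: compare commanded
# wheel speed against encoder feedback and check the suspension switch.
# Thresholds and sensor names are illustrative assumptions.

def is_stuck(commanded_speed, encoder_speed, suspension_bottomed,
             slip_threshold=0.5):
    """Stuck if the suspension switch fired, or the wheels turn far
    slower than commanded (slipping on a cord or carpet)."""
    if suspension_bottomed:
        return True
    if commanded_speed > 0 and encoder_speed / commanded_speed < slip_threshold:
        return True
    return False

def escape_maneuver():
    """Return an illustrative command sequence to free the robot."""
    return ["reverse 0.1m", "rotate 45deg", "resume coverage"]

# The robot commands 0.28 m/s but the encoders report only 0.05 m/s:
if is_stuck(0.28, 0.05, suspension_bottomed=False):
    print(escape_maneuver())
```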
Q. Talking about collisions, how do you prevent them?
A. To avoid collisions, we equip the robot with two different sets of sensors. One is an infrared sensor that detects objects at a certain distance. The other is the bumper sensor: if an object is out of the infrared sensor’s range (for example, if its height is below where the sensors are placed on the machine), the bumper sensor’s actuation triggers the obstacle-detection subroutines in a similar manner. Internally, the bumper sensor is a series of switches along the length of the robot’s front. This is done so that the robot knows ‘where’ it has been bumped and can move out of the situation accordingly.
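The idea of using a row of switches to localise a bump can be sketched as follows: average the positions of the pressed switches and turn away from that side. The switch layout and the specific responses are assumptions for illustration only.

```python
# Sketch of bumper-based obstacle localisation: a row of switches along
# the front tells the robot *where* it was bumped, so it can turn away.
# Switch layout and response strings are illustrative assumptions.

def bump_response(switch_states):
    """switch_states: booleans, left-to-right across the front bumper.
    Returns which way to turn based on where the bump occurred."""
    n = len(switch_states)
    pressed = [i for i, hit in enumerate(switch_states) if hit]
    if not pressed:
        return "continue"
    centre = sum(pressed) / len(pressed)
    if centre < (n - 1) / 2:
        return "turn right"     # bumped on the left side
    if centre > (n - 1) / 2:
        return "turn left"      # bumped on the right side
    return "reverse"            # head-on collision

print(bump_response([True, False, False, False]))   # left-side bump
print(bump_response([False, False, False, True]))   # right-side bump
```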
Q. I am guessing there is very little you can do to tackle outright falls?
A. Falling, while a far more serious scenario from a structural and product-integrity point of view, is actually more straightforward to tackle. Optical ‘fall sensors’ at the bottom of the device continuously send signals to the robot to let it know that it is on a surface. The locations of these sensors are based on the robot’s maximum velocity (0.28 metres/second for most models) and how long it takes to stop. Once the robot detects a ‘0’ (floor not detected) from the front sensors, it stops, backs out and continues with its Z-programming, treating that area as an obstacle. This ensures that the robot can both avoid stairs and continue with its orthogonal movement without one affecting the other.
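The sensor-placement reasoning above amounts to a stopping-distance calculation: the fall sensor must look at the floor far enough ahead of the wheels that the robot can halt before reaching the drop. The 0.28 m/s top speed is from the interview; the reaction time and deceleration figures below are assumptions for illustration.

```python
# Hedged sketch of the cliff-sensor placement reasoning: the sensor must
# sit at least one stopping distance ahead of the wheels. Only the
# 0.28 m/s maximum speed comes from the interview; the reaction time
# and deceleration values are assumed.

def min_sensor_offset(v_max=0.28, reaction_time=0.05, decel=1.4):
    """Distance (m) travelled between detecting a cliff and stopping:
    reaction distance plus braking distance v^2 / (2a)."""
    return v_max * reaction_time + v_max ** 2 / (2 * decel)

offset = min_sensor_offset()
print(f"Place fall sensors at least {offset * 100:.1f} cm ahead of the wheels")
```

With these assumed figures the offset works out to roughly 4 cm, which is consistent with cliff sensors sitting near the leading edge of a small robot's chassis.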
Q. How does the dirt-detection feature work? What sensors are used here, and how does the algorithm figure it out?
A. We have different ways of solving the dirt-detection problem. In some models, it is solved by two piezoelectric plates on the inside of the robot that come into contact with the main brush. When dust or other unwanted particles (such as pet hair) strike these sensors, a specific subroutine is triggered that causes the robot to slightly reduce its velocity and slightly increase suction power to tackle the area. The alternative is an optical sensor in the inlet, before the air enters the dust filter, with a transmitter and receiver on either side of the opening. If dirt or dust passes through, the optical signal between transmitter and receiver is interrupted, triggering similar behaviour.
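Either detector (the piezo impact plates or the optical beam in the inlet) ultimately drives the same response: slow down slightly and boost suction over the dirty patch. A minimal sketch of that control step, with adjustment factors that are purely assumed, might look like this:

```python
# Sketch of the dirt-triggered behaviour described above: when either
# detector fires, slow slightly and increase suction. The parameter
# names and the 0.8/1.2 adjustment factors are illustrative assumptions.

def adjust_for_dirt(speed, suction, piezo_hit, optical_beam_blocked):
    """Return (speed, suction) after applying the dirt response."""
    if piezo_hit or optical_beam_blocked:
        return speed * 0.8, suction * 1.2   # slow down, suck harder
    return speed, suction

# Piezo plate detects pet hair hitting the brush at full speed:
print(adjust_for_dirt(0.28, 1.0, piezo_hit=True, optical_beam_blocked=False))
```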