Human communication is an extremely complex process in which, besides words, many other factors come into play, such as tone, non-verbal language, or even pheromones. Likewise, the human brain can predict the movements of another individual by assessing context and past experiences. For instance, if we are driving a car and reach a pedestrian crossing where someone is waiting, we can anticipate their movement and the time they will need to cross the road. We may even unconsciously predict non-linear movement, i.e., that they might speed up or slow down. And, if it is a juggler, that they will stop in the middle hoping to earn a tip. The problem with robots is that they are not that good at assessing contexts and anticipating events. That is what the engineers at BMW noticed in their simulations of interactions between robots and humans.
The initial experiment of this technology project took place in 2018, when, jointly with MIT researchers, BMW created a replica of a factory in which robots had to travel along a rail carrying equipment while workers crossed from one side to the other. When the robots detected human movement, they stood completely still until the operator had moved across. The software could not predict how long the crossing would take or when the person would walk back. Hence, the team decided to develop a new robotics algorithm to shorten those idle moments, which, in a large-scale manufacturing environment, can amount to a substantial loss in productivity. The outcome is a new algorithm able to predict the partial trajectory of a human being in real time, which allows the robot to gauge the time actually available so it can move without posing any risk.
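The idea of gauging the available time can be pictured with a minimal sketch. Everything here is illustrative: the function name, the safety margin, and the fixed timings are assumptions for the example, not part of the actual BMW/MIT system.

```python
# Hypothetical sketch: a robot decides whether its task fits inside the
# window predicted for a human's absence. Names and numbers are
# illustrative assumptions, not the real system's parameters.

def can_robot_proceed(predicted_absence_s, robot_task_s, safety_margin_s=2.0):
    """Return True if the robot's task, plus a safety margin,
    fits within the time the person is predicted to be away."""
    return robot_task_s + safety_margin_s <= predicted_absence_s

# Person predicted to be away for 10 s; robot needs 5 s vs 9 s:
print(can_robot_proceed(10.0, 5.0))  # True: enough time to move safely
print(can_robot_proceed(10.0, 9.0))  # False: too tight, better to wait
```

The point is that a time estimate, rather than a simple "human detected" flag, lets the robot keep working instead of freezing until the path is clear.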
A matter of time
The new artificial intelligence system takes several factors into account. Besides the traversal distance, it can also assess timings. For instance, if someone has just started moving, it will probably take them longer to return to their initial position: first they need to arrive at their destination, carry out whatever they have planned, and then return. Likewise, if they have just crossed in one direction, an immediate return is less probable. Instead of assuming a linear movement pattern, the software draws on a database of thousands of different movements, which allows it to learn much as humans do. According to Julie Shah, associate professor of aeronautics and astronautics at MIT, this is one of their multiple approaches to improving robots' understanding of human behavior. These techniques can also be extrapolated from robotics to many other everyday circumstances, such as road traffic, where the unexpected is the norm.
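The database idea above can be sketched as a simple nearest-neighbor lookup: compare the observed partial trajectory, sample by sample in time, against recorded crossings, and use the closest match to estimate how the motion will continue. This is a heavily simplified assumption-laden illustration, not the actual MIT algorithm.

```python
# Hypothetical sketch of time-based trajectory matching. The database,
# distance measure, and alignment are simplified illustrations.

def partial_distance(observed, reference):
    """Sum of squared differences over the observed prefix,
    comparing samples taken at the same time steps."""
    n = min(len(observed), len(reference))
    return sum((observed[i] - reference[i]) ** 2 for i in range(n))

def predict_remaining(observed, database):
    """Pick the recorded trajectory whose prefix best matches the
    observed motion, and return its remaining (future) samples."""
    best = min(database, key=lambda traj: partial_distance(observed, traj))
    return best[len(observed):]

# Toy database: positions (metres along the crossing) sampled each step.
database = [
    [0.0, 0.5, 1.0, 1.5, 2.0, 2.8],   # steady walk
    [0.0, 0.3, 0.5, 1.2, 2.0, 2.8],   # hesitates at first, then speeds up
]

observed = [0.0, 0.3, 0.5]            # person paused near the start
print(predict_remaining(observed, database))  # → [1.2, 2.0, 2.8]
```

Matching on time-aligned samples, rather than distance covered, is what lets such a scheme capture non-linear behavior like hesitating and then speeding up.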