Computer scientists from the University of Bonn have developed software that can look a few minutes into the future: the program first learns the typical sequence of actions, such as cooking, from video sequences. Based on this knowledge, it can then accurately predict in new situations what the chef will do at which point in time. The researchers will present their findings at the world's largest Conference on Computer Vision and Pattern Recognition, which will be held June 19-21 in Salt Lake City, USA.

The perfect butler, as every fan of British social drama knows, has a special ability: he senses his employer's wishes before they have even been uttered. The working group of Prof. Dr. Jürgen Gall wants to teach computers something similar: "We want to predict the timing and duration of activities – minutes or even hours before they happen", he explains.

A kitchen robot, for example, could then pass the ingredients as soon as they are needed, pre-heat the oven in time – and in the meantime warn the chef if he is about to forget a preparation step. The automatic vacuum cleaner, meanwhile, knows that it has no business in the kitchen at that time and takes care of the living room instead.

We humans are very good at anticipating the actions of others. For computers, however, this discipline is still in its infancy. The researchers at the Institute of Computer Science at the University of Bonn can now report a first success: they have developed self-learning software that can estimate the timing and duration of future activities with astonishing accuracy over periods of several minutes.

Training data: four hours of salad videos

The training data used by the scientists comprised 40 videos in which performers prepare different salads. Each recording was around six minutes long and contained an average of 20 different actions. The videos also contained precise annotations of when each action started and how long it lasted.

The computer "watched" these salad videos, totaling around four hours. In this way, the algorithm learned which actions typically follow each other during this task and how long they last. This is by no means trivial: after all, every chef has his own approach, and the sequence may also vary depending on the recipe.

"Then we tested how successful the learning process was", explains Gall. "For this we confronted the software with videos that it had not seen before." The new clips did at least fit the same context: they also showed the preparation of a salad. For the test, the computer was told what is shown in the first 20 or 30 percent of one of the new videos. On this basis it then had to predict what would happen during the rest of the film.
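To make that evaluation setup concrete, here is a minimal Python sketch of how one annotated video might be split into the observed prefix and the future part the software has to predict. This is not the authors' code; the Segment class, the action names, and the timings are invented for illustration, and only the "observe the first 20-30 percent, predict the rest" idea comes from the article.

```python
# Hypothetical sketch: annotated action segments of one cooking video,
# split at a fraction of the video length into "observed" and "to predict".
from dataclasses import dataclass

@dataclass
class Segment:
    action: str      # e.g. "cut tomato" (example label, not from the dataset)
    start: float     # start time in seconds
    duration: float  # length in seconds

def split_observed_future(segments, video_length, observed_fraction=0.3):
    """Return (observed, future) segments, split at observed_fraction of the video."""
    cutoff = observed_fraction * video_length
    observed = [s for s in segments if s.start < cutoff]
    future = [s for s in segments if s.start >= cutoff]
    return observed, future

# Example: a six-minute (360 s) salad video with a few invented annotations.
video = [
    Segment("take cucumber", 0.0, 10.0),
    Segment("cut cucumber", 10.0, 45.0),
    Segment("take tomato", 55.0, 8.0),
    Segment("cut tomato", 63.0, 40.0),
    Segment("add dressing", 300.0, 30.0),
]
observed, future = split_observed_future(video, video_length=360.0, observed_fraction=0.2)
print([s.action for s in observed])  # what the software is shown
print([s.action for s in future])    # what it must anticipate, including durations
```

In this sketch the model would receive only the observed segments and would have to output the remaining actions together with their estimated start times and durations.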
