Robot Hand Driven by Computer Vision and Reinforcement Learning
Replicating human behavior without direct human participation is a highly relevant task with applications across many areas of human activity. Today we can teach robots to work in environments that are hazardous to human health, to perform jobs that exceed human physical capabilities, and so on. Despite significant progress in reinforcement learning methodology and computer vision techniques, teaching an AI agent remains a challenging task.
The reward system plays a crucial role in reinforcement learning, and it must be designed with precision. Within the “Apple picker” project we investigated three of the most effective reinforcement learning approaches and found interesting facts and particularities in applying computer vision. These must be taken into account because it is very difficult to satisfy high-accuracy requirements when implementing computer vision.
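To illustrate why reward design must be precise, here is a minimal sketch of a shaped reward for a hypothetical apple-picking gripper. The function name, term weights, and the dense-plus-sparse structure are our own assumptions for illustration, not the actual reward used in the project:

```python
import numpy as np

def picking_reward(gripper_pos, apple_pos, grasped, goal_bonus=10.0):
    """Shaped reward for a hypothetical apple-picking task.

    A dense negative-distance term guides the gripper toward the apple,
    while a sparse bonus rewards a successful grasp. If the terms are
    poorly balanced (e.g. the bonus is too small relative to the shaping
    term), the agent may learn to hover near the apple without ever
    grasping it -- which is why the reward must be designed precisely.
    """
    dist = np.linalg.norm(np.asarray(gripper_pos) - np.asarray(apple_pos))
    reward = -dist            # dense shaping: closer is better
    if grasped:
        reward += goal_bonus  # sparse bonus for task success
    return reward

# Gripper 0.5 m from the apple, not yet grasped:
print(picking_reward([0.0, 0.0, 0.0], [0.3, 0.4, 0.0], False))  # -0.5
```

Tuning `goal_bonus` against the magnitude of the shaping term is exactly the kind of precision the reward system demands.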
In multi-goal reinforcement learning tasks, the role of computer vision becomes especially important. Teaching a robot to act on video input is difficult because we need to map many different robot states to possible actions. At Accenture, we carried out many practical tests, including tests with a physical robot.
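A common way to handle the multi-goal setting is to condition the policy on the current goal, so a single network can serve many targets (e.g. different apple positions). The sketch below shows this idea under assumed shapes; `image_features` stands in for the output of a vision encoder, and all names and dimensions are illustrative, not the project's actual architecture:

```python
import numpy as np

def goal_conditioned_obs(image_features, robot_state, goal):
    """Build a goal-conditioned observation vector.

    In multi-goal RL the policy input combines what the robot sees,
    its own proprioceptive state, and the current goal, so the same
    state can be mapped to different actions depending on the target.
    """
    return np.concatenate([
        np.ravel(image_features),  # visual encoding of the scene
        np.ravel(robot_state),     # joint angles, gripper state, ...
        np.ravel(goal),            # target position, e.g. an apple
    ])

# Illustrative shapes: 16-d visual features, 7-d joint state, 3-d goal
obs = goal_conditioned_obs(np.zeros(16), np.zeros(7), np.array([0.3, 0.4, 0.0]))
print(obs.shape)  # (26,)
```

Conditioning on the goal is what lets one trained policy generalize across targets instead of learning a separate mapping for each one.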