My Project: Object Modelling
The ability to recognize objects and to localize them precisely is essential in all service robotics applications. One of the main challenges for service robots during operation lies in handling the unavoidable uncertainties that originate from model and sensor inaccuracies and that are characteristic of realistic application scenarios.
ODUfinder is a perception system for autonomous service robots acting in human living environments. It enables robots to detect and recognize large sets of textured objects of daily use. We need a robot object detection and recognition system that can recognize thousands of objects by learning and querying a vocabulary tree of SIFT descriptors.
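To make this idea concrete, the sketch below shows how SIFT descriptors can be quantized into a visual vocabulary and how object models can then be compared as word histograms. It is only a minimal illustration, assuming OpenCV's Python bindings (cv2.SIFT_create, cv2.BOWKMeansTrainer, cv2.BOWImgDescriptorExtractor); it uses a flat k-means vocabulary rather than the hierarchical vocabulary tree used by ODUfinder, and the function names are illustrative, not part of the ODUfinder API.

# Hypothetical sketch: SIFT descriptors are extracted from training images,
# clustered into a visual vocabulary, and each object model is stored as a
# bag-of-words histogram over that vocabulary. Recognition then reduces to
# comparing histograms. This is a flat-vocabulary stand-in for the vocabulary
# tree described in the text, not the actual ODUfinder implementation.
import cv2
import numpy as np

sift = cv2.SIFT_create()

def build_vocabulary(training_images, vocabulary_size=1000):
    # Cluster SIFT descriptors from all training images into visual words.
    trainer = cv2.BOWKMeansTrainer(vocabulary_size)
    for img in training_images:
        _, descriptors = sift.detectAndCompute(img, None)
        if descriptors is not None:
            trainer.add(descriptors)
    return trainer.cluster()  # (vocabulary_size x 128) array of cluster centres

def make_describer(vocabulary):
    # Return a function that maps an image to its visual-word histogram.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    bow = cv2.BOWImgDescriptorExtractor(sift, matcher)
    bow.setVocabulary(vocabulary)
    def describe(img):
        keypoints = sift.detect(img, None)
        return bow.compute(img, keypoints)  # 1 x vocabulary_size histogram
    return describe

def most_similar(query_hist, model_hists):
    # Cosine similarity between the query histogram and each stored model.
    q = query_hist.ravel()
    sims = [float(np.dot(q, h.ravel()) /
                  (np.linalg.norm(q) * np.linalg.norm(h) + 1e-9))
            for h in model_hists]
    return int(np.argmax(sims))

In practice a hierarchical vocabulary (a tree of cluster centres) replaces the flat clustering above, so that quantizing a descriptor costs only a few comparisons per tree level and the vocabulary can scale to thousands of objects.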
The use of robots to aid humans is becoming more and more widespread, typically in industry, but increasingly also in public services and, in some cases, home applications. Robots are becoming more capable and flexible and are thus able to do almost anything that humans can do in a variety of different environments. The downside is that virtually all of these complex actions have to be preprogrammed, as robots' ability to recognize complex patterns and react to unforeseen events is fairly limited. The only way to make a robot truly autonomous is to enable it to learn "on the fly" and possibly from its own failures and experiences. To enable this skill, robots have to be equipped with robust perception systems that can detect and recognize objects from a priori learnt models and also acquire unknown models, all online. In this thesis we will primarily investigate how robots can acquire new recognition models for textured objects through an in-hand modeling center. In the second part we will use the acquired models in the ODUfinder system in order to perform object detection and recognition.
The problem that we are generally facing can be formulated as a perception versus sensing paradigm. Robots can "see" but have a hard time understanding what they are looking at. Using a camera, a robot may be able to capture an image made up of millions of pixels, but without significant programming it would not know what any of those pixels represent. Furthermore, the data might be corrupted by noise and distortions. Such corruptions stem from variations in the world (weather, lighting, reflections, and movement) or electrical noise in the sensor.