First of all, I have to apologize for the lack of updates. Starting next week, I will try to update the blog every Monday or Tuesday.
Secondly, I should better explain the goal of my thesis. Below is the thesis proposal:
Nowadays the level of automation is constantly increasing in industrial environments. We can easily find completely automated production lines, but for now the interaction between machines and human operators is limited to a small set of tasks. One possible way to increase the efficiency of a given plant is to use intelligent robots, instead of human resources, for the transportation of objects between different places in the same industrial complex. Traditional AGVs (Automatically Guided Vehicles) are now commonly used for these tasks, and most of them follow strict paths marked by “wires” or special lines placed on the floor. There are also solutions based on lasers and special reflectors that allow triangulation of the robot in the plant. Nonetheless, the “floor-based” solutions have specificities that limit their usage, and laser/reflector solutions, besides being costly, require a somewhat elaborate setup procedure each time the layout changes. These restrictions open the way for vision-based solutions, especially if they can be made easier to configure and, at the same time, more cost-effective.
CONTEXT OF THE PROBLEM
In real contexts it is not always possible to adapt the environment to be fully robot-friendly (for example by designing dedicated paths inside the complex), but we can have some a priori information, such as maps of the environment. Also, as some companies have noticed, magnetic stripes on the floor, although a cheap solution, are not advisable in some environments, since other vehicles and transporters degrade them over time. So, to keep production costs confined, using vision, with one or more cameras, becomes appealing, despite the expected higher complexity of the algorithms. The robot must also be able to avoid collisions and to correctly perform path planning. Lastly, the solution must be cheap and reliable.
The idea is to use simple passive markers (like datamatrix codes) placed in the environment as beacons to estimate the robot position on the given map by means of triangulation, with predictive filters to improve the estimates. The markers must be easy to place and to detect in images. The approach consists of the following main steps:
1. Conceive and/or adapt appropriate markers to place in the environment;
2. Create a user interface that, based on the map of the plant, allows the creation of the markers (simple sheets of paper with special marks) and their subsequent placement in the field;
3. Develop algorithms to perform robot localization based on the visual information extracted from the cameras (at least two), together with estimation techniques for enhanced robustness (such as Kalman filters). The estimation can also take advantage of an inertial sensor, especially when the quality of the visual information is poor;
4. Define dedicated source and target points in the environment and implement appropriate path planning techniques to go from one point to another;
5. Perform robot motion control along the planned path by using the localization information extracted from the passive markers;
6. As a complementary feature, implement some solution to prevent collisions, although obstacle avoidance is not in the scope of the work; image-based approaches may be insufficient even for obstacle detection, so ultrasonic sensors, for example, may be a solution to try.
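To illustrate the triangulation mentioned in step 3, here is a minimal sketch (not the thesis implementation) of bearing-based localization from two markers. It assumes the bearings are expressed in the global frame (i.e. the robot heading is known, e.g. from the inertial sensor); the marker positions and function names are hypothetical.

```python
import math

def triangulate(m1, m2, phi1, phi2):
    """Estimate the robot position (x, y) from two markers at known
    positions m1, m2 and the global bearing angles phi1, phi2 at which
    the robot sees them. The robot lies on the line through each marker
    with direction phi_i: m_i = p + r_i * (cos phi_i, sin phi_i)."""
    u1 = (math.cos(phi1), math.sin(phi1))
    u2 = (math.cos(phi2), math.sin(phi2))
    # Intersect the two lines: -r1*u1 + r2*u2 = m2 - m1 (Cramer's rule).
    det = -u1[0] * u2[1] + u1[1] * u2[0]
    if abs(det) < 1e-9:
        raise ValueError("markers are collinear with the robot")
    dx, dy = m2[0] - m1[0], m2[1] - m1[1]
    r1 = (dx * u2[1] - dy * u2[0]) / det  # range to marker 1
    return (m1[0] - r1 * u1[0], m1[1] - r1 * u1[1])
```

With more than two markers, the same constraints can be stacked into an over-determined system and solved in a least-squares sense, which also helps against noisy detections.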
PHASES AND OBJECTIVES
The main phases/objectives include the following:
1. Survey the state of the art for this problem and the associated techniques, and get acquainted with the existing setup and previous related works.
2. Prepare the ATLAS-MV robot (cameras setup and porting of the packages to ROS Indigo) to act as an AGV in an indoor environment.
3. Test the algorithm used to detect beacons (Datamatrix Codes), and improve it to determine not only the angles used in the triangulation process but also other geometric information to improve the accuracy of the initial estimation.
4. Study the possibility of integrating a simple obstacle detection system to avoid imminent collisions with unexpected obstacles; if viable and useful, the solution can be implemented.
5. Include the physical model of the robot motion (and possibly perception) in a Kalman filter in order to reduce the effect of the noise in the localization procedure.
6. Integrate a proper path planner to permit the execution of the task.
7. Implement an application with a proper GUI to generate interactively the datamatrix beacons.
8. Write the thesis and other documentation.
Over the last few days I did a lot of research to understand the actual state of the art for this kind of problem. I found some interesting works and libraries for datamatrix recognition.
Furthermore, I started to study Qt again (after too much time, sadly) because I’m developing a user-friendly application to generate and print the datamatrix sheets used by the robot.
I think I will finish it in a few weeks. The Qt framework is very powerful but not so easy at the beginning.
In the meanwhile I also started to design a proper Kalman filter to estimate the position and the orientation of the vehicle. To describe the physics of the robot I chose a simple bicycle vehicle model.
This model is very simple, but strongly non-linear, so I can’t use it directly to build the Kalman filter. I have a few ideas to solve this problem and I’m preparing a simulator in MATLAB/Simulink to understand which approach is best. The estimator must be able to estimate the position and the orientation of the vehicle using the control parameters (velocity and steering angle), the visual measurements (for example from triangulation) and, if present, the measurements from one or more inertial sensors.
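As a concrete sketch of what “strongly non-linear” means here, below is the kinematic bicycle model and its state Jacobian, which is the ingredient needed if one handles the non-linearity with an Extended Kalman Filter (one common option, not necessarily the approach I will pick). The wheelbase value is purely illustrative.

```python
import math

L = 0.5  # wheelbase in meters (assumed value, not the real robot's)

def f(state, v, delta, dt):
    """Discrete kinematic bicycle model (Euler step).
    State: (x, y, theta); controls: speed v and steering angle delta."""
    x, y, theta = state
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + (v / L) * math.tan(delta) * dt)

def F_jacobian(state, v, delta, dt):
    """Jacobian of f with respect to the state, used by an EKF in the
    covariance prediction P' = F P F^T + Q."""
    _, _, theta = state
    return [[1.0, 0.0, -v * math.sin(theta) * dt],
            [0.0, 1.0,  v * math.cos(theta) * dt],
            [0.0, 0.0,  1.0]]
```

The non-linearity is visible in the trigonometric terms: the Jacobian depends on the current heading, so the filter must re-linearize the model around the latest estimate at every step.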