Package atlasmv_gazebo_plugin finished

Hello! I have finally finished the simulator that I will use to test my algorithms and to quantify the accuracy of my localization system. At the beginning I thought I would finish this part in a week or two, but I had to face many unexpected problems.
Gazebo is not very user friendly at first, and the documentation is often outdated, which means that the examples sometimes do not work properly.
Another problem was creating a custom object to simulate an A4 sheet of paper. I spent days trying to use Blender and MeshLab, but I wasn't able to apply the texture correctly.

Example of Datamatrix in Gazebo

In the end I found out that the easiest way to do it was to use Google SketchUp (which works only on Windows and Mac) both to create the 3D model and to apply the texture. Then I exported the model in COLLADA format (.dae) and imported it directly into a custom .world file. The result is really good. The only drawback is that a different model has to be created manually for each datamatrix.
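For reference, placing one of the exported COLLADA models in the .world file looks roughly like this (a sketch: the file name and pose are made up, and the exact SDF tags depend on the Gazebo version):

```xml
<!-- hypothetical fragment of the custom .world file -->
<model name="datamatrix_1">
  <static>true</static>
  <pose>1.0 0.0 0.5 0 0 0</pose>   <!-- x y z roll pitch yaw -->
  <link name="link">
    <visual name="visual">
      <geometry>
        <mesh>
          <!-- COLLADA model exported from SketchUp, texture included -->
          <uri>file://datamatrix_1.dae</uri>
        </mesh>
      </geometry>
    </visual>
  </link>
</model>
```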

Example of a world populated with a robot, two sheets of paper (0.21×0.29 m – A4) and a few objects

Regarding the robot model, I used as a starting base the ackermann_vehicle package written by Jim Rothrock. This package includes a simple Ackermann vehicle with a camera, an IMU and a controller for the steering angle and the speed of the robot.
I edited this model to match the dimensions of the AtlasMV (in particular the same wheelbase length) and I added two cameras: one looking backward and one looking forward.
Finally, I wrote an interface to control the simulated robot with an Xbox 360 controller, in the same way the real one is controlled.
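The interface itself is conceptually simple; a sketch of the idea is shown below (topic names, axis indices and scale factors are assumptions, not the actual configuration):

```cpp
// Map an Xbox 360 gamepad (sensor_msgs/Joy) to the Ackermann command
// consumed by the simulated robot. Axis indices and scales are illustrative.
#include <ros/ros.h>
#include <sensor_msgs/Joy.h>
#include <ackermann_msgs/AckermannDriveStamped.h>

ros::Publisher cmd_pub;

void joyCallback(const sensor_msgs::Joy::ConstPtr& joy)
{
    ackermann_msgs::AckermannDriveStamped cmd;
    cmd.header.stamp = ros::Time::now();
    cmd.drive.speed          = joy->axes[1] * 2.0;   // left stick -> m/s
    cmd.drive.steering_angle = joy->axes[3] * 0.35;  // right stick -> rad
    cmd_pub.publish(cmd);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "xbox_teleop");
    ros::NodeHandle nh;
    cmd_pub = nh.advertise<ackermann_msgs::AckermannDriveStamped>("ackermann_cmd", 1);
    ros::Subscriber sub = nh.subscribe("joy", 1, joyCallback);
    ros::spin();
    return 0;
}
```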

Now, thanks to this package, it will be easier to test the localization system and, in the future, also a navigation system.

Update 7: pose estimation improved and some tests with the new MATLAB Robotics System Toolbox

Good afternoon!
I have solved the problem with the solvePnPRansac() function that I mentioned last week.
The algorithm is structured in this way:
1. compute the pose using only the four corners provided by libdmtx,
2. find the corners on the top and right edges using some filtering operations and a Harris corner detector,
3. compute the pose again using the solvePnPRansac() function with the previously computed pose as initial guess,
4. if the number of inliers is above a fixed threshold, update the pose estimate.

This approach makes it possible to obtain a good pose estimate even when only one datamatrix is visible and “close” to the camera.
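A minimal sketch of steps 3–4, assuming OpenCV 3.x (the wrapper function, point sets and threshold values are illustrative):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Refine the pose using all detected corners; rvec/tvec hold the pose
// computed from the four libdmtx corners and are used as initial guess.
bool refinePose(const std::vector<cv::Point3f>& objPts,  // model corners (m)
                const std::vector<cv::Point2f>& imgPts,  // matched image corners
                const cv::Mat& K, const cv::Mat& dist,
                cv::Mat& rvec, cv::Mat& tvec, int minInliers)
{
    std::vector<int> inliers;
    cv::solvePnPRansac(objPts, imgPts, K, dist, rvec, tvec,
                       true,   // useExtrinsicGuess: start from the previous pose
                       100,    // RANSAC iterations
                       4.0,    // reprojection error threshold (px)
                       0.99,   // confidence
                       inliers);
    // update the pose estimate only when enough corners agree with it
    return static_cast<int>(inliers.size()) >= minInliers;
}
```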

[Figure: pose estimation using solvePnPRansac]

Furthermore, I started to work with the new Robotics System Toolbox. I can’t wait to see ROS and MATLAB (in particular my Kalman Filter) work together.

Update 6: datamatrix 3D pose estimation completed

Hello!
I have completed the pose estimation of the datamatrix. To perform the estimation I used the corners provided by the libdmtx library and the OpenCV function solvePnP(). The estimation seems accurate, but I tried to improve the precision using all the visible corners and the function solvePnPRansac(), which uses a RANSAC (Random Sample Consensus) approach to separate the inliers from the outliers. For now this improvement does not work, because solvePnPRansac() crashes when it has to deal with a large number of points.
I will try to fix this problem in the future.
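For reference, the basic estimation is essentially a single solvePnP() call on the four corners (a sketch, assuming OpenCV 3.x; the marker size and corner order are illustrative):

```cpp
#include <opencv2/calib3d.hpp>
#include <vector>

// Pose of the marker from the four corners returned by libdmtx.
void poseFromCorners(const std::vector<cv::Point2f>& corners,  // from libdmtx
                     const cv::Mat& K, const cv::Mat& dist,
                     cv::Mat& rvec, cv::Mat& tvec)
{
    const float s = 0.21f;  // marker side length in metres (assumed)
    // 3D corners in the marker frame, in the same order as the image corners
    std::vector<cv::Point3f> model = {
        {0, 0, 0}, {s, 0, 0}, {s, s, 0}, {0, s, 0}
    };
    cv::solvePnP(model, corners, K, dist, rvec, tvec);
}
```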

The next step will be to connect the localization node with the EKF using the new Robotics System Toolbox for MATLAB.

[Figure: datamatrix pose estimation]

Update 5: previous code partially merged with the new class

In the last week I merged the code written for the previous thesis with my new code, and I added support for calibration files to my multithreading class “ImStream”.
Thus, the new TO DO list is:
  • Add support for YAML calibration files;
  • Merge the existing datamatrix_detection_node and datamatrix_calculations_node;
  • Use POSIT (http://code.opencv.org/projects/opencv/wiki/Posit) to estimate the relative pose;
  • Optimize the code.

I have also tried to optimize the libdmtx library to improve its performance, but without good results.
According to many programmers who have already used it, there is no solution for this, since the library is not well optimized and is no longer maintained.

Furthermore, the position of the corners returned by this library is not accurate. It will be very hard to obtain good results with the POSIT approach if I use the corner positions provided by libdmtx.
One possible solution might be to develop a chessboard-like algorithm to find all the corners precisely, and then compute the relative pose.
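One way to prototype this idea is to refine the coarse corner positions to sub-pixel accuracy, as the OpenCV chessboard pipeline does internally (a sketch; window size and termination criteria are illustrative):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Refine coarse corner locations (e.g. from libdmtx) on the grayscale image.
void refineCorners(const cv::Mat& gray, std::vector<cv::Point2f>& corners)
{
    cv::cornerSubPix(gray, corners,
                     cv::Size(5, 5),    // half-size of the search window
                     cv::Size(-1, -1),  // no dead zone in the middle
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                      30, 0.01));
}
```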

Update 4: encoding mode changed and new multithread node for multicamera acquisitions

New encoding mode
I’ve changed the encoding mode from “number digit pairs” to Base256. In the previous version the information (three pairs of digits from 0 to 99) was stored in 3 bytes, which corresponds to a 10×10 matrix. Using the Base256 encoding mode it is possible to take advantage of all 24 bits and store more information.

[Figure: structure of the 24-bit encoding]

The figure above shows that with this structure we can store:

  • x [0…1023],
  • y [0…1023],
  • Theta (orientation) [0…7],
  • S (size) [0,1].

With this additional information it is possible to have beacons of different sizes and to store the orientation (0, 45, 90, …, 315 degrees). This new information is essential if we want to estimate the relative pose between the observer (the robot) and the beacon.

I have created a class with two simple structures and two functions to convert the information from one structure to the other and vice versa.
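A minimal sketch of that packing scheme (the structure and function names are mine, and the exact bit order in the real code may differ):

```cpp
#include <cstdint>

// 24 bits split as: x (10 bits), y (10 bits), theta (3 bits), size (1 bit)
struct BeaconInfo {
    uint16_t x;      // [0..1023]
    uint16_t y;      // [0..1023]
    uint8_t  theta;  // [0..7] -> 0, 45, ..., 315 degrees
    uint8_t  size;   // [0..1] -> marker size
};

// pack into the 3 bytes stored in the datamatrix (Base256)
inline void encode(const BeaconInfo& b, uint8_t out[3]) {
    uint32_t bits = (uint32_t(b.x) << 14) | (uint32_t(b.y) << 4) |
                    (uint32_t(b.theta) << 1) | uint32_t(b.size);
    out[0] = (bits >> 16) & 0xFF;
    out[1] = (bits >> 8)  & 0xFF;
    out[2] =  bits        & 0xFF;
}

inline BeaconInfo decode(const uint8_t in[3]) {
    uint32_t bits = (uint32_t(in[0]) << 16) | (uint32_t(in[1]) << 8) | in[2];
    BeaconInfo b;
    b.x     = (bits >> 14) & 0x3FF;
    b.y     = (bits >> 4)  & 0x3FF;
    b.theta = (bits >> 1)  & 0x7;
    b.size  =  bits        & 0x1;
    return b;
}
```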

Multithread node for multicamera acquisitions
After a few hours spent trying to improve the performance of the existing code, I decided to create a new class optimized for multicamera frame streams. Every instance of this class is associated with one specific camera and has its own thread.
All the threads are synchronized, so that (within all the limits of a non-real-time system) every camera should start to grab its frame at the same time.
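The synchronization scheme looks roughly like this (a sketch of the idea, not the actual “ImStream” code; two cameras and a 100 ms cycle are assumptions):

```cpp
#include <opencv2/videoio.hpp>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

std::mutex mtx;
std::condition_variable tick_cv;
long tick = 0;  // incremented once per acquisition cycle

void cameraLoop(int id)
{
    cv::VideoCapture cap(id);
    cv::Mat frame;
    long seen = 0;
    while (cap.isOpened()) {
        {   // wait for the next tick, shared by all camera threads
            std::unique_lock<std::mutex> lock(mtx);
            tick_cv.wait(lock, [&] { return tick > seen; });
            seen = tick;
        }
        cap.grab();           // grab as close as possible to the other cameras
        cap.retrieve(frame);  // decoding can happen later, out of sync
        // ... hand the frame over to the processing pipeline ...
    }
}

int main()
{
    std::vector<std::thread> threads;
    for (int id = 0; id < 2; ++id)
        threads.emplace_back(cameraLoop, id);

    for (;;) {  // master clock: release all cameras at the same instant
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        { std::lock_guard<std::mutex> lock(mtx); ++tick; }
        tick_cv.notify_all();
    }
}
```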
TODO:
  • Add support for YAML calibration files;
  • Merge the existing datamatrix_detection_node and datamatrix_calculations_node;
  • Use POSIT (http://code.opencv.org/projects/opencv/wiki/Posit) to estimate the relative pose;
  • Optimize the code.

Update 3: first results with the EKF

Hello!

I have some preliminary results with the Extended Kalman Filter, and I uploaded to YouTube a short video showing a comparison between the real robot pose (blue triangle), the measured pose (red) and the estimated pose (green).
The model takes as input the steering angle and the velocity, both corrupted by an overlapped sin(t) noise. For now I'm using a very simple model with three state variables (x, y, theta), but I can do something better. In the meantime, enjoy the video!
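For the curious, the prediction step of such a three-state filter is essentially a discretized bicycle model (a sketch; the wheelbase L, the time step dt and the names are illustrative, and the measurement update is omitted):

```cpp
#include <cmath>

struct State { double x, y, theta; };

// EKF prediction: propagate the state with inputs v (speed) and phi (steering)
State predict(const State& s, double v, double phi, double L, double dt)
{
    State n;
    n.x     = s.x + v * std::cos(s.theta) * dt;
    n.y     = s.y + v * std::sin(s.theta) * dt;
    n.theta = s.theta + (v / L) * std::tan(phi) * dt;
    return n;
}
// The EKF linearizes this map around the current estimate: its Jacobian with
// respect to (x, y, theta) is used to propagate the covariance.
```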

Update 2: the first version of “datamatrix_generator” is ready!

Hi guys! I finally finished my Qt application called “datamatrix_generator”.
It is a small utility to create and print special markers based on datamatrix (special 2D barcodes).
With this application it is possible to import a map in bitmap format, set the correct scale factor, and then place markers on the map and set their position and orientation.
After that, it is possible to select a marker from a table and generate a PDF or print it directly. It is also possible to save and open projects in a file format that I’ve called “.dgen”.

Here are a few screenshots:
[Screenshots of the datamatrix_generator interface]

In the meanwhile I’ve started to work on an Extended Kalman Filter (EKF) for my application. I want to use a single Kalman filter (and a single model) to process both the odometric information from the camera(s) and the measurements from an accelerometer/gyro. Stay tuned!

Update 1: the beginning

First of all, I have to apologize for the lack of updates. From next week, I will try to update the blog every Monday/Tuesday.

Secondly, I have to better explain the goal of my thesis. I report below my thesis proposal:

INTRODUCTION

Nowadays the level of automation is constantly increasing in the industrial environment. We can easily find completely automated production lines, but for now the interaction between machines and human operators is limited to a small set of tasks. One possible way to increase the efficiency inside a given plant is to use intelligent robots, instead of human resources, for the transportation of objects across different places in the same industrial complex. Traditional AGVs (Automatically Guided Vehicles) are now commonly used for these tasks, and most of them follow strict paths marked by “wires” or special lines placed in the floor. There are also other solutions based on laser and special reflectors to allow triangulation of the robot in the plant. Nonetheless, the “floor-based” solutions have specificities that limit their usage, and laser/reflector solutions, besides being costly, require a somehow elaborate procedure to set up each time the layout changes. These restrictions open the way to exploit vision based solutions, especially if they can be made easier to configure and simultaneously more cost-effective.

CONTEXT OF THE PROBLEM

In a real context it is not always possible to predispose the environment to be fully robot-friendly (for example by designing dedicated paths inside the complex), but we can have some a priori information, such as the maps of the environment. Also, as some companies have noticed, magnetic stripes on the floor, although a cheap solution, are not advisable in some environments, since other vehicles and transporters can degrade them with time. So, to keep production costs confined, using vision, with one or more cameras, becomes appealing, despite the expected higher complexity of the algorithms. The robot must also be able to avoid collisions and to correctly perform path planning. Lastly, the solution must be cheap and reliable.

PROPOSED SOLUTION

The idea is to use simple passive markers (like datamatrix codes) placed in the environment as beacons to estimate the robot position in the given map by means of triangulation and predictive filters to improve the estimations. The markers must be easy to place and to detect in images. The approach consists of the following main steps:
1. Conceive and/or adapt appropriate markers to place in the environment;
2. Create a user interface that, based on the map of the plant, allows the creation of the markers (simple sheets of paper with special marks) and their subsequent placement in the field;
3. Develop algorithms to perform robot localization based on the visual information extracted from cameras (at least two) and estimation techniques for enhanced robustness (such as Kalman filters). The estimation can also take advantage of the presence of an inertial sensor, especially when the quality of the visual information is poor;
4. Define dedicated source and target points in the environment and implement appropriate path planning techniques to go from one point to another;
5. Perform robot motion control along the planned path by using the localization information extracted from the passive markers;
6. As a complementary feature, implement some solution to prevent collisions, although obstacle avoidance is not in the scope of the work; image-based approaches may be insufficient even for obstacle detection, so ultrasonic sensors, for example, may be a solution to try.

PHASES AND OBJECTIVES

The main phases/objectives include the following:
1. Survey the state of the art for this problem and associated techniques, and get acquainted with the existing setup and previous related works.
2. Prepare the ATLAS-MV robot (cameras setup and porting of the packages to ROS Indigo) to act as an AGV in an indoor environment.
3. Test the algorithm used to detect beacons (Datamatrix Codes), and improve it to determine not only the angles used in the triangulation process but also other geometric information to improve the accuracy of the initial estimation.
4. Study the possibility of integrating a simple obstacle detection system to avoid imminent collisions with unexpected obstacles; if viable and useful, the solution can be implemented.
5. Include the physical model of the robot motion (and possibly perception) in a Kalman filter in order to reduce the effect of the noise in the localization procedure.
6. Integrate a proper path planner to permit the execution of the task.
7. Implement an application with a proper GUI to generate interactively the datamatrix beacons.
8. Write the thesis and other documentation.

THE BEGINNING

In the last few days I did a lot of research to understand the current state of the art for this kind of problem. I found some interesting works and libraries for datamatrix recognition.

Furthermore, I started to study Qt again (after too much time, sadly), because I'm developing a user-friendly application to generate and print the datamatrix sheets used by the robot.

Datamatrix_generator preview

I think I will finish it in a few weeks. The Qt framework is very powerful, but not so easy at the beginning.

In the meanwhile I also started to design a proper Kalman filter to estimate the position and the orientation of the vehicle. To describe the physics of the robot I chose a simple bicycle vehicle model.
This model is very simple but strongly non-linear, so I can't use it directly to build the Kalman filter. I have a few ideas to solve this problem, and I'm preparing a simulator in MATLAB/Simulink to understand which is the best approach. The estimator must be able to estimate the position and the orientation of the vehicle using the control parameters (velocity and steering angle), the visual measurements (for example from triangulation) and, if present, the measurements from one or more inertial sensors.
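For reference, the standard kinematic bicycle model (wheelbase $L$, speed $v$, steering angle $\varphi$, pose $(x, y, \theta)$) is:

$$\dot{x} = v\cos\theta, \qquad \dot{y} = v\sin\theta, \qquad \dot{\theta} = \frac{v}{L}\tan\varphi$$

The trigonometric and $\tan\varphi$ terms are what make the model non-linear in the state and inputs.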

Hello World!

Hello everybody!

My name is Marco Bergamin, I study Automation Engineering at the University of Padua and currently I’m beginning my master’s thesis at the Mechanical Department of the University of Aveiro. My supervisor is Professor Vitor Santos.

In this blog I will mainly talk about my thesis and the related work. What I'm going to do is develop a cheap and reliable system for autonomous indoor navigation using visual information and simple passive markers. I will post more detailed information as soon as possible.