Package atlasmv_gazebo_plugin finished

Hello! I have finally finished the simulator that I will use to test my algorithms and to quantify the accuracy of my localization system. At the beginning I thought I would finish this part in a week or two, but I ran into many unexpected problems.
Gazebo is not very user friendly at first, and the documentation is often outdated, which means the examples sometimes do not work properly.
Another problem was creating a custom object to simulate an A4 sheet of paper. I spent days trying to use Blender and MeshLab, but I was unable to apply the texture correctly.

Example of Datamatrix in Gazebo


In the end I found that the easiest way was to use Google SketchUp (which runs only on Windows and Mac) both to create the 3D model and to apply the texture. I then exported the model in COLLADA format (.dae) and imported it directly into a custom .world file. The result is really good. The only drawback is that a separate model must be created manually for each datamatrix.
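For reference, a marker entry in the .world file looks roughly like this (a sketch: the model name, pose and mesh path are placeholders, and the enclosing <sdf>/<world> tags are omitted):

```xml
<model name="datamatrix_0">
  <static>true</static>
  <pose>1.0 2.0 0 0 0 0</pose>  <!-- x y z roll pitch yaw -->
  <link name="link">
    <visual name="visual">
      <geometry>
        <mesh>
          <!-- COLLADA model exported from SketchUp, texture included -->
          <uri>file://models/datamatrix_0.dae</uri>
        </mesh>
      </geometry>
    </visual>
  </link>
</model>
```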

Example of a world populated with a robot, two sheets of paper (0.21×0.29 m – A4) and a few objects

Regarding the robot model, I used the ackermann_vehicle package, written by Jim Rothrock, as a starting point. This package includes a simple Ackermann vehicle with a camera, an IMU, and a controller for the steering angle and the speed of the robot.
I edited this model to match the dimensions of the AtlasMV (in particular the same wheelbase length) and added two cameras: one looking backward and one looking forward.
Finally, I wrote an interface to control the simulated robot with an Xbox 360 controller, in the same way the real one is controlled.
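The mapping such an interface performs is roughly the following (a sketch; the axis indices and the steering/speed limits are placeholders, not the real AtlasMV values):

```python
def joy_to_command(axes, max_steer_rad=0.35, max_speed_ms=2.0):
    """Map Xbox 360 stick axes (each in [-1, 1]) to robot commands.

    axes[0]: left stick horizontal -> steering angle [rad]
    axes[1]: left stick vertical   -> linear speed [m/s]
    Limits and axis indices are illustrative placeholders.
    """
    steer = max(-1.0, min(1.0, axes[0])) * max_steer_rad
    speed = max(-1.0, min(1.0, axes[1])) * max_speed_ms
    return steer, speed
```

In the real node this mapping sits between the joystick driver topic and the vehicle controller.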

Now, thanks to this package, it will be easier to test the localization system and, in the future, a navigation system as well.


Update 7: pose estimation improved and some tests with the new MATLAB Robotics System Toolbox

Good afternoon!
I have solved the problem with the solvePnPRansac() function that I mentioned last week.
The algorithm is structured in this way:
1. compute the pose using only the four corners provided by libdmtx,
2. find the corners on the top and right edges using some filtering operations and a Harris corner detector,
3. compute the pose again using the solvePnPRansac() function, with the previously computed pose as the initial guess,
4. if the number of inliers is above a fixed threshold, update the pose estimate.
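Step 4 can be sketched like this (a pure-Python sketch; in the real node steps 1–3 are handled by OpenCV's solvePnPRansac(), and `reproject` stands in for the camera projection):

```python
def dist(a, b):
    """Euclidean distance between two 2D points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def accept_pose(pose, correspondences, reproject,
                inlier_tol=2.0, min_inliers=20):
    """Accept a refined pose only if enough correspondences agree.

    `correspondences` is a list of (image_point, object_point) pairs;
    `reproject(pose, obj_pt)` returns the predicted image point.
    Returns (pose, True) if the estimate should be updated,
    (None, False) if the previous estimate should be kept.
    """
    inliers = [(p2d, p3d) for p2d, p3d in correspondences
               if dist(reproject(pose, p3d), p2d) <= inlier_tol]
    if len(inliers) >= min_inliers:
        return pose, True
    return None, False
```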

This approach makes it possible to obtain a good pose estimate even when only one data matrix is visible and “close” to the camera.

poseransac

Furthermore, I have started to work with the new Robotics System Toolbox. I can’t wait to see ROS and MATLAB (in particular my Kalman filter) working together.

Update 6: datamatrix 3D pose estimation completed

Hello!
I have completed the pose estimation of the datamatrix. To perform the estimation I used the corners provided by the libdmtx library and the OpenCV function solvePnP(). The estimate seems accurate, but I tried to improve the precision by using all the visible corners and the function solvePnPRansac(), which uses a RANSAC (Random Sample Consensus) approach to separate the inliers from the outliers. For now this improvement does not work, because solvePnPRansac() crashes when it has to deal with a large number of points.
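To give an idea of the inputs involved, the four object points passed to solvePnP() are simply the marker corners expressed in the marker's own frame (a sketch; the actual corner order returned by libdmtx is an assumption here):

```python
def marker_object_points(side_m):
    """3D coordinates (in metres) of the four corners of a square,
    planar datamatrix of side `side_m`, in the marker frame
    (origin at the bottom-left corner, Z = 0 on the sheet plane).
    These object points are paired with the four image corners
    returned by libdmtx and fed to solvePnP()."""
    s = side_m
    return [(0.0, 0.0, 0.0),  # bottom-left
            (s,   0.0, 0.0),  # bottom-right
            (s,   s,   0.0),  # top-right
            (0.0, s,   0.0)]  # top-left
```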
I will try to fix this problem in the future.

The next step will be to connect the localization node to the EKF using the new Robotics System Toolbox for MATLAB.

poseestimation

Update 5: previous code partially merged with the new class

In the last week I merged the code written for the previous thesis with my new code, and I added support for calibration files to my multithreaded class “ImStream”.
Thus, the new TO DO list is:
• Add support for YAML calibration files;
• Merge the existing datamatrix_detection_node and datamatrix_calculations_node;
• Use POSIT ( http://code.opencv.org/projects/opencv/wiki/Posit ) to estimate the relative pose;
• Optimize the code.

I have also tried to optimize the libdmtx library in order to improve its performance, but without good results.
According to many programmers who have already used that library, there is no solution for this, since the library is not well optimized and is no longer maintained.

Furthermore, the positions of the edges returned by this library are not accurate. It will be very hard to obtain good results with the POSIT library if I use the edge positions provided by libdmtx.
One possible solution might be to develop a chessboard-like algorithm to find all the edges, and then compute the relative pose.

Update 4: encoding mode changed and new multithread node for multicamera acquisitions

New encoding mode
I’ve changed the encoding mode from “Number digit pairs” to Base256. In the previous version the information (three pairs of digits from 0 to 99) was stored in 3 bytes, which corresponds to a 10×10 matrix. Using the Base256 encoding mode it is possible to take advantage of all 24 bits and store more information.

datamatrixEnc

The figure above shows that with this structure we can store:

  • x [0…1023],
  • y [0…1023],
  • Theta (orientation) [0…7],
  • S (size) [0, 1].

With this additional information it is possible to have beacons of different sizes and to store the orientation (0, 45, 90, …, 315 degrees). This new information is essential if we want to estimate the relative pose between the observer (the robot) and the beacon. This is basically the code used:

codedm2 codeuseddm
I have created a class with two simple structures and two functions to convert the information from one structure to the other and vice versa.
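As an illustration of the conversion, here is a sketch of the packing in Python (the field widths are taken from the figure above: 10 bits each for x and y, 3 for theta, 1 for the size; the exact bit order inside the 24 bits is an assumption):

```python
def pack_marker(x, y, theta, size):
    """Pack x [0..1023], y [0..1023], theta [0..7], size [0..1]
    into 24 bits, i.e. the 3 bytes stored with Base256 encoding."""
    assert 0 <= x < 1024 and 0 <= y < 1024 and 0 <= theta < 8 and size in (0, 1)
    word = (x << 14) | (y << 4) | (theta << 1) | size
    return bytes([(word >> 16) & 0xFF, (word >> 8) & 0xFF, word & 0xFF])

def unpack_marker(data):
    """Inverse conversion: 3 bytes -> (x, y, theta, size)."""
    word = (data[0] << 16) | (data[1] << 8) | data[2]
    return ((word >> 14) & 0x3FF,   # x: 10 bits
            (word >> 4) & 0x3FF,    # y: 10 bits
            (word >> 1) & 0x7,      # theta: 3 bits (multiples of 45 deg)
            word & 0x1)             # size: 1 bit
```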

Multithread node for multicamera acquisitions
After a few hours spent trying to improve the performance of the existing code, I decided to create a new class optimized for multi-camera frame streams. Every instance of this class is associated with one specific camera and has its own thread.
All the threads are synchronized, so that (within the limits of a non-real-time system) every camera should start grabbing its frame at the same time.
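A minimal sketch of this synchronization in Python (the real class is C++ inside a ROS node; threading.Barrier stands in for whatever primitive is actually used):

```python
import threading

def camera_thread(cam_id, barrier, grabs, n_frames=3):
    """Each camera thread waits at the barrier, so that all cameras
    start grabbing each frame (approximately) at the same time."""
    for frame in range(n_frames):
        barrier.wait()                 # synchronize the start of the grab
        grabs.append((frame, cam_id))  # placeholder for the real capture

n_cameras = 2
barrier = threading.Barrier(n_cameras)
grabs = []
threads = [threading.Thread(target=camera_thread, args=(i, barrier, grabs))
           for i in range(n_cameras)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The barrier guarantees that no camera starts frame n+1 before every camera has started frame n.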
// TODO:
• Add support for YAML calibration files;
• Merge the existing datamatrix_detection_node and datamatrix_calculations_node;
• Use POSIT ( http://code.opencv.org/projects/opencv/wiki/Posit ) to estimate the relative pose;
• Optimize the code.

Update 3: first results with the EKF

Hello!

I have some preliminary results with the Extended Kalman Filter, and I have uploaded a short video to YouTube showing a comparison between the real robot pose (blue triangle), the measured pose (red) and the estimated pose (green).
The model takes as input the steering angle and the velocity, both with a sin(t) noise superimposed. For now I’m using a very simple model with three state variables (x, y, theta), but I can do better. For now, enjoy this video!
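The prediction step of such a three-state model can be sketched as follows (a kinematic bicycle model; the wheelbase L and the discretization are assumptions of this sketch, not the actual MATLAB code):

```python
from math import cos, sin, tan

def predict(state, v, delta, dt, L=1.0):
    """EKF state prediction for the three-state model (x, y, theta).
    Inputs: speed v [m/s] and steering angle delta [rad]."""
    x, y, theta = state
    x += v * cos(theta) * dt
    y += v * sin(theta) * dt
    theta += (v / L) * tan(delta) * dt
    return (x, y, theta)

def jacobian(state, v, delta, dt, L=1.0):
    """Jacobian of the motion model w.r.t. the state, used to
    propagate the covariance: P <- F P F^T + Q."""
    _, _, theta = state
    return [[1.0, 0.0, -v * sin(theta) * dt],
            [0.0, 1.0,  v * cos(theta) * dt],
            [0.0, 0.0,  1.0]]
```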

Update 2: the first version of *datamatrix_generator* is ready!

Hi guys! I have finally finished my Qt application called “datamatrix_generator”.
It is a small utility to create and print special markers based on datamatrix codes (special 2D barcodes).
With this application it is possible to import a map in bitmap format, set the correct scale factor, and then place markers on the map, setting their position and orientation.
After that, it is possible to select a marker from a table and generate a PDF or print it directly. It is also possible to save and open projects in a file format that I’ve called “.dgen”.

Here are a few screenshots:
Screenshot from 2015-04-16 18:53:36 Screenshot from 2015-04-16 18:54:38 Screenshot from 2015-04-16 18:55:16

In the meantime I’ve started to work on an Extended Kalman Filter (EKF) for my application. I want to use a single Kalman filter (and a single model) to process both the odometric information from the camera(s) and the measurements from an accelerometer/gyro. Stay tuned!