New encoding mode
I’ve changed the encoding mode from digit pairs to Base256. In the previous version the information (three pairs of digits from 0 to 99) was stored in 3 bytes, which corresponds to a 10×10 Data Matrix. With the Base256 encoding mode it is possible to take advantage of all 24 bits and store more information.
The figure above shows that with this structure we can store:
- x [0…1023] (10 bits),
- y [0…1023] (10 bits),
- Theta (orientation) [0…7] (3 bits),
- S (size) [0, 1] (1 bit).
With this additional information it is possible to have beacons of different sizes and to store the orientation (0, 45, 90, … 315 degrees). The orientation is essential if we want to estimate the relative pose between the observer (the robot) and the beacon. This is basically the code used:
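A minimal sketch of how the 24-bit payload could be packed and unpacked; the field order and big-endian byte layout here are assumptions for illustration, not necessarily the exact layout used in the node:

```python
def encode_beacon(x, y, theta, size):
    """Pack x (10 bits), y (10 bits), theta (3 bits) and size (1 bit)
    into the 3 bytes carried by the Base256 Data Matrix payload.
    Field order (x high, size low) is an assumption."""
    assert 0 <= x <= 1023 and 0 <= y <= 1023
    assert 0 <= theta <= 7 and size in (0, 1)
    value = (x << 14) | (y << 4) | (theta << 1) | size
    return value.to_bytes(3, "big")

def decode_beacon(payload):
    """Inverse of encode_beacon: recover (x, y, theta, size)."""
    value = int.from_bytes(payload, "big")
    size = value & 0x1
    theta = (value >> 1) & 0x7
    y = (value >> 4) & 0x3FF
    x = (value >> 14) & 0x3FF
    return x, y, theta, size
```

The two 10-bit coordinates, the 3-bit orientation and the 1-bit size flag add up to exactly the 24 bits available in the 3-byte payload.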
Multithreaded node for multi-camera acquisition
After a few hours spent trying to improve the performance of the existing code, I decided to create a new class optimized for multi-camera frame streams. Every instance of this class is associated with one specific camera and has its own thread.
All the threads are synchronized together, so (with all the limits of a non-real-time system) every camera should start grabbing its frame at the same time.
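The synchronization idea can be sketched with a barrier that releases all the per-camera threads together; class and method names here are illustrative, not the ones from the actual node:

```python
import threading

class CameraGrabber:
    """One instance per camera, each running in its own thread.
    A shared Barrier releases all threads together, so every camera
    starts grabbing a frame at (approximately) the same time.
    `capture` is any object with a read() method, e.g. cv2.VideoCapture."""

    def __init__(self, capture, barrier, n_frames):
        self.capture = capture
        self.barrier = barrier
        self.n_frames = n_frames
        self.frames = []
        self.thread = threading.Thread(target=self._run)

    def start(self):
        self.thread.start()

    def join(self):
        self.thread.join()

    def _run(self):
        for _ in range(self.n_frames):
            self.barrier.wait()  # block until every camera thread is ready
            self.frames.append(self.capture.read())
```

On a general-purpose OS the barrier only guarantees that the grabs are *issued* together; scheduling jitter still applies, which is exactly the "limits of a non-real-time system" caveat above.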
- Add support for YAML calibration files;
- Merge the existing datamatrix_detection_node and datamatrix_calculations_node;
- Use POSIT ( http://code.opencv.org/projects/opencv/wiki/Posit ) to estimate the relative pose;
- Optimize the code.