The pipeline that processes the raw input to the system and transforms it for the graph.
- Knowing the type of input to the system
- Knowing how this input is processed for the localization graph
From Unit F-1 - Input to the system, we know that we have many streams of image input, from both Watchtowers and Autobots. These images contain various Apriltags, which are placed and registered in the city as described in Unit E-2 - BUILDING - Apriltags specifications.
These Apriltags need to be processed in order to feed the associated transforms to the graph builder. This means that for every Apriltag in every image, a transform is computed that links the camera_frame to the apriltag_frame, keeping in memory the name of the agent (Watchtower or Autobot) that detected the Apriltag, as well as the Apriltag's unique ID and the timestamp at which the image was taken. This package of information is what we will everywhere call a stamped transform.
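For intuition, the contents of a stamped transform can be pictured as the following record. This dataclass is only an illustrative sketch with field names of our choosing; the actual cslam code works with ROS messages (e.g. geometry_msgs/TransformStamped) rather than this hypothetical type:

```python
# Illustrative sketch of the information bundled into a "stamped transform".
# Field names are placeholders, not the actual cslam message definition.
from dataclasses import dataclass

import numpy as np


@dataclass
class StampedTransform:
    agent_id: str            # Watchtower or Autobot that detected the tag
    tag_id: int              # unique ID of the detected Apriltag
    stamp: float             # time (s) at which the image was taken
    rotation: np.ndarray     # 3x3 rotation, camera_frame -> apriltag_frame
    translation: np.ndarray  # 3-vector, camera_frame -> apriltag_frame
```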
We also have the wheel command streams of each Autobot. These need to be processed into stamped transforms as well, since stamped transforms are the only thing we want to feed into the localization graph.
Note: this part receives images and wheel commands, and outputs stamped transforms to the ROS Listener.
Apriltag detection is very computationally expensive, and different strategies can be used for the offline and online cases.

As explained in Unit E-3 - DEMO - Localization, in both cases the Watchtowers and Autobots use the acquisition bridge (acquisition-bridge) to send their image streams to a central computer. The Watchtowers only send images when movement is detected, to reduce the number of images to process and record.

For offline acquisition, all we need to do is record a rosbag on this computer.

For online acquisition, the stream of data needs to be used directly by an Apriltag extractor.
No matter how (or on which device) we get the image streams, we need to process them to get the stamped transforms of each Apriltag in each image.
For the offline case, where speed is not relevant, we get a rosbag from the recording. We feed this bag to a post-processor, whose code is in part 08 of the cslam repository. This code runs Apriltag extraction as well as odometry processing, and exports all the corresponding stamped transforms to a new bag.
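The following is a minimal sketch of the shape of such a post-processing loop, assuming the ROS 1 rosbag Python API and the dt_apriltags detector. The topic names, camera parameters, and output topic are placeholders of our choosing, not the actual part-08 code:

```python
# Sketch of an offline post-processing loop: read images from a recorded
# bag, detect Apriltags, and write the resulting stamped transforms to a
# new bag. Topic names and camera parameters are placeholders.
import cv2
import numpy as np
import rosbag
import tf.transformations as tfm
from cv_bridge import CvBridge
from dt_apriltags import Detector
from geometry_msgs.msg import TransformStamped

bridge = CvBridge()
detector = Detector(families="tag36h11")

CAMERA_PARAMS = (310.0, 310.0, 320.0, 240.0)  # fx, fy, cx, cy (placeholder)
TAG_SIZE = 0.065                              # tag side length in meters

with rosbag.Bag("recording.bag") as in_bag, \
        rosbag.Bag("stamped_transforms.bag", "w") as out_bag:
    for topic, msg, t in in_bag.read_messages():
        if not topic.endswith("/image/compressed"):
            continue  # wheel command topics are handled by the odometry step
        agent = topic.split("/")[1]  # e.g. "watchtower05"
        gray = cv2.cvtColor(bridge.compressed_imgmsg_to_cv2(msg),
                            cv2.COLOR_BGR2GRAY)
        for det in detector.detect(gray, estimate_tag_pose=True,
                                   camera_params=CAMERA_PARAMS,
                                   tag_size=TAG_SIZE):
            out = TransformStamped()
            out.header.stamp = msg.header.stamp   # time the image was taken
            out.header.frame_id = agent           # who detected the tag
            out.child_frame_id = "apriltag_%d" % det.tag_id
            (out.transform.translation.x,
             out.transform.translation.y,
             out.transform.translation.z) = det.pose_t.flatten()
            rot = np.eye(4)
            rot[:3, :3] = det.pose_R
            (out.transform.rotation.x, out.transform.rotation.y,
             out.transform.rotation.z,
             out.transform.rotation.w) = tfm.quaternion_from_matrix(rot)
            out_bag.write("/stamped_transforms", out, t)
```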
For the online case, we cannot just use one container to do all the extraction. The current strategy is to instantiate one Apriltag processor per Watchtower. Each processor is one container that listens only to the image topic of the Watchtower it was assigned to, and outputs the processed stamped transforms. The code is in part 04 of the cslam repository. It is essentially the same process as in the post-processing container.
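For intuition, each per-Watchtower processor boils down to a small node like the sketch below. The topic layout, the WATCHTOWER_NAME environment variable, and the detect_tags helper (standing in for the same extraction code as in the offline sketch above) are all illustrative assumptions, not the actual part-04 code:

```python
# Sketch of a per-Watchtower Apriltag processor: each container is assigned
# exactly one Watchtower, listens only to its image topic, and publishes
# stamped transforms.
import os

import rospy
from geometry_msgs.msg import TransformStamped
from sensor_msgs.msg import CompressedImage

# Hypothetical shared module wrapping the detection loop sketched above.
from apriltag_extraction import detect_tags


def on_image(msg):
    # One TransformStamped per Apriltag found in this image.
    for transform in detect_tags(msg):
        pub.publish(transform)


rospy.init_node("apriltag_processor")
# Each container is assigned exactly one Watchtower, e.g. via an env var.
watchtower = os.environ.get("WATCHTOWER_NAME", "watchtower01")
pub = rospy.Publisher("/%s/stamped_transforms" % watchtower,
                      TransformStamped, queue_size=10)
rospy.Subscriber("/%s/camera_node/image/compressed" % watchtower,
                 CompressedImage, on_image, queue_size=1)
rospy.spin()
```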
In the wheel odometry case, we can listen to the wheel command topic or get it from a rosbag (same process for offline/online as for the Apriltag detection).
But no matter how we get the data, here is how we process it:
- Each wheel command message is received and stored at a time t1.
- When the next message arrives at time t2 > t1, we integrate the motion between t1 and t2, with values from the odometry message at time t1.
- From the wheel commands, the linear velocity V_l and angular velocity Omega can be computed.
- From V_l, Omega, and the duration between t1 and t2, the transform between the Autobot's poses at t1 and t2 can be computed.
- This transform is stamped with the time t1 and with the Autobot's ID, which makes it a stamped transform, that is then sent to the localization graph.
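A rough sketch of this integration step follows, assuming a differential-drive model in which wheel commands map linearly to wheel speeds; the gain and baseline below are placeholders for the robot's actual calibration parameters:

```python
# Sketch of the odometry step: turn the wheel command valid between times
# t1 and t2 > t1 into a relative transform of the Autobot's pose.
# The linear command-to-speed mapping and its gain are placeholders.
import math

WHEEL_GAIN = 0.6   # m/s per unit of wheel command (calibration placeholder)
BASELINE = 0.1     # distance between the two wheels, in meters


def integrate_wheel_commands(cmd_left, cmd_right, t1, t2):
    """Relative pose (dx, dy, dtheta) between t1 and t2, using the
    command values from the message received at t1."""
    v_left = WHEEL_GAIN * cmd_left
    v_right = WHEEL_GAIN * cmd_right
    V_l = (v_right + v_left) / 2.0         # linear velocity
    Omega = (v_right - v_left) / BASELINE  # angular velocity
    dt = t2 - t1
    if abs(Omega) < 1e-6:                  # straight-line motion
        return V_l * dt, 0.0, 0.0
    # Otherwise the robot follows an arc of radius V_l / Omega.
    dtheta = Omega * dt
    radius = V_l / Omega
    dx = radius * math.sin(dtheta)
    dy = radius * (1.0 - math.cos(dtheta))
    return dx, dy, dtheta
```

The returned (dx, dy, dtheta), once stamped with t1 and the Autobot's ID, is the stamped transform that gets sent to the graph.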
Right now, the acquisition bridge (acquisition-bridge) only sends the raw wheel commands. It could be modified to directly send the linear velocity V_l and angular velocity Omega. Then the above algorithm would simply skip the phase where it transforms the wheel commands into V_l and Omega.