In this part of the project you will create a class that interfaces with the picamera to extract the planar position of the drone relative to the first image taken, using OpenCV's estimateAffinePartial2D function.
Before attempting to analyze the images, we should first check that the images are being properly passed into the image callback.
Exercises
1. Open student_rigid_transform_node.py and print the data argument in the method image_callback. Verify you are receiving images from the camera.
To estimate our position we will make use of OpenCV's estimateAffinePartial2D function. This will return an affine transformation between two images if the two images have enough in common to be matched; otherwise, it will return None.
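As a concrete illustration, here is one way such a transform can be obtained between two frames. Note that estimateAffinePartial2D operates on point correspondences rather than raw images, so this sketch first finds corners in one frame and tracks them into the other with optical flow. The function name estimate_transform, the frame variables, and the parameter values are illustrative assumptions, not names from the starter code.

```python
import cv2
import numpy as np

def estimate_transform(first_frame, curr_frame):
    """Sketch: estimate a partial affine transform between two grayscale frames.

    Returns a 2x3 matrix mapping pixel coordinates in first_frame to
    pixel coordinates in curr_frame, or None if matching fails.
    """
    # Find trackable corners in the first frame.
    prev_pts = cv2.goodFeaturesToTrack(first_frame, maxCorners=200,
                                       qualityLevel=0.01, minDistance=8)
    if prev_pts is None:
        return None

    # Track those corners into the current frame with Lucas-Kanade optical flow.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(first_frame, curr_frame,
                                                   prev_pts, None)
    good = status.ravel() == 1
    if good.sum() < 3:
        return None

    # Fit a rotation + translation + uniform scale to the correspondences.
    transform, _ = cv2.estimateAffinePartial2D(prev_pts[good], curr_pts[good])
    return transform  # 2x3 matrix, or None if no consensus was found
```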
Exercises
1. Complete the TODOs in image_callback, which is called every time the camera gets an image, and is used to analyze two images to estimate the x and y translations of your drone.
2. Try printing the output of estimateAffinePartial2D: you'll see a 2x3 matrix when the camera sees what it saw in the first frame, and None when it fails to match. This 2x3 matrix is an affine transform which maps pixel coordinates in the first image to pixel coordinates in the second image.
3. Implement translation_and_yaw, which takes an affine transform and returns the x and y translations of the camera and the yaw. (A sketch of this decomposition appears after this list.)
4. We store the drone's current altitude in self.altitude in the callback. Use this variable to compensate for the height of the camera in translation_and_yaw: the higher the camera, the more ground each pixel of displacement represents.
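For reference, a partial affine transform has the form [[s·cos θ, −s·sin θ, t_x], [s·sin θ, s·cos θ, t_y]], so both the yaw and the pixel translation can be read directly off the matrix. The sketch below is one plausible implementation, not the required one: the meters-per-pixel constant is a made-up placeholder you would calibrate for your own camera, and the sign conventions depend on how your camera is mounted.

```python
import numpy as np

# Hypothetical calibration constant: meters of ground covered by one pixel,
# per meter of altitude. Calibrate this value for your own camera.
METERS_PER_PIXEL_PER_METER = 0.001

def translation_and_yaw(transform, altitude):
    """Sketch: decompose a 2x3 partial affine transform into an (x, y)
    translation in meters and a yaw in radians."""
    # The last column is the translation in pixels; scale by altitude
    # because each pixel covers more ground the higher the camera is.
    x = transform[0, 2] * METERS_PER_PIXEL_PER_METER * altitude
    y = transform[1, 2] * METERS_PER_PIXEL_PER_METER * altitude

    # The rotation block is [[s*cos(yaw), -s*sin(yaw)], [s*sin(yaw), s*cos(yaw)]],
    # so atan2 of the first column recovers the yaw regardless of the scale s.
    yaw = np.arctan2(transform[1, 0], transform[0, 0])
    return (x, y), yaw
```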
Simply matching against the first frame is not quite sufficient for estimating position, because as soon as the drone stops seeing the first frame it will be lost. Fortunately we have a fairly simple fix for this: compare the current frame with the previous frame to get the displacement, and add that displacement to the position the drone was in at the previous frame. The framerate is high enough and the drone moves slowly enough that we will almost never fail to match on the previous frame.
Exercises
1. Modify your RigidTransformNode class to add the functionality described above. (A sketch of this fallback logic appears after the note below.)
2. Use self.x_position_from_state and self.y_position_from_state (the position taken from the /pidrone/state topic) as the previous coordinates.

Note: The naive implementation simply sets the position of the drone when we see the first frame, and integrates it when we don't. What happens when we haven't seen the first frame in a while, so we've been integrating, and then we see the first frame again? There may be some disagreement between our integrated position and the one we find from matching with the first frame, due to accumulated error in the integral, so simply setting the position would cause a jump in our position estimate. The drone itself didn't actually jump, only our estimate did, so this would wreak havoc on whatever control algorithm we write based on our position estimate. To mitigate these jumps, you would use a filter to blend your integrated estimate and your new first-frame estimate. Since this project is only focused on publishing the measurements, worrying about these discrepancies is unnecessary; you will address this problem in the UKF project.
Now that we’ve got a position estimate, let’s begin hooking our code up to the rest of the flight stack.
To connect to the JavaScript interface, clone pidrone_pkg on your base station machine. Point any web browser at the file web/index.html in that repository. This will open the web interface that we will be using for the rest of the semester.
Exercises
1. Create a subscriber to the topic /pidrone/reset_transform and a callback owned by the class to handle its messages. ROS Empty messages are published on this topic when the user presses r for reset on the JavaScript interface. When you receive a reset message, you should take a new first frame and set your position estimate back to the origin.
2. Create a subscriber to the topic /pidrone/position_control. ROS Bool messages are published on this topic when the user presses p or v on the JavaScript interface. When we're not doing position hold we don't need to be running this resource-intensive computer vision, so when you receive a message you should enable or disable your position estimation code accordingly. (A sketch of both subscribers follows this list.)
Debugging position measurements can also be made easier through the use of a visualizer. A few things to look for are the sign of the position, the magnitude of the position, and whether the position stays steady when the drone isn't moving. Note again that these measurements are unfiltered and will thus be noisy; don't be alarmed if the position jumps when it goes from not seeing the first frame to seeing it again.
Exercises
1. Use the web interface to visualize your position estimates.
2. Run rosrun project-sensors-yourGithubName student_rigid_transform_node.py in `4.
3. Press r to reset and then p to engage position hold.
4. Use rostopic echo /pidrone/picamera/pose to view the output of your student_rigid_transform_node class.
5. Pressing r should set the drone in the visualizer back to the origin.