ROB-6213 Project 3
Due: Friday, April 21, 2024, 11:59 pm
It is time to put together everything that you’ve learned in this course! In this phase, you’ll use vision and IMU for estimation!
1 Unscented Kalman Filter
In this project, you will develop an Unscented Kalman Filter (UKF) to fuse the inertial data already used in project 1 with the vision-based pose and velocity estimates developed in project 2. The UKF can capture the nonlinearity of the system better than a linear filter, but it may require more runtime. Use the IMU-driven process model from project 1 and fuse the inertial data with the camera pose and velocity obtained in project 2. The project is divided into two parts. In the first part, the measurement is the visual pose estimate; in the second, the measurement is only the velocity from the optical flow. Please note that for part 1 the pose is expressed in the world frame, so the same linear measurement model used in project 1 applies. For the optical flow, however, the velocity is expressed in the camera frame; the measurement model is therefore different and nonlinear, and it must be carefully transformed to the body frame, which in this case is coincident with the IMU frame.
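As a concrete illustration, the part 2 measurement model could look like the MATLAB sketch below. This is a minimal sketch under assumed names, not the required implementation: R_bc and p_bc (the camera rotation and position in the body/IMU frame) come from your camera calibration, the state x is assumed to store ZYX Euler angles in x(4:6) and the world-frame velocity in x(7:9), and omg is the bias-corrected gyroscope reading. Adapt it to your own state parameterization.

% Minimal sketch of the part-2 (optical-flow) measurement model.
% Assumed state layout: x(4:6) = ZYX Euler angles (body to world),
% x(7:9) = world-frame linear velocity. R_bc / p_bc describe the
% camera frame in the body (IMU) frame and come from the calibration.
function z = measure_camera_velocity(x, omg, R_bc, p_bc)
    Rwb = eul2rotm(x(4:6)');             % body-to-world rotation from the state
    v_w = x(7:9);                        % world-frame velocity of the body
    v_b = Rwb' * v_w + cross(omg, p_bc); % camera-origin velocity in the body frame
    z   = R_bc' * v_b;                   % expressed in the camera frame
end

In the UKF, this function is evaluated at each sigma point to form the predicted measurement, which is what lets the filter handle the nonlinearity of the frame change.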
2 Sensor Data
The same sensor data format and types as in projects 1 and 2 will be used in this project. You will test your algorithm on dataset 1 and dataset 4, as done for project 1. To guarantee that everyone starts from the same point for project 3, we will provide the results of project 2 in two separate .mat files, one for each dataset, named proj2_dataset1.mat and proj2_dataset4.mat.
A sensor packet for the IMU data is a struct that contains the following fields:
sensor.is_ready  % True if a sensor packet is available, false otherwise
sensor.t         % Time stamp for the sensor packet, different from the Vicon time
sensor.omg       % Body-frame angular velocity from the gyroscope
sensor.acc       % Body-frame linear acceleration from the accelerometer
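For reference, one IMU packet might drive the UKF prediction step as in the sketch below; ukf_predict is a placeholder name for your own sigma-point propagation through the project 1 process model, not a provided function.

% Hypothetical use of one IMU packet in the prediction step.
if sensor.is_ready
    dt = sensor.t - t_prev;                     % time elapsed since the last packet
    u  = [sensor.omg; sensor.acc];              % control input: gyro and accelerometer
    [x_est, P] = ukf_predict(x_est, P, u, dt);  % sigma-point propagation (your code)
    t_prev = sensor.t;
end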
The project 2 results are provided as matrices named as follows:
time       % Time of the data
position   % Position of the robot frame with respect to the world
angle      % Orientation of the robot frame with respect to the world
linearVel  % Velocity of the camera frame with respect to the world frame, expressed in the camera frame
angVel     % Angular velocity of the camera frame with respect to the world, expressed in the camera frame
A file has been provided to load the data and synchronize it.
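A possible top-level loop is sketched below; use the provided loader/synchronizer and plot function in your actual submission. init_ukf and ukf_update are placeholder names for your own functions, and the column layout of the project 2 matrices is an assumption here.

% Hypothetical driver for part 2 on dataset 1 (placeholder helper names).
vision = load('proj2_dataset1.mat');      % project 2 results for dataset 1
[x_est, P, t_prev] = init_ukf();
for k = 1:numel(vision.time)
    % ... run the prediction step on every IMU packet up to vision.time(k) ...
    z = vision.linearVel(:, k);           % camera-frame velocity (assumed column layout)
    [x_est, P] = ukf_update(x_est, P, z); % UKF measurement update (your code)
end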
3 Report
You need to summarize your results in a report submitted in PDF format and generated with LaTeX or Word. Please add your name and NYU ID at the top of your manuscript. The report should be no more than 8 pages, including plots. You will have to use the plot function provided in the code to generate your results for the two datasets. In addition to the results, please describe your approach: do not just write equations, but explain your reasoning and why and how you used the equations or models in your code. Moreover, briefly comment on your plots and compare your results to the Vicon data. The plot function already overlays your filter estimates with the ground truth provided by the Vicon. Your code in each part should not take more than 1 minute to run. We will not consider submitted code that exceeds this running time.
4 Grade Policy and Submission
The overall score will be 100 points, subdivided as follows: part 1 (35 points), part 2 (55 points), and report quality and readability (10 points). Do not modify any part of the code except the files where your code should be added; any other modification will result in 0 points. All the files, including code and report, should be submitted in a single zip file.