Hey!
I am Aaditya Saraiya, and as a starting point for my [GSoC 2018 project](https://summerofcode.withgoogle.com/projects/#6587806298669056), I want to simulate a Kinect sensor in Gazebo. I followed the Gazebo [tutorial](http://gazebosim.org/tutorials?tut=ros_depth_camera&cat=connect_ros) for simulating a Kinect sensor, which uses the OpenNI driver.
As a follow-up, I wanted to ask whether there is any fundamental difference between a Gazebo plugin for a Kinect V1 and one for a Kinect V2 sensor. I read a ROS Answers post which states that Gazebo treats the Kinect V1 and Kinect V2 as similar RGB-D devices. The code below shows the camera plugin for the Kinect V1, taken from the Gazebo tutorial. How would the plugin need to change in order to simulate a Kinect V2 sensor? Is the difference only in the distortion coefficients and the .so file used?
Thanks in advance!
```xml
<plugin name="camera_plugin" filename="libgazebo_ros_openni_kinect.so">
  <baseline>0.2</baseline>
  <alwaysOn>true</alwaysOn>
  <updateRate>0.0</updateRate>
  <cameraName>camera_ir</cameraName>
  <imageTopicName>/camera/depth/image_raw</imageTopicName>
  <cameraInfoTopicName>/camera/depth/camera_info</cameraInfoTopicName>
  <depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
  <depthImageInfoTopicName>/camera/depth/camera_info</depthImageInfoTopicName>
  <pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
  <frameName>camera_link</frameName>
  <pointCloudCutoff>0.05</pointCloudCutoff>
  <distortionK1>0</distortionK1>
  <distortionK2>0</distortionK2>
  <distortionK3>0</distortionK3>
  <distortionT1>0</distortionT1>
  <distortionT2>0</distortionT2>
  <CxPrime>0</CxPrime>
  <Cx>0</Cx>
  <Cy>0</Cy>
  <focalLength>0</focalLength>
  <hackBaseline>0</hackBaseline>
</plugin>
```
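For context, my current guess is that only the parent `<sensor>` parameters would need to change to match the Kinect V2's published specs (512x424 depth resolution, roughly a 70 degree horizontal FOV, and a 0.5 m to 4.5 m range). The sketch below shows what I have in mind; the numbers are my own assumptions taken from the Kinect V2 datasheet, not something I have verified in Gazebo:

```xml
<sensor type="depth" name="camera">
  <update_rate>30</update_rate>
  <camera>
    <!-- Assumed Kinect V2 depth FOV: ~70.6 degrees horizontal, in radians -->
    <horizontal_fov>1.2323</horizontal_fov>
    <image>
      <!-- Kinect V2 depth resolution (vs. 640x480 on the Kinect V1) -->
      <width>512</width>
      <height>424</height>
      <format>R8G8B8</format>
    </image>
    <clip>
      <!-- Kinect V2 operating range is roughly 0.5 m to 4.5 m -->
      <near>0.5</near>
      <far>4.5</far>
    </clip>
  </camera>
  <!-- Same libgazebo_ros_openni_kinect.so plugin block as above -->
</sensor>
```

If that is all there is to it, I could keep the plugin block itself unchanged and only adjust these sensor parameters, but please correct me if the Kinect V2 actually needs a different plugin.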