I am looking to create a very basic physical simulation in which a static scene exists in space and is 'viewed' by a simulated LADAR sensor (or Kinect or similar distance-sensing interface).
What I want is a single-input, single-output black box component. It must accept an input structure representing the camera's pose (position and orientation) in space. Its field of view and resolution are configured at startup. The output is an array of values corresponding to the distances measured by each pixel of the sensor. The scene can be sufficiently approximated by a simple grid of elevations. This grid is then interpolated into a surface and ray-traced by the black box 'camera', which returns the array of distances.
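To make the idea concrete, here is a rough sketch of the interface I have in mind, using only built-in libraries. All class and parameter names are placeholders of my own, I assume the input pose includes position as well as orientation, and the naive fixed-step ray marching is just one possible way to do the tracing:

```python
import math

class DepthCamera:
    """Hypothetical black-box depth sensor: configured once with a field of
    view and resolution, then queried with a pose to get a grid of distances.
    All names and defaults here are placeholders, not a real API."""

    def __init__(self, heightmap, cell_size, hfov_deg, vfov_deg, width, height,
                 max_range=100.0, step=0.05):
        self.heightmap = heightmap      # 2-D list of elevations (rows = y)
        self.cell = cell_size           # metres per grid cell
        self.hfov = math.radians(hfov_deg)
        self.vfov = math.radians(vfov_deg)
        self.w, self.h = width, height
        self.max_range = max_range      # distance reported on a miss
        self.step = step                # ray-march step length (metres)

    def _terrain_height(self, x, y):
        """Bilinearly interpolate the elevation grid at world (x, y)."""
        gx, gy = x / self.cell, y / self.cell
        i, j = int(gx), int(gy)
        if not (0 <= i < len(self.heightmap[0]) - 1 and
                0 <= j < len(self.heightmap) - 1):
            return -math.inf            # off the map: rays never hit
        fx, fy = gx - i, gy - j
        z00, z10 = self.heightmap[j][i], self.heightmap[j][i + 1]
        z01, z11 = self.heightmap[j + 1][i], self.heightmap[j + 1][i + 1]
        return (z00 * (1 - fx) * (1 - fy) + z10 * fx * (1 - fy) +
                z01 * (1 - fx) * fy + z11 * fx * fy)

    def render(self, pose):
        """pose = (x, y, z, yaw, pitch); returns an h x w list of distances."""
        x0, y0, z0, yaw, pitch = pose
        frame = []
        for r in range(self.h):
            row = []
            # pixel angles are spread evenly across the field of view
            el = pitch + self.vfov * (0.5 - (r + 0.5) / self.h)
            for c in range(self.w):
                az = yaw + self.hfov * ((c + 0.5) / self.w - 0.5)
                dx = math.cos(el) * math.cos(az)
                dy = math.cos(el) * math.sin(az)
                dz = math.sin(el)
                d, hit = self.step, self.max_range
                # march the ray until it dips below the interpolated surface
                while d < self.max_range:
                    if z0 + dz * d <= self._terrain_height(x0 + dx * d,
                                                           y0 + dy * d):
                        hit = d
                        break
                    d += self.step
                row.append(hit)
            frame.append(row)
        return frame
```

For example, a 1x1-pixel camera one metre above flat terrain, pitched down 45 degrees, should report a distance of roughly sqrt(2) metres (up to the marching step size), while a ray pitched upward reports `max_range`.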
The end goal is to enable a rough hardware-independent simulation of a rover-like robot navigating obstacles in otherwise flat terrain. This simulation will allow me to close the loop and test the robot's path-planning algorithm without building an actual prototype. There is probably an easy way to do this, but I am limited in how quickly I can acquire external code, so I would prefer to stick to built-in libraries.
I have not; I am still assessing my options right now. I can try it, but I'm worried about a steep learning curve for the program. Is there a tutorial you might recommend?