So, this is interesting - the infrared projector and camera in the Microsoft Kinect are what give it depth perception, which is how it captures and recognizes objects.
Put one on a drone and it's like having sonar.
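Just to make the depth-sensing idea concrete, here is a minimal sketch of grabbing one depth frame from a first-generation Kinect. It assumes the libfreenect Python bindings (the `freenect` module) and a Kinect plugged in over USB; the interpretation of the raw values is approximate.

```python
# A minimal sketch, assuming the libfreenect Python bindings ("freenect")
# and a first-generation Kinect connected over USB.
import freenect
import numpy as np

def grab_depth_frame():
    # sync_get_depth() returns (depth_array, timestamp); each pixel is a
    # raw 11-bit depth reading on a 480x640 grid.
    depth, _timestamp = freenect.sync_get_depth()
    return np.asarray(depth)

if __name__ == "__main__":
    frame = grab_depth_frame()
    print("depth frame shape:", frame.shape)
    print("nearest raw reading:", frame.min())
```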
There was a time when you could create 3D models with such a camera - take a picture and you get a model of whatever you shoot. This is how motion capture works, to some extent - the geometry of the models isn't clean, but depending on the purpose that doesn't matter. That's fine for something subjective like a model of a person, but it doesn't work (I don't think) for something precise like an industrial object.
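The "model what you shoot" step is basically back-projecting each depth pixel into 3D. A rough sketch, assuming a simple pinhole camera model; the fx, fy, cx, cy values here are illustrative placeholders, not calibrated Kinect intrinsics:

```python
# Turn one depth frame into a point cloud via a pinhole camera model.
import numpy as np

def depth_to_points(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """depth_m: HxW array of depths in metres (0 = no reading) -> Nx3 points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading

# e.g. points = depth_to_points(depth_in_metres)
```

The resulting geometry is noisy and full of holes, which is exactly the "not clean" problem mentioned above - fine for a rough human figure, not for a part that has to hit tolerances.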
Example: superimpose infrared scans of multiple operators onto an infrared scan of a location, and relay the combination into one virtual space. If the elements/objects in the 'shared' location serve as interfaces for affecting things in the real world, the result can be a collaborative space.
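A toy sketch of that "shared location" idea: each operator's point cloud gets a rigid transform (rotation plus translation) placing it into the location scan's coordinate frame, and everything is merged into one cloud. The transforms and random clouds here are invented for illustration.

```python
import numpy as np

def place_in_room(points, rotation, translation):
    """points: Nx3, rotation: 3x3, translation: 3 -> Nx3 in the room frame."""
    return points @ rotation.T + translation

room_scan = np.random.rand(1000, 3) * 5.0   # stand-in for the location scan
operator_a = np.random.rand(500, 3)          # stand-in operator scans
operator_b = np.random.rand(500, 3)

identity = np.eye(3)
shared_space = np.vstack([
    room_scan,
    place_in_room(operator_a, identity, np.array([1.0, 0.0, 0.0])),
    place_in_room(operator_b, identity, np.array([3.0, 0.0, 0.0])),
])
print(shared_space.shape)   # one combined cloud = the collaborative space
```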
Example: a remote team of operators controlling robots in an industrial plant. Or a remote army controlling robot drones. The physical dynamics of the remote location can be simulated in the virtual space to account for collisions, gravity, light, darkness, reflections, refractions, etc.
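For the dynamics part, here is a bare-bones sketch of what simulating the remote site on the virtual side looks like: point masses under gravity with a hard floor at z = 0. A real setup would use a proper physics engine (Bullet, for instance), but the shape of the loop is the same.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])
DT = 1.0 / 60.0                       # 60 Hz simulation step

def step(positions, velocities):
    velocities = velocities + GRAVITY * DT
    positions = positions + velocities * DT
    below = positions[:, 2] < 0.0     # crude floor collision: clamp and bounce
    positions[below, 2] = 0.0
    velocities[below, 2] *= -0.5      # lose half the speed on impact
    return positions, velocities

pos = np.array([[0.0, 0.0, 2.0]])     # a drone-ish object two metres up
vel = np.zeros_like(pos)
for _ in range(120):                  # simulate two seconds
    pos, vel = step(pos, vel)
print(pos)
```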