The problem – interacting with a 3D hologram using your hands

The issues

When you put your hand into a hologram, grab something, and move it away, how does the computer know what you’re aiming at?

I guess this is a problem of 3D space, and knowing exactly:

– what is being displayed

– where each pixel of light is

– a 3D model of the hand

For example, if someone is touching a cube displayed in midair, we need to know all the points of the cube and where they sit in the 3D space of the real world – in effect, we treat the cube as if it were a physical cube. Once we have that, we need to model the other 3D objects in the same world, such as the hand. Once the hand is modelled and mapped into that shared 3D space, we can model the two objects interacting with each other, even though one of them isn’t real (which doesn’t matter).
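
To make that concrete, here’s a minimal sketch in Python. All the positions and the `fingertip_touches_cube` helper are my own made-up assumptions, not a real tracker API: the cube is just an axis-aligned box in world coordinates, and “touching” becomes a containment test on a tracked fingertip point.

```python
import numpy as np

# Assumed world-space model (positions in metres, made up for illustration):
# the displayed cube is an axis-aligned box, and the hand is reduced to a
# single tracked fingertip point.
cube_min = np.array([0.10, 0.00, 0.50])  # one corner of the cube
cube_max = np.array([0.20, 0.10, 0.60])  # the opposite corner

def fingertip_touches_cube(fingertip, box_min, box_max):
    """True if the fingertip point lies inside the cube's volume."""
    return bool(np.all(fingertip >= box_min) and np.all(fingertip <= box_max))

# A fingertip position as a hand tracker might report it.
fingertip = np.array([0.15, 0.05, 0.55])
print(fingertip_touches_cube(fingertip, cube_min, cube_max))  # True
```

A real system would test the whole hand mesh against the object with proper collision detection, but the principle is the same: both objects live in one shared coordinate system, and “touching” is just geometry.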

Even once the computer knows where the two objects are, that doesn’t mean the person knows where their hand is in relation to the object. That depends on their perspective, i.e. where their eyes are in relation to their hand and to the object they’re interacting with.

The next thing is to make sure we’re interacting with the object we think we’re interacting with. For instance, because of the angle we’re viewing the scene from, we could appear to be touching one box while our hand is actually at the position of a second box. So we need to look at the person interacting with the object and trace a line of sight from their eye, through their hand, to the object they appear to be touching.
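
Here’s a rough sketch of that line-of-sight test, again in Python with made-up positions and a standard ray–box “slab” intersection test (none of this is a real hologram API): we cast a ray from the eye through the fingertip and pick whichever box it reaches first.

```python
import numpy as np

def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: if the ray origin + t*direction enters the box for some
    t >= 0, return that entry distance t; otherwise return None."""
    with np.errstate(divide="ignore", invalid="ignore"):
        t1 = (box_min - origin) / direction
        t2 = (box_max - origin) / direction
    t_near = np.max(np.minimum(t1, t2))  # latest entry across the three slabs
    t_far = np.min(np.maximum(t1, t2))   # earliest exit across the three slabs
    if t_near <= t_far and t_far >= 0:
        return max(t_near, 0.0)
    return None

# Made-up positions (metres): the viewer's eye, their fingertip, and two boxes.
eye = np.array([0.0, 0.0, 0.0])
fingertip = np.array([0.05, 0.0, 0.5])
boxes = {
    "box_1": (np.array([0.08, -0.05, 0.9]), np.array([0.18, 0.05, 1.0])),
    "box_2": (np.array([0.30, -0.05, 0.9]), np.array([0.40, 0.05, 1.0])),
}

# The line of sight runs from the eye through the fingertip and beyond;
# whichever box it reaches first is the one the person thinks they're touching.
direction = fingertip - eye
direction = direction / np.linalg.norm(direction)

hits = {name: t for name, (lo, hi) in boxes.items()
        if (t := ray_hits_box(eye, direction, lo, hi)) is not None}
print(min(hits, key=hits.get) if hits else "no box on the line of sight")
# -> box_1
```

In practice you’d cast this ray against every object in the scene and take the nearest hit, which is what the comprehension does here for the two boxes.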