October 19, 2018

Understanding the Depth Sensor and Gesture Recognition – Some Interesting Videos

I found some interesting videos about Kinect; the technology is directly related to HoloLens. Hope you enjoy.

The first video is "How the Kinect Depth Sensor Works in 2 Minutes". It illustrates how the depth sensor works.

Two cameras at two different viewpoints, plus the object itself, form a triangle, so the depth of the object can be calculated by triangulation. The two-camera approach has limitations, though: it needs matchable features, so uniform or same-colored surfaces are hard to resolve. Kinect improves on this by projecting an infrared speckle pattern that codes each spot, so the sensor can tell points apart even when they are the same color. I am not sure whether the sunlight limitation of IR applies to the sensor in HoloLens, though. Below is a rough sketch of the triangulation idea.
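
Here is a minimal sketch of the depth-from-disparity math. All the numbers (focal length, baseline, disparity) are made up purely for illustration; they are not Kinect's actual specs.

```python
# A minimal sketch of depth-from-disparity triangulation, with assumed numbers.
# Two cameras a known distance apart (the baseline) see the same point at
# slightly different horizontal positions; that pixel shift is the disparity,
# and depth falls out of similar triangles: Z = f * B / d.

focal_length_px = 580.0   # assumed focal length in pixels
baseline_m = 0.075        # assumed distance between the two viewpoints, meters
disparity_px = 22.0       # assumed pixel shift of the same point between views

depth_m = focal_length_px * baseline_m / disparity_px
print(f"Estimated depth: {depth_m:.2f} m")   # ~1.98 m

# The smaller the disparity, the farther the point. A textureless wall of a
# single color gives no reliable matches, which is why Kinect projects an IR
# speckle pattern so every spot has a unique, matchable "texture".
```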

The second video is the work of Microsoft MVP Josh Blake of InfoStrat. He set up a side-by-side test of Kinect v1 and Kinect v2 and found a drastic improvement in response time and detail. As you can see from the video, Kinect v2 detects much more skeletal detail of his body (ignore the legs), especially his hands: not only hand-closed and hand-open, but finger pushes and one or two fingers raised. I am assuming HoloLens is in the v3 class. A rough sketch of how such coarse hand states could map to interactions follows.
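
To make those hand states concrete, here is a hypothetical Python sketch. It is not the actual Kinect SDK (which is C#/C++); the names HandState and handle_frame are my own inventions for illustration.

```python
# Hypothetical sketch: mapping the coarse per-frame hand states a v2-class
# sensor can report onto UI actions. All names here are assumptions, not a
# real SDK.

from enum import Enum

class HandState(Enum):
    UNKNOWN = 0
    OPEN = 1      # all fingers extended
    CLOSED = 2    # fist
    LASSO = 3     # one or two fingers raised (the "lasso" pose)

def handle_frame(hand_state: HandState) -> str:
    """Translate one frame's hand state into a UI action."""
    if hand_state is HandState.CLOSED:
        return "grab / drag"
    if hand_state is HandState.OPEN:
        return "release"
    if hand_state is HandState.LASSO:
        return "point / select"
    return "no action"

print(handle_frame(HandState.LASSO))  # -> "point / select"
```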

The third one demonstrates the use of Kinect in a retail clothing scenario: trying on clothes without going to the fitting room. The concept is old, but its value is there. I hope HoloLens can popularize this idea.

HoloLens, Recommended Readings – Business Ideas Collection

Two Important Questions About HoloLens

I have written many posts about HoloLens: how it can transform businesses, entertainment, and our daily lives. It is an amazing piece of technology if it is implemented properly. I have had two questions in mind for some time, and I don't have the answers. I do, however, want to share my analysis and opinions. I hope you find them helpful.

The two questions are: 1. How responsive is HoloLens to hand gestures, voice, and gaze? 2. How does it feel to wear HoloLens, considering its weight (400 grams)?

In my opinion, these are the two critical questions about HoloLens. They matter not only because they directly affect the user experience, but also because they are the two areas where we have questions and no clear answers, while we can be reasonably confident about everything else. Now, let's do some analysis.

Hand gestures, voice, and gaze are said to be the main user interfaces for HoloLens. For a user interface, the number-one requirement is responsiveness: not sluggish, but clean and crisp. Some journalists who were shown the prototype devices reported a tiny but noticeable delay, but what they saw came from prototypes and is not necessarily representative of the final product. In the on-stage demo shown to the public, the response seemed very clean. In any case, this is an important question, and I am interested in finding out the answer.

The weight is another thing that deserves some analysis. Microsoft has said the final product will weigh 400 grams, the same as a high-end helmet. However, we can't directly equate a 400-gram helmet with a 400-gram HoloLens; they are different shapes. In a helmet, the weight is distributed roughly uniformly over the surface of a half sphere, while HoloLens is a ring, actually a double ring. 400 grams is tiny for a computer, but it is noticeable when it sits on your head in the shape of a ring. I would like to know how it feels when you turn your head, and how it feels when you wear it for a long time. I am wondering if the double-ring design has something to do with this, in addition to dealing with heat. I hope so. A back-of-the-envelope comparison is below.
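
For fun, here is a back-of-the-envelope comparison, not anything Microsoft has published: the mass is the quoted 400 grams, but the head radius is my own guess, and both shapes are idealized as thin shells.

```python
# Rough sketch: why a ring-shaped 400 g device can feel different from a
# helmet of the same mass when you turn your head. The radius is an assumed
# value; both shapes are idealized.

mass_kg = 0.4        # the quoted ~400 g
radius_m = 0.10      # assumed effective radius around the head

# Moment of inertia about the vertical (head-turning) axis:
# thin ring about its central axis:          I = m * r^2
# thin hemispherical shell about its axis:   I = (2/3) * m * r^2
ring_inertia = mass_kg * radius_m ** 2
helmet_inertia = (2.0 / 3.0) * mass_kg * radius_m ** 2

print(f"ring:   {ring_inertia * 1e3:.1f} g*m^2")    # ~4.0
print(f"helmet: {helmet_inertia * 1e3:.1f} g*m^2")  # ~2.7

# Same mass, but the ring concentrates it at the rim, so its rotational
# inertia is about 50% higher -- one plausible reason weight distribution,
# not just total weight, matters for comfort.
```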