

Today I submitted a multi-touch Kinect driver for Windows 7 to the Multi-Touch Vista project. Windows 7 multitouch applications can now be used with the Kinect.

Specification:

Source:
Download
 
Binary:
Download

Prerequisites:
OpenNI 1.3.2 or above, NITE 1.4.1.2 or above, and Multi-Touch Vista.
How to install:
Extract the files to the “AddIns” folder, then select NITEProvider by running Multitouch.Configuration.WPF.exe.
Features:
Wave gesture to gain focus.
Grab gesture: to touch, perform a grab gesture and move your hand while keeping the fist closed. Opening the fist removes the contact from the screen. A green dot means no contact with the screen, yellow means a grab gesture can now be performed to touch the screen, and red means the point is in contact with the screen. The grab gesture should be performed with the hand facing the sensor.
Remark: The grab gesture needs improvement. You have to steady your hand first, and only then can you perform the grab gesture.
Future work: A push gesture should be added to generate touch events instead of the grab gesture. (The push gesture is not supported for multiple hand points by NITE in C#.) Code will be submitted soon.


Kinect Cursor Control

When you get the Kinect, the first thing you want to do is control your PC with it. That's what I did. Well, it's not that difficult. With C#, OpenCV, and NITE it is even easier: just get the hand point and set the cursor. Done! But here is the problem: the experience is not as good as with a mouse. The cursor is jerky, and on top of that you have to move your hand a lot to reach the corners of the window.

Check out the video. In it you can see that fine hand movements control the cursor easily. Cursor movement is pretty smooth, and you don't need to move your hand far to reach the corner of the window.

Let's see how a mouse works. Everyone knows it measures the difference in movement and sets the cursor position accordingly. We could do the same thing, but let's take a different approach. We have to achieve two things:

  1. Control the cursor with your hand, moving it within a comfortable range while still reaching every corner of the window.
  2. Mouse movement should be smooth and controllable.

Addressing the first problem, we can see that we have to map movement of the hand in a small space to a bigger space. Basically, one can do a linear mapping. We use the following notation in the rest of the article:

  • rx = projective x of the hand point
  • ry = projective y of the hand point
  • cx = cursor x position
  • cy = cursor y position
  • rwidth = width of the bounding box around the hand point
  • rheight = height of the bounding box around the hand point
  • swidth = screen width
  • sheight = screen height

Let me elaborate on “rwidth” and “rheight”. What we do is define our comfort zone. Let's say you feel comfortable moving your hand 10 cm left and right, and the same up and down, to navigate the entire screen. The trick is that when the hand point is created, we create a virtual bounding box of rwidth by rheight with the hand point at its center, and measure rx and ry relative to that box. Now we are ready to test. Try this:

cx = rx * (swidth / rwidth)

cy = ry * (sheight / rheight)
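Here is a minimal sketch in C# of how the virtual box and the linear mapping fit together. The handler names and the clamping are my assumptions, not from the original code; rwidth, rheight, swidth, and sheight are the values defined above.

double boxLeft, boxTop;   // top-left corner of the virtual bounding box

// When the hand point is created, centre the box on it.
void OnHandCreate(double px, double py)
{
    boxLeft = px - rwidth / 2.0;
    boxTop  = py - rheight / 2.0;
}

// On every update, measure the hand position relative to the box and scale.
void OnHandUpdate(double px, double py)
{
    double rx = Math.Min(Math.Max(px - boxLeft, 0), rwidth);   // clamp to the box
    double ry = Math.Min(Math.Max(py - boxTop, 0), rheight);
    double cx = rx * (swidth / rwidth);
    double cy = ry * (sheight / rheight);
    // (cx, cy) is the new cursor position.
}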

It works, but not so well. It doesn't feel good: the cursor seems to jump, because the relation between cx and rx (and between cy and ry) is linear, a straight line through the origin, so even the smallest hand tremor is scaled up by the full swidth/rwidth factor.

The solution to this problem is to make the relationship between cx and rx, and between cy and ry, polynomial:

cx = pow(rx, n) * (swidth / pow(rwidth, n))

cy = pow(ry, n) * (sheight / pow(rheight, n))

You can now try different values of n; n = 1.5 to n = 3 works great. The following figures show the mapping for rwidth = 20 and swidth = 100 with different values of n.

[Figures: mapping curves for n = 1.7, n = 2, and n = 3, with rwidth = 20 and swidth = 100]
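To get a feel for the curve, take rwidth = 20, swidth = 100, and n = 2. Then cx = rx² · (100/400) = rx²/4: rx = 2 gives cx = 1, rx = 10 gives cx = 25, and rx = 20 gives cx = 100. A small change in rx near 0 barely moves the cursor, while the same change near rwidth sweeps a much larger share of the screen.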

Try different values of n and see what happens. I don't think I need to explain the graphs; you are smart enough to figure them out.

So finally the mouse movement is somewhat good, but... still not satisfying. The cursor is still a little bit jumpy.

Now let's address the second goal: cursor movement should be smooth. We will use a moving average. Get yourself MATLAB and see how a moving average smooths a curve. Here is the pseudocode in C#; try playing with different window sizes. size = 3 with n = 1.7 gives better control.

double n = 1.7;                          // mapping exponent
const int size = 5;                      // moving-average window size
int window_counter = 0;
Point[] point_window = new Point[size];  // previous smoothed positions

// Called whenever the hand point is updated (rx, ry, rwidth, rheight,
// swidth, sheight as defined above).
void Point_update(Point handPoint)
{
    // Polynomial mapping from the bounding box to the screen.
    double cx = Math.Pow(rx, n) * swidth / Math.Pow(rwidth, n);
    double cy = Math.Pow(ry, n) * sheight / Math.Pow(rheight, n);

    // Moving average over the current point and the stored window.
    double avgx = cx;
    double avgy = cy;
    for (int i = 0; i < size; i++)
    {
        avgx += point_window[i].X;
        avgy += point_window[i].Y;
    }
    avgx /= size + 1;
    avgy /= size + 1;

    // Store the smoothed point in the circular window.
    if (window_counter > size - 1)
        window_counter = 0;
    point_window[window_counter++] = new Point(avgx, avgy);

    // (avgx, avgy) is the smoothed cursor position.
}
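To actually move the Windows cursor with the smoothed position, one option (my assumption, since the post does not show this part) is the Win32 SetCursorPos API via P/Invoke:

using System.Runtime.InteropServices;

[DllImport("user32.dll")]
static extern bool SetCursorPos(int X, int Y);

// At the end of Point_update:
SetCursorPos((int)avgx, (int)avgy);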

Make the computer see things the way you see them. Reduce the dimensions.
Various techniques can be used to extract fingertips from a hand, like convex hull points, template matching, etc. One of the most difficult tasks is to extract the hand from the background; the Kinect makes that easy for us. The approach I used to extract finger points is somewhat different. Check out the video: green points show fingertips and red points show valley points.

Extracting the hand is normally a very tedious task, but with the Kinect's hand point and depth image it is easy to do. Let's see how.

Get a Kinect. Set up OpenNI and NITE. Download the C# version of OpenCV. Play with the PointViewer sample; now you know how to get a HandPoint. Here are the steps you can follow.

Background subtraction:

  • Get the hand point.
  • Get the DepthMap.
  • Take a frame of M x M pixels (say 200×200 px) from the DepthMap around the hand point.
  • To subtract the background, get the depth of the HandPoint from the DepthMap. Set every pixel in the frame whose depth differs from it by more than 5 cm to 0, and the rest to 1 (see the sketch after this list).
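A minimal sketch of that thresholding step, assuming the raw depth values are in millimetres (as OpenNI's depth generator provides) and laid out as a 2D array; the names are illustrative:

// Build a binary mask of the hand region around (handX, handY).
byte[,] SegmentHand(ushort[,] depth, int handX, int handY, int m = 200)
{
    ushort handDepth = depth[handY, handX];       // depth of the hand point, in mm
    byte[,] mask = new byte[m, m];
    for (int y = 0; y < m; y++)
    {
        for (int x = 0; x < m; x++)
        {
            int sy = handY - m / 2 + y;           // source coordinates in the depth map
            int sx = handX - m / 2 + x;
            if (sy < 0 || sx < 0 || sy >= depth.GetLength(0) || sx >= depth.GetLength(1))
                continue;                         // outside the map: leave as background
            ushort d = depth[sy, sx];
            // Keep only pixels within ±5 cm (50 mm) of the hand depth.
            mask[y, x] = (d != 0 && Math.Abs(d - handDepth) <= 50) ? (byte)1 : (byte)0;
        }
    }
    return mask;
}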

Hand Extraction:

Here OpenCV saves a lot of time. Extract contours using the cvFindContours function and select the one contour that contains the HandPoint. That contour is our hand.
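As a sketch of that selection step, here is how it could look with OpenCvSharp (a different C# OpenCV wrapper than the one the post uses, so treat the exact API as an assumption):

using OpenCvSharp;

// Return the contour that contains the hand point, or null if none does.
static Point[] FindHandContour(Mat binaryMask, Point handPoint)
{
    Cv2.FindContours(binaryMask, out Point[][] contours, out HierarchyIndex[] hierarchy,
                     RetrievalModes.External, ContourApproximationModes.ApproxSimple);
    foreach (Point[] contour in contours)
    {
        // PointPolygonTest >= 0 means the point lies inside or on the contour.
        if (Cv2.PointPolygonTest(contour, new Point2f(handPoint.X, handPoint.Y), false) >= 0)
            return contour;
    }
    return null;
}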

Fingertip Extraction:

So we have the HandPoint and the hand contour. You could use the convex hull and convexity defects to extract candidate points and then try to identify fingertips, but that approach has many problems. For example, you get more than one convex hull point near a corner, so you need a filter to reduce them to a single point for further processing. Moreover, we are reading a depth map generated by IR projection, so the real-time feed changes from frame to frame: a point may be present in some frames and missing in others. Our algorithm should be able to cope with this variation. Experiment yourself and you will see what I am talking about.

Polar Coordinates:

We have an image of, let's say, 200×200 px. What we will do is make the HandPoint the center of a Cartesian system. Then we find r and theta for each point of the hand contour.

r = sqrt(point.x * point.x + point.y * point.y)

theta = atan2(point.y, point.x)   (atan2 rather than atan(y/x), so the quadrant is preserved)

Now the idea is that the values of r, taken in order along the contour, form a one-dimensional curve. Find the local maxima and minima of that curve. Bingo: the local maxima are probable fingertips and the local minima are probable valley points.
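Finally, a minimal sketch of that maxima search, assuming an ordered, closed contour (as the contour extraction above returns) and the HandPoint as the origin; the neighbourhood size and names are illustrative:

// Indices of contour points whose radius is a local maximum: probable fingertips.
static List<int> FindFingertips(Point[] contour, Point hand, int window = 5)
{
    int len = contour.Length;
    double[] r = new double[len];
    for (int i = 0; i < len; i++)
    {
        double dx = contour[i].X - hand.X;
        double dy = contour[i].Y - hand.Y;
        r[i] = Math.Sqrt(dx * dx + dy * dy);      // radius with the hand point as origin
    }
    var tips = new List<int>();
    for (int i = 0; i < len; i++)
    {
        bool isMax = true;
        for (int k = -window; k <= window && isMax; k++)
        {
            int j = ((i + k) % len + len) % len;  // wrap around the closed contour
            if (r[j] > r[i])
                isMax = false;
        }
        if (isMax)
            tips.Add(i);
    }
    return tips;   // flip the comparison to r[j] < r[i] to get the valley points
}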