
Skeletal Tracking Fundamentals


In the Skeletal Tracking Quickstart video, we'll discuss:

  • How skeleton data provides joint information for 20 joints (head, hands, hip center, etc.)
  • How skeletal tracking works and how you can choose which skeletons to track using tracking IDs
  • How you can tweak TransformSmoothParameters based on your application's needs (responsiveness versus smoothness)
  • How you can use the built-in depth mapping methods to map a skeletal joint's position into depth and color space
  • How you can use the Coding4Fun Kinect Toolkit to scale joint values so that users of your application don't have to fully extend their reach when using a hand as a cursor
  • How to use the SkeletonViewer to visualize all joints returned by the Kinect, including whether each joint is tracked
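As a rough illustration of the responsiveness-versus-smoothness trade-off above, here is a minimal sketch of enabling the skeleton stream with smoothing. The parameter values are illustrative, not recommendations, and `sensor` is assumed to be a `KinectSensor` from the v1 SDK:

```csharp
// Sketch only: enable skeletal tracking with smoothing.
// Higher Smoothing means steadier joints but more latency.
var parameters = new TransformSmoothParameters
{
    Smoothing = 0.5f,          // 0 = raw data, 1 = maximum smoothing
    Correction = 0.5f,         // how quickly to correct toward the raw data
    Prediction = 0.5f,         // how far ahead to predict joint positions
    JitterRadius = 0.05f,      // jitter clamp, in meters
    MaxDeviationRadius = 0.04f // max allowed deviation from raw data, in meters
};
sensor.SkeletonStream.Enable(parameters);
```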


Follow the Discussion

  • Mattia De Rosa

    Is it possible for the Kinect for Windows device to capture gestures from a person seated behind a desk? I understood that the currently available Kinect device is not capable of partial skeleton detection. Is that true? From your sample video it seems the skeleton is clipped (that is, partial tracking), yet you always stand up to be tracked.
    Thank you,
    Mattia

  • Josiah Stapleton

    I would like some help with this please. Does anyone have any source or information on the accuracy of the joint coordinates obtained by the Kinect Sensor? For example, whether the x and y coordinates have an error of +/- 0.05m and the z coordinate an error of +/- 0.07m (Just random values I put to help explain what I am looking for). Any help would be greatly appreciated. Thanks in advance!

    Josiah

  • Hi! I have a question. In the sample code, the ScalePosition method is not working. When I set a breakpoint at

    Joint scaledJoint = joint.ScaleTo(1280, 720); 

    in debug mode, the scaled joint's type is HipCenter, not what I want, and its TrackingState is stuck at NotTracked. I don't know what to do. Please help. Thanks.

  • Dan Fernandez

    @myChan: Try adding a breakpoint only when TrackingState == Tracked. I don't understand how the scaledJoint value could ever be HipCenter if there is no tracking state. Can you post the code that you're using and we'll see if we can't figure out what's going on?
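A sketch of the check Dan suggests, assuming the Quickstart's variable names (`first` for the tracked skeleton, `rightEllipse` for the canvas element): only scale and position a joint once its tracking state is actually Tracked.

```csharp
// Sketch: guard against untracked joints before scaling.
Joint hand = first.Joints[JointType.HandRight];
if (hand.TrackingState == JointTrackingState.Tracked)
{
    // ScaleTo comes from the Coding4Fun Kinect toolkit
    Joint scaled = hand.ScaleTo(1280, 720);
    Canvas.SetLeft(rightEllipse, scaled.Position.X);
    Canvas.SetTop(rightEllipse, scaled.Position.Y);
}
```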

    @Dan: Hmm, that's strange. I didn't modify the sample code. I re-downloaded it, but the same problem occurred. The Joint information is right, but the scaled joint is still incorrect.

    Anyway, here is the code.

    bool closing = false;
            const int skeletonCount = 6; 
            Skeleton[] allSkeletons = new Skeleton[skeletonCount];
    
            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                kinectSensorChooser1.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser1_KinectSensorChanged);
    
            }
    
            void kinectSensorChooser1_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
            {
                KinectSensor old = (KinectSensor)e.OldValue;
    
                StopKinect(old);
    
                KinectSensor sensor = (KinectSensor)e.NewValue;
    
                if (sensor == null)
                {
                    return;
                }
    
                
    
    
                var parameters = new TransformSmoothParameters
                {
                    Smoothing = 0.3f,
                    Correction = 0.0f,
                    Prediction = 0.0f,
                    JitterRadius = 1.0f,
                    MaxDeviationRadius = 0.5f
                };
                // a second parameterless Enable() call here would discard the smoothing parameters
                sensor.SkeletonStream.Enable(parameters);
    
                sensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(sensor_AllFramesReady);
                sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30); 
                sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    
                try
                {
                    sensor.Start();
                }
                catch (System.IO.IOException)
                {
                    kinectSensorChooser1.AppConflictOccurred();
                }
            }
    
            void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
            {
                if (closing)
                {
                    return;
                }
    
                //Get a skeleton
                Skeleton first =  GetFirstSkeleton(e);
    
                if (first == null)
                {
                    return; 
                }
    
    
    
                //set scaled position
                //ScalePosition(headImage, first.Joints[JointType.Head]);
                ScalePosition(leftEllipse, first.Joints[JointType.HandLeft]);
                ScalePosition(rightEllipse, first.Joints[JointType.HandRight]);
    
                GetCameraPoint(first, e); 
    
            }
    
            void GetCameraPoint(Skeleton first, AllFramesReadyEventArgs e)
            {
    
                using (DepthImageFrame depth = e.OpenDepthImageFrame())
                {
                    if (depth == null ||
                        kinectSensorChooser1.Kinect == null)
                    {
                        return;
                    }
                    
    
                    //Map a joint location to a point on the depth map
                    //head
                    DepthImagePoint headDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
                    //left hand
                    DepthImagePoint leftDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.HandLeft].Position);
                    //right hand
                    DepthImagePoint rightDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.HandRight].Position);
    
    
                    //Map a depth point to a point on the color image
                    //head
                    ColorImagePoint headColorPoint =
                        depth.MapToColorImagePoint(headDepthPoint.X, headDepthPoint.Y,
                        ColorImageFormat.RgbResolution640x480Fps30);
                    //left hand
                    ColorImagePoint leftColorPoint =
                        depth.MapToColorImagePoint(leftDepthPoint.X, leftDepthPoint.Y,
                        ColorImageFormat.RgbResolution640x480Fps30);
                    //right hand
                    ColorImagePoint rightColorPoint =
                        depth.MapToColorImagePoint(rightDepthPoint.X, rightDepthPoint.Y,
                        ColorImageFormat.RgbResolution640x480Fps30);
    
    
                    //Set location
                    CameraPosition(headImage, headColorPoint);
                    CameraPosition(leftEllipse, leftColorPoint);
                    CameraPosition(rightEllipse, rightColorPoint);
                }        
            }
    
    
            Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
            {
                using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
                {
                    if (skeletonFrameData == null)
                    {
                        return null; 
                    }
    
                    
                    skeletonFrameData.CopySkeletonDataTo(allSkeletons);
    
                    //get the first tracked skeleton
                    Skeleton first = (from s in allSkeletons
                                             where s.TrackingState == SkeletonTrackingState.Tracked
                                             select s).FirstOrDefault();
    
                    return first;
    
                }
            }
    
            private void StopKinect(KinectSensor sensor)
            {
                if (sensor != null)
                {
                    if (sensor.IsRunning)
                    {
                        //stop sensor 
                        sensor.Stop();
    
                        //stop audio if not null
                        if (sensor.AudioSource != null)
                        {
                            sensor.AudioSource.Stop();
                        }
    
    
                    }
                }
            }
    
            private void CameraPosition(FrameworkElement element, ColorImagePoint point)
            {
                //Divide by 2 for width and height so point is right in the middle 
                // instead of in top/left corner
                Canvas.SetLeft(element, point.X - element.Width / 2);
                Canvas.SetTop(element, point.Y - element.Height / 2);
    
            }
    
            private void ScalePosition(FrameworkElement element, Joint joint)
            {
                //convert the value to X/Y
                Joint scaledJoint = joint.ScaleTo(1280, 720); 
                
                //convert & scale (.3 = means 1/3 of joint distance)
                //Joint scaledJoint = joint.ScaleTo(1280, 720, .3f, .3f);
    
                Canvas.SetLeft(element, scaledJoint.Position.X);
                Canvas.SetTop(element, scaledJoint.Position.Y); 
                
            }
    
    
            private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
            {
                closing = true; 
                StopKinect(kinectSensorChooser1.Kinect); 
            }

    The only section I modified is the ScalePosition section.

    Thanks for your help!

  • Dan Fernandez

    @myChan: Just so I understand the problem: when you put a breakpoint in the ScalePosition method, before the method executes, the value of the joint is HipCenter and its position is not tracked?

    @Dan: Definitely. Can you explain why this happens? I can't find the reason. Thanks!

  • I solved the problem. Thanks for your help!

  • Blas

    So I followed your tutorial and cannot get the ScaleTo method to be recognized on the Joint. Any idea why? I went and downloaded the sample code, but still cannot get it to work. What did you do, myChan?

  • @Dan:

    Hi... I'm just wondering: how do you do the hover button with SDK v1?

    Refer to this link:

    http://channel9.msdn.com/coding4fun/kinect/Recreating-the-Kinect-Hub#c634646980102097278

    Anyone know?

    Need help. Thanks Smiley

  • @Blas: I think you should check the GetCameraPoint() method. To scale the skeleton position, the CameraPosition() calls must be commented out. And after you uncomment the ScalePosition() calls in the sensor_AllFramesReady event, you get the right result!

  • hi!

    @myChan:

    thanks! problem solved.

    Hmm... but how about hovering over a button to select/click a target?

  • Blas

    I tried that, but nothing. I have your code exactly; it's just the one line, line 180, where you call the joint.ScaleTo(1280, 720) method. It isn't recognizing the ScaleTo method even though I just downloaded this: http://c4fkinect.codeplex.com/ But it's still not working for me. Where is the ScaleTo method on Joint?

  • @Blas: I think ScaleTo() is a built-in function on the joint. After you uncomment line 180, you should comment out the CameraPosition() calls (lines 118–121); then you get the right answer. Maybe CameraPosition() takes priority over ScalePosition(), I think.

  • @kendrick0772: Maybe you can get some inspiration here. Check this out.

    http://social.msdn.microsoft.com/forums/en-US/wpf/thread/f2dbc76e-dd54-4be6-b7d8-a72b1f1296ac/

  • Blas

    @myChan: Well, I can't do that... like I said, I get an error that the ScaleTo method does not exist. Where is it? Where can I find it? Why does that error not show up in his code?

  • Blas

    Never mind it works fine now!

  • Hello. I have problems with speed. You can find all details at http://social.msdn.microsoft.com/Forums/en-US/kinectsdk/thread/fe9df13c-94ba-4c4e-805e-51d4b33783fd

    Do you have a solution?

  • Is there any sample code in C++ to get the X, Y, Z coordinates of the joints?

     

  • Blas

    If I wanted to add images to the camera view and interact with them, how would I detect whether either of my hands is touching an image? I tried using the PointToScreen method, but I don't think that will work. Any ideas?

  • Vijay

    Hello Dan,
    I have the same question as Mattia De Rosa.
    "Is it possible for kinect windows device to capture gestures of a person when sited down behind a desk? I understood that the currently available kinect device is not capable of partial skeleton detection. Is that true? from your sample video it seem that skeleton is clipped (that is a partial tracking), nevertheless you always stand up to be tracked.
    thank you
    mattia"

    Just to summarize is it possible to capture the hand gestures of a person seated behind a desk?

    Thanks,
    Vijay

  • @myChan:

    How are you?

    Thanks for the reply!!! Wonderful!

    Hmm, I've got a question to ask!

    In the latest SDK v1, how do you make the cursor move no matter whether you are using your left or right hand?

    Any sample code for that?

    Anyone know?

    Thanks in advance! Smiley

  • lucabertinetto

    Thanks for the tutorial!
    Using sensor.SkeletonStream.Enable() (so no smoothing parameters), I still have a significant delay. Can I improve the response further?
    Or maybe it is a problem with my not-so-efficient laptop?

    Thanks in advance ;)

  • Jorge Huerta

    Hi Dan,
    Thanks a lot for your support with these tutorials; they are helpful. I'm trying to build an application, but I need to detect the pixel color. Do you know how to do that?
    Thanks,
    Regards.

  • Paola Sandoval

    Hi Dan,
    For my application I would really like to be able to scale the joint distance, but I can't get the ScaleTo method to be recognized on the Joint.
    It's this line I'm having trouble with:
    Joint scaledJoint = joint.ScaleTo(1280, 720);

    ScaleTo is not recognized. Is it because I'm missing a library or something of the sort?
    Thanks,
    Cheers!
    Paola

  • Dan Fernandez

    , Vijay wrote

    Hello Dan,
    I have the same question as Mattia De Rosa.
    "Is it possible for kinect windows device to capture gestures of a person when sited down behind a desk? I understood that the currently available kinect device is not capable of partial skeleton detection. Is that true? from your sample video it seem that skeleton is clipped (that is a partial tracking), nevertheless you always stand up to be tracked.
    thank you
    mattia"

    Just to summarize is it possible to capture the hand gestures of a person seated behind a desk?

    Thanks,
    Vijay

    Good news: we just recently announced the Kinect SDK 1.5, which will allow seated skeletal tracking! You can read the full blog post here, but the most relevant part for your question is this:

    Also coming is what we call "seated" or "10-joint" skeletal tracking, which provides the capability to track the head, neck and arms of either a seated or standing user. What is extra exciting to me about this functionality is that it will work in both default and near mode!
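For reference, switching to the seated mode described above is a one-property change in SDK 1.5 (sketch, assuming `sensor` is a connected `KinectSensor`):

```csharp
// Sketch (SDK 1.5+): track only the 10 upper-body joints
// (head, neck, and arms) of a seated or standing user.
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.SkeletonStream.Enable();
```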

     

     

  • Dan Fernandez

    , Rogerhk wrote

    Are there any sample code in C++ to get the X, Y, Z coordinate of the joints?

     

    Yes, check out the C++ Skeletal Viewer example that ships in the SDK. You can filter by language and select just the C++ samples in the Kinect for Windows SDK Sample Browser app that ships with the SDK.

    Cheers,

    -Dan

  • Dan Fernandez

    , lucabertinetto wrote

    Thanks for the tutorial!
    Using sensor.SkeletonStream.Enable() (no 'smoothing' parameters then) I still have a significative delay. Can I further improve the response?
    Or maybe it is a problem with my not-so-efficient laptop?

    Thanks in advance Wink

    Hmm, as a baseline, can you try running the Kinect Explorer app and see what Frames Per Second (FPS) number you get? The FPS number is located under the depth camera in that sample.

  • Dan Fernandez

    , Jorge Huerta wrote

    Hi Dan,
    ... I'm trying to generate an aplication but I need to detect the pixel color, do you know how to do that?.
    Thanks,
    Regards.

    Yes, in the Camera Fundamentals video we show how to get all of the pixels into a byte array (snippet below):

               

    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame == null)
        {
            return;
        }
        byte[] pixels = new byte[colorFrame.PixelDataLength];
        colorFrame.CopyPixelDataTo(pixels);
        ...
    }

    You can loop through that byte array to get the color of each pixel. At a high level, the array is structured so that the first point (0,0) comes first, with four bytes representing the pixel color in the BGR32 format (BGR = Blue, Green, Red, Empty):

    1. Blue
    2. Green
    3. Red
    4. Empty
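A sketch of indexing into that array for an arbitrary pixel; here `x` and `y` are assumed coordinates, and `pixels`/`colorFrame` come from the snippet above:

```csharp
// Sketch: look up the BGR32 color of the pixel at (x, y).
int index = (y * colorFrame.Width + x) * 4;  // 4 bytes per pixel, row-major
byte blue  = pixels[index];
byte green = pixels[index + 1];
byte red   = pixels[index + 2];              // pixels[index + 3] is the empty byte
```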

     

    Hope this helps,

    -Dan

  • Dan Fernandez

    , Paola Sandoval wrote

    Hi Dan,
    For my application I would really like to be able to scale the joint distance but I can't get the ScaleTo method to recognize for the Joint.
    It's this line I'm having trouble with:
    Joint scaledJoint = joint.ScaleTo(1280, 720);

    ScaleTo is not recognized. Is it because I'm missing a library or something of the sort?
    Thanks,
    Cheers!
    Paola

    Yes, make sure you have a reference to the C4F toolkit project in the list of references for your project. If you don't have it, it is available in the Dependencies folder of the Quickstarts download, or here - http://c4fkinect.codeplex.com/ 
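One detail worth noting: since ScaleTo is an extension method from the toolkit rather than a member of Joint, the file also needs the toolkit's namespace imported, or the compiler won't find it:

```csharp
// Without this using directive (in addition to the assembly reference),
// joint.ScaleTo(...) fails to compile as if the method did not exist.
using Coding4Fun.Kinect.Wpf;
```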

  • Shashank Jere

    Hi Dan,

    Is it possible to limit tracking to just one skeleton, so that when two or more skeletons appear in front of the Kinect, only the skeleton recognized first continues to be tracked? Thanks!

  • Hemanth

    Hello Dan

    Is it possible to get the skeleton coordinates given just a depth image? There is no Kinect involved.

  • Mindaugas

    Hey, help please :) How do I get the number of skeletons in a frame?

  • Dan Fernandez

    , Shashank Jere wrote

    Hi Dan,

    Is it possible to limit the tracking to just one skeleton, so that when two or more skeletons appear in front of the kinect, only the skeleton which is recognized first is continued to be tracked? Thanks!


    Yes, the API you'll want to use is SkeletonStream.ChooseSkeletons(), which takes one or two integers representing which players you want skeletal tracking to follow. You'll also need to decide how to choose which skeleton to track, and you can do this in a number of ways depending on your app. For example, if players must be a certain distance from the Kinect, you can get each player's distance and ignore any players at the wrong distance. Or, similar to Xbox games, you can show a player-selection menu where the person raises a hand over their head: loop through the six skeletons in tracked pairs (check 1 & 2, then 3 & 4, then 5 & 6) and see which skeleton has its hand joint above its head joint.
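A minimal sketch of the distance-based variant Dan describes, assuming the Quickstart's `allSkeletons` array has just been filled by CopySkeletonDataTo and that System.Linq is imported:

```csharp
// Sketch: take over skeleton selection and lock onto the nearest player.
sensor.SkeletonStream.AppChoosesSkeletons = true;   // do this once, at startup

// ...then, each frame, after skeletonFrame.CopySkeletonDataTo(allSkeletons):
Skeleton nearest = allSkeletons
    .Where(s => s.TrackingState != SkeletonTrackingState.NotTracked)
    .OrderBy(s => s.Position.Z)                     // Z = distance from the sensor
    .FirstOrDefault();

if (nearest != null)
{
    sensor.SkeletonStream.ChooseSkeletons(nearest.TrackingId);
}
```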

     

    Hope this helps,

    -Dan

  • Dan Fernandez

    @Mindaugas:

    Hey help plz Smiley how to get how many skeletons count in frame

    The current API will always return an array of six skeletons, even if there is only one person in the room. You will know which of the six is an actual person by checking the TrackingState of the skeleton.
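So counting the people actually in frame is a matter of filtering that six-element array (sketch, assuming `allSkeletons` was filled by CopySkeletonDataTo and System.Linq is imported):

```csharp
// Sketch: count how many of the six slots hold a tracked person.
int trackedCount = allSkeletons.Count(
    s => s.TrackingState == SkeletonTrackingState.Tracked);
```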

  • Myra

    Sir, this tutorial is really very helpful. What I wanted to ask is: how can we perform a button click using a hand gesture?
    Please guide me about it....

  • Dan Fernandez

    Instead of using a built-in button, I'd suggest instead building your own button control. We have an example user control of this on http://c4fkinect.codeplex.com under WPF/Controls/HoverButton. http://c4fkinect.codeplex.com/SourceControl/changeset/view/72114#1215356

    Hope this helps, - Dan

     


  • Myra

    Can you please upload a tutorial on how to use this control in an app?

  • Pramod

    Hi,
    Does anyone know the maximum distance a person can be from the Kinect and still be picked up by skeletal tracking?

  • @Pramod: You'll find the detailed answer in the depth video =)
    If I remember well, it should be 1 to 4 meters for the Xbox 360 Kinect (default mode),
    and 0.5 to 3 meters for the Kinect for Windows, which has near mode enabled.

    Check the video out

  • @Myra: Try using Ray Chambers' video to guide you through the process. It helped me.
    http://raychambers.wordpress.com/2012/04/04/kinect-mix-and-match/

     

  • Hi @Dan,
    Have you ever tried making this app using navigation windows in WPF? I tried, but for some reason it doesn't work.

  • Eric

    @Dan

    Hello Dan,
    I have an important question. I'm a student, and we are currently working on a Kinect project. Your videos are great and give a good introduction to programming with the Kinect.

    I have a problem in the skeletal tracking project. If I switch the ColorViewer to autosize and maximize the window, the ellipses on the hands are no longer in the right position. Can you tell me how to make my app resizable?

  • SKELETON TRACKING PROBLEM

    Hi, I've tried the demo you provide, and I've got problems with the skeletal tracking. Actually, the example is not working well. I've even tried to copy the complete code, but it still isn't working, and by checking variables with breakpoints (first, allSkeletons, ...) it appears the skeletons are not tracked.

    Does anyone know the reason, or have an idea?

     

    Thank you in advance

     

    near

  • SOLVED

     

  • Expressionless  Oops, never mind my question... it's solved.
    Blushing  It seems I keep forgetting to add the Window_Loaded event in the window.

  • Hello

    I want to track Hip Joints for 6 skeletons. Can anyone help with the code?

  • Adila

    Hey, I need a bit of help here. Bear with me, I'm very new to C#. The video kind of jumps straight to the code. I just have a problem setting the ellipses: they don't move. I don't know how to bind them properly. How do you get the ellipses to move after you create them?

  • Michael

    @Dan

    Thanks a lot for this awesome tutorial. I have been trying to apply it in my project, but I keep receiving a warning that says "Warning: An ImageFrame instance was not Disposed." I am sure that I disposed all the frames that I created. Can you please help?

    Here is my code:


    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Kinect;
    using Coding4Fun.Kinect.Wpf;
    using System.Diagnostics;
    using System.IO;

    namespace KinectSkeleton
    {
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
    public MainWindow()
    {
    InitializeComponent();
    }

    bool closing = false;
    const int skeletonCount = 6;
    Skeleton[] allSkeletons = new Skeleton[skeletonCount];

    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
    myKinectSensorChooser.KinectSensorChanged += new DependencyPropertyChangedEventHandler(myKinectSensorChooser_KinectSensorChanged);
    }

    void myKinectSensorChooser_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
    {
    KinectSensor oldSensor = (KinectSensor)e.OldValue;
    if (oldSensor != null)
    {
    oldSensor.Stop();
    oldSensor.AudioSource.Stop();
    }

    KinectSensor mySensor = (KinectSensor)e.NewValue;
    if (mySensor == null)
    return;

    mySensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
    mySensor.ColorStream.Enable();
    mySensor.SkeletonStream.Enable();
    mySensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(mySensor_AllFramesReady);

    try
    {
    mySensor.Start();
    Debug.WriteLine("Starting Sensor .....");
    Debug.WriteLine("The Current Elevation Angle is: " + mySensor.ElevationAngle.ToString());
    mySensor.ElevationAngle = 0;
    }
    catch (System.IO.IOException)
    {
    //another app is using Kinect
    myKinectSensorChooser.AppConflictOccurred();
    }
    }

    void mySensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
    if (closing)
    return;

    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
    if (depthFrame == null)
    return;

    byte[] depthImagePixels = GenerateDepthImage(depthFrame);
    int stride = depthFrame.Width * 4;
    image1.Source =
    BitmapSource.Create(depthFrame.Width, depthFrame.Height,
    96, 96, PixelFormats.Bgr32, null, depthImagePixels, stride);

    //Get a skeleton
    Skeleton first = GetFirstSkeleton(e);
    if (first == null)
    return; //the original early return here skipped depthFrame.Dispose(), causing the warning

    Debug.WriteLine("Head Position is : " + first.Joints[JointType.Head].ToString());
    }
    }

    private byte[] GenerateDepthImage(DepthImageFrame depthFrame)
    {
    //get the raw data from the frame with the depth for every pixel
    short[] rawDepthData = new short[depthFrame.PixelDataLength];
    depthFrame.CopyPixelDataTo(rawDepthData);

    //use frame to create the image to display on-screen
    //frame contains color information for all pixels in image
    //Height x Width x 4 (Red, Green, Blue, empty byte)
    Byte[] pixels = new byte[depthFrame.Height * depthFrame.Width * 4];

    //hardcoded locations to Blue, Green, Red (BGR) index positions
    const int BlueIndex = 0;
    const int GreenIndex = 1;
    const int RedIndex = 2;
    int player, depth;

    //loop through all distances
    //pick a RGB color based on distance
    for (int depthIndex = 0, colorIndex = 0;
    depthIndex < rawDepthData.Length && colorIndex < pixels.Length;
    depthIndex++, colorIndex += 4)
    {
    //get the player (requires skeleton tracking enabled for values)
    player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;

    //gets the depth value
    depth = rawDepthData[depthIndex] >> DepthImageFrame.PlayerIndexBitmaskWidth;

    if (player > 0)
    {
    pixels[colorIndex + BlueIndex] = Colors.Gold.B;
    pixels[colorIndex + GreenIndex] = Colors.Gold.G;
    pixels[colorIndex + RedIndex] = Colors.Gold.R;
    }
    else
    {
    pixels[colorIndex + BlueIndex] = Colors.Green.B;
    pixels[colorIndex + GreenIndex] = Colors.Green.G;
    pixels[colorIndex + RedIndex] = Colors.Green.R;
    }
    }

    return pixels;
    }

    Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
    {
    using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
    {
    if (skeletonFrameData == null)
    {
    return null;
    }
    skeletonFrameData.CopySkeletonDataTo(allSkeletons);

    //get the first tracked skeleton
    Skeleton first = (from s in allSkeletons
    where s.TrackingState == SkeletonTrackingState.Tracked
    select s).FirstOrDefault();

    return first;
    }
    }

    void StopKinect(KinectSensor sensor)
    {
    if (sensor != null)
    {
    if (sensor.IsRunning)
    {
    sensor.ElevationAngle = 0;
    sensor.Stop();
    if (sensor.AudioSource != null)
    {
    sensor.AudioSource.Stop();
    }
    }
    }
    }
    private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
    {
    closing = true;
    Debug.WriteLine("Closing window...");
    StopKinect(myKinectSensorChooser.Kinect);
    }
    }
    }

  • kinjad

    Hi,@Dan.
    I am a student from Tsing Hua University, China. I've been following your tutorial these days. The first four sections went great. However, when I came to the fifth one, Skeletal Tracking Fundamentals, a problem emerged.
    I believe I had followed exactly the steps and code shown in the video, but it just didn't work out. I mean, the Image control and the two ellipses I had set on the MainWindow didn't follow my joints, that is, my head and two hands, as you specified in the video.
    Here is my code, in case you want to check it out.
    bool closing=false;
    const int skeletonCount=6;
    Skeleton[] allSkeletons=new Skeleton[skeletonCount];


    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
    kinectSensorChooser1.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser1_KinectSensorChanged);
    }

    void kinectSensorChooser1_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
    {
    KinectSensor oldSensor = (KinectSensor)e.OldValue;
    StopKinect(oldSensor);
    KinectSensor newSensor = (KinectSensor)e.NewValue;
    if (newSensor == null)
    {
    return;
    }
    newSensor.ColorStream.Enable();
    newSensor.DepthStream.Enable();
        newSensor.SkeletonStream.Enable();
        newSensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(newSensor_AllFramesReady);

        try
        {
            newSensor.Start();
        }
        catch (System.IO.IOException)
        {
            kinectSensorChooser1.AppConflictOccurred();
        }
    }

    void newSensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        if (closing)
        {
            return;
        }

        Skeleton first = GetFirstSkeleton(e);
        if (first == null)
        {
            return;
        }

        GetCameraPoint(first, e);
    }

    void GetCameraPoint(Skeleton first, AllFramesReadyEventArgs e)
    {
        using (DepthImageFrame depth = e.OpenDepthImageFrame())
        {
            if (depth == null || kinectSensorChooser1.Kinect == null)
            {
                return;
            }

            // Map each skeleton joint into depth space
            DepthImagePoint headDepthPoint =
                depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
            DepthImagePoint leftDepthPoint =
                depth.MapFromSkeletonPoint(first.Joints[JointType.HandLeft].Position);
            DepthImagePoint rightDepthPoint =
                depth.MapFromSkeletonPoint(first.Joints[JointType.HandRight].Position);

            // Map each depth point into color space
            ColorImagePoint headColorPoint =
                depth.MapToColorImagePoint(headDepthPoint.X, headDepthPoint.Y,
                    ColorImageFormat.RgbResolution640x480Fps30);
            ColorImagePoint leftColorPoint =
                depth.MapToColorImagePoint(leftDepthPoint.X, leftDepthPoint.Y,
                    ColorImageFormat.RgbResolution640x480Fps30);
            ColorImagePoint rightColorPoint =
                depth.MapToColorImagePoint(rightDepthPoint.X, rightDepthPoint.Y,
                    ColorImageFormat.RgbResolution640x480Fps30);

            CameraPosition(headImage, headColorPoint);
            CameraPosition(leftellipse, leftColorPoint);
            CameraPosition(rightellipse, rightColorPoint);
        }
    }

    private void CameraPosition(FrameworkElement element, ColorImagePoint point)
    {
        // Offset by half the element's size so it is centered on the point
        // instead of anchored at its top/left corner
        Canvas.SetLeft(element, point.X - element.Width / 2);
        Canvas.SetTop(element, point.Y - element.Height / 2);
    }

    Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
    {
        using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
        {
            if (skeletonFrameData == null)
            {
                return null;
            }

            skeletonFrameData.CopySkeletonDataTo(allSkeletons);

            // Return the first tracked skeleton, or null if none is tracked
            Skeleton first = (from s in allSkeletons
                              where s.TrackingState == SkeletonTrackingState.Tracked
                              select s).FirstOrDefault();
            return first;
        }
    }

    void StopKinect(KinectSensor sensor)
    {
        if (sensor != null)
        {
            sensor.Stop();
            sensor.AudioSource.Stop();
        }
    }

    private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
    {
        StopKinect(kinectSensorChooser1.Kinect);
    }

    private void SetTilt_Click(object sender, RoutedEventArgs e)
    {
        SetTilt.IsEnabled = false;
        if (kinectSensorChooser1.Kinect != null && kinectSensorChooser1.Kinect.IsRunning)
        {
            kinectSensorChooser1.Kinect.ElevationAngle = (int)slider1.Value;
        }

        // Give the tilt motor a moment before allowing another move
        System.Threading.Thread.Sleep(new TimeSpan(hours: 0, minutes: 0, seconds: 1));
        SetTilt.IsEnabled = true;
    }

    private void slider1_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
    {
    }
    I was wondering if you could help me a little bit. And I would be so grateful.


  • @Adila & @kinjad: I had a similar problem and it was resolved by removing all 'HorizontalAlignment' and 'VerticalAlignment' properties from the elements you are wanting to move.

     

    Hope that helps,

    Hi Dan,

    When I try to run my code, which basically tracks my right hand and draws ellipses on my shoulder, hand, and elbow joints, I get this error at runtime:

    "NullReferenceException was unhandled", on this line of code:

    Skeleton skeleton = (from s in allSkeletons where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();

    The bold part of the above line is where the error is highlighted in my code.

    Can you help me with this? I've been stuck on it for days now.

    Thanks in advance.

  • Dan FernandezDan

    , ilovekinect wrote

    "NullReferenceException was unhandled", on this line of code:

    Skeleton skeleton = (from s in allSkeletons where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();

    The reason this is null is that there is *no* skeleton (a null skeleton) when the app first starts up. What you want to do is do nothing when there is no skeleton:

    if (skeleton == null)
    {
        return;
    }

  • vikasvikas

    Hi,
    Can anyone mail me or upload the code (C++ with OpenCV) for gesture recognition using Kinect? Here is my mail id:
    vikas.mulage@gmail.com

  • BalajiBalaji

    @Vikas Meet me tom

  • BalajiBalaji

    Hi Dan,
    I just went through the tutorial; it's awesome!
    I'm working on real-time motion retargeting onto a 3D model using Kinect, but I'm unable to proceed after the skeleton data extraction. Please help me out with the further steps.
    Thanks,
    Regards

  • ArtArt

    Hi Dan,

    Would it be possible with the toolkit to do a mob counter or a crowd counter?

    That is, associate face images with a square?

    It would not use the skeleton; instead you associate the image with the face, and an algorithm counts the squares.

    Can you help me implement this?

    Something like this video:

    http://www.youtube.com/watch?v=RYuDiQDM0MM&feature=related

    Thanks,

    Art

  • regiusregius

    Hello Dan,

    I am trying to control a drone with the Kinect. Do I use the skeleton and depth data for that?

    I am running out of time, please help.

  • PerryPerry

    Hi Dan,

    The sample code runs perfectly on my computer; however, when exiting the application, an exception happened on line 177 of KinectSkeletonViewer.xaml.cs where the line reads "ColorImagePoint colorPoint = depthFrame.MapToColorImagePoint(depthPoint.X, depthPoint.Y, this.Kinect.ColorStream.Format);".

    The warning message is shown as "InvalidOperationException was unhandled
    This API has returned an exception from an HRESULT: 0x80070015".

  • Dan FernandezDan

    @Perry: You may want to try the latest release of the Kinect for Windows SDK (1.5.2), as I think this should be fixed.

  • Hi Dan,

    Awesome! Thanks for the video.

    I've tried out the skeleton tracking (full body) and it works well for the standing/vertical position. However, when I try to do a bit of exercise that requires lying down, so the skeleton is in more of a horizontal position, the tracking does not work well. May I check with you whether there is any way to work around this problem? Or will there be a future release of the SDK that incorporates lying-down/horizontal skeleton tracking?

    Please help.

    Thanks,

    Viet 

  • @Dan: but I go here http://www.microsoft.com/en-us/kinectforwindows/Develop/Developer-Downloads.aspx and the SDK is still 1.5, the toolkit is the one that have been updated. I'm getting the same "InvalidOperationException was unhandled This API has returned an exception from an HRESULT: 0x80070015" and don't really know what to do, cause it's related to the SkeletonViewer.

    If I delete that viewer from the mainWindow.xaml, that message doesn't appear (not even at closing the window), but that means that I cannot render the Skeletons on Screen automatically.

    Any ideas? Thanks.

  • IztokIztok

    Hello!

    I have a question.
    I've built the demo as shown above. But when I put ellipses for the joints on the canvas and ran the project, the ellipses didn't appear in front of the color picture. Instead they are in the background, not visible. What did I do wrong?

    Thank you for the answer.

    @Dan: I am trying to use the design editor for the main window of the Visual Basic code sample, but I am getting an Invalid Markup error. No changes to the original code were made. Using Kinect SDK 1.0.

    Thanks!

  • MasonMason

    @Perry and @juanpibanez

    I have a solution to the "InvalidOperationException was unhandled This API has returned an exception from an HRESULT: 0x80070015" and I figured I would post for any future readers.

    I installed version 1.6 of the SDK and the Toolkit (Kinect for Windows) to no avail. After this, I opened KinectSkeletonViewer.xaml.cs and noticed some functions were now obsolete. To make the fix, change the following:

    Lines 172 - 181
    //******************
    CoordinateMapper cm = new CoordinateMapper(this.Kinect);
    DepthImagePoint depthPoint = cm.MapSkeletonPointToDepthPoint(skeletonPoint, depthFrame.Format);

    switch (ImageType)
    {
        case ImageType.Color:
            ColorImagePoint colorPoint = new ColorImagePoint();
            if (this.Kinect != null && this.Kinect.IsRunning)
                colorPoint = cm.MapDepthPointToColorPoint(depthFrame.Format, depthPoint, this.Kinect.ColorStream.Format);

            // map back to skeleton.Width & skeleton.Height
    //******************

    Once I updated these operations to the revised functions, I no longer received the error. Hope this helps!

  • AndrewAndrew

    I used the QuickStart sample code for skeletal tracking and have the latest SDK, but I don't understand why the headImage does not follow my movements. The ellipses follow my hand motions. Why is it that when I run it, the ellipses work, but the headImage automatically goes to the top-left corner of the window and doesn't follow my head motions?

  • AndrewAndrew

    I was partially able to solve the problem. All 3 joints in the code are now tracked and their objects move, but I changed the headImage Image object to an Ellipse object in the .xaml window. The question now is why the headImage Image object did not work but the Ellipse object does.

    I really like your videos; they have been very helpful to me. I want to record a movement (x, y, z) and then compare it with another movement (x, y, z) in real time. The idea is that the user will try to mimic the recorded movement, and the system calculates how similar (0% ~ 100%) the motions are. My current problem is that the skeleton from the recorded movement is different from the skeleton trying to mimic it. I'm working with the idea of a standardized skeleton; do you have any suggestions on how I can record and compare the data in a standardized way?

  • NickBNickB

    Hi,

    I tried to follow the video. Everything compiles, but the ellipses and image are not well mapped in the window. I checked CameraPosition and I do have the same calculation for the "SetLeft" and "SetTop".

    Also, I am unable to download the sample code.

    Thanks for helping

  • BasvmBasvm

    The download link of the source files is dead:(

    Can you fix this?

  • DL Link: http://adf.ly/EP7dk


  • BasvmBasvm

    @SkiRacerDude Thank you:)

  • S FS F

    Hi Channel 9, this is really a beautiful series that you put up.

    I was just wondering whether you have your videos transcribed somewhere for download? I really appreciate the tips I'm learning, but English is not my native language, so sometimes the presenter speaks a bit too fast for me to follow. I would really appreciate a text version of these tutorials.

    Keep up the good work. Thank you very much.

  • @Basvm: Welcome :)

  • saloussalous

    Dear Sir,
    I'm working with Kinect and I have some questions:
    1. Hip center, hip left, and hip right: what exactly do they represent in the body? The hip is not a single point; it is a part consisting of 3 items in the body. Please tell me what exactly they represent.

    2. The depth data gives only the distance between an object and the Kinect. If I have a 3D object like a cube, does it also return the points along the z-axis, or just the points facing the Kinect?

    3. If I need to track points that don't exist among the available joints, can I add other joints/points or not?

  • Adam VictorAdam Victor

    Warning 1 'Microsoft.Kinect.DepthImageFrame.MapFromSkeletonPoint(Microsoft.Kinect.SkeletonPoint)' is obsolete: 'This method is replaced by Microsoft.Kinect.CoordinateMapper.MapSkeletonPointToDepthPoint'

    depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);

    This is the line that it says is obsolete.

    Does anyone know how to fix this problem?
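
    For reference, the replacement the warning points to looks roughly like this (a sketch based on the obsolescence message and Mason's post above; `sensor`, `skeletonPoint`, and `depthFrame` stand in for the corresponding variables in your own code):

    ```csharp
    // Obsolete in SDK 1.6:
    // DepthImagePoint depthPoint = depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);

    // Replacement: route the mapping through CoordinateMapper
    CoordinateMapper mapper = new CoordinateMapper(sensor);
    DepthImagePoint depthPoint =
        mapper.MapSkeletonPointToDepthPoint(skeletonPoint, depthFrame.Format);
    ```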

  • Amit SelaAmit Sela

    Hi,
    I'm trying to run the sample code and came up with a problem: the line "using Microsoft.Kinect;" has a red underline under Kinect. The program runs, but a build error shows beforehand. Does anyone know how I can make the red underline under Kinect disappear?

  • imadimad

    Hi,

    How can I hover over a button with the skeleton's hand, and have the command load an image to track the spine joint, in C#?

    Thanks!

  • PavelPavel

    Thanks for the great tutorial.
    One thing: how do I set the image with a transparent background? I tried to do so, but the background is always white.

  • TariqTariq

    Hi Dan,

    Really helpful tutorial; it has helped me a lot.
    But I am stuck on a very basic problem: how can I record RGB and depth video? I tried to save RGB and depth as .png files (as shown in the ColorBasics example, writing one .png color image) in each event-handler method, but that only achieves 6 fps. How can I record RGB and depth video at 30 fps? Do I need to use multi-threading? I don't know how to use it. Your help with this would be highly appreciated and very helpful for beginners.

    Thanks again.

  • TariqTariq

    Also, the RGB and depth images are not synchronous when I write them this way. That is, for each RGB image there may not be a corresponding depth image.

  • MasonMason

    @Adam Victor See my post above. I believe that is the fix you are looking for.

  • SFMIDSFMID

    Hi Dan,

    Thank you for all the tutorials; they are very useful. I'm doing a school project and I need to recognize some patterns like rectangles and circles, and then know at what depth they are. Do you have an example of this, or somewhere I can find something like it? Thank you again for your tutorials; keep up the good work.

  • Dan FernandezDan

     @SFMID: If you are looking to analyze photos and do things like shape or object recognition, check out OpenCV - http://opencv.org/.

  • NeillNeill

    I am developing a Kinect app which needs to be able to save the joint positions at set intervals. Depth won't matter, so I only need to store the x and y, I think. My initial idea was to do this through image comparison, but I want to use the skeleton data. Any help much appreciated.

  • Muhannad Al-KhudariMuhannadAl​Khudari NUI is my passion

    Please, could you update this sample for the latest SDK? It's really great and it's helping a lot with my graduation project, but now it gives me many warnings with the new SDK. Could you please help?

  • Archit KalraArchit Kalra

    Error 9: The name 'headImage' does not exist in the current context.

    I am getting the same error for "leftEllipse" and "rightEllipse". Can anyone help me debug this issue?

    Any help is appreciated

  • RafedRafed

    Hi Dan,

    I have a strange error with these functions:

    GetFirstSkeleton(e);
    ScalePosition
    GetCameraPoint(first, e);

    All give the same error: "The name does not exist in the current context".

  • Sheldon YoungSheldon Young

    Hi Dan

    What I did to the sample was change the Microsoft.Kinect reference to the new version 1.7.0.0, then change KinectSkeletonViewer.xaml.cs as Mason described earlier. But I still have some problems.

    Firstly, when I close the MainWindow by clicking the X at the top-right corner, the image stops moving. However, the MainWindow is still there and can't be closed. The Windows Task Manager shows that the MainWindow is not responding; only after I stop debugging does the window close. What's wrong with this, and what should I do?

    Secondly, I get 6 warnings about obsolete members on lines 128, 131, 134, 140, 144, and 148. For example:
    'Microsoft.Kinect.DepthImageFrame.MapFromSkeletonPoint(Microsoft.Kinect.SkeletonPoint)' is obsolete: "This method is replaced by Microsoft.Kinect.CoordinateMapper.MapSkeletonPointToDepthPoint"
    I am new to C#, so I don't know how to change that. However, the program still runs fine despite the warnings.

  • AfzalAfzal

    Can anyone please let me know where I can get the code for the skeleton in this video?
    Also, how can I save skeleton images as bitmap or JPG files? I would ask you guys to help me with this; it's very urgent for me.

    Regards,
    Afzal

    Hey all, I'm just curious whether what is shown here is in the latest SDK. The site seems forgotten: no replies, and the Quickstart slides are a 404.

    This specific tutorial helped me with my problem, but I don't really know whether the functions etc. still exist in the newer versions. It would be great if you could repost the Quickstart slides and let us know that these tutorials are the latest ones.

    Regards,

    Alex

  • valentinavalentina

    Hi,
    I currently have a project that needs to use the live stream and the skeleton tracking together. Any idea how to do that? C# programming.

  • thanaponthanapon

    The download link of the source files is dead.

    Can you fix this?

  • @thanapon: I am facing the same issue; a "Page not found" error comes up.

  • gummybeargummybear

    source files alternative download link: https://code.google.com/p/cs161grp1/source/browse/trunk/KinectforWindowsSDKV1.zip?r=46

  • afzal khanafzal khan

    How can I save skeleton images as bitmap or JPG files? I would ask you guys to help me with this; it's very urgent for me.

    Please find the code below.
    It works, but I need to make some specific changes.
    In this code I need to address these problems:

    1. How do I create a stickman without the SkeletonViewer binding?
    2. How can I save the skeleton stream frames as JPGs in a file?

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Kinect;
    using Coding4Fun.Kinect.Wpf;
    using System.Diagnostics;
    using System.IO;

    namespace Play_v_6
    {
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
            }

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                kinectSensorChooser1.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser1_KinectSensorChanged);
            }

            void kinectSensorChooser1_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
            {
                // check for the sensor
                KinectSensor oldsensor = (KinectSensor)e.OldValue;
                StopKinect(oldsensor);

                KinectSensor newsensor = (KinectSensor)e.NewValue;

                // enabling the various streaming modes
                newsensor.DepthStream.Enable();
                newsensor.SkeletonStream.Enable();

                newsensor.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(newsensor_SkeletonFrameReady);
                //newsensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(newsensor_AllFramesReady);
                newsensor.Start();
            }

            // setting the positions of the ellipses with respect to the joints
            public void SetEllipsePosition(Ellipse ellipse, Joint joint)
            {
                Canvas.SetLeft(ellipse, (320 * joint.Position.X) + 320);
                Canvas.SetTop(ellipse, (240 * -joint.Position.Y) + 240);
            }

            // working with the skeleton frame
            Skeleton[] skeletons = null;

            void newsensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {
                using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (skeletonFrame != null)
                    {
                        if (this.skeletons == null)
                        {
                            this.skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
                        }
                        skeletonFrame.CopySkeletonDataTo(this.skeletons);
                        Skeleton skeleton = this.skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();

                        // setting the ellipses on the different joints of the skeleton
                        if (skeleton != null)
                        {
                            SetEllipsePosition(HipCenter, skeleton.Joints[JointType.HipCenter]);
                            SetEllipsePosition(Spine, skeleton.Joints[JointType.Spine]);
                            SetEllipsePosition(ShoulderCenter, skeleton.Joints[JointType.ShoulderCenter]);
                            SetEllipsePosition(Head, skeleton.Joints[JointType.Head]);
                            SetEllipsePosition(ShoulderLeft, skeleton.Joints[JointType.ShoulderLeft]);
                            SetEllipsePosition(ElbowLeft, skeleton.Joints[JointType.ElbowLeft]);
                            SetEllipsePosition(WristLeft, skeleton.Joints[JointType.WristLeft]);
                            SetEllipsePosition(HandLeft, skeleton.Joints[JointType.HandLeft]);
                            SetEllipsePosition(ShoulderRight, skeleton.Joints[JointType.ShoulderRight]);
                            SetEllipsePosition(ElbowRight, skeleton.Joints[JointType.ElbowRight]);
                            SetEllipsePosition(WristRight, skeleton.Joints[JointType.WristRight]);
                            SetEllipsePosition(HandRight, skeleton.Joints[JointType.HandRight]);
                            SetEllipsePosition(HipLeft, skeleton.Joints[JointType.HipLeft]);
                            SetEllipsePosition(KneeLeft, skeleton.Joints[JointType.KneeLeft]);
                            SetEllipsePosition(AnkleLeft, skeleton.Joints[JointType.AnkleLeft]);
                            SetEllipsePosition(FootLeft, skeleton.Joints[JointType.FootLeft]);
                            SetEllipsePosition(HipRight, skeleton.Joints[JointType.HipRight]);
                            SetEllipsePosition(KneeRight, skeleton.Joints[JointType.KneeRight]);
                            SetEllipsePosition(AnkleRight, skeleton.Joints[JointType.AnkleRight]);
                            SetEllipsePosition(FootRight, skeleton.Joints[JointType.FootRight]);
                        }

                        // writing the X Y Z coordinates to a file
                        // the only problem is that it puts a single output in the file; I want the file to be written continuously
                        if (skeleton != null)
                        {
                            Joint j1 = skeleton.Joints[JointType.Head];

                            StreamWriter writer = new StreamWriter("myfile.txt");
                            writer.WriteLine("X Y Z");

                            if (j1.TrackingState == JointTrackingState.Tracked)
                            {
                                //Console.WriteLine("The output of X Y Z coordinates");
                                writer.WriteLine("Head: X AXIS" + j1.Position.X + ",Y AXIS \t " + j1.Position.Y + ",Z AXIS\t " + j1.Position.Z);
                                //writer.Close();
                            }
                            writer.Close();
                        }

                        // this tracks continuously and displays the output on the console
                        // the problem is: I want to write all of the output to the file
                        if (skeleton != null)
                        {
                            Joint j = skeleton.Joints[JointType.KneeLeft];

                            if (j.TrackingState == JointTrackingState.Tracked)
                            {
                                Console.WriteLine("The output of X Y Z coordinates");
                                Console.WriteLine("Head: X AXIS" + j.Position.X + ",Y AXIS \t " + j.Position.Y + ",Z AXIS\t " + j.Position.Z);
                            }
                        }
                    }
                }
            }

            private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
            {
                StopKinect(kinectSensorChooser1.Kinect);
            }

            void StopKinect(KinectSensor sensor)
            {
                if (sensor != null)
                {
                    sensor.Stop();
                }
            }
        }
    }
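
    A note on the single-line output file in the code above: `new StreamWriter("myfile.txt")` truncates the file on every frame, so only the last frame survives. A minimal sketch of one way around this (leaving the rest of the handler as-is, and assuming the "X Y Z" header is written once elsewhere, e.g. in the constructor) is to open the writer in append mode:

    ```csharp
    // Passing append: true keeps earlier frames' lines instead of overwriting them
    using (StreamWriter writer = new StreamWriter("myfile.txt", true))
    {
        if (j1.TrackingState == JointTrackingState.Tracked)
        {
            writer.WriteLine("Head: X AXIS" + j1.Position.X + ",Y AXIS \t " + j1.Position.Y + ",Z AXIS\t " + j1.Position.Z);
        }
    }
    ```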

  • IanIan

    anyone know where I can find a working version of this project? The download link above doesn't work and I am not very skilled at XAML...

  • IanIan

    nvm... should have read everything first...

  • alexalex

    Looking for selection-gesture code: the hand moves towards the Kinect and the item should be selected.

  • Soham JainojiSoham Jainoji

    I can't download the sample example.
    Please help me.

  • BradBrad

    For those having trouble with the ellipses: you have to put them inside a Canvas tag, like so:
    <Canvas> (code for the ellipses) </Canvas>
    in the .xaml file.
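
    As a slightly fuller sketch of that .xaml (the element names and sizes here are assumptions, chosen to match the identifiers used in the code samples above):

    ```xml
    <Canvas>
        <!-- Canvas.SetLeft/SetTop only position elements that are children of a Canvas -->
        <Image Name="headImage" Width="60" Height="60"/>
        <Ellipse Name="leftEllipse" Width="30" Height="30" Fill="Blue"/>
        <Ellipse Name="rightEllipse" Width="30" Height="30" Fill="Red"/>
    </Canvas>
    ```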

  • manelmanel

    I tried to do the same as in the video, but it doesn't work. Can anyone help me?

    This is the code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Data;
    using System.Windows.Documents;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Navigation;
    using System.Windows.Shapes;
    using Microsoft.Kinect;
    using WpfKinectSkeleton;
    using Microsoft.Kinect.Toolkit;
    using Coding4Fun.Kinect.Wpf;

    namespace test_skel_4
    {
        /// <summary>
        /// Interaction logic for MainWindow.xaml
        /// </summary>
        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
            }

            bool closing = false;
            const int skeletonCount = 6;
            Skeleton[] allSkeletons = new Skeleton[skeletonCount];

            private void Window_Loaded(object sender, RoutedEventArgs e)
            {
                kinectSensorChooser1.KinectSensorChanged += new DependencyPropertyChangedEventHandler(kinectSensorChooser1_KinectSensorChanged);
            }

            void kinectSensorChooser1_KinectSensorChanged(object sender, DependencyPropertyChangedEventArgs e)
            {
                KinectSensor old = (KinectSensor)e.OldValue;
                StopKinect(old);

                KinectSensor sensor = (KinectSensor)e.NewValue;
                if (sensor == null)
                {
                    return;
                }

                var parameters = new TransformSmoothParameters
                {
                    Smoothing = 0.3f,
                    Correction = 0.0f,
                    Prediction = 0.0f,
                    JitterRadius = 1.0f,
                    MaxDeviationRadius = 0.5f
                };
                //sensor.SkeletonStream.Enable(parameters);

                //sensor.SkeletonStream.Enable();

                sensor.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(sensor_AllFramesReady);
                sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

                try
                {
                    sensor.Start();
                }
                catch (System.IO.IOException)
                {
                    kinectSensorChooser1.AppConflictOccurred();
                }
            }

            void sensor_AllFramesReady(object sender, AllFramesReadyEventArgs e)
            {
                if (closing)
                {
                    return;
                }

                //Get a skeleton
                Skeleton first = GetFirstSkeleton(e);
                if (first == null)
                {
                    return;
                }

                //set scaled position
                //ScalePosition(headImage, first.Joints[JointType.Head]);
                ScalePosition(leftEllipse, first.Joints[JointType.HandLeft]);
                ScalePosition(rightEllipse, first.Joints[JointType.HandRight]);

                GetCameraPoint(first, e);
            }

            void GetCameraPoint(Skeleton first, AllFramesReadyEventArgs e)
            {
                using (DepthImageFrame depth = e.OpenDepthImageFrame())
                {
                    if (depth == null || kinectSensorChooser1.Kinect == null)
                    {
                        return;
                    }

                    //Map a joint location to a point on the depth map
                    //head
                    DepthImagePoint headDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.Head].Position);
                    //left hand
                    DepthImagePoint leftDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.HandLeft].Position);
                    //right hand
                    DepthImagePoint rightDepthPoint =
                        depth.MapFromSkeletonPoint(first.Joints[JointType.HandRight].Position);

                    //Map a depth point to a point on the color image
                    //head
                    ColorImagePoint headColorPoint =
                        depth.MapToColorImagePoint(headDepthPoint.X, headDepthPoint.Y,
                            ColorImageFormat.RgbResolution640x480Fps30);
                    //left hand
                    ColorImagePoint leftColorPoint =
                        depth.MapToColorImagePoint(leftDepthPoint.X, leftDepthPoint.Y,
                            ColorImageFormat.RgbResolution640x480Fps30);
                    //right hand
                    ColorImagePoint rightColorPoint =
                        depth.MapToColorImagePoint(rightDepthPoint.X, rightDepthPoint.Y,
                            ColorImageFormat.RgbResolution640x480Fps30);

                    //Set location
                    CameraPosition(headImage, headColorPoint);
                    CameraPosition(leftEllipse, leftColorPoint);
                    CameraPosition(rightEllipse, rightColorPoint);
                }
            }

            Skeleton GetFirstSkeleton(AllFramesReadyEventArgs e)
            {
                using (SkeletonFrame skeletonFrameData = e.OpenSkeletonFrame())
                {
                    if (skeletonFrameData == null)
                    {
                        return null;
                    }

                    skeletonFrameData.CopySkeletonDataTo(allSkeletons);

                    //get the first tracked skeleton
                    Skeleton first = (from s in allSkeletons
                                      where s.TrackingState == SkeletonTrackingState.Tracked
                                      select s).FirstOrDefault();
                    return first;
                }
            }

            private void StopKinect(KinectSensor sensor)
            {
                if (sensor != null)
                {
                    if (sensor.IsRunning)
                    {
                        //stop sensor
                        sensor.Stop();

                        //stop audio if not null
                        if (sensor.AudioSource != null)
                        {
                            sensor.AudioSource.Stop();
                        }
                    }
                }
            }

            private void CameraPosition(FrameworkElement element, ColorImagePoint point)
            {
                //Divide width and height by 2 so the point is right in the middle
                //instead of in the top/left corner
                Canvas.SetLeft(element, point.X - element.Width / 2);
                Canvas.SetTop(element, point.Y - element.Height / 2);
            }

            private void ScalePosition(FrameworkElement element, Joint joint)
            {
                //convert the value to X/Y
                Joint scaledJoint = joint.ScaleTo(1280, 720);

                //convert & scale (.3f means 1/3 of the joint distance)
                //Joint scaledJoint = joint.ScaleTo(1280, 720, .3f, .3f);

                Canvas.SetLeft(element, scaledJoint.Position.X);
                Canvas.SetTop(element, scaledJoint.Position.Y);
            }

            private void Window_Closing(object sender, System.ComponentModel.CancelEventArgs e)
            {
                closing = true;
                StopKinect(kinectSensorChooser1.Kinect);
            }
        }
    }
