Coffeehouse Thread

44 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

I love you, Kinect.

  • Bas

    Fact: using multiple pixel shaders in WPF is harder than tracking a freaking skeleton in three dimensions.

    Edit: source code provided below.

  • Ian2

    Sweet, how did you get the water to stick to the screen?

  • Dr Herbie

    Cool. I haven't made a start on the Kinect yet (I'm having RSI/carpal tunnel problems at the moment, so no coding after work for a while).

    But now everyone knows what you really look like .... Big Smile

    Herbie

  • Sven Groot

    , Dr Herbie wrote

    But now everyone knows what you really look like .... Big Smile

    And that he has exactly the same Ikea chair as I do. Tongue Out

  • Maddus Mattus

    Warp speed mister Crusher!

    Aye captain!

    Generic Forum Image

  • W3bbo

    , Bas wrote

    Fact: using multiple pixel shaders in WPF is harder than tracking a freaking skeleton in three dimensions.

    Which Kinect library are you using? And is the RGB video from the Kinect device exposed as a DirectShow stream or something else?

  • Richard.Hein

    Ha, good stuff.  Smiley  A friend of mine from work and I are planning on spending Saturday going through the Kinect quickstarts ... I can't wait.

  • dentaku

    @Bas: That's great. Are you going to release it to the public any time soon?
    Now that I have a Kinect (thanks to Laura Foy and Paul M.) I've been looking around for interesting demos to try out.

    So far all the MIDI controller apps use OpenNI and some of the earlier "hacks" so I haven't been able to do a whole lot with it. Of course I've only had it for 3 days Smiley

  • aL_

    Cool stuff! I got an extra Kinect for hacking this week, so hopefully I'll have something to show next week. I'm probably going to go down the XNA route though Smiley

  • cbae

    Nom nom nom.

  • cbae

    Wink

  • Charles

    Right on, Bas!
    C

  • MasterPi

    Woah, awesome! Can you give us some info on how you put it together (for those of us who aren't familiar with gfx programming)?

  • Bas

    @Ian2: sheer force of will.

    @Sven: heh, I've seen that chair in so many places. When I first got my Roomba it got stuck on the legs, so I figured that, given how many of those chairs I've seen, there must be someone else out there with a Roomba/Poäng combination. Sure enough, as soon as I had typed "Roomba poa" into Google, a whole bunch of threads about the problem rolled out. I love IKEA.

    @Maddus: that's so uncanny! When I saw it I thought "hey, I don't remember this picture at OH MY GOD!"

    @W3bbo: I'm using the official Kinect SDK. The RGB video is exposed as a PlanarImage object per frame, which you can poll for, or you can subscribe to an event that delivers one as soon as it's ready. PlanarImage isn't specific to any framework, but you can get at the bits and other data easily and convert it to whatever image format you need. It's not entirely convenient, but an extension method is easily written, and it stays pretty versatile this way.
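    In case it helps anyone, here's a rough sketch of that extension method idea, assuming the beta Kinect SDK's PlanarImage shape (Width, Height, Bits, BytesPerPixel) and the Bgr32 layout the RGB stream uses:

    ```csharp
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.Research.Kinect.Nui;

    public static class PlanarImageExtensions
    {
        // Converts a Kinect PlanarImage frame to a WPF BitmapSource.
        public static BitmapSource ToBitmapSource(this PlanarImage image)
        {
            // Stride is the number of bytes per scanline: width * bytes per pixel.
            return BitmapSource.Create(
                image.Width, image.Height, 96, 96,
                PixelFormats.Bgr32, null,
                image.Bits, image.Width * image.BytesPerPixel);
        }
    }
    ```

    Then, in the video frame event handler, something like `videoImage.Source = e.ImageFrame.Image.ToBitmapSource();` puts the frame on screen.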

    @dentaku: I still want to clean up the code a bit, and I'm too busy this weekend, but with a bit of luck I can post something later tonight. Otherwise probably monday.

    @cbae: disco!

  • Bas

    @MasterPi: I'll post some info on how it works when I post the code, but in short it's a bunch of pixel shaders (that I grabbed from the awesome Shazzam) that I applied to the RGB image from the Kinect.

  • cbae

    , Bas wrote

    @Ian2: sheer force of will.

    @Sven: heh, I've seen that chair in so many places. When I first got my Roomba it got stuck on the legs, so I figured that seeing how many of those chairs I've seen, there must be someone else out there with a Roomba/Poäng combination, and sure enough as soon as I had typed "Roomba poa" in Google a whole bunch of threads about the problem rolled out. I love IKEA.

    Poäng? Hey, I think I have that chair too!

  • cbae

    , Bas wrote

    @W3bbo: I'm using the official Kinect SDK. The RGB video is exposed as a bunch of PlanarImage objects for each frame, that you can poll or subscribe to an event that returns one as soon as it's ready. The PlanarImage isn't specific to any framework, but you can get to the bits and other data easily and convert it to whatever image format you need. It's not entirely convenient, but an extension method is easily written and it's still pretty versatile this way.

    Did you use the Coding4Fun libraries? I think I remember seeing some extension methods to simplify some of that stuff being demonstrated in Dan Fernandez's videos.

  • Bas

    Alright, if this public SkyDrive folder thing works, here's the source in KinectRipplelicious.zip: https://skydrive.live.com/redir.aspx?cid=62971dd1548e5f8d&page=play&resid=62971DD1548E5F8D!212

    The way the application works is that it displays whatever the RGB camera sees, tracks the skeletons of up to two people in the frame, and checks which of their joints are closer than a certain distance to the Kinect sensor. Those joints should cause a ripple in the video image, so each joint's 3D coordinates are converted to the 2D pixel coordinates of the joint's position in the video image.
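    A rough sketch of that per-joint check, assuming the beta SDK's SkeletonToDepthImage call (which maps skeleton space to normalized depth image coordinates) and a hypothetical StartRipple helper:

    ```csharp
    // For each tracked joint, trigger a ripple if it's closer than ~1.2 m.
    foreach (Joint joint in skeleton.Joints)
    {
        if (joint.Position.Z < 1.2f)  // closer to the sensor than the trigger distance
        {
            float depthX, depthY;
            nui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out depthX, out depthY);

            // depthX/depthY come back normalized to [0, 1]; scale to the 640x480 video image.
            var center = new System.Windows.Point(depthX * 640, depthY * 480);
            StartRipple(center, joint.Position.Z);  // hypothetical helper, see below
        }
    }
    ```

    The exact mapping call and the 1.2 m threshold are my assumptions; the code in the zip is authoritative.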

    At these 2D coordinates, a ripple pixel shader effect is displayed. Pixel shaders are small C-like (HLSL) programs that run on the GPU, and I have no idea how they work exactly, so I downloaded Shazzam and let it generate a nice C# wrapper class for me. Shazzam also lets you test the shaders, so I played around with the provided Ripple pixel shader's properties. I decided that each ripple effect would start with a certain frequency (the number of ripples in the effect) that decreases over time to zero, meaning no ripples. This gives the impression of the ripples calming down again.

    Each ripple effect also has an amplitude, which determines how 'deep' the ripples are. A light touch should cause a light ripple and a deep punch a wild, deep one, so the closer the joint is to the sensor, the higher the ripple's amplitude.
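    The frequency and amplitude behavior could look something like this. The property names (Center, Amplitude, Frequency) are the usual ones on a Shazzam-generated RippleEffect wrapper but may differ from the actual code, and RemoveRipple is a hypothetical cleanup helper:

    ```csharp
    // Start a ripple at the joint's screen position; closer joint => deeper ripple.
    var ripple = new RippleEffect
    {
        Center = center,
        Amplitude = Math.Min(0.5, 1.2 - jointDistance),  // scale depth by proximity
        Frequency = 40
    };

    // Animate the frequency down to zero so the ripples appear to calm down.
    var calmDown = new DoubleAnimation(40, 0, TimeSpan.FromSeconds(2));
    calmDown.Completed += (s, e) => RemoveRipple(ripple);
    ripple.BeginAnimation(RippleEffect.FrequencyProperty, calmDown);
    ```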

    The biggest problem was applying multiple pixel shader effects to the same image. The WPF Image control, like many others, has an Effect property, but it only takes a single pixel shader effect. The only way I could find to apply two or more pixel shaders to the image was to wrap the Image in a container (I chose a Border, but I guess it could be a Canvas or whatever) and set the Effect property of that container. Then, when another ripple needs to be added, I create another Border to wrap the first one (which wraps the Image), set the new Border's Effect, and so on. This results in a deeply nested tree of Borders, the innermost of which contains the Image control. As soon as a ripple animation has finished, I remove the Border it was applied to. The application stays remarkably performant, even though, if you go crazy waving your arms and legs, it can contain up to 50 nested Borders, each with an animating pixel shader, wrapping the single image of you in front of the Kinect.
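    The wrapping and unwrapping described above can be sketched like this (a minimal version of the idea, not the actual code from the zip; `root` is assumed to be the panel holding the whole stack):

    ```csharp
    private FrameworkElement outermost;  // starts out as the Image control itself
    private Panel root;                  // the panel that holds the whole nested stack

    // Each ripple gets its own Border, since Effect holds only one shader.
    private Border AddRippleLayer(Effect rippleEffect)
    {
        root.Children.Remove(outermost);
        var layer = new Border { Child = outermost, Effect = rippleEffect };
        root.Children.Add(layer);
        outermost = layer;
        return layer;
    }

    // When a ripple's animation finishes, splice its Border out of the tree.
    private void RemoveRippleLayer(Border layer)
    {
        var child = layer.Child;
        layer.Child = null;

        var outer = layer.Parent as Border;
        if (outer != null)
        {
            outer.Child = child;          // layer was nested inside another Border
        }
        else
        {
            root.Children.Remove(layer);  // layer was the outermost wrapper
            root.Children.Add(child);
        }

        if (ReferenceEquals(outermost, layer))
            outermost = (FrameworkElement)child;
    }
    ```

    The nice thing about splicing a Border out this way is that the rest of the nested tree is untouched, so the other ripples keep animating.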

    I haven't had time to clean up the code yet; I basically wrote it from scratch, non-stop, in one evening, so some bits are kind of dodgy, but it works. If anyone has any further questions, let me know.
