
Jellybean, the Kinect Drivable Lounge Chair

You saw it at Mix (in typical fashion, our mission was to build two Jellybean robots in three weeks for the Mix keynote; no pressure, right?), and now it's time to introduce Project Jellybean on Coding4Fun. So, here it is: the Kinect drivable lounge chair! The lounge chair has omni-directional wheels, eight batteries, two motor controllers, and a frame made of extruded aluminum.

Jellybean is a proof of concept of the crazy things that are possible with the Kinect for Windows SDK, and the project also leverages the Coding4Fun Kinect Toolkit to handle some of the more complex operations.

Before we get into the code, let me point out: THIS WILL WORK WITHOUT THE ROBOT. There is an application setting called IsMotorEnabled, and with this setting set to false, you can play with the user interface and see how we did all our Kinect-enabled goodness. :) The photo at the bottom is of me testing this puppy at my desk without any of the motors or relays connected.
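
If you're curious what that looks like in code, here's a minimal sketch of reading the setting, assuming a plain appSettings entry; the actual project may expose it through a generated Settings class instead:

using System.Configuration;

// Hypothetical helper for reading the IsMotorEnabled application setting.
// The key name comes from the article; everything else here is illustrative.
public static class JellybeanConfig
{
    public static bool IsMotorEnabled
    {
        get
        {
            // Default to false so the UI runs safely with no robot attached.
            return bool.Parse(ConfigurationManager.AppSettings["IsMotorEnabled"] ?? "false");
        }
    }
}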


Overview

There are five total projects in the C# solution, and Jellybean is broken down into four big parts:

  • Hardware
  • Robot Software
  • Kinect Software
  • User Interface

[Image: overview diagram]

Hardware

A lot of the hardware is pretty straightforward and can be gleaned from the parts list and the wiring diagram. Larry Larsen has a video of me building out the robot and explaining some of the hardware, both during the construction and at the actual event.


WARNING

The motors are extremely powerful—everything is very heavy and there is a lot of power in the batteries. Be careful. The wheels easily catch on shoelaces and headphone cords, etc.

With other projects, such as the t-shirt cannon from last Mix, I had to disconnect a rather large number of wires, and so risked short-circuiting the entire project. Jellybean, however, is wired to make charging it a lot easier. The solution below allows me to charge the robot by flipping four heavy-duty switches to the off position. This wiring diagram is also included in the source code as a Visio file called "WiringDiagram.vsd" located in the "Files" directory: 

[Images: wiring diagrams]

Wiring Up the Chair and Relay

I decided to pick a chair that was already electric and just tap into the existing switches, so I mimicked the chair's "stock" wiring. You'll have to alter this design depending on how your chair is set up.

[Photo: chair and relay wiring]

Wiring, Wire Management, and Easy Access

Another lesson I learned from the cannon project was to make sure the wiring is nice and easy to get to so the project doesn't have to be half disassembled when I want to reach an individual connection.

To ensure a solid connection, every wire was crimped and soldered with ring connectors. I didn't want any chance of a wire coming loose. As you can see, the left and right wiring harnesses are pretty much exact clones of each other.

[Photo: the left and right wiring harnesses]

Wheels

These are AndyMark 10" steel omni-directional wheels. A heads-up: you can mount them backwards, and if you do, the chair won't be able to rotate in place and your co-workers will mock you…trust me. What you want is for the wheels to form an O pattern, not an X. Here is a picture of improperly mounted wheels.

[Photo: improperly mounted omni wheels]

Jellybean Object

The Jellybean object is what talks to the robotic platform, which lets us test the platform without the Kinect. The object only knows about two serial ports, which are connected to the motor controllers, and our trusty Phidget relay controller, which controls the footrest.

The three methods called during operation are as follows (a rough skeleton of the object appears after the list):

  • CalculateSpeed
  • Drive
  • ToggleFootrest
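
To make that concrete, here's a hand-wavy sketch under those assumptions; the port names, baud rate, and serial protocol are placeholders rather than the shipping code, and the Phidget call is reduced to a comment:

using System.IO.Ports;

// A sketch of the Jellybean object described above. Everything hardware-
// specific (COM ports, baud rate, motor-controller protocol) is a placeholder.
public class Jellybean
{
    private readonly SerialPort _frontControllerPort = new SerialPort("COM3", 9600);
    private readonly SerialPort _rearControllerPort = new SerialPort("COM4", 9600);
    private bool _isFootrestUp;

    // Blend throttle and the vector multiplier into a per-motor speed.
    public double CalculateSpeed(double throttle, double vectorMultiplier, bool isFrontMotor)
    {
        return vectorMultiplier + (isFrontMotor ? throttle : -throttle);
    }

    // Push the computed speeds out to the two motor controllers.
    public void Drive(double frontSpeed, double rearSpeed)
    {
        _frontControllerPort.Write(frontSpeed.ToString());
        _rearControllerPort.Write(rearSpeed.ToString());
    }

    // Flip the footrest relay; on the real robot this goes through the
    // Phidget relay controller.
    public void ToggleFootrest()
    {
        _isFootrestUp = !_isFootrestUp;
        // e.g. set the footrest relay output to _isFootrestUp
    }
}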

How to drive sideways

Since driving an omni-directional armchair isn't exactly something someone does every day, I looked at how I'd drive it with an Xbox controller. The Y-axis is the throttle and the X-axis is what I call the vector multiplier. The formula for this is surprisingly straightforward:


private static double ThrottlesThroughVectorMultiplier(double throttle, double vectorMultiplier, bool isFrontMotor)
{
    // Front motors add the throttle; rear motors subtract it.
    return vectorMultiplier + ((isFrontMotor) ? throttle : -throttle);
}
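
Plugging in a couple of values shows what it does. Note that the rear motor gets the negated throttle; how these signs map onto actual wheel directions depends on how the motors are mounted:

// Worked examples (illustrative values):
// straight ahead: throttle = 1, vector multiplier = 0
var front = ThrottlesThroughVectorMultiplier(1, 0, isFrontMotor: true);   // =  1
var rear  = ThrottlesThroughVectorMultiplier(1, 0, isFrontMotor: false);  // = -1

// pure sideways slide: throttle = 0, vector multiplier = 1
front = ThrottlesThroughVectorMultiplier(0, 1, isFrontMotor: true);   // = 1
rear  = ThrottlesThroughVectorMultiplier(0, 1, isFrontMotor: false);  // = 1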

Since we're dealing with our hands, I also included a "dead" zone where the driver's hands can move but the motors won't react:


private double AdjustValueForDeadzone(double value)
{
    // positive values
    if (value > 0)
    {
        // value under threshold
        if (value < AllowedMovementArea)
            return 0;

        // Re-adjust the value back to the 0 to 1 range.
        // Example: deadzone of .2. Values between 0 and .2 return 0 due to
        // the if statement above, so only the .2 to 1 band produces movement,
        // which makes for jerky starts. Subtracting the deadzone shifts that
        // band down to 0 to .8, and multiplying by _negatedAllowedMovementArea
        // scales it back out so the full 0 to 1 range is usable.
        value = (value - AllowedMovementArea) * _negatedAllowedMovementArea;
    }
    else // negative values
    {
        // value under threshold
        if (value > -AllowedMovementArea)
            return 0;

        value = (value + AllowedMovementArea) * _negatedAllowedMovementArea;
    }

    return value;
}
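
A quick sanity check of the mapping, assuming a deadzone of .2 and assuming _negatedAllowedMovementArea holds the matching rescale factor, 1 / (1 - .2) = 1.25 (the real value comes from the source):

// Illustrative only: AllowedMovementArea = .2, _negatedAllowedMovementArea = 1.25
AdjustValueForDeadzone(0.1);   // inside the deadzone               -> 0
AdjustValueForDeadzone(0.2);   // at the threshold: (.2 - .2) * 1.25 -> 0
AdjustValueForDeadzone(0.6);   // (.6 - .2) * 1.25                  -> 0.5
AdjustValueForDeadzone(1.0);   // full deflection: (1 - .2) * 1.25  -> 1
AdjustValueForDeadzone(-0.6);  // mirrored for negative values      -> -0.5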

Kinect for Windows SDK

Aww snap, we're finally here! Using the Coding4Fun.Kinect.WPF API with the Kinect for Windows SDK, I reduced the amount of heavy lifting I had to do.

I have two core classes here, and one is just a simple wrapper around the SDK:

From sensor.cs

public void Open()
{
    if (_isInit)
        Close();

    RuntimeOptions flags = 0;

    if (TrackSkeleton)
    {
        flags |= RuntimeOptions.UseDepthAndPlayerIndex;
        flags |= RuntimeOptions.UseSkeletalTracking;
    }
    else if (UseDepthCameraStream)
    {
        flags |= RuntimeOptions.UseDepth;
    }

    if (UseColorCameraStream)
    {
        flags |= RuntimeOptions.UseColor;
    }

    _runtime.Initialize(flags);

    // now open streams
    if (TrackSkeleton || UseDepthCameraStream)
    {
        var imageType = (TrackSkeleton) ? ImageType.DepthAndPlayerIndex : ImageType.Depth;
        _runtime.DepthStream.Open(ImageStreamType.Depth, 2, DepthResolution, imageType);
    }

    if (UseColorCameraStream)
    {
        _runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
    }

    _runtime.VideoFrameReady += RuntimeColorFrameReady;
    _runtime.DepthFrameReady += RuntimeDepthFrameReady;
    _runtime.SkeletonFrameReady += RuntimeSkeletonFrameReady;

    _isInit = true;
}
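
Here's a hypothetical usage sketch; the flag properties match the Open() method above, while the class name and object-initializer style are assumptions on my part:

// Assumed usage of the sensor.cs wrapper (constructor details are guesses).
var sensor = new Sensor
{
    TrackSkeleton = true,        // depth + player index + skeletal tracking
    UseColorCameraStream = true  // 640x480 color stream
};

sensor.Open();

// ... frames now arrive via the Runtime*FrameReady handlers ...

sensor.Close();  // tear down before re-opening with different flags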

The second class, NuiDepth.cs, is all about processing the data. Since Coding4Fun.Kinect.WPF handles the heavy lifting, the code is pretty straightforward! It's all housed in the DepthFrameReady event:

From NuiDepth.cs

void _sensor_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var imageWidth = e.ImageFrame.Image.Width;
    var imageHeight = e.ImageFrame.Image.Height;
    var imageHeightWithMargin = imageHeight - 50;

    var depthArray = e.ImageFrame.ToDepthArray();

    // Split the frame down the middle: the left half tracks the left hand,
    // the right half tracks the right hand.
    var rightHandOffset = imageWidth / 2;
    var leftHand = depthArray.GetMidpoint(imageWidth, imageHeight, 0, 0, rightHandOffset, imageHeightWithMargin, MinDistance);
    var rightHand = depthArray.GetMidpoint(imageWidth, imageHeight, rightHandOffset, 0, imageWidth, imageHeightWithMargin, MinDistance);

    // Scale the midpoints up to the size of the on-screen bitmap.
    leftHand.X *= _bitmapScale;
    leftHand.Y *= _bitmapScale;
    rightHand.X *= _bitmapScale;
    rightHand.Y *= _bitmapScale;

    var args = new FrameReadyEventArgs
    {
        DepthBitmap = depthArray.ToBitmapSource(imageWidth, imageHeight, MinDistance, Color.FromArgb(255, 255, 0, 0)),
        ImageBitmap = _colorImage,
        LeftHand = leftHand,
        RightHand = rightHand
    };

    FrameReady(this, args);
}

User Interface and NUI

Our user interface was designed by the fine folks over at 352 Media and implemented by Dan Fernandez and me. From this interface, we can turn on the motors, honk a horn, and raise and lower the chair. We also have visuals for how fast we're going and for how the program sees our hands.

[Image: the Jellybean user interface]

Why didn't we use skeleton tracking?

Well, we wanted to, and as you can see we actually have it turned on. The issue is getting a skeleton lock this close to the Kinect, given how close we had to mount it. Accordingly, we decided to go with pure depth data.

We leveraged the GetMidpoint and ToBitmapSource (with minimum distance) extensions from the Coding4Fun Kinect Toolkit to do the coloring and give us the hand positions on the screen.
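
To give a feel for how the pieces connect, here's a hypothetical consumer of the FrameReady event raised in NuiDepth.cs; the normalization math and the Jellybean calls are illustrative assumptions, not the shipping UI code:

// Hypothetical FrameReady handler: turn the two hand midpoints into drive values.
void OnFrameReady(object sender, FrameReadyEventArgs e)
{
    depthImage.Source = e.DepthBitmap;  // show the colored depth view

    // Illustrative mapping: hand height above the midline -> a -1 to 1 value,
    // left hand as throttle, right hand as the vector multiplier.
    var throttle = 1 - (e.LeftHand.Y / (depthImage.Height / 2));
    var vector = 1 - (e.RightHand.Y / (depthImage.Height / 2));

    _jellybean.Drive(
        _jellybean.CalculateSpeed(throttle, vector, isFrontMotor: true),
        _jellybean.CalculateSpeed(throttle, vector, isFrontMotor: false));
}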


Conclusion

Now you know how we pulled off Project Jellybean! If you want to try this out, the download link for the source code is at the top of the article. And if you build one and ask nicely, Scott Guthrie may ride it. :)

[Photo: testing Jellybean at my desk]

About The Author

Clint Rutkas runs Coding4Fun and has built a few crazy projects in the past. Clint is part of the Channel 9 team at Microsoft and can be reached at clint.rutkas@microsoft.com or on Twitter at @clintrutkas. If you ever have a question, please reach out.
