Wiimote Virtual Room Designer



Last year, while earning my M.Sc. in engineering, I decided to make up my own final graduation project, a part of which is this very cheap design tool. Check out the video below!

Brian Peek, author of WiimoteLib, saw the video and asked me to write a practical article on how to create something like this yourself. Let’s get started!

Hardware Setup

To get this working, you only need a digital projector, a Wiimote, a few cheap components, and, of course, a computer. The latter won’t need a lot of horsepower; I’m even able to run it on my $300 netbook with an acceptable framerate! Let’s take a look at how to prepare these items before we get working on the software.


Creating the tabletop display

First, let’s create the poor man’s version of the Microsoft Surface table. If you are like me and aren’t burdened by a fat bank account, just take some ugly photo-in-a-frame off the wall and lay its protective sheet of glass down on a table without a tabletop. Trust me: as long as you don’t sit on it, it works like a charm! Now go to a nearby grocery store and buy some white greaseproof paper. Tape it on top of the glass to create your DIY projection screen. You may also tape it to the bottom side, but in my experience it will start sagging after some time. As shown in the video, use an inclined mirror and a projector connected to your computer. Now you have your own backlit tabletop display!

Making the display interactive

Now comes the trick: making the display interactive. Enter the brilliant Nintendo Wiimote. Simply put it in a stable position facing the mirror on top of the projector, preferably right above the lens. The Wiimote contains a high-performance camera that can simultaneously track up to 4 infrared light sources. Now, if you fashion a simple infrared pen (or do it the easy way and buy one online), the Wiimote can track its position all across the display!

Fashioning the point-of-view device


We’re not finished yet, because for this project we’re also going to need a point-of-view device. This device is nothing more than a small glass turned upside down, with two infrared LEDs in its bottom as shown in the photo. Building this is similar to building 2 IR pens.  Make sure the small glass is large enough to fit an AAA battery in there! It’s best to also incorporate a pushbutton switch right between the LEDs, so that they only emit light when the point-of-view is slightly pressed onto the tabletop display. It may be a bit tricky to get the button position exactly right, but I’m sure you’ll manage with some perseverance.

Connecting to the Wiimote

Now it’s time to connect to the Wiimote. For your convenience, I’ve copy-pasted Brian’s short manual here:

1. Start up your Bluetooth software and have it search for a device.

2. Hold down the 1 and 2 buttons on the Wiimote. You should see the LEDs at the bottom start flashing. Do not let go of these buttons until this procedure is complete.

3. Wiimotes should show up in the list of devices found as Nintendo RVL-CNT-01. If it's not there, start over and try again.

4. Click Next to move your way through the wizard. If at any point you are asked to enter a security code or PIN, leave the number blank or click Skip. Do not enter a number.

5. You may be asked which service to use from the Wiimote. Select the keyboard/mouse/HID service if prompted (you should only see one service available).

6. Finish the wizard.

If you can’t get it to connect, you might try using a Wiimote-compatible Bluetooth adapter and stack.
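Once the wizard finishes, you can verify the pairing from code using WiimoteLib. A minimal sketch (the handler body and LED choice are mine; assumes WiimoteLib 1.7):

```csharp
using System;
using WiimoteLib;

class ConnectionTest
{
    static void Main()
    {
        Wiimote wm = new Wiimote();
        // fires on every state change (buttons, accelerometer, IR camera)
        wm.WiimoteChanged += delegate(object sender, WiimoteChangedEventArgs args)
        {
            Console.WriteLine("First IR dot found: " +
                args.WiimoteState.IRState.IRSensors[0].Found);
        };
        wm.Connect(); // throws if no paired Wiimote is found
        // enable the IR camera together with the accelerometer
        wm.SetReportType(InputReport.IRAccel, true);
        wm.SetLEDs(1); // light LED 1 so we can see we're connected
        Console.ReadLine();
        wm.Disconnect();
    }
}
```

If the console starts printing state changes, the Bluetooth link is working and you can move on to the software setup.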

Inserting Some Third-Party Code

With the hardware all set up, we’ll continue to the software. Make sure you have installed a (free) version of Visual Studio with C#, and Microsoft XNA Game Studio version 3.0 or higher. XNA integrates neatly into Visual Studio and we’ll use it as an easy way to harness your graphics card’s 3D power. Our home designer relies heavily on work done by the open-source community, so let’s start by inserting these components first.

XNA Surface As A WinForms Control

We’re going to use a hack that allows us to use XNA-controlled surfaces on as many forms as we like. There’s no need to know exactly how the hack works; we’ll just use it! Therefore, instead of starting a normal XNA project within Visual Studio, we’ll make use of a sample project that has been posted online at the XNA Creators Club. Download and open it. If you run the solution you just downloaded, you’ll see two separate XNA-powered surfaces (the left one contains some text and the right one is a spinning triangle).


Both are standard Controls, but why is that important? Because now we can easily create multiple WinForms, each containing its own XNA surface—one for the blueprint and the other for the 3D perspective view! Later on, it will look like this:


A Little Housekeeping

Since we don’t want to do any more work than necessary, we’ll start our project from this XNA control example project. Let’s do a little housekeeping to keep everything neat and tidy. First, make sure you delete all controls present on the MainForm. In the solution explorer, rename “SpinningTriangleControl.cs” and “SpriteFontControl.cs” to “BluePrintControl.cs” and “_3Dcontrol.cs” respectively, and move all files except “MainForm.cs” to a new directory called “Controls”. By the way, this would also be an appropriate moment to rename both Control classes. We’ll change their functionality later on. Your solution explorer should now look like this:


It may also be smart to change the name of our project from “WinFormsGraphicsDevice” to “HomeDesigner” or something (I actually call it “DualDesign”, but choose whatever you like).

Enabling whiteboard functionality

Time to honor the Wiimote community’s all-time great: Johnny Chung Lee! He’s the guy who used Brian’s managed Wiimote library to build some amazing applications. We’ll tweak his Whiteboard application to make our blueprint interactive. Check out his video:

I downloaded the source code for Johnny’s Whiteboard application and then updated it to support version 1.7 of WiimoteLib, which is more compatible than the version Johnny used.

Now let’s add a button on our MainForm that starts Lee’s app inside our own by adding the following code to its Click event handler:


try
{
    wiimoteForm = new WiimoteWhiteboard.WiimoteWhiteboardForm();
    wiimoteForm.Visible = true;
}
catch (Exception x)
{
    System.Console.Out.WriteLine("Exception: " + x.Message);
}


Make sure you have established a Bluetooth connection to the Wiimote before running the code. What Lee’s app basically does is emulate a standard mouse. The operating system (and thus our blueprint control) won’t see the light pen but will receive standard MouseDown, MouseMove, and MouseUp events. This is very convenient, because we don’t need the Wiimote to be connected while debugging: we can simply draw on the form with our mouse since our blueprint won’t be able to tell the difference!

Pretty Lines

There’s one last component to insert before we can really start. In the demo, you saw the big, fat white lines I drew on the blueprint. These pretty RoundLines have been coded by Microsoft’s Mike Manders and released into the public domain. Download them and insert the code into our project. Make sure to manually copy all .xnb files to our project’s “bin\ … \debug\Content” directory using the Windows Explorer (not the Solution Explorer), otherwise your code might not run.

Coding the BluePrintControl

We’ve got all the third-party code we need, so now it’s time to get our hands dirty. However, the entire solution is pretty comprehensive and much of the code is trivial. Therefore, we won’t be getting into every nitty-gritty detail; instead, I will single out some of the interesting parts. A lot of the code is XNA-related. If you want to read more about it, “Learning XNA 3.0” (O’Reilly) is an excellent starting point, as are Riemer’s online tutorials.

XNA Basics

In order to understand the code, you’ll need a grasp of some of the very basics of XNA Game Studio. XNA Game Studio is a comprehensive set of tools that allows the development of entire games, but we’ll only use it for basic visualization. XNA games run in a continuous loop, basically drawing frames to the viewport at as fast a rate as possible. A lot of things happen behind the scenes in each frame, but the most important part is the Draw() method in which all the drawing is done. Here, we draw all elements to the frame buffer, at the end of which the buffer “empties” to the viewport and becomes visible to the user.
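For reference, this is what that loop looks like in a standalone XNA game (a minimal sketch; in our project the same Draw() idea lives inside the WinForms controls instead of a Game class):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class SkeletonGame : Game
{
    GraphicsDeviceManager graphics;

    public SkeletonGame()
    {
        graphics = new GraphicsDeviceManager(this);
    }

    // called once per frame: handle input, move objects, etc.
    protected override void Update(GameTime gameTime)
    {
        base.Update(gameTime);
    }

    // called once per frame after Update(): fill the frame buffer,
    // which is presented to the viewport when the method returns
    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.DarkBlue);
        // ...draw walls, sprites, and other elements here...
        base.Draw(gameTime);
    }
}
```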

Drawing to the GraphicsDevice

Because we were lazy and started with a pre-coded demonstration control, we’ll have to remove the unnecessary code in “BluePrintControl.cs” (the drawing of the spinning triangle). After cleaning up, make sure our Draw() method looks like this:


/// <summary>
/// Draws the control.
/// </summary>
protected override void Draw()
{
    // clears the graphicsdevice
    GraphicsDevice.Clear(Color.DarkBlue);
    // draws all elements to the viewport (four draw methods in total;
    // DrawWalls() and DrawAllSprites() are discussed below)
    DrawWalls();
    DrawAllSprites();
    // ...plus two more calls for the remaining elements...
}


The GraphicsDevice is an abstraction of the graphics card and is what we draw to. The Clear() statement clears the entire frame buffer to our default background color, on top of which we can draw. I used “Color.DarkBlue”, but you can use anything you want here as long as it’s blue (otherwise you should have named the BluePrintControl class differently!). Next, we call four methods that draw all elements to the blueprint.

Drawing the Walls

Let’s take a look at the DrawWalls() method:


private void DrawWalls()
{
    // Draw lines
    roundLineManager.Draw(
        ConvertWallListToRoundLineList(wallList), // converts Walls to RoundLines
        wallRadius, // the line thickness
        Color.White, // the wall color
        camera.View * camera.Projection, // 4x4 matrix that allows XNA to map 3D coordinates to screen coordinates
        0, // elapsed time (used only by animated line techniques)
        null); // technique name (null selects the default)
}


This is where we make use of the RoundLine component. The RoundLineManager.Draw() method takes a list of RoundLine objects and draws them to the graphicsdevice in one go. Internally, we store the walls in our own format, so the ConvertWallListToRoundLineList method does exactly what its name implies.

The fourth argument, camera.View * camera.Projection, tells XNA how to map 3D coordinates to the viewport. The View matrix contains information regarding the position and orientation of the camera, and the Projection matrix can be seen as a matrix describing the lens. Since we want the blueprint to be displayed without any distortion, we previously defined the projection matrix to be orthographic:


camera.Projection = Matrix.CreateOrthographic(this.Width, this.Height, 0, 100);

To get an idea of what an orthographic camera is, imagine a real camera that is hanging 500 feet above our blueprint. In order to see the blueprint, it has to really zoom in. In this case, everything becomes very flat and undistorted; it has essentially become an orthographic camera.
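To make the contrast concrete, here is the orthographic projection we use for the blueprint next to the perspective projection you would typically use for a 3D view (the near/far plane values are illustrative):

```csharp
// Blueprint: orthographic, no perspective distortion.
// Width and height are in world units; 0 and 100 are the near/far planes.
camera.Projection = Matrix.CreateOrthographic(this.Width, this.Height, 0, 100);

// 3D view: perspective, so parallel lines converge as in a photograph.
camera.Projection = Matrix.CreatePerspectiveFieldOfView(
    MathHelper.PiOver4,                  // 45-degree vertical field of view
    GraphicsDevice.Viewport.AspectRatio, // width / height of the viewport
    0.1f, 100f);                         // near and far clipping planes
```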

Drawing the Objects Using Sprites

The walls we just drew to the screen are flat lines floating in a true 3D space. However, with XNA it is also possible to just draw to the viewport coordinates; no 3D to 2D mapping has to be done in this case.

XNA handles this in a very convenient way using Sprites. A sprite is a bitmap drawn into a rectangle that you have specified in advance. It offers very high performance, since your graphics card does all the hard work! The following code draws the furniture to the blueprint using sprites:


private void DrawAllSprites()
{
    spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.BackToFront, SaveStateMode.SaveState);
    foreach (ImageTextureObject tempObject in objectList)
    {
        Rectangle tempBoundingRectangle = this.GetBoundingScreenRectangleFromObject(tempObject);
        spriteBatch.Draw(
            tempObject.Texture, // the bitmap
            tempBoundingRectangle, // the bounding rectangle (not yet rotated)
            null, // source rectangle (null draws the whole texture)
            Color.White, // no tinting
            -tempObject.Rotation.Y, // determines the rotation of the sprite
            new Vector2((float)tempObject.Texture.Width / 2, (float)tempObject.Texture.Height / 2), // rotation origin: the texture's center
            SpriteEffects.None,
            0); // layer depth
    }
    spriteBatch.End();
}
Note that we created an ImageTextureObject such that it contains all information about a piece of furniture: position, orientation, and size, in addition to the image of that particular piece. Now, we loop through the entire list of furniture and use the SpriteBatch.Draw() method to draw to the GraphicsDevice. Note that any SpriteBatch.Draw() call has to occur between the SpriteBatch.Begin() and .End() statements. This prepares the graphics device to receive the data and allows some optimization to occur after a group of sprites has been received. We draw the handles that appear (see image below) when clicking an object in exactly the same manner:


Receiving User Input

Now, how do we let the user draw lines and drag furniture around? First, we’ll have to subscribe to the MouseDown, MouseMove, and MouseUp events:


this.MouseDown += new System.Windows.Forms.MouseEventHandler(this.bluePrintControl_MouseDown);
this.MouseMove += new System.Windows.Forms.MouseEventHandler(this.bluePrintControl_MouseMove);
this.MouseUp += new System.Windows.Forms.MouseEventHandler(this.bluePrintControl_MouseUp);



Recognizing Button Clicks

When receiving a MouseDown event, we’ll first check if there is a button pressed. We do this using the ButtonPressed() method:


private int ButtonPressed(System.Drawing.Point point)
{
    foreach (ButtonSprite button in buttonList)
        if (button.Clicked(new Point(point.X, point.Y))) return buttonList.IndexOf(button);
    return -1;
}


Each Rectangle object comes with a nice Intersects() method that allows us to check if two Rectangles, well… intersect. We exploit this by constructing a new 1x1 rectangle at the cursor location:


public bool Clicked(Point location)
{
    return this.TargetRectangle.Intersects(new Rectangle(location.X, location.Y, 1, 1));
}


If true, the index of the button is returned by the ButtonPressed() method so that we know what button was pressed. In a similar way, we check if an object was clicked.
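That object check is not shown here, but a hypothetical ObjectPressed() would follow the same pattern (the method name is mine; GetBoundingScreenRectangleFromObject() is the same helper we used when drawing the sprites):

```csharp
// Returns the index of the clicked object, or -1 if no object was hit.
private int ObjectPressed(System.Drawing.Point point)
{
    // 1x1 rectangle at the cursor location, just like in Clicked()
    Rectangle cursor = new Rectangle(point.X, point.Y, 1, 1);
    for (int i = 0; i < objectList.Count; i++)
    {
        Rectangle bounds = this.GetBoundingScreenRectangleFromObject(objectList[i]);
        if (bounds.Intersects(cursor)) return i;
    }
    return -1;
}
```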

Drawing Walls with the Light Pen

If it turns out that there is no button, object, or handle at the MouseDown location, we assume that the user wants to draw a wall. The location sent along with the MouseDown event is in screen coordinates (in pixels); however, in order to draw a wall we need it in “world coordinates” (in meters). For that we employ the following method:


private Vector3 ConvertLocalMouseToWorldCoors(System.Drawing.Point value)
{
    Vector3 worldCoors = new Vector3(
        this.camera.Target.X - this.camera.Width / 2
            + (this.camera.Width / (float)this.ClientSize.Width) * value.X,
        -this.camera.Target.Y - this.camera.Height / 2
            + (this.camera.Height / (float)this.ClientSize.Height) * value.Y,
        0); // the blueprint lies in the z = 0 plane
    return worldCoors;
}


This method is called a lot throughout the code, and there is also a method that converts world coordinates to screen coordinates: ConvertWorldCoorsToLocalCoors().
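The inverse is the same arithmetic solved for the pixel coordinate. As a standalone illustration (the names and the single-axis simplification are mine; the real ConvertWorldCoorsToLocalCoors() also handles the Y-axis flip):

```csharp
static class CoordinateMapping
{
    // Screen (pixels) -> world (meters): start at the left edge of the
    // camera's view and add the pixel offset scaled to world units.
    public static float ScreenToWorldX(int pixelX, float cameraTargetX,
                                       float cameraWidth, int clientWidth)
    {
        return cameraTargetX - cameraWidth / 2
             + (cameraWidth / clientWidth) * pixelX;
    }

    // World (meters) -> screen (pixels): the exact inverse of the above.
    public static int WorldToScreenX(float worldX, float cameraTargetX,
                                     float cameraWidth, int clientWidth)
    {
        return (int)System.Math.Round(
            (worldX - cameraTargetX + cameraWidth / 2) * clientWidth / cameraWidth);
    }
}
```

With a camera 10 meters wide centered on the origin and an 800-pixel-wide control, pixel 400 (the center) maps to world X = 0, and converting back returns the original pixel.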

Reading the Point-of-View Device

Now comes some good old hackin’! The whiteboard component – the one that uses the light pen to emulate a mouse – only “sees” one infrared dot and ignores any others it may detect. We are going to tweak Lee’s code so that it does not discard these extra dots but instead passes them on to the BluePrintControl in the form of events, so that we can also track the point-of-view device.

Whenever the Wiimote state changes (this may be up to 100 times per second), the mouse emulator receives an event fired by Brian Peek’s WiimoteLib. Instead of immediately passing the first IR dot to the emulator, we’ll tap into the process. No matter how many dots are visible, we’ll make sure that the emulator gets only the one dot it needs to “impersonate” a mouse.

Here’s how to tap into the emulator code. Open WiimoteWhiteboardForm.cs from the solution explorer and change the first part of the event listener as follows:


void wm_OnWiimoteChanged(object sender, WiimoteChangedEventArgs args)
{
    // wiimote state with the real data
    WiimoteState wsReal = args.WiimoteState;
    // gets which IR point is the mouse pointer and which are POV
    int mouseIR = 0;
    int[] POVIR = new int[2];
    // will contain the raw IR coors in array form for easy access
    int[,] rawIRCoors = new int[5, 2];
    this.IdentifyIRpoints(wsReal, ref rawIRCoors, ref mouseIR, ref POVIR);
    // ...(remainder of the listener unchanged)...


This means that the “hard” work of recognizing point-of-view and light pen is done in the IdentifyIRpoints() method. Now imagine that there are three dots visible. We know that one belongs to the light pen and the others to the point of view. In the case of three visible points we’ll assume that:

1. the two points that are closest to each other belong to the point of view and

2. the remaining dot belongs to the light pen

We are able to make these assumptions since there is only one light pen and one point of view. Because the Wiimote can track up to four points simultaneously, it’s also possible to add another light pen. In this case, the algorithm becomes a little bit more complicated (and probably also less robust) but with some commonsense reasoning it should also be possible. I’ll leave that up to you!


/// <summary>
/// Recognizes which IR points are the mouse point and the POV points and
/// returns them through the ref parameters.
/// </summary>
/// <param name="ws">WiimoteState object</param>
/// <param name="rawIRCoors">Returns the IR points in a raw form. Needs an initialized int[5,2]</param>
/// <param name="mouseIR">Returns the index of the IR mouse dot or 0 if not found</param>
/// <param name="POVIR">Returns the indices of the IR POV dots or null if not found</param>
void IdentifyIRpoints(WiimoteState ws, ref int[,] rawIRCoors, ref int mouseIR, ref int[] POVIR)
{
    // Holds all IR points. Note: @index 0 the default values are stored if no point is found.
    // This is needed for faking Chung's implementation.
    int[] IRState = new int[5];
    IRState[0] = 0;
    for (int i = 1; i <= 4; i++)
        IRState[i] = ws.IRState.IRSensors[i - 1].Found ? 1 : 0;
    // Contains the raw coors in a bidimensional array
    rawIRCoors[0, 0] = 1023; // default value when not found
    rawIRCoors[0, 1] = 1023; // default value when not found
    for (int i = 1; i <= 4; i++)
    {
        rawIRCoors[i, 0] = ws.IRState.IRSensors[i - 1].RawPosition.X;
        rawIRCoors[i, 1] = ws.IRState.IRSensors[i - 1].RawPosition.Y;
    }
    // Calculates the total number of points seen
    int noOfPointsSeen = 0;
    for (int i = 1; i <= 4; i++)
        noOfPointsSeen += IRState[i];
    // Executes recognition of which dot is what based on
    // the amount of dots seen
    switch (noOfPointsSeen)
    {
        // No points seen
        case 0:
            mouseIR = 0;
            POVIR = null;
            break;
        // Only mouse seen
        case 1:
            // passes the dot number containing the mouse
            for (int i = 1; i <= 4; i++)
            {
                if (IRState[i] == 1)
                {
                    mouseIR = i;
                    break; // exit for loop
                }
            }
            POVIR = null;
            break;
        // Only POV seen
        case 2:
            mouseIR = 0;
            POVIR = new int[2];
            // passes the dot numbers containing the IR dots
            int tempIndex = 0;
            for (int i = 1; i <= 4; i++)
            {
                if ((IRState[i] == 1) && (tempIndex <= 1))
                {
                    POVIR[tempIndex] = i;
                    tempIndex++;
                }
            }
            break;
        // Three dots seen: determine which two dots are closest together.
        // Those are the POV dots (each frame is analyzed individually,
        // so no points are remembered between frames).
        case 3:
            double smallestDistanceSoFar = 10000000;
            double currentDistance;
            POVIR = new int[2];
            // Algorithm to calculate the pair of points the smallest distance apart.
            // Only checks the first 3 points (which are all visible).
            for (int i = 1; i <= 3; i++)
            {
                for (int j = i + 1; j <= 3; j++)
                {
                    // Calculates the distance squared between points (using Pythagoras)
                    currentDistance =
                        (rawIRCoors[i, 0] - rawIRCoors[j, 0]) * (rawIRCoors[i, 0] - rawIRCoors[j, 0]) +
                        (rawIRCoors[i, 1] - rawIRCoors[j, 1]) * (rawIRCoors[i, 1] - rawIRCoors[j, 1]);
                    // If this distance is smaller than the previously smallest distance,
                    // the smallest distance so far is updated, as well as the array
                    // containing the indices of this pair
                    if (currentDistance < smallestDistanceSoFar)
                    {
                        smallestDistanceSoFar = currentDistance;
                        POVIR[0] = i; POVIR[1] = j;
                    }
                }
            }
            // The mouse index is what remains (the three indices always sum to 6)
            mouseIR = 6 - (POVIR[0] + POVIR[1]);
            break;
        // Four dots seen: not supported so no output
        case 4:
            mouseIR = 0;
            POVIR = null;
            break;
    }
}
Consequently, we create a new WiimoteState object only if a light pen is seen. This stripped-down state is then passed on to the emulator code, effectively filtering the information received by the emulator. An event with the point-of-view data is received by the BluePrintControl, which subsequently updates the camera position. I’ll leave checking for that in the source files to you.

Inserting Images

At the bottom of the BluePrintControl, there is a strip with images that can be inserted as objects. It is a so-called FlowLayoutPanel (found in the System.Windows.Forms namespace) and displays all .png images (“contained” in PictureBoxes) in a certain directory. Make sure you set the right directory by modifying the defaultImageDirectory private variable in the blueprint form (not the control).


I won’t run you through the code to add images to the FlowLayoutPanel because it is quite obvious. However, it’s worth noting that we’ll use a System.IO.FileSystemWatcher to update the list:


private bool SetUpFileSystemWatcher()
{
    try
    {
        fileSystemWatcher = new System.IO.FileSystemWatcher(defaultImageDirectory, defaultSupportedImageMasks);
    }
    catch (Exception ex)
    {
        System.Console.Out.WriteLine("Error: {0}", ex.Message);
        return false;
    }
    this.fileSystemWatcher.Changed += new FileSystemEventHandler(fileSystemWatcher_Changed);
    this.fileSystemWatcher.EnableRaisingEvents = true;
    return true;
}

Drag ‘n’ drop

The drag procedure is initiated when the user clicks an image:


// initiates the drag procedure
private void pictureBox_MouseDown(object sender, MouseEventArgs e)
{
    PictureBox pictureBox = (PictureBox)sender;
    pictureBox.DoDragDrop(pictureBox.Tag, DragDropEffects.Copy);
}


Now, we’ll have to subscribe to the DragEnter (when the cursor enters the control while dragging is in progress) in the BluePrintControl and DragDrop (when the user drops an object by releasing the button) events:


// Subscribes to (drag and) drop user actions
this.DragEnter += new DragEventHandler(BluePrintControl_DragEnter);
this.DragDrop += new DragEventHandler(BluePrintControl_DragDrop);

In the method handling the DragDrop event, we can receive the object containing information regarding the image using the GetData method:


// called on a drop (as in drag & drop)
void BluePrintControl_DragDrop(object sender, DragEventArgs e)
{
    ImageObject tempImageObject =
        (ImageObject)e.Data.GetData(typeof(ImageObject)); // retrieves the dragged image object
    // code below omitted
}



Coding the 3D Control



We’ve come a long way by programming the blueprint. Since the 3D view doesn’t have to accept direct user input – all that is handled by the blueprint – all it has to do is display the home as-is. There are three types of data that the 3Dcontrol has to receive:

1. The walls

2. The objects

3. The location and orientation of the point of view

This introduces some problems related to thread safety: the 3D control and the blueprint run asynchronously on separate threads. Imagine that the 3D view is reading an object that is just in the process of being deleted; in this case data corruption may occur and the application may crash with some seemingly unrelated error code.

We’ll use two different methods to avoid these issues. The objects are stored in a shared static class that uses locking to prevent data corruption. The walls and the point-of-view location, on the other hand, are sent in the form of events that have a copy of the data attached. Now I hear you ask: why use two different methods? That’s because I wanted to assess the performance of both, which, by the way, turned out to be similar.

Here, I’ll focus only on the objects because the locking procedure they employ is cleaner and easier to implement. If I had to do the project again, I wouldn’t even bother with the data-exchange-through-events method.
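For completeness, the data-exchange-through-events idea boils down to handing the subscriber a snapshot instead of the live list. A minimal sketch (the Wall and event-args classes here are simplified stand-ins for the real ones):

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the real wall data.
public class Wall
{
    public float X1, Y1, X2, Y2;
}

// EventArgs carrying a snapshot of the wall list: the copy is taken on the
// blueprint thread, so the 3D control never touches the live list.
public class WallsChangedEventArgs : EventArgs
{
    public readonly List<Wall> Walls;

    public WallsChangedEventArgs(List<Wall> liveList)
    {
        Walls = new List<Wall>(liveList); // shallow copy of the list itself
    }
}

// In the BluePrintControl, the event would be declared and raised like this:
// public event EventHandler<WallsChangedEventArgs> WallsChanged;
// if (WallsChanged != null)
//     WallsChanged(this, new WallsChangedEventArgs(wallList));
```

Note that this copies only the list; if the Wall objects themselves can still be mutated, they need their own protection, which is exactly the issue the locking approach below runs into as well.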

Locking the object list

First, we’ll have to go back to the BluePrintControl for a moment. Internally, this control keeps track of the list containing all objects that have been placed on the blueprint. Also, we’ll have to make sure we can access this data from the 3D Control by implementing a static property in the BluePrintControl. In order to make it thread-safe, we’ll lock access to the list: simultaneous reads are possible, though simultaneous reads/writes are not.

ReaderWriterLockSlim, a class written by a programmer who was unhappy with the low-performing ReaderWriterLock class included in .NET, is what we are going to use for locking. This slimmed-down version performed so well that Microsoft eventually adopted it and included it in subsequent versions of .NET, so we are now reaping the fruits:


static ReaderWriterLockSlim texturedObjectListLock = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
static List<ImageTextureObject> objectList = new List<ImageTextureObject>();

/// <summary>
/// Contains all ImageTextureObjects
/// </summary>
public static List<ImageTextureObject> ObjectList
{
    get
    {
        texturedObjectListLock.EnterReadLock();
        try { return BluePrintControl.objectList; }
        finally { texturedObjectListLock.ExitReadLock(); }
    }
    private set
    {
        texturedObjectListLock.EnterWriteLock();
        try { BluePrintControl.objectList = value; }
        finally { texturedObjectListLock.ExitWriteLock(); }
    }
}
Unfortunately, however, we haven’t achieved complete thread safety: access to the list is locked, but not access to the objects themselves! For this reason, the properties of the ImageTextureObjects need locking too:

public Vector3 Position
{
    get
    {
        objectLock.EnterReadLock();
        try { return position; }
        finally { objectLock.ExitReadLock(); }
    }
    set
    {
        objectLock.EnterWriteLock();
        try
        {
            dataChanged = true;
            position = value;
        }
        finally { objectLock.ExitWriteLock(); }
    }
}


Always use a try/finally combination to ensure that the lock is released in all cases, no matter what.

Drawing to the GraphicsDevice

Now that we have reliable access to all the data we need, let’s actually draw our house in 3D! Of course, we rely heavily on XNA to get this done. This is what the Draw() method looks like:


protected override void Draw()
{
    // sets up the device for a new rendering round
    this.GraphicsDevice.RenderState.CullMode = CullMode.CullClockwiseFace; // hides clockwise faces
    basicEffect.World = Matrix.Identity;
    basicEffect.View = camera.View;
    basicEffect.Projection = camera.Projection;
    // draws elements to screen
    this.drawWalls();
    this.drawImageObjects(); // these should be drawn last due to transparency
}

A graphics card works by rendering large numbers of triangle surfaces. Often, you only need to draw one side of each triangle, as is the case for us: since our walls have thickness, one side of each triangle faces “into” the wall itself and will never be seen. Therefore, we use GraphicsDevice.RenderState.CullMode to tell the graphics card not to bother drawing a triangle when the camera is on its “wrong” side. This enhances performance (not by much in our case, since our scenes are very simple), but it is still only a small effort to tell the graphics card this.

Three essential matrices are also set: the Projection matrix determines the camera’s lens, the View matrix the camera’s position, and the World matrix the position of the drawn objects.

Looking at the drawWalls() method, the walls are drawn using the user-friendly DrawUserIndexedPrimitives method. We basically pass it two lists: the first contains a set of coordinates in 3D space (the corners of the walls), and the second contains a list of integers that refer to coordinates in the first list. In this way, we can easily construct a series of triangles while specifying the coordinates of the corners only once:


GraphicsDevice.DrawUserIndexedPrimitives(
    PrimitiveType.TriangleList,
    this.All3DWallVertices, 0, this.All3DWallVertices.Length,
    this.All3DWallIndices, 0,
    this.All3DWallIndices.Length / 3); // every three indices form one triangle


The images are displayed by drawing textured triangles. In the DrawImageObjectTexturedFaces() method, we use the VertexPositionTexture vertex format to this end. Each vertex defines not only a position but also texture coordinates: these specify which part of the texture should be drawn onto a triangle:
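For a single image quad, the vertex and index data might be set up like this (a sketch; the positions and array names are mine):

```csharp
// Four corners of one image quad; texture coordinates run from (0,0)
// at the top-left of the bitmap to (1,1) at the bottom-right.
VertexPositionTexture[] quadVertices = new VertexPositionTexture[4];
quadVertices[0] = new VertexPositionTexture(new Vector3(0, 0, 1), new Vector2(0, 0));
quadVertices[1] = new VertexPositionTexture(new Vector3(1, 0, 1), new Vector2(1, 0));
quadVertices[2] = new VertexPositionTexture(new Vector3(1, 1, 1), new Vector2(1, 1));
quadVertices[3] = new VertexPositionTexture(new Vector3(0, 1, 1), new Vector2(0, 1));

// Six indices describe the two triangles (0,1,2) and (0,2,3) that form the quad.
int[] quadIndices = { 0, 1, 2, 0, 2, 3 };
```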


GraphicsDevice.DrawUserIndexedPrimitives(
    PrimitiveType.TriangleList,
    quadVertices, 0, 4, // the four VertexPositionTexture corners of one image quad
    quadIndices, 0, 2); // two textured triangles form the quad




The code is quite comprehensive, so I’ve only been able to explain a few parts and ideas. If you want to try this out, the download link for the source code is at the top of the article. And be creative! Why not create a smart Google Earth mash-up in which you use the point-of-view device to navigate intuitively through a landscape? Surely there are also some innovative games you can think of. Be inspired by the Nintendo DS games that make use of a similar dual-display layout!

About The Author

Thijs Brilleman holds a Master’s in Mechanical Engineering but has always been interested in fun, tangible machines that sit somewhere in between computers and mechanical machines. Currently employed at the University of Twente, he designs new interaction methods that can be used in collaborative design. You can reach him at: thijsbrilleman@gmail.com
