Building the Laser Graffiti System

Last year, Clint Rutkas contacted me about building a project involving lasers, graffiti and code. When you get a request like that, you never decline, especially when lasers are involved. The project requirements sounded quite simple: create an application that can, using a laser pointer as a virtual spray can, draw virtual graffiti on the side of a building. It sounded a little daunting to me at first, but after it was broken up into small digestible pieces, the project ended up not being very complicated.

Below is the basic hardware setup of the Laser Graffiti System.


Building a Laser Tracker Engine

Giving Sight to the Blind

The very first thing that needed to be proven was the ability to track a laser point. The most obvious solution was to give our application some vision with a webcam. Since this was going to be a WPF application, I decided to use my open-source project, WPF MediaKit, which comes with a webcam control called the VideoCaptureElement.

At the time, VideoCaptureElement was not very robust, and it occurred to me that my control wasn't very useful beyond looking at yourself on a webcam in a WPF app! I needed a way to get high-performance access to every pixel of every frame the webcam spat out. This was a good time to add that ability and increase the value of my project. Without getting into the gory details of DirectShow and P/Invoke, I was able to add a hook that passes me the pixel buffer, which I wrap in a Bitmap class before raising an event for each frame.
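The hook described above is just a callback pipeline: the capture graph hands a raw pixel buffer to managed code, which raises an event per frame. The original is C# over DirectShow; as a language-agnostic illustration of the pattern, here is a minimal Python sketch (all names are hypothetical, not the WPF MediaKit API):

```python
class FrameSource:
    """Minimal sketch of a per-frame hook: a capture pipeline pushes a
    raw pixel buffer, and every subscriber gets called with each frame."""

    def __init__(self):
        self._subscribers = []

    def on_new_frame(self, callback):
        """Register a callback to receive each frame's pixel buffer."""
        self._subscribers.append(callback)

    def _push_buffer(self, pixel_buffer):
        """Called by the capture graph for every video sample."""
        for callback in self._subscribers:
            callback(pixel_buffer)
```

In the real application, the subscriber would wrap the buffer in a Bitmap and hand it to the laser-detection code.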

The XAML required to get each video sample from the web camera.

<MediaKit:VideoCaptureElement LoadedBehavior="Play"
    DesiredPixelWidth="{Binding DesiredPixelWidth}"
    DesiredPixelHeight="{Binding DesiredPixelHeight}"
    VideoCaptureDevice="{Binding SelectedItem, ElementName=videoCapDevices}"
    MinWidth="500" />

Did I Mention Lasers?


Now that I had the ability to look at every pixel sent to me, I needed to make sense of it all. In order to complete the project, we have to be able to find a laser point shined on a wall. The difficulty lies in the fact that a laser can be of varying size and color. For example, if you take two consecutive video frames with the same green laser point, you will find the point may slightly differ between the two. This is because we are dealing with analog data and slightly changing lighting conditions. Whatever algorithm we choose to find the laser must take this into account.

There are many advanced ways to use video analytics to find a laser pointer, but this project did have a deadline, and I wanted to keep it fun, so I went with a simple method that allows a user to filter the video based on ranges of hue, saturation, and luminance. To do this, we make use of the AForge image processing library. AForge is an open-source library that comes with tons of useful utilities for just what we want to do.
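The filtering boils down to a per-pixel range test on hue, saturation, and luminance: pixels inside all three calibrated ranges are kept, everything else is blacked out. As a language-agnostic sketch of that test (the names and ranges are illustrative, not AForge's API):

```python
def passes_hsl_filter(h, s, l, hue_range, sat_range, lum_range):
    """True if a pixel's hue (0-359), saturation (0-1) and luminance (0-1)
    all fall inside the calibrated ranges. Kept pixels are laser
    candidates; everything else is blacked out before blob counting."""
    return (hue_range[0] <= h <= hue_range[1]
            and sat_range[0] <= s <= sat_range[1]
            and lum_range[0] <= l <= lum_range[1])
```

Because the laser's apparent color drifts frame to frame, the ranges have to be wide enough to absorb that variation but narrow enough to reject everything else in the scene.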

Filtering pixels and counting blobs…

As previously mentioned, we need to first filter the image based on hue, saturation, and luminance. Luckily, the AForge library comes with such a filter, so there is not much work to be done here:

/* This AForge class helps us filter out the pixels we do not want */
var hsl = new HSLFiltering
{
    Hue = new IntRange(HueMinimum, HueMaximum),
    Saturation = new DoubleRange(SaturationMinimum, SaturationMaximum),
    Luminance = new DoubleRange(LuminanceMinimum, LuminanceMaximum)
};

/* Lock only the area of the bitmap we want to search */
var bitmapData = bitmap.LockBits(
    new Rectangle((int)targetSearchArea.X, (int)targetSearchArea.Y,
                  (int)targetSearchArea.Width, (int)targetSearchArea.Height),
    ImageLockMode.ReadWrite, bitmap.PixelFormat);

/* Apply the AForge filter.  Doing it "in place" is more efficient as a
 * new bitmap does not have to be allocated and copied */
hsl.ApplyInPlace(bitmapData);

bitmap.UnlockBits(bitmapData);

So, what needs to be done now that we have a filtered image showing only a laser dot? The answer: Blob counting! A blob, in the context of video analytics, is a set of pixels that are all touching, and AForge comes with a class ready to tackle that too!

The AForge blob counter takes in a gray-scale image for processing and returns all blobs found in the image. The information on each blob includes the rectangular pixel area where it was found in the video frame; the center of that area should be the center of our laser. It's important to note that if the HSL filter is improperly calibrated, we will get an overabundance of blobs and no way to tell which one, if any, is the laser point.

/* Create and initialize our blob counter */
var blobsCounter = new BlobCounter
{
    FilterBlobs = true,
    ObjectsOrder = ObjectsOrder.Size,
    MinHeight = BlobMinimumHeight,
    MinWidth = BlobMinimumWidth,
    MaxWidth = 25,
    MaxHeight = 25
};

/* We first let the blob counter process our image */
blobsCounter.ProcessImage(grayImage);

/* Retrieve a list of blobs that were found */
Blob[] blobs = blobsCounter.GetObjects(grayImage);
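The "center of the rectangle" step described above is simple enough to show directly. A small Python sketch of the same arithmetic (the real code reads the blob's bounding rectangle from AForge; the function name here is illustrative):

```python
def blob_center(x, y, width, height):
    """Center of a blob's bounding rectangle -- our best estimate
    of where the laser point actually is in the frame."""
    return (x + width / 2.0, y + height / 2.0)
```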

The early laser tracking prototype. We can literally have it track anything, based on color.

Building the Laser Graffiti Application

Our Rocket Engine Needs a Rocket

At this point, I had a working proof of concept and a laser tracker. What I needed now was an application to use it. You know, the laser-graffiti application.

There are many ways to construct a WPF application, and one pattern that's been gaining traction is MVVM (Model-View-ViewModel). I chose to make a composite application using Prism and MVVM. Prism is a great guidance library for building composite WPF applications, with tons of tools to help in common scenarios.

Composite What? Huh?

When developing applications, usually of the larger variety, things can become complex. Complexity turns into a mess. A mess turns into time investment. Time investment turns into money. The more money spent, the fewer bonuses you will receive. To make sure we get that end-of-year pay raise, we want to control complexity from the get-go by making our code and our internal systems loosely coupled. A composite application is just a collection of loosely coupled systems that are composed at runtime.

The advantages of a composite application are:

· Components can be developed independently

· Components can be swapped out more easily than if they were tightly coupled

· The application remains extensible, so new features can be added more easily

· Maintenance is much clearer with less chance of breaking other components

· Unit testing is more feasible

Composite Application Entrails

There are many, many excellent resources about Prism available on the internet, so I'd rather not compete with them. I did, however, want to cover the basics, and at least how I've used it within the Laser Graffiti application.


The Shell Assembly – This contains the main entry point of the application and the initial or main UI container for the application. Before anything else, this assembly will initialize something called a bootstrapper. The bootstrapper simply does any pre-initialization that needs to be done before initializing the rest of the application.

Module Assemblies – Each module assembly can be considered a component. The bootstrapper initializes each component and then the module spins up any application services and adds any user interface to the shell. For instance, the MediaKit module handles the capture of the video and the AForgeModule handles the vision routines.

The Infrastructure Assembly – This contains any base classes, well-known or shared interfaces, and general infrastructure code. Simply put, most if not all modules will have a reference to this.

Rendering the Graffiti

I needed a way to draw the graffiti to a projector. This is accomplished by drawing the graffiti to a borderless window, maximized to fill the screen. I also needed to write the code that draws the graffiti itself. Initially, I just used the WPF InkCanvas control, but I found it wasn't well tuned for what I wanted to do. So I decided XNA would give the most performance and flexibility for the level of effects desired for drawing graffiti.

I had never really done any XNA development, so this project is certainly not built using best practices. The important part to notice is that the project is more or less a regular XNA application, with the exception that the executable is added as a library reference. This is because we instantiate the XNA game from our WPF application's process. The following snippet of code accomplishes this:

private void ShowGraffitiWall(bool show)
{
    /* Shut down any game window that is already running */
    if (m_game != null)
    {
        m_game.Exit();
        m_game = null;
    }

    if (!show)
        return;

    /* Run the XNA game loop on its own thread so it won't block WPF */
    var t = new Thread((ThreadStart)delegate
    {
        m_game = new XnaGraffitiGame();
        m_game.Effect = m_lastGraffitiEffect;
        m_game.Run();
    });

    t.IsBackground = true;
    t.Start();
}
We instantiate the XNA window this way because it allows our WPF code to send the XNA game messages about where the laser pointer was detected.

Drawing with the GPU

A blanket statement about XNA is that it is a drawing API. XNA applications can get quite complex when you add in things like 3D or shaders, but for this article, I'm only covering the required simple stuff. That said, I do make the assumption that the reader has had at least a few hours of XNA experience.

Our primary goal is to render virtual graffiti using XNA. This means we need to be able to draw a line. I started by creating a PNG file that would act as a “brush.” In XNA, these 2D graphics are also known as sprites. The brush sprite looks like this:


Now we can use XNA's SpriteBatch class to draw our brush. The first problem I had, though, was that most XNA tutorials show a render loop that clears the screen and redraws the sprite every frame. I needed to retain previous brush strokes, which means drawing onto an intermediate drawing surface (a RenderTarget2D), then drawing that surface to the screen.

The next issue was that the line I was trying to draw was very broken. See, the laser detection code runs at a maximum of 30 FPS (the limit of my web camera). If I moved the laser fast enough, the screen would just look like a bunch of random points being drawn; I had to move the laser very slowly to make it appear like a line. I remembered reading an excellent article detailing a project by Rick Barraza, which led me to some popular algorithms for creating a line where your input is two 2D points.


So, drawing a line required only a simple, two-step process: A) get a new point from laser detection, and B) draw a line from last_laser_coordinate to current_laser_coordinate. After putting that together, we can now produce some useful-looking graffiti!
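Since samples arrive at most 30 times a second, step B amounts to stamping the brush at evenly spaced positions between the last and current laser coordinates. A Python sketch of that interpolation (the function name and spacing value are illustrative, not the project's code):

```python
import math

def interpolate_stroke(last, current, spacing=4.0):
    """Return evenly spaced brush positions from `last` to `current`
    (x, y tuples), so a fast-moving laser still draws a continuous
    line instead of scattered dots."""
    dx, dy = current[0] - last[0], current[1] - last[1]
    steps = max(1, int(math.hypot(dx, dy) / spacing))
    return [(last[0] + dx * i / steps, last[1] + dy * i / steps)
            for i in range(steps + 1)]
```

Smaller spacing gives a smoother stroke at the cost of more sprite draws per frame.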


Adding Crazy Graffiti Effects

The excitement of drawing simple lines in XNA wore off pretty fast. After all, the GPU is capable of some pretty amazing things. What about drawing with fire? Fire isn't boring:


The fire effect is done using a method similar to the solid-color sprite method described earlier. However, since it uses a particle system, the fire effect is much more complex. Each particle has its own shader to control position and color. With the time constraints of this project, I modified code from an XNA particle tutorial. There weren't many changes from the tutorial's code, but a big hurdle was that the particles took 3D coordinates instead of the 2D screen coordinates I needed to draw the graffiti. I found a small snippet to handle that, though I was informed by other XNA gurus that there is an easier way to accomplish this. If you are curious, this is what it looks like (and it's not for the faint of heart):

protected Vector3 ScreenPointToVector3(Point coords)
{
    /* Build the same view matrix the particle system renders with */
    Matrix viewMatrix =
        Matrix.CreateTranslation(0, -25, 0) *
        Matrix.CreateRotationY(MathHelper.ToRadians(CameraRotation)) *
        Matrix.CreateRotationX(MathHelper.ToRadians(CameraArc)) *
        Matrix.CreateLookAt(
            new Vector3(0, 0, -CameraDistance),
            new Vector3(0, 0, 0),
            Vector3.Up);

    float aspectRatio =
        (float)m_game.GraphicsDevice.Viewport.Width /
        (float)m_game.GraphicsDevice.Viewport.Height;

    Matrix projectionMatrix =
        Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver4, aspectRatio,
            1, 10000);

    /* Unproject the screen point at the near and far clip planes */
    Vector3 nearScreenPoint = new Vector3(coords.X, coords.Y, 0);
    Vector3 farScreenPoint = new Vector3(coords.X, coords.Y, 1);
    Vector3 nearWorldPoint = m_game.GraphicsDevice.Viewport.Unproject(
        nearScreenPoint, projectionMatrix, viewMatrix, Matrix.Identity);
    Vector3 farWorldPoint = m_game.GraphicsDevice.Viewport.Unproject(
        farScreenPoint, projectionMatrix, viewMatrix, Matrix.Identity);

    /* Walk along the ray between them to the point where z == 0 */
    Vector3 direction = farWorldPoint - nearWorldPoint;

    float zFactor = -nearWorldPoint.Z / direction.Z;
    Vector3 zeroWorldPoint = nearWorldPoint + direction * zFactor;

    return zeroWorldPoint;
}
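The last few lines of that method are just a ray-plane intersection: the unprojected near and far points define a ray, and the zFactor math solves for where it crosses z = 0, the plane the 2D graffiti lives on. In plain Python, the same calculation looks like this (function name is illustrative):

```python
def ray_z0_intersection(near, far):
    """Intersect the ray through `near` and `far` (x, y, z tuples)
    with the z = 0 plane, mirroring the zFactor calculation in the
    XNA snippet: t = -near.z / direction.z."""
    direction = tuple(f - n for f, n in zip(far, near))
    t = -near[2] / direction[2]
    return tuple(n + d * t for n, d in zip(near, direction))
```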

The End Result

Web-camera device selection:


Fine tuning the laser detection:


About Jeremiah

Jeremiah Morrill is a software developer and 2010 MVP living in Las Vegas, Nevada. He owns a software company called HJT with a couple of partners, where he focuses on multimedia and rich user interfaces with WPF and Silverlight. Jeremiah spends a lot of his free time learning, listening, and helping others in the online development communities.
