Windows 10 IoT, Raspberry Pi and your next Halloween Project

Today's Hardware Friday Project is one that will give you an early jump start for Halloween. Yes, Halloween. All too often, I find these projects right before or right after the holiday, i.e., too close to it to give you all enough time to actually build it.

Today's project is different! You now have MORE than enough time to get the stuff, write the code, and extend it even further...

Halloween treat with Windows 10 IoT Core and Raspberry Pi

Overview

Have you ever wanted to make some ghostly figures that amaze your neighbors and their kids on Halloween?  Perhaps you have a Halloween party every year and you want to greet your guests with a ghostly surprise.  Well, I am going to show you how you can put together a really fun project that will be talked about by anyone who comes in contact with it.

This project will leverage a well-known theatrical illusion called Pepper's ghost.  We will be using a Raspberry Pi 2 Model B running Windows 10 IoT Core to project an animated pumpkin onto a reflective surface.  You can use an actual projector to display your animation, but I have a number of old LED monitors that will work just as well and won't cost me as much to put together.  All of my LED monitors have DVI inputs, but the Raspberry Pi has an HDMI port for video output.  No worries, as you can buy a simple HDMI to DVI adapter that will allow you to hook up your Raspberry Pi to most LED monitors.  I purchased my HDMI to DVI adapter from Amazon.  Using this type of adapter will not allow you to send audio out the HDMI port, since the DVI connector and LED monitor only support video signals.  But you can still use the audio jack on the Raspberry Pi to blast horrifying sounds through some amplified speakers.

The application will monitor a Motion Sensor connected to the Raspberry Pi, and when motion is detected an animation will be projected onto the viewing area.  This animation will be made up of facial feature movements as well as sounds coming out of a pair of speakers.  Once the animation is done, the application will wait for another motion detection and repeat the sequence.
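
To make that flow a bit more concrete, here is a minimal sketch of the kind of polling loop I have in mind.  The names ReadDistanceCmAsync and PlayAnimationAsync, and the 150 cm threshold, are placeholders of my own and not the project's actual code.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Hypothetical sketch only -- ReadDistanceCmAsync() and PlayAnimationAsync()
    // stand in for the project's real sensor and animation code.
    public class MotionTrigger
    {
        const double TriggerDistanceCm = 150;   // assumed threshold for "someone is close"

        public async Task RunAsync(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                double distanceCm = await ReadDistanceCmAsync();  // poll the sonar sensor
                if (distanceCm < TriggerDistanceCm)
                {
                    await PlayAnimationAsync();                   // facial movements + sound
                }
                await Task.Delay(100);                            // short pause, then check again
            }
        }

        // Placeholders: the real project reads the MCP3008 and drives the animation.
        Task<double> ReadDistanceCmAsync() => Task.FromResult(999.0);
        Task PlayAnimationAsync() => Task.CompletedTask;
    }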

The intent of this project is to get the foundation in place so that you can build your own animations.  Simple animations can be made using WPF's image transformations and binding the transformation properties to a View Model.  You simply change the property values on the View Model and the image moves or transforms in 2D space.  If an image transformation does not achieve the correct 2D effect, you can also swap out the image with another one that is shaped a little differently.  This is not how you would build a complex graphical game, but animating a cartoon-like figure works very well using this technique.
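
As a rough illustration of that technique (my own minimal sketch, not the project's actual View Model), an eye image can be moved by binding a TranslateTransform to a View Model property and simply setting new values on it:

    using System.ComponentModel;
    using System.Runtime.CompilerServices;

    // Minimal sketch of the binding technique described above (not the project's actual class).
    // In XAML, an eye Image would bind its RenderTransform to this property, e.g.:
    //   <Image Source="eye.png">
    //     <Image.RenderTransform>
    //       <TranslateTransform X="{Binding EyeOffsetX}" />
    //     </Image.RenderTransform>
    //   </Image>
    public class FaceViewModel : INotifyPropertyChanged
    {
        double _eyeOffsetX;

        public double EyeOffsetX
        {
            get { return _eyeOffsetX; }
            set { _eyeOffsetX = value; OnPropertyChanged(); }   // setting this moves the eye on screen
        }

        public event PropertyChangedEventHandler PropertyChanged;

        void OnPropertyChanged([CallerMemberName] string name = null) =>
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
    }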

Defining facial movements in a way that does not dictate the shape of the actual face being animated was another goal of this project.  I didn't want to animate a pumpkin using x/y coordinates, because that wouldn't translate well if you had a differently shaped pumpkin, or perhaps a skeleton head you wanted to animate.  In other words, I wanted to be able to create a timeline-based animation that could be applied to any face.  This meant I needed a domain-specific language that describes a facial expression.  As luck would have it, this domain language already exists: it is called the Facial Action Coding System (FACS).
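
For context, FACS describes expressions as numbered Action Units; for example, AU 1 is the Inner Brow Raiser and AU 26 is the Jaw Drop.  A sketch of how such codes might be represented (my own illustration, not the project's actual types) could look like this:

    // Illustrative only -- a tiny model of FACS-style action units.
    // The AU numbers and names are standard FACS codes; the classes are my own sketch.
    public enum ActionUnit
    {
        InnerBrowRaiser = 1,   // AU 1
        OuterBrowRaiser = 2,   // AU 2
        BrowLowerer     = 4,   // AU 4
        UpperLidRaiser  = 5,   // AU 5
        LipCornerPuller = 12,  // AU 12 (smile)
        JawDrop         = 26   // AU 26 (mouth opens)
    }

    // A facial expression is just a set of action units with intensities;
    // each face (pumpkin, skeleton, ...) decides how to render them.
    public class FacialCode
    {
        public ActionUnit Unit { get; set; }
        public double Intensity { get; set; }   // e.g. 0.0 (none) .. 1.0 (maximum)
    }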

Diagram of the high-level design

[Image: high-level design diagram]

Build the display

Here I will lay out how to build the display that will hold the monitor as well as a scary scene that will be a backdrop for the ghost pumpkin.  

...

Step 1 - Hook up Raspberry Pi

This project uses a Sonar Sensor that outputs an analog voltage in direct relation to the distance to the object it detects.  Since the Raspberry Pi does not have a hardware Analog-to-Digital converter, we need to add an external one.  The MCP3008 chip is an 8-channel, 10-bit ADC with an SPI interface.  This means the chip can read 8 analog sources with 10 bits of precision and communicate with a microprocessor over the Serial Peripheral Interface bus.  We are only going to use one of those channels for our Sonar Sensor.
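
For reference, here is a rough sketch of reading one MCP3008 channel over SPI on Windows 10 IoT Core.  The bus name "SPI0", chip-select line 0, and the clock frequency are assumptions and may differ from how this project wires things up:

    using System.Threading.Tasks;
    using Windows.Devices.Enumeration;
    using Windows.Devices.Spi;

    // Rough sketch of reading MCP3008 channel 0 over SPI on Windows 10 IoT Core.
    // Bus name "SPI0", chip-select line 0, and the 500 kHz clock are assumptions.
    public class Mcp3008Reader
    {
        SpiDevice _spi;

        public async Task InitAsync()
        {
            var settings = new SpiConnectionSettings(0)     // chip-select line 0
            {
                ClockFrequency = 500000,                    // well within the MCP3008's limits
                Mode = SpiMode.Mode0
            };

            string selector = SpiDevice.GetDeviceSelector("SPI0");
            var devices = await DeviceInformation.FindAllAsync(selector);
            _spi = await SpiDevice.FromIdAsync(devices[0].Id, settings);
        }

        // Returns the raw 10-bit reading (0..1023) for the given channel (0..7).
        public int ReadChannel(int channel)
        {
            // MCP3008 framing: start bit, then single-ended flag + channel bits, then a don't-care byte.
            byte[] writeBuffer = { 0x01, (byte)(0x80 | (channel << 4)), 0x00 };
            byte[] readBuffer = new byte[3];

            _spi.TransferFullDuplex(writeBuffer, readBuffer);

            // The 10-bit result spans the last two bytes returned by the chip.
            return ((readBuffer[1] & 0x03) << 8) | readBuffer[2];
        }
    }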

...

Step 2 - Running the program

This application runs on the Windows 10 IoT Core platform.  If you have not set up your Raspberry Pi 2 to run Windows 10, make sure you follow the detailed instructions for first-time setup.

...

Step 3 - Make your own Animation

You can orchestrate your own animation by modifying scare.pumpkin.ui -> Services -> MotionActivatedSimpleAnimation.cs.  Animations are time-based and just end up being a collection of actions.  There are only 3 types of actions supported: Facial Coding, Sound, and Timer Stop.  In addition to making your own animations, you can modify the images used for the background pumpkin head or any of the facial components such as the eyes, eyebrows, nose, or mouth.
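
To give a feel for what a time-based collection of actions could look like, here is a hypothetical sketch; the type and member names are invented for illustration and are not the classes used in MotionActivatedSimpleAnimation.cs:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of a time-based animation -- my own illustration only.
    public enum AnimationActionType { FacialCoding, Sound, TimerStop }

    public class AnimationAction
    {
        public TimeSpan At { get; set; }                 // when the action fires, relative to start
        public AnimationActionType Type { get; set; }
        public string Payload { get; set; }              // e.g. a FACS code or a sound file name
    }

    public static class SampleAnimation
    {
        public static List<AnimationAction> Build() => new List<AnimationAction>
        {
            // Jaw drops immediately (FACS AU 26).
            new AnimationAction { At = TimeSpan.Zero, Type = AnimationActionType.FacialCoding, Payload = "AU26" },

            // Half a second later, play a sound (file name is an assumption).
            new AnimationAction { At = TimeSpan.FromSeconds(0.5), Type = AnimationActionType.Sound, Payload = "cackle.wav" },

            // Stop the sequence after four seconds.
            new AnimationAction { At = TimeSpan.FromSeconds(4), Type = AnimationActionType.TimerStop }
        };
    }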

...

Conclusion

This project ended up being a little more challenging than I had originally intended.  I wanted to use a simpler sensor to detect people than the Sonar sensor I ended up using.  The MaxSonar sensor is not difficult to use at all; in fact, it is one of the most reliable and easiest-to-use sonar sensors I have seen.  The difficult part was that I had to add an extra ADC chip to read the analog voltage.  There are other options for reading the sensor that I could have used, but I had the MCP3008 ADC chip, so I used it.  I originally wanted to use a PIR sensor, but the one I had was discontinued and it seemed to give me false triggers.  I was also concerned about whether the PIR sensor could reliably detect people in colder outdoor environments.

I hope someone gets some ideas from this project and builds their own cool animation.  There are a lot of ways this can be enhanced:

  • Support multiple sensors that can trigger the same or different animations
  • Add a way to randomize what animation gets executed
  • Make animations load from a file or a database
  • Have animations that contain animations (a way to build larger animations from smaller, re-usable ones)
  • Network multiple Raspberry Pis that can all work together in one big animation
  • Make it easier to build animations by using a joystick
  • Have other items, such as a fog machine, strobe lights, or animatronic monsters, that can also participate in an animation

[See the entire project]


