Today's project isn't Kinect or HoloLens related; I just thought it was cool... ;)
This post is part of a series called Intel RealSense for Game Developers.
- Text Input for Games With Just an Intel RealSense Camera
- Get Your Head in the Game With Intel RealSense
I'm very excited about the sensification of computing: the idea that smart things sense their environment, connect to each other, and become an extension of the user. As an Intel Software evangelist (and a self-confessed hobbyist coder), I'm fortunate enough to experiment with these possibilities using Intel's cutting-edge technology. Previously, I wrote about 3D-scanning the world around you in real time and placing a virtual object in it; in this tutorial, I'd like to do the converse: 3D-scan yourself and then add that scan to the virtual world.
More specifically, we will:
- Scan your head using a RealSense camera and the SDK.
- Convert the 3D file from OBJ format to PLY format (for editing in Blender).
- Convert the vertex colors to a UV texture map.
- Edit the 3D mesh to give it a lower vertex and poly count.
We'll end up with a 3D, colored mesh of your head, ready to use in Unity. (In the next tutorial, we'll see how to actually get this into Unity, and what you could use this for in a game.)
You will need:
- An Intel RealSense camera (or computer with integrated RealSense camera)
- A computer with a 4th-generation Intel Core processor (or newer)
- The RealSense SDK (which is free to download)
- MeshLab (free)
- Blender (free)
Scanning Your Head ...
How to Scan Your Head ...
Converting to PLY Format ...
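MeshLab handles the OBJ-to-PLY conversion for you, but it can help to see what's actually happening: both formats store the same vertices, per-vertex colors, and faces, just laid out differently. As a rough illustration (not a replacement for MeshLab), here's a minimal Python sketch that converts the small subset of OBJ a colored scan produces (`v x y z r g b` lines plus triangular `f` faces) into ASCII PLY:

```python
# Minimal OBJ-to-PLY converter sketch. Handles only "v x y z [r g b]"
# vertex lines and triangular "f" faces; real scans and real tools
# (MeshLab) deal with many more cases.

def obj_to_ply(obj_text):
    vertices, faces = [], []
    for line in obj_text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            x, y, z = (float(p) for p in parts[1:4])
            # Per-vertex color, if present, is three extra floats in 0..1.
            if len(parts) >= 7:
                r, g, b = (int(float(p) * 255) for p in parts[4:7])
            else:
                r = g = b = 255
            vertices.append((x, y, z, r, g, b))
        elif parts[0] == "f":
            # OBJ indices are 1-based and may look like "3/3/3"; PLY is 0-based.
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    header = "\n".join([
        "ply", "format ascii 1.0",
        f"element vertex {len(vertices)}",
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        f"element face {len(faces)}",
        "property list uchar int vertex_indices",
        "end_header",
    ])
    body = [f"{x} {y} {z} {r} {g} {b}" for x, y, z, r, g, b in vertices]
    body += ["3 " + " ".join(str(i) for i in f) for f in faces]
    return header + "\n" + "\n".join(body) + "\n"
```

The key takeaway: PLY declares its vertex and face counts (and per-vertex color properties) up front in a header, which is why Blender and MeshLab can round-trip the scan's vertex colors losslessly.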
Converting Vertex Colors to a UV Texture Map ...
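To get a sense of what the vertex-color-to-texture step is doing under the hood: once the mesh is UV-unwrapped, every vertex has a 2D texture coordinate, and baking writes each vertex's RGB color into the texture image at that coordinate (real bakers rasterize whole triangles with interpolation). This crude NumPy sketch, with hypothetical `uvs` and `colors` inputs, only does a nearest-pixel "splat" to show the mapping:

```python
import numpy as np

# Crude sketch of vertex-color baking: write each vertex's RGB color
# into a texture at its UV coordinate. Real tools rasterize whole
# triangles with interpolated colors; this only splats single pixels.

def bake_vertex_colors(uvs, colors, size=256):
    """uvs: (N, 2) floats in [0, 1]; colors: (N, 3) uint8 RGB."""
    tex = np.zeros((size, size, 3), dtype=np.uint8)
    px = np.clip((uvs * (size - 1)).astype(int), 0, size - 1)
    # Image row 0 is the top of the picture, but UV v=0 is the bottom,
    # so flip the vertical axis when writing.
    tex[size - 1 - px[:, 1], px[:, 0]] = colors
    return tex
```

The result of the real bake is the same kind of artifact: an ordinary image file that Unity can use as the material texture for the low-poly head, with no per-vertex color data needed at all.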
Export the Mesh...
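Before exporting, the poly-count reduction is typically done with Blender's Decimate modifier, which intelligently collapses edges. To illustrate why merging vertices shrinks a mesh, here is a toy vertex-clustering decimator (a much simpler technique than Blender's): vertices are snapped to a coarse grid, vertices sharing a cell are merged, and faces that collapse are dropped. The input arrays are hypothetical.

```python
import numpy as np

# Toy vertex-clustering decimation: merge vertices that fall into the
# same grid cell, then discard faces whose corners are no longer
# distinct. Blender's Decimate modifier uses smarter edge-collapse
# criteria; this only demonstrates the idea of reducing vertex count.

def decimate(vertices, faces, cell=0.1):
    """vertices: (N, 3) floats; faces: (M, 3) ints. Returns reduced mesh."""
    cells = np.floor(vertices / cell).astype(int)
    # One representative position (the average) per occupied cell.
    uniq, remap = np.unique(cells, axis=0, return_inverse=True)
    remap = remap.reshape(-1)
    new_verts = np.zeros((len(uniq), 3))
    for i in range(len(uniq)):
        new_verts[i] = vertices[remap == i].mean(axis=0)
    new_faces = remap[faces]
    # Keep only faces whose three corners are still distinct.
    keep = (new_faces[:, 0] != new_faces[:, 1]) & \
           (new_faces[:, 1] != new_faces[:, 2]) & \
           (new_faces[:, 0] != new_faces[:, 2])
    return new_verts, new_faces[keep]
```

A raw RealSense scan can have hundreds of thousands of triangles; a game-ready head needs a few thousand at most, which is why this reduction step matters before the mesh goes anywhere near Unity.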
You've scanned your head, reduced its poly count, cleaned up the model, and exported it as a 3D model ready to use in other applications. In a future tutorial, we'll import this 3D mesh into Unity and look at how it could be used in games.