There are no clouds when raytracing with Azure


Today's project is something I've found cool for years, ever since my Amiga days, when raytracing was 'the thing'. Today's project takes raytracing and moves it into the Azure cloud...

Use the power of Azure to create your own raytracer

The power available in the cloud is growing every day. So I decided to use this raw CPU power to write a small raytracer.

I’m certainly not the first one to have had this idea: Pixar and GreenButton, for example, already use Azure to render images.

In this article, we will see how to write our own rendering system using Azure, so that you can produce your own 3D-rendered movie.

The article is organized around the following topics:

  1. Prerequisites
  2. Architecture
  3. Deploying to Azure
  4. Defining a scene
  5. Web server and worker roles
  6. How it works
  7. JavaScript client
  8. Conclusion
  9. To go further

The final solution can be downloaded here and if you want to see the final result, please go there:

To get started there are just a few prerequisites:


To be able to use the project, you must have:

You will also need an Azure account. You can get a free one here:

What is the big picture of this project?


Our architecture can be defined using the following schema:



While you can play with the instance David is already running, having your own, where you can tweak it to your heart's content, is key. The good news is that this article walks you through all the steps to deploy the solution to your own Azure instance.

Deploying to Azure

After opening the solution, you can launch it directly from Visual Studio inside the Azure Emulator. You will thus be able to debug and fine-tune your code before sending it to production.

Once you’re ready, you can deploy your package on your Azure account using the following procedure:

  • Open the “AzureRaytracer.sln” solution inside Visual Studio
  • Configure your Azure account: to do so, right-click the “AzureRaytracer” project and choose the “Publish” menu. You will get the following screen:


Once he gets you online, he then describes the schema of the XML file used to define the scene that the code will draw.

Defining a scene

To define a scene, you specify it in an XML file. Here is a sample scene:


The file structure is the following:

  • A [scene] tag is used as the root tag and allows you to define the following parameters:
    • FogStart / FogEnd : define the range of the fog from the camera
    • FogColor : RGB color of the fog
    • ClearColor : background RGB color
    • AmbientColor : ambient RGB color
  • An [objects] tag which contains the list of objects
  • A [lights] tag which contains the list of lights
  • A [camera] tag which defines the scene camera. It is our point of view, defined by the following parameters:
    • Position : camera position (X, Y, Z)
    • Target : camera target (X, Y, Z)

All objects are defined by a name and can be one of the following types:

  • sphere : a sphere defined by its center and radius
  • ground : a plane representing the ground, defined by its offset from 0 and the direction of its normal
  • mesh : a complex object defined by a list of vertices and faces. It can be manipulated with three vectors: Position, Rotation and Scaling.
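Putting the schema above together, a minimal scene file might look like the following. Note that the tag and attribute spellings here are inferred from the parameter list, not copied from the project, so treat this as an illustrative sketch rather than the exact format:

```xml
<scene FogStart="10" FogEnd="100" FogColor="200,200,200"
       ClearColor="0,0,0" AmbientColor="30,30,30">
  <objects>
    <sphere Name="ball" Center="0,1,5" Radius="1" />
    <ground Name="floor" Offset="0" Normal="0,1,0" />
  </objects>
  <lights>
    <light Position="10,10,-10" Color="255,255,255" />
  </lights>
  <camera Position="0,2,-10" Target="0,1,0" />
</scene>
```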

Now that we have our own instance, and a feeling for how scenes are created, next the makeup of the two Azure roles is covered.

Web server and worker roles

The web server runs under ASP.NET and provides two functions:

  • Connection to worker roles using the queue in order to launch a rendering:
  • Publish a web service to expose requests progress:
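The real web role does this in C# against Azure storage; purely as a language-neutral illustration of the same flow, here is a tiny Python sketch in which a local queue stands in for the Azure queue and a dictionary stands in for the progress web service (all names here are made up):

```python
import queue

# Stand-ins for the Azure queue and the progress web service.
render_queue = queue.Queue()   # worker roles would poll this for jobs
job_progress = {}              # job id -> percent complete

def request_render(job_id, scene_xml):
    """Web role: enqueue a rendering job and initialize its progress."""
    job_progress[job_id] = 0
    render_queue.put((job_id, scene_xml))

def get_progress(job_id):
    """Web service endpoint: report a job's progress to the client."""
    return job_progress.get(job_id, 0)

request_render("job-1", "<scene>...</scene>")
print(get_progress("job-1"))  # 0 until a worker picks the job up
```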


The WorkingUnit works according to the following algorithm:

  • Loading the scene
  • Creating the raytracer
  • Generating the picture and accessing the byte array
  • When the picture is rendered, saving it in a blob and updating the job progress state
  • Launching the render

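Paraphrasing those steps in Python (the real WorkingUnit is C# and writes to Azure blob storage; the toy raytracer and dictionary "blob store" below are stand-ins invented for illustration):

```python
class Raytracer:
    """Toy stand-in: 'renders' a flat gray image of the requested size."""
    def __init__(self, scene):
        self.width, self.height = scene["width"], scene["height"]

    def render(self, on_progress):
        pixels = bytearray()
        for y in range(self.height):              # one scanline at a time
            pixels.extend([128] * self.width)
            on_progress(100 * (y + 1) // self.height)
        return pixels

def working_unit(job_id, scene, blob_store, job_progress):
    raytracer = Raytracer(scene)                  # create the raytracer
    pixels = raytracer.render(                    # launch the render,
        on_progress=lambda p: job_progress.update({job_id: p}))
    blob_store[job_id] = bytes(pixels)            # save the picture in a "blob"
    job_progress[job_id] = 100                    # mark the job complete

blob_store, job_progress = {}, {}
working_unit("job-1", {"width": 4, "height": 3}, blob_store, job_progress)
print(len(blob_store["job-1"]), job_progress["job-1"])  # 12 100
```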

Finally, the raytracer itself is discussed (and remember, you get the source to all of this!)

The raytracer

The raytracer is entirely written in C# 4.0 and uses the TPL (Task Parallel Library) to enable parallel code execution.

The following functionalities are supported (but as Yoda said “Obvious is the code”, so do not hesitate to browse the code):

  • Fog
  • Diffuse
  • Ambient
  • Transparency
  • Reflection
  • Refraction
  • Shadows
  • Complex objects
  • Unlimited light sources
  • Antialiasing
  • Parallel rendering
  • Octrees

The interesting point with a raytracer is that it is a massively parallelizable process. Indeed, a raytracer executes strictly the same code for each pixel of the screen.
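Because every pixel is independent, the render loop maps naturally onto something like TPL's Parallel.For. A Python sketch of the same idea (not the article's C#), tracing each pixel in parallel against a single sphere with simple orthographic rays:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8

def trace_pixel(index):
    """Strictly the same code for every pixel: cast a ray straight ahead
    from the pixel's position and test it against a unit sphere."""
    x, y = index % WIDTH, index // WIDTH
    # Map the pixel to [-1, 1] x [-1, 1] on the image plane.
    px = 2 * x / (WIDTH - 1) - 1
    py = 2 * y / (HEIGHT - 1) - 1
    # An orthographic ray hits the unit sphere iff px^2 + py^2 <= 1.
    hit = px * px + py * py <= 1
    return 255 if hit else 0  # white where the sphere is, black elsewhere

# Since no pixel depends on another, the whole image can be computed in parallel.
with ThreadPoolExecutor() as pool:
    image = list(pool.map(trace_pixel, range(WIDTH * HEIGHT)))

print(sum(1 for p in image if p == 255))  # number of pixels covered by the sphere
```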


Let's take a peek at the Solution.


As the article covers, there are three basic parts: the Azure web role, the worker role, and the raytracer.




What I thought was kind of cool is that the raytracer is complete and has no external references at all.


So okay, you don't want to create your own instance, you just want to raytrace stuff? David's got an instance already running which you can play with.




If you're interested in raytracing, cloud development, or just looking for some interesting code to check out, there's a little for everyone here...
