Behind the Scenes: Making the 2016 Iowa Caucus App



Capturing more than 90% of the caucus results within three hours in a secure, accurate and trusted manner is an amazing accomplishment. In today's special TechNet Radio episode, join Tommy Patterson as he welcomes the team that brought the 2016 Iowa Caucus app to life.

Built on Microsoft technology, the new platform featured a secure system that enabled precincts to report their results directly by party and ensured that only authorized Iowans were reporting results. Tune in to learn how this cross-platform app came to be, from planning to testing and execution, and how Microsoft Azure's cloud computing platform played a vital role in storing, managing and reporting the election results.

  • [3:44] What did the planning process look like at the beginning of this project? What were some of the hurdles that had to be crossed?
  • [8:30] How did this work from a technology perspective? What did you use?
  • [13:40] What about the testing and security aspect of this project?
  • [27:01] How did you handle performance issues?

Have questions? Reach out to us on Twitter:

Tommy Patterson
Microsoft Innovation Center Sr. Technical Evangelist

Ashish Jaiman

Manager – Campaign Technology Advisors

Rodney Guzman
Founder, Interknowlogy

Joel Cochran

Campaign Technology Advisor (GOP)

Ethan Chumley

Campaign Technology Advisor (Democrat)




 Follow the conversation @MS_ITPro
 Subscribe to our podcast via iTunes, Stitcher, or RSS



The Discussion

  • Would have liked to hear more about the response to performance issues. Sounds like nothing was done to spin up more servers?

  • @smcgough: The dev team said they were on the ground watching for any performance issues and, instead of scaling on demand, they had the environment prestaged and ready. With such a narrow window of operation, all systems were ready for maximum participation. By all measurements the operation was a success. I will ask the team to chime in if there are further details they can share as well. Thanks for watching the show!

  • @smcgough: @tommypatterson: When we started the investigation, resource utilization across our web server pool wasn't over any limits. The issue affected single servers independently at different times during the event, so we did not think adding more servers would mitigate it. In addition, increasing the number of instances would have required deploying our solution to a separate data center, due to the number of resources we were requesting, which would have taken too long. By the time we had a hypothesis for the root cause and were ready to deploy a fix, the issue had been mitigated by lower load.

  • Thanks for the additional details.

    It sounds like the reports of downtime were overstated in the media. Any chance of metrics? Something like x% of total requests were dropped due to peak load.

