Thanks a lot for answering. I have to say that waiting 11 minutes to upload or even upgrade a package is a stretch. I hope you'll be able to come up with a solution for this, maybe hot-swapping from a pool of already-booted VMs or something like that.
And maybe when SSD prices go down you could offer a premium option to run on SSD-backed VMs, for an extra cost of course?
For my other questions please check out my posts at episode 21. I can't see that any of them have been answered.
And please let me know if I shouldn't pose questions here on C9.
A question: why does it take 11 minutes to deploy and start a new service in Windows Azure? My installation of Win7 on my SSD took less than that! I'm using the North Europe region, if that can have any impact. What is the normal deployment time, anyway?
Here are some performance numbers for adding 100 "hello azure" messages into a queue (using ThreadPool for each message insert) from on-premise:
Managed API: 15,287 ms
REST API: 7,758 ms
Can this have something to do with internal locking in the managed API upon calling AddMessage() or something like that?
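For context, the measurement pattern was roughly this; a minimal Python sketch (the `add_message` here is just a stand-in that sleeps to simulate per-call network latency, not the real managed API's `AddMessage()`):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def add_message(msg):
    """Stand-in for the real AddMessage() call; simulates ~10 ms of network latency."""
    time.sleep(0.01)
    return msg

def timed_enqueue(messages, workers=25):
    """Enqueue all messages in parallel; return (results, elapsed wall time in ms)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(add_message, messages))
    elapsed_ms = (time.perf_counter() - start) * 1000
    return results, elapsed_ms

if __name__ == "__main__":
    msgs = [f"hello azure {i}" for i in range(100)]
    sent, ms = timed_enqueue(msgs)
    print(f"added {len(sent)} messages in {ms:.0f} ms")
```

With 25 workers and 100 messages, wall time is roughly 4 waves of per-call latency rather than 100 sequential calls, which is why any extra per-call overhead (like internal locking) shows up so clearly in the totals above.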
I also made my own TCP/IP server running inside Azure in a worker role for adding messages to the queue. With my homegrown TCP/IP worker role, adding 100 messages from outside Azure took just over 3,000 ms (under half the time the REST API takes). What about exposing a general TCP/IP interface to the queue API and providing a managed wrapper over it on the client side?
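A minimal sketch of that idea in Python, assuming newline-framed messages over a single connection, with a plain in-memory `Queue` standing in for the real Azure queue:

```python
import socket
import socketserver
import threading
from queue import Queue

def start_queue_server(out_queue, host="127.0.0.1", port=0):
    """Start a TCP server that turns each newline-terminated line into one queue message."""
    class QueueHandler(socketserver.StreamRequestHandler):
        def handle(self):
            for line in self.rfile:
                out_queue.put(line.rstrip(b"\n").decode())

    server = socketserver.ThreadingTCPServer((host, port), QueueHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

def send_messages(port, messages, host="127.0.0.1"):
    """Client-side wrapper: one TCP connection, many newline-framed messages."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(("\n".join(messages) + "\n").encode())

if __name__ == "__main__":
    q = Queue()  # stand-in for the Azure queue
    server, port = start_queue_server(q)
    send_messages(port, [f"hello azure {i}" for i in range(100)])
    print(q.get(timeout=5))  # "hello azure 0"
    server.shutdown()
```

The point of the design is amortization: one persistent connection carries all 100 messages, instead of paying HTTP request overhead per message the way the REST API does.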
But the really strange part is that the managed API was even slower inside an Azure worker role than from on-premise. How can that be? From within a worker role in Azure, adding 100 messages took over 20,000 ms (5,000 ms more than from the outside).
Thanks for the show! Always fun and interesting! Though you could cut back on the who's-who jokes.
I have to say I really like the idea of letting the community ask questions. I would love it if this could become a fixed part of your show, like the tip of the week. In the spirit of one-question-only:
1a) Recently I tried to deploy a simple worker role that basically did nothing but wait. Still, it took several minutes to go from "Initializing" to the "Ready" state. This caused me to believe that my package was buggy. I actually spent several hours before finally figuring out that I just had to wait for about 5 minutes. Can you explain why starting an app takes so long?
1b) Will there be more diagnostics available for apps that don't even start? I know about IntelliTrace, but let's face it, not every company out there can afford the Ultimate edition of VS2010... Many times I've uploaded packages that won't run for some reason (forgot to change the devstore connection string, used a non-native DLL reference without a local copy, and what have you). This is by far the biggest pain of the Windows Azure experience right now.
1c) On a related note, why does deploying a package first show a screen with a button saying "Processing...", then a progress bar saying "Deploying...", and then, when you click "Run", "Enabling deployment..."? What are all these seemingly identical processes doing?
1d) Still somehow related to the above, ehem... As mentioned, you've thought about doing a piece on the Service Bus. Well, what about a piece on how to set up access from Azure to your on-premise SQL Server? This is what every man, woman and androgynous person wants and dreams of... well, at least until SQL Azure gets backup capabilities.
Thanks, Steve. I got it all wrong then. 500 msg/sec is actually pretty OK, and you can also create as many queues as you like, right? So say I need about 2,000 msg/sec throughput: I could create queue1 through queue4 and come out at roughly 2,000 msg/sec. Thanks a lot for answering anyway! You guys are way cool.
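The sharding idea above can be sketched like this in Python; plain lists stand in for the four real queues, and a round-robin cycle spreads messages evenly so each shard stays under its own per-queue throughput limit:

```python
import itertools

class ShardedQueueClient:
    """Spread messages round-robin over several queues to multiply throughput.

    If one queue sustains ~500 msg/sec, four shards target ~2,000 msg/sec,
    as long as clients actually balance the load across them.
    """

    def __init__(self, queues):
        self.queues = queues
        self._cycle = itertools.cycle(queues)

    def add_message(self, msg):
        # Stand-in for the real AddMessage call on the next shard in turn.
        next(self._cycle).append(msg)

if __name__ == "__main__":
    shards = [[] for _ in range(4)]  # queue1..queue4 as plain lists
    client = ShardedQueueClient(shards)
    for i in range(2000):
        client.add_message(f"msg {i}")
    print([len(q) for q in shards])  # [500, 500, 500, 500]
```

One caveat with this pattern: messages no longer have a single global order across shards, so it fits workloads where each message is independent.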