A very valuable and thorough presentation on key aspects of Event Hubs (and Stream Analytics). I like the work-stealing algorithm implemented in the EventProcessorHost API - there's a lot of coordination going on, which is usually an obstacle for large groups of developers.
I'm wondering if one can pipe events from the Event Hub to Azure storage and use the EventProcessorHost in parallel (over the same stream of events). So computation and storage at the same time - using an Azure-provided service for storage, and a custom-implemented EventProcessorHost.
@emiliogg84: I think I heard something like end of this year. It doesn't look like Microsoft is going for a quick release. Unfortunately, this means that there won't be a unified service bus API for Windows Azure and Windows Server until they release the next version - 1.8 is the last release that supports both environments.
Thanks for the 100th episode - can't believe you did 100 already.
The Windows Azure Service Bus Notification Hub feature looks really interesting, but following the walkthrough of the APIs, I can't help but wonder about the naming of the API operators. What's up with the extremely long method names? I know you're reaching for a broad developer base here but there's really no reason to materialize all combinations of permissions as concrete methods.
I'm sure the developers of the WASBNH APIs are capable of shorter names / more elegant naming than the following (combined with a set of enums and associated overloads).
It feels like the API was designed to be used with simple one-liners, perhaps leaving an impression of simplicity but at the cost of elegance and future evolution of the APIs. Am I the only one asking for more consistent API designs in the Windows Azure APIs in general?!
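To make the design point concrete, here is a minimal sketch (in Python, with entirely hypothetical names - `AccessRights`, `NotificationHubClient`, and `create_access_signature` are stand-ins, not the actual WASBNH API) of the flags-enum alternative: one method parameterized by a combinable enum, instead of one concrete method per permission combination.

```python
from enum import Flag, auto

class AccessRights(Flag):
    """Hypothetical permission set, combinable with bitwise OR."""
    LISTEN = auto()
    SEND = auto()
    MANAGE = auto()

class NotificationHubClient:
    """Illustrative client: a single method takes a flags enum,
    rather than materializing every permission combination as its
    own long-named concrete method."""

    def create_access_signature(self, rights: AccessRights) -> str:
        # A real implementation would mint a token; here we just
        # render the requested rights to show the call shape.
        names = [r.name for r in AccessRights if r in rights]
        return "+".join(names)

client = NotificationHubClient()
print(client.create_access_signature(AccessRights.LISTEN | AccessRights.SEND))
# → LISTEN+SEND
```

With this shape, the 2^n permission combinations collapse into one method and n enum members, and new rights can be added without growing the API surface.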
This is probably one of the most inspiring talks I've watched on Channel 9 this year. I can definitely relate this to the software we're developing. On a side note, I found Charles' comment on RAM as a service (i.e. not necessarily a distributed cache) to be quite interesting.
I've installed the latest version of Service Bus (1.8) and Windows Azure SDK (1.8), but am not seeing the "Windows" section in the "add reference" dialog in Visual Studio 2012. Is this a feature enabled by a separate extension to Visual Studio 2012, or do I need to enable a checkbox somewhere?
Thanks for a great introduction to the new tools. I can't express how thrilled I am to see this release from Microsoft, as it promises to finally provide a solid end-to-end story for database developers in terms of deploying to on-premises SQL Server and SQL Azure.
We are running a team here that produces a software product subject to quick release cycles (and subsequent refactorings). We're targeting SQL Server 2008 and SQL Azure, and currently rely on writing per-release upgrade scripts using a combination of VSDBCMD (for automated schema diffs) and hand-crafted scripts for tasks such as renaming columns or moving data around, prior to executing the aforementioned schema diffs produced by VSDBCMD.
We just started on a new sprint with a user story for improving quality assurance for upgrade scenarios and automated deployment to SQL Azure. As the person also responsible for the data tier in our software, I can't tell you how happy I am with the release of the new tools. It's going to make a big difference to our team, helping us provide a clear upgrade path with more automation - especially in terms of producing schema diffs for SQL Azure.
I have a few questions:
Is there a managed API (like SQL SMO) that allows us to include dacpac files in our own installer and subsequently deploy (clean install / upgrade) to on-premises SQL Server or SQL Azure?
The scenario I'm thinking of is having a single SQL Server Data Tools project as part of our solution and having TFS builds output dacpac files for SQL Server or SQL Azure (i.e. targeting the two different environments via configuration and then building the dacpacs). Is that possible?
Again, thanks for producing this video. Can't wait to get started.
Using a single resource in multiple variations is also referred to as "single source publishing" in CMS terms. It's a technique that truly eliminates a lot of tedious and error-prone work for editors and lets them focus on content authoring.
However, while this library is a step in the right direction, using query strings to parameterize the image processing is a feature begging to be exploited. Given enough query string variations, the disk fills up and the service can potentially be brought down.
I'm using another technique to achieve comparable features without the risk of the server running out of disk space due to malicious requests. I stack "stream providers" on top of each other as profiles configured in, e.g., an XML document. Each profile contains the configuration details of its "stream providers" and is exposed as a single query string parameter (i.e. "image.jpeg?profile=smallProfile").
While I acknowledge this requires a bit more configuration, it's usually not much of an issue. Once the layout has been approved, all required image transformations are simply reproduced as profiles and referenced from the generated markup.
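The profile idea described above can be sketched roughly as follows (a minimal illustration in Python; the XML schema, provider names, and function names are hypothetical stand-ins, not an actual library):

```python
import xml.etree.ElementTree as ET

# Hypothetical profile configuration: each profile is a fixed, named
# stack of stream providers, so the server only ever performs a
# whitelisted set of image transformations.
PROFILE_XML = """
<profiles>
  <profile name="smallProfile">
    <provider type="resize" width="100" height="75"/>
    <provider type="grayscale"/>
  </profile>
</profiles>
"""

def load_profiles(xml_text):
    """Parse the XML into a dict: profile name -> provider stack."""
    profiles = {}
    for profile in ET.fromstring(xml_text).findall("profile"):
        providers = [dict(p.attrib) for p in profile.findall("provider")]
        profiles[profile.attrib["name"]] = providers
    return profiles

def resolve_profile(profiles, query_profile):
    """Map the ?profile=... query parameter to a provider stack.
    Unknown names are rejected, so arbitrary query string variations
    can never create new cached variants on disk."""
    if query_profile not in profiles:
        raise ValueError("unknown profile: " + query_profile)
    return profiles[query_profile]

profiles = load_profiles(PROFILE_XML)
print(resolve_profile(profiles, "smallProfile"))
```

The key property is that the set of possible outputs is closed at configuration time: an attacker varying the query string can only select from the configured profiles, never invent new transformations.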
Sounds great with the latest CTP. Another thing: the C# team usually has more than a single feature scoped for a major release of the language, and while the async stuff is definitely a major feature, the community has suggested numerous other features.
Are we going to see other features in the next major version of C#? Come on... spill the beans!
I've been following the CTPs and am looking forward to pushing especially the caching part of AppFabric into our architecture this summer when the RTM is released. It's been a pleasure to work with the team on the API, and meeting up with MK et al. at PDC last year was also a really positive experience.
Looking forward to more videos and talks on AppFabric.
I attended this pre-conference workshop, hoping that I would see "demonstrations of code quality best practices that have been proven to work on a variety of projects", but unfortunately, the workshop didn't touch on this subject at all.
In fact, most of the material presented focused on how to drag stuff around, and on irrelevancies such as the coloring of visuals in UML diagrams and zooming in and out of diagrams (hint: don't read aloud what it says in the new context menus - your audience can read).
The first half of the workshop was a complete waste of time, and that's not just my personal opinion - I heard the same from several other attendees. The second half picked up the pace, but Todd Girvin kept dragging the level down to level 100, whereas Chris Tullier, being the more experienced developer, wanted to do level 300-400 material.
PDC is going to be a blast; can't wait to get started on the sessions and meet the other devs in the lounge. Charles, please make sure the big Channel 9 guy is attending the keynotes too, as it was great fun last year. Oh, almost forgot: bring plenty of small Channel 9 guys too, to throw out to the audience.
Did I miss something or are there no sessions with Anders Hejlsberg this year?