A conversation with John Shewchuk about BizTalk Services and the Internet Service Bus
- Posted: Jul 20, 2007 at 10:31 AM
In today's installment of my Microsoft Conversations series I talked with John Shewchuk about BizTalk Services, a project to create what he likes to call an Internet Service Bus. The project's blog, with pointers to key resources, is here. There's also a Channel 9 video on this same topic, in which John Shewchuk and Dennis Pilarinos illustrate the concepts using a whiteboard and demos.
I began our conversation with a reference to a blog item posted by Clemens Vasters back in April when BizTalk Services was announced. He described a Slingbox-like application he'd done for his family.
It's a custom-built (software/hardware) combo of two machines (one in Germany, one here in the US) that provide me and my family with full Windows Media Center embedded access to live and recorded TV along with electronic program guide data for 45+ German TV channels, Sports Pay-TV included.
Clemens did this the hard way, and it was really hard:
The work of getting the connectivity right (dynamic DNS, port mappings, firewall holes), dealing with the bandwidth constraints and shielding this against unwanted access were ridiculously complicated.
And he observed:
Using BizTalk Services would throw out a whole lot of complexity that I had to deal with myself, especially on the access control/identity and connectivity and discoverability fronts.
I asked John to describe how BizTalk Services attacks these challenges in order to mitigate that complexity. We talked through a couple of scenarios in detail. The one you've heard the most about, if you've heard of this at all, is the cross-organization scenario, in which a pair of organizations can very easily interconnect services -- of any flavor, REST or WS-* -- with reliable connectivity through NATs and firewalls, dynamic addressing, and declarative access control.
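To make that scenario concrete, here's a minimal sketch of the relay idea, using an in-process stand-in for the cloud fabric. The class and method names (`RelayFabric`, `listen`, `send`) are my own illustrations, not the BizTalk Services API.

```python
# Hypothetical sketch: both parties connect *outbound* to a cloud relay,
# which forwards messages between them -- no inbound firewall holes,
# port mappings, or dynamic DNS needed on either side.

class RelayFabric:
    def __init__(self):
        self._listeners = {}   # service name -> handler
        self._acl = {}         # service name -> allowed senders

    def listen(self, name, handler, allowed_senders):
        # A service registers under a stable name and declares,
        # up front, who may reach it.
        self._listeners[name] = handler
        self._acl[name] = set(allowed_senders)

    def send(self, sender, name, message):
        # Access control is enforced in the fabric, before the
        # message ever reaches the service.
        if sender not in self._acl.get(name, set()):
            raise PermissionError(f"{sender} may not call {name}")
        return self._listeners[name](message)

# Usage: a service behind one firewall, a client behind another.
fabric = RelayFabric()
fabric.listen("orders", lambda msg: f"received: {msg}",
              allowed_senders={"partner-org"})
reply = fabric.send("partner-org", "orders", "PO-1234")
```

The point of the sketch is the shape of the solution: a stable name in the fabric replaces Clemens's dynamic DNS and firewall holes, and the declarative access list replaces his hand-rolled shielding.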
There's another scenario that hasn't been much discussed, but is equally fascinating to me: peer-to-peer. We haven't heard that term a whole lot lately, but as the Windows Communication Foundation begins to spread to the installed base of PCs, and with the advent of a WCF-based service fabric such as BizTalk Services, I expect we'll see the pendulum start to swing back.
At one point I asked John whether BizTalk Services supports the classic optimization -- used by Skype and other P2P apps -- in which endpoints, having used the fabric's services to rendezvous with one another, are able to establish direct communication. He said that it does, and followed with this observation about the economics of hosting BizTalk Services.
When we host it, we'll incur certain operational costs, so we'll want to recover those costs. But our goal is not to differentiate our offering from others because we host the software; it should be the case that Microsoft competes on an equal basis with other hosters.
In many regards, our motivations differ from other providers. Take Amazon's queueing service as an example. Because we've got software running both up in the cloud as well as on the edge nodes, we can create a network shortcut so that the two endpoints can talk directly. In that scenario, we don't see any traffic up on our network. All we did was provide a simple name capability, so the two applications end up talking to each other, using their own network bandwidth. We can use the smarts in the clients and in our servers to reduce the overall operating cost.
Now in that scenario, the endpoints are presumably servers running within an organization, or maybe across organizations. But WCF-equipped clients can play in this sandbox too. The idea is, in effect, to generalize the capabilities of an application like Skype, and enable developers to build all sorts of applications that leverage that kind of fabric.
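The rendezvous-then-direct-connect optimization John describes can be sketched in a few lines. This is an in-process model, not real NAT traversal; `RendezvousService`, `Endpoint`, and their methods are hypothetical names of my own, not anything from BizTalk Services.

```python
# Hypothetical sketch: endpoints meet by name through the fabric,
# then exchange traffic directly on their own bandwidth.

class RendezvousService:
    """Cloud-side naming: maps a stable name to whatever address an
    endpoint currently has behind its NAT."""

    def __init__(self):
        self._registry = {}
        self.lookups = 0   # the only traffic the fabric ever sees

    def register(self, name, address):
        self._registry[name] = address

    def resolve(self, name):
        self.lookups += 1
        return self._registry[name]


class Endpoint:
    def __init__(self, name, address, fabric):
        self.address = address
        self.inbox = []
        fabric.register(name, address)   # publish current address

    def send_direct(self, fabric, network, peer_name, message):
        # Rendezvous: one name lookup through the fabric...
        addr = fabric.resolve(peer_name)
        # ...then delivery goes peer to peer; the fabric never
        # carries the payload bytes.
        network[addr].inbox.append(message)


# Usage: two peers meet by name, then talk directly.
fabric = RendezvousService()
network = {}
alice = Endpoint("alice", ("203.0.113.5", 4100), fabric)
bob = Endpoint("bob", ("198.51.100.7", 62000), fabric)
network[alice.address] = alice
network[bob.address] = bob
alice.send_direct(fabric, network, "bob", "hello")
```

Note that the fabric counts exactly one lookup and zero messages, which is the economic point John makes: the host provides the naming capability while the endpoints consume their own bandwidth.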
That's a vision that many of us in the industry share. We'd just like to reduce the barriers to being able to connect our machines and our solutions. The industry's seen a big transition to a hosted world, because that's been the easiest way to get universal connectivity. If a big organization with a whole bunch of high priests of IT were out there running the servers for you, then you didn't have to get your own machine to be able to do that.
But sometimes I might just want to put those things on my machine, and if it were easy enough, wouldn't that be a great model? Why do I want to be beholden to some organization that's capturing my data? Maybe I want to have more privacy.
Of course there are benefits to moving it out to the cloud, but we think that should be a decision you make after the fact. Build your application to a consistent abstraction, then decide where you want the dial as the demands on the application change. If I'm just trying to do a quick video share with my friends, why do I have to create a new space? Why not simply say, here's the URL? And have that URL be stable?
Why not indeed. At its core the Internet has always been fundamentally peer-to-peer, but after a while we couldn't sanely continue in that mode. Things got too scary, so we built walls and created ghettoes. Technically our PCs are still Internet hosts but, except when they're running a few important P2P apps, they haven't really been hosts for a long time. It'd be great to get back to that.