WEBVTT

00:00:01.460 --> 00:00:02.340
Good afternoon.

00:00:04.930 --> 00:00:05.880
How are folks doing?

00:00:08.810 --> 00:00:14.600
Good? You guys have made it almost
to the end of the conference.

00:00:15.630 --> 00:00:17.150
How's the experience
been so far?

00:00:17.160 --> 00:00:19.360
[Applause]

00:00:19.520 --> 00:00:20.120
>> Good.

00:00:20.170 --> 00:00:24.940
Awesome. Well, as they say, they
always save the best for last.

00:00:26.240 --> 00:00:32.190
So hopefully, I will not disappoint
you guys. I really appreciate

00:00:32.240 --> 00:00:34.450
your making it this afternoon.

00:00:35.200 --> 00:00:40.360
I'm Abhishek Lal. Program manager
with the Azure platform team.

00:00:41.090 --> 00:00:45.840
This is the team which builds PaaS
services such as Mobile Services,

00:00:45.890 --> 00:00:48.550
Service Bus, Azure cache.

00:00:49.240 --> 00:00:51.080
And media services.

00:00:51.720 --> 00:00:54.320
Those services are what
the team owns.

00:00:54.830 --> 00:00:58.940
And specifically I've been working
for the past three plus years

00:00:58.990 --> 00:01:05.100
on building the brokered messaging
pieces. So this is queues,

00:01:05.150 --> 00:01:08.720
topics, our pub/sub,
the pieces of that.

00:01:09.470 --> 00:01:15.150
Today we'll be talking about
messaging at scale.

00:01:17.010 --> 00:01:22.030
Queues and topics. Now folks are
familiar with Service Bus.

00:01:22.840 --> 00:01:26.920
It does encompass relay. It does
encompass notification hub,

00:01:27.780 --> 00:01:29.010
queues and topics.

00:01:29.560 --> 00:01:34.840
So it's sort of a whole breadth
of messaging-related services.

00:01:35.710 --> 00:01:39.560
This particular session is going
to focus primarily on queues

00:01:39.610 --> 00:01:46.260
and topics so that's the primary
area. But if you have questions

00:01:46.310 --> 00:01:50.120
or anything you would like to know
specifically about relay or

00:01:50.170 --> 00:01:55.150
notification hubs, I'm happy to
answer that or at least point

00:01:55.200 --> 00:01:57.410
you in the right direction.

00:01:58.820 --> 00:02:00.930
There's a lot of things
I want to cover today.

00:02:01.710 --> 00:02:04.730
Talk about all the different aspects
of scale. I want to talk

00:02:04.780 --> 00:02:08.490
about senders and receivers and
throughput, all the different

00:02:08.540 --> 00:02:11.630
patterns as well as the
specifics of code.

00:02:12.390 --> 00:02:14.870
Of how you can achieve scale.

00:02:15.810 --> 00:02:19.040
So I'll try to keep a good pace.

00:02:19.640 --> 00:02:24.190
Questions are great. If you see me
starting to cut short questions

00:02:24.240 --> 00:02:27.780
a little later, just so that I
can cover all the stuff I want

00:02:27.830 --> 00:02:31.490
to cover. I will be available after
the session and you can always

00:02:31.540 --> 00:02:36.200
reach out to me but do keep it interactive.
Anything you have,

00:02:36.250 --> 00:02:41.270
the microphones are right here.
Just walk up and I'll call out.

00:02:43.930 --> 00:02:48.720
We'll start by talking about what's
new. Just sort of an update

00:02:48.770 --> 00:02:51.210
on what we've announced in SDK 2.3.

00:02:52.250 --> 00:02:56.290
We'll switch to talking about
the dimensions of scale.

00:02:56.340 --> 00:03:00.420
We'll talk about senders, receivers,
throughput, how you achieve that.

00:03:01.800 --> 00:03:05.770
And then we'll spend some time on
availability considerations.

00:03:05.820 --> 00:03:07.850
Availability just broadly meaning

00:03:09.190 --> 00:03:14.340
resilience, better SLA and how
to design your application to

00:03:14.390 --> 00:03:19.520
be always up, always on, running
in there so we'll spend some

00:03:19.570 --> 00:03:20.510
time on that.

00:03:22.060 --> 00:03:25.780
So SDK 2.3.

00:03:26.330 --> 00:03:28.310
What did we just release?

00:03:29.070 --> 00:03:32.540
OnMessage for sessions. So the OnMessage
APIs are a push

00:03:32.590 --> 00:03:36.970
style API. It essentially takes
away all the hard work from you

00:03:37.020 --> 00:03:42.960
of writing the receive loops or anything
of that complexity and it

00:03:43.010 --> 00:03:46.420
gives you a very event-driven
model to consume messages.

00:03:46.470 --> 00:03:50.110
This is the receiver side API. So
we've got that for sessions.

00:03:50.160 --> 00:03:52.680
We'll definitely cover that
in more detail today.
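
The mechanics of that push-style API can be sketched as a small message pump. This is an illustrative, self-contained simulation (startPump, the in-memory backlog, and the property names are invented here), not the real SDK's OnMessage implementation:

```javascript
// Minimal sketch of what a push-style OnMessage pump does under the
// hood: an internal receive loop that invokes your callback, completes
// the message on success and abandons it on failure.
function startPump(receive, handler) {
  let pumped = 0;
  for (;;) {
    const msg = receive();       // pending receive (a long poll in reality)
    if (msg === null) break;     // no more messages; the real pump keeps waiting
    try {
      handler(msg);
      msg.completed = true;      // Complete(): message removed from the queue
    } catch (e) {
      msg.abandoned = true;      // Abandon(): message is redelivered later
    }
    pumped++;
  }
  return pumped;
}

// In-memory stand-in for a queue so the sketch is self-contained.
const backlog = [{ body: "a" }, { body: "b" }];
const receive = () => backlog.shift() || null;
const handled = [];
const count = startPump(receive, (m) => handled.push(m.body));
```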

00:03:53.890 --> 00:03:58.440
Connectivity mode, auto detect.
So as you know, one of the real

00:03:58.490 --> 00:04:02.520
key value Azure Service Bus has been
that when you're connecting

00:04:02.950 --> 00:04:07.700
to queues and topics in the cloud
from behind firewalls from

00:04:07.750 --> 00:04:11.450
your own data centers or from your
customers data centers which

00:04:11.500 --> 00:04:16.230
are sitting behind very well protected
firewalls, Service

00:04:16.280 --> 00:04:19.660
Bus has the ability to make outbound 
connections not only on TCP port

00:04:19.710 --> 00:04:22.110
but ports 80 and 443


00:04:23.670 --> 00:04:25.860
when TCP ports are blocked.


00:04:26.700 --> 00:04:30.790
This facility was until now available
only if you explicitly set

00:04:30.840 --> 00:04:34.230
the connectivity mode to HTTP,
so you had to make that choice up front.

00:04:34.910 --> 00:04:38.730
Now in your code you can just set
it to auto detect and we will

00:04:38.780 --> 00:04:42.910
automatically detect it: if the TCP port is
available, we will use that.

00:04:42.960 --> 00:04:48.410
If the firewall blocks it, we will
drop it down to HTTP. So SDK

00:04:48.460 --> 00:04:51.560
2.3, that's available
for messaging also.
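
Conceptually, auto-detect boils down to a probe-and-fall-back decision. This is only a rough sketch of that logic; the probe function and the port numbers used here are illustrative, not the SDK's actual port list:

```javascript
// Hedged sketch of what a ConnectivityMode of auto-detect amounts to:
// probe the binary TCP port first, and fall back to HTTPS (443) when
// a firewall blocks it. probePort stands in for a real socket probe.
function chooseTransport(probePort) {
  if (probePort(5671)) return "tcp"; // e.g. AMQP-over-TLS port, as an example
  return "https";                    // tunnel over port 443 instead
}

const openFirewall = (port) => true;        // everything allowed out
const lockedDown = (port) => port === 443;  // only HTTPS allowed out
const viaOpen = chooseTransport(openFirewall);
const viaLocked = chooseTransport(lockedDown);
```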

00:04:54.390 --> 00:04:57.980
CORS support. How many folks
know what CORS is?

00:05:00.360 --> 00:05:04.200
Most folks know it. It essentially
enables easy send/receive

00:05:04.250 --> 00:05:09.370
from browsers. So the idea is you
could always have done this; you have

00:05:09.420 --> 00:05:14.320
the HTTP API. With HTTP you can send
messages, receive messages,

00:05:14.370 --> 00:05:18.920
but with CORS now it makes it much
easier for browsers and websites

00:05:18.970 --> 00:05:23.650
to integrate back and we'll dive
into that in detail today.

00:05:25.010 --> 00:05:29.530
Similarly, sort of helping out with
performance as well as scale

00:05:29.580 --> 00:05:34.760
for HTTP senders, we've got
batching now available.

00:05:35.200 --> 00:05:43.980
And then a couple of client-side perf
counters, which is for when you're

00:05:44.030 --> 00:05:46.900
really bringing an application
which is complicated or you're

00:05:46.950 --> 00:05:50.450
going to run it in different environments,
you might need to

00:05:50.500 --> 00:05:53.340
debug it and you might need to profile
it so we've added client

00:05:53.390 --> 00:05:57.890
side perf counters of messages sent
per second, deadletters per second

00:05:57.940 --> 00:06:01.460
and things like that which can really,
really help you profile

00:06:01.510 --> 00:06:05.250
what your messaging layer is
doing and overall what your

00:06:05.300 --> 00:06:09.020
application is doing. So those will
then manifest as perf

00:06:09.070 --> 00:06:14.230
counters as part of the NuGet package,
so it really enables

00:06:14.280 --> 00:06:17.550
you to do some good debugging.

00:06:20.550 --> 00:06:23.340
And finally, ForwardTo
for deadletter queues.

00:06:23.880 --> 00:06:27.380
Deadlettering is a very, very powerful
feature where it protects

00:06:27.430 --> 00:06:30.820
you're back ends if there are poison
messages. These are typically

00:06:30.870 --> 00:06:34.620
called poison messages, where you try
to receive a message and the

00:06:34.670 --> 00:06:38.600
message is malformed or there's
a bug in your code somewhere

00:06:38.650 --> 00:06:42.080
or in the deserializer somewhere
where you're not able to open

00:06:42.130 --> 00:06:44.560
the message and your backend crashes.

00:06:45.780 --> 00:06:50.390
Service Bus provides you the ability
of setting a max delivery

00:06:50.440 --> 00:06:54.420
count which by default is 10, and what
it means is that if we see

00:06:54.470 --> 00:06:57.660
that we've delivered the message
to you 10 times and you have

00:06:57.710 --> 00:07:01.310
not successfully completed the
message, we will move it from

00:07:01.360 --> 00:07:03.240
the main queue into the
deadletter queue.

00:07:03.870 --> 00:07:07.930
So this literally helps your applications
be resilient by default

00:07:08.190 --> 00:07:12.840
without you having to write a single
line of code and protect

00:07:12.890 --> 00:07:18.660
your back-end servers. So ForwardTo
is the ability to channel

00:07:18.710 --> 00:07:23.810
messages automatically and create rich
message flows. Now you

00:07:23.860 --> 00:07:30.000
can take an application which may have
6, 8, 10 queues and ForwardTo

00:07:30.050 --> 00:07:34.450
for all the deadletter queues into
a single queue, which means

00:07:34.500 --> 00:07:38.530
now you'll have one place to go to
receive all the poison messages

00:07:38.980 --> 00:07:42.340
irrespective of how many queues
or topics or subscriptions you

00:07:42.390 --> 00:07:46.280
are using. So that's a
neat feature add, too.

00:07:47.180 --> 00:07:49.910
We will cover that in
a little more detail.

00:07:51.740 --> 00:07:57.570
I did want to quickly recap on what
we have done since last April

00:07:57.620 --> 00:08:01.400
because when we talk about today in
terms of scale and performance

00:08:01.450 --> 00:08:05.780
and throughput you will see a lot of
these features being referenced

00:08:06.180 --> 00:08:08.570
so I just wanted to call them out
in terms of whether they're

00:08:08.620 --> 00:08:12.370
available today already and they've
been out for some time but

00:08:12.420 --> 00:08:16.250
they're still relevant in there.

00:08:18.520 --> 00:08:22.290
The one thing to see here is,
below the line, what we shipped

00:08:22.340 --> 00:08:26.310
to Service Bus on-premises. So last
year we did the Service Bus

00:08:26.360 --> 00:08:28.900
1.1 for Windows server release.

00:08:29.580 --> 00:08:33.210
This is completely symmetric for
queues and topics, which means

00:08:33.260 --> 00:08:37.450
if you pick up SDK 2.1, for example,
which was the last SDK,

00:08:38.470 --> 00:08:42.010
you would be able to hit either
the service or on-premises with all

00:08:42.060 --> 00:08:45.070
the features that are available.

00:08:46.760 --> 00:08:51.600
This cadence of sort of cloud release
every three months you

00:08:51.650 --> 00:08:55.290
can see every three to four months
and an on-premises release at

00:08:55.340 --> 00:08:59.520
least once a year is what we try
to maintain and then bring both

00:08:59.570 --> 00:09:02.680
the feature sets into parity.

00:09:05.540 --> 00:09:08.740
So this is available for you for
reference later in terms of

00:09:08.790 --> 00:09:10.010
the features.

00:09:12.110 --> 00:09:13.310
Any questions so far?

00:09:15.820 --> 00:09:16.720
Yes, please.

00:09:16.730 --> 00:09:19.730
[Indiscernible]

00:09:19.950 --> 00:09:23.560
>> So the question was: When will
be the next update and when

00:09:23.610 --> 00:09:28.920
we will bring the 2.3, the latest
functionality there.

00:09:28.970 --> 00:09:33.240
Right now I don't have any dates
to share for the next Service

00:09:33.290 --> 00:09:36.320
Bus release but there will
be a 2.2 or a 1.2.

00:09:37.800 --> 00:09:42.620
But if you think about this
particular release, that date

00:09:43.340 --> 00:09:46.900
matched the Windows Server release
so most of the time they try

00:09:46.950 --> 00:09:51.580
to align with server releases so
we get the maximum platform

00:09:51.630 --> 00:09:55.010
benefit so we make sure we have
greatest server with the latest

00:09:55.060 --> 00:09:59.310
clustering, with the latest management
interfaces and everything.

00:09:59.360 --> 00:10:03.610
So typically the guidance is to assume
that the same kind of cadence

00:10:03.660 --> 00:10:05.820
will be followed. Good question.

00:10:08.920 --> 00:10:13.130
Scale on sender. Let's start with
this in terms of the first

00:10:13.180 --> 00:10:14.210
aspect of scale.

00:10:15.570 --> 00:10:18.650
So senders is nothing but
someplace where you're

00:10:18.660 --> 00:10:20.040
[Indiscernible]

00:10:20.000 --> 00:10:22.830
You can think of a lot of scenarios
here. You can think of device

00:10:22.880 --> 00:10:24.970
telemetry, user actions.

00:10:26.630 --> 00:10:31.030
And your systems generating events
and B2B kinds of scenarios.

00:10:31.080 --> 00:10:32.910
The events being generated.

00:10:33.640 --> 00:10:37.660
How do you take care of scenarios
where you have a lot of these

00:10:37.710 --> 00:10:41.620
or maybe a few of them with a lot
of events or a lot of senders

00:10:41.670 --> 00:10:45.250
with a lot of events? All of those
are possible scenarios.

00:10:46.830 --> 00:10:50.480
So we'll make it concrete. We'll
start with an actual scenario

00:10:50.530 --> 00:10:54.510
which customers are using today,
which is where you have to

00:10:54.560 --> 00:10:58.850
collect events for analysis from
a large number of devices.

00:11:00.370 --> 00:11:05.900
Those devices may look familiar
but that is a coincidence that

00:11:05.950 --> 00:11:11.000
I will neither confirm nor deny.
So it could be any device.

00:11:11.050 --> 00:11:12.350
It could be any device.

00:11:13.160 --> 00:11:18.850
Now all of this starts with the
device being able to enqueue

00:11:18.900 --> 00:11:24.250
messages, being able to take a few
topics or one topic and push

00:11:24.300 --> 00:11:28.090
in a lot of information
into that channel

00:11:29.520 --> 00:11:33.640
once you have a message in a topic
you can think that you can

00:11:34.710 --> 00:11:39.370
have several scenarios in which
you want to consume it.

00:11:39.420 --> 00:11:43.330
Realtime analytics or what you
would do with your own code is

00:11:43.380 --> 00:11:48.570
really becoming much more prevalent
and popular. Did folks make

00:11:48.620 --> 00:11:53.840
it to the Orleans session which
was done yesterday? Well, if

00:11:53.890 --> 00:11:57.080
you did, awesome, awesome piece
of technology because it tries

00:11:57.130 --> 00:12:02.580
to solve this problem of running your
code at scale in a distributed

00:12:02.630 --> 00:12:06.190
fashion where you're dealing with
events which are being generated

00:12:06.240 --> 00:12:10.830
by a large number of senders and are
correlated in every which way.

00:12:12.020 --> 00:12:15.930
So how do you make sure that these
back-end systems are decoupled?

00:12:15.980 --> 00:12:18.590
How do you make sure that these
back-end systems are able to

00:12:18.640 --> 00:12:24.640
consume messages at that rate and act
in ways in which they're resilient?

00:12:25.950 --> 00:12:29.560
And for that you put topics in
the middle. So topics not only

00:12:29.610 --> 00:12:33.440
give you the buffering, just like
a queue would, which means

00:12:33.490 --> 00:12:35.950
your back-end could be down for
a couple of hours and you don't

00:12:36.000 --> 00:12:39.060
lose any of the events. The events
still stay there but they

00:12:39.110 --> 00:12:40.490
also give you pub sub.

00:12:41.470 --> 00:12:45.530
Which means that if you have other
systems which are just doing

00:12:45.580 --> 00:12:51.310
state tracking, let's say putting
values into Azure tables, or

00:12:51.360 --> 00:12:56.520
they're doing batch analytics, writing
into your file structure in

00:12:56.570 --> 00:13:00.330
HDFS and then running Hadoop
jobs on it.

00:13:01.400 --> 00:13:05.850
Or they might be putting your
data into a SQL data warehouse

00:13:05.900 --> 00:13:09.170
and running BI queries
on top of that.

00:13:09.790 --> 00:13:13.980
All of these systems can go look
at the same event stream.

00:13:15.280 --> 00:13:18.350
And not only the same event stream,
they can look at a filtered event

00:13:18.400 --> 00:13:21.780
stream, too. Maybe for the BI warehouse,
you don't want to consume

00:13:21.830 --> 00:13:25.870
all the events. Any of the action related
events don't belong there.

00:13:25.920 --> 00:13:29.420
They belong only for the code stuff.
You can split the streams

00:13:29.470 --> 00:13:30.210
in that way.

00:13:32.750 --> 00:13:36.990
And then from your back-end, whether
you're reading your Azure

00:13:37.040 --> 00:13:41.730
tables or your SQL data warehouse,
you can generate your bash

00:13:41.780 --> 00:13:43.200
boards and analytics.

00:13:44.750 --> 00:13:45.750
So one of the key

00:13:46.970 --> 00:13:49.340
design points in this pattern.

00:13:50.180 --> 00:13:52.920
First is using topics
for the fan in.

00:13:53.960 --> 00:13:57.730
Fan in essentially means you have fewer
topics than you have devices.

00:13:57.780 --> 00:13:59.900
Right? Whatever that
cardinality may be.

00:14:01.080 --> 00:14:03.820
It's probably not going to be
one. It's not going to be one

00:14:03.870 --> 00:14:07.660
topic for everything. It's probably
not going to be N. It's going

00:14:07.710 --> 00:14:12.220
to be somewhere in between, and
we'll talk about how to come up

00:14:12.270 --> 00:14:13.860
with that right number.

00:14:14.410 --> 00:14:18.960
You're going to load balance across
data centers for several reasons.

00:14:19.320 --> 00:14:22.490
If you think about it, these devices
are actually geographically

00:14:23.190 --> 00:14:26.300
spread out, so you want to make
sure that the device uses the

00:14:26.350 --> 00:14:30.740
least amount of power, the lowest
latency connection to be able

00:14:30.790 --> 00:14:33.770
to reach out and queue its data.

00:14:35.480 --> 00:14:39.640
So it's load balanced across data
centers. So this bus is available

00:14:39.690 --> 00:14:45.690
in all Azure regions, all data centers.
So you have the ability

00:14:45.740 --> 00:14:50.730
to spread topics around. Now that
does not mean your back-end

00:14:50.780 --> 00:14:53.890
systems have to be replicated
in all those places, too.

00:14:54.880 --> 00:14:58.000
In fact, if you think about Hadoop
clusters, it's typically

00:14:58.050 --> 00:15:01.860
not something you would replicate in
every region in every data center.

00:15:01.910 --> 00:15:05.890
But this gives you a low latency
endpoint. From there you can

00:15:05.940 --> 00:15:10.490
collect data close to where it is being
generated. And then pull it

00:15:10.540 --> 00:15:14.310
from your back-end. Reaching across
to all those regions and

00:15:14.360 --> 00:15:18.450
subscriptions in different regions
and correlating that data.

00:15:20.910 --> 00:15:23.690
True filter for all but one subscription,
so in this vertical

00:15:23.740 --> 00:15:27.550
customer case, they actually were
consuming all their data in

00:15:27.600 --> 00:15:31.700
code, in state tracking and in batch
analytics, but not in BI.

00:15:31.750 --> 00:15:35.900
So all those three were actually true
filters but one subscription

00:15:35.950 --> 00:15:39.960
had a reduction filter. It had a
filter that said if it's a game

00:15:40.010 --> 00:15:45.060
event, then we don't care about
that and of course you can do

00:15:45.110 --> 00:15:47.360
realtime and batch analytics.

00:15:49.410 --> 00:15:53.110
So for this scenario, I thought
we'll jump into a quick demo.

00:15:54.270 --> 00:15:59.080
And show you the CORS
support aspect of it.

00:16:00.290 --> 00:16:05.680
Because it enables a lot of client
reach from the perspective

00:16:05.730 --> 00:16:11.600
of being able to enqueue
messages just using pure

00:16:13.270 --> 00:16:15.140
HTTP and stuff.

00:16:15.730 --> 00:16:21.550
I've got a website set up. You guys
can hit it too if you have

00:16:21.600 --> 00:16:25.950
a device or something. It's called
[Indiscernible] dot Azure

00:16:26.000 --> 00:16:28.260
websites .NET.

00:16:29.750 --> 00:16:40.510
All I have here is very, very simple
JavaScript which I will

00:16:40.560 --> 00:16:41.160
show you.

00:16:41.880 --> 00:16:43.280
And what it does

00:16:48.770 --> 00:16:53.470
is taking the key values, the basic
values of what's your name

00:16:53.520 --> 00:16:58.790
space name, what's the queue name,
give me your SAS rule, the

00:16:58.840 --> 00:17:02.140
shared access signature authorization,
that's what it's using

00:17:02.190 --> 00:17:03.800
as well as the SAS key.

00:17:04.950 --> 00:17:07.970
And based on that it can send a message.

00:17:14.280 --> 00:17:18.140
Message successfully sent. That's
it. So you can see if you

00:17:18.190 --> 00:17:21.380
have lots and lots of browser clients
or any other client or

00:17:21.430 --> 00:17:25.940
device which can just do pure HTTP,
there's no SOAP here. There's no...

00:17:26.900 --> 00:17:31.300
any encoding. You can put message
properties in JSON and then

00:17:31.350 --> 00:17:35.930
in a very, very simple way get messages
enqueued. Let me show

00:17:35.980 --> 00:17:38.170
you the code for this website.

00:17:47.070 --> 00:17:52.110
So here you can see whether you're
doing any rich properties

00:17:52.730 --> 00:17:55.220
or even just very, very basic properties,

00:17:58.440 --> 00:18:05.280
you can easily send that code. And
in fact, the JavaScript library

00:18:05.330 --> 00:18:09.370
which is being used here, let
me show that to you also.

00:18:16.200 --> 00:18:22.410
So this is the web page which I
showed you and you can see how

00:18:35.560 --> 00:18:40.400
simple really the send and the
receive for this message is.

00:18:40.450 --> 00:18:44.840
The HTTP, the delete is actually
for the receive scenario.

00:18:45.430 --> 00:18:47.500
Which we will see a little later.

00:18:48.120 --> 00:18:56.600
And the PUT is for the send scenario... POST,
sorry, is for the send scenario.

00:18:58.510 --> 00:19:02.420
So let

00:19:03.620 --> 00:19:05.210
me send a few more messages.

00:19:05.810 --> 00:19:09.220
And just to show you the messages
showing up, here I've got Server

00:19:09.270 --> 00:19:12.280
Explorer loaded up with...

00:19:21.330 --> 00:19:25.310
connected to my name space. And I've
got a simple queue on which

00:19:25.360 --> 00:19:28.770
you can see now there are two
messages enqueued. If I do a

00:19:28.820 --> 00:19:35.430
refresh, I see 14 messages. So
as and when messages come in, they

00:19:35.480 --> 00:19:37.840
will show up on this queue.

00:19:48.480 --> 00:19:53.620
We'll cover the receive scenario
a little later in terms of the

00:19:53.670 --> 00:19:56.920
HTTP client. So that's for HTTP client.

00:19:57.510 --> 00:20:02.200
But I really wanted to talk specifically
about protocols.

00:20:02.820 --> 00:20:06.840
What are the considerations that
you should make when deciding

00:20:06.890 --> 00:20:11.460
whether to use HTTP or to use
the AMQP. As you know Service

00:20:11.510 --> 00:20:13.930
Bus supports several protocols.

00:20:15.060 --> 00:20:21.750
HTTP is just our REST API, AMQP is
a standard protocol which I'll

00:20:21.800 --> 00:20:27.620
talk more about, and SBMP is our other
proprietary protocol over .NET.

00:20:29.320 --> 00:20:35.000
Now, each of these can have perf considerations
and reach considerations.

00:20:35.710 --> 00:20:39.950
So if you have a device which
is very low powered, you might

00:20:40.000 --> 00:20:44.810
have concerns about which protocol
implementation can you put

00:20:44.860 --> 00:20:49.590
on there. If you have scenarios where
you want to be vendor independent,

00:20:50.070 --> 00:20:54.160
you might have reach considerations
saying, hey, I won't buy into

00:20:54.210 --> 00:20:57.830
any particular protocol or API
with one vendor. I'm going to

00:20:57.880 --> 00:21:00.060
use an open standard like AMQP.

00:21:01.900 --> 00:21:04.390
Sometimes features do vary by protocol.

00:21:05.130 --> 00:21:08.000
And the part I want to emphasize
which gets lost on a lot of

00:21:08.050 --> 00:21:11.300
folks is that it's mostly
receive side features.

00:21:11.950 --> 00:21:13.290
There are some send side

00:21:14.560 --> 00:21:19.100
implications, too, most of the
time it's on the receive where

00:21:19.150 --> 00:21:23.270
protocols really differ a
lot and we'll see why that is

00:21:23.320 --> 00:21:24.240
the case.

00:21:25.950 --> 00:21:28.810
And then in general there are some
quota differences in terms

00:21:28.860 --> 00:21:32.360
of how many connections you can
create with AMQP and SBMP.

00:21:32.410 --> 00:21:35.550
So those are also important considerations
when thinking, hey,

00:21:35.600 --> 00:21:38.980
which protocol am I going to use
for my large scale, large number

00:21:39.030 --> 00:21:50.090
of senders? So binary protocols
versus HTTP, why does it matter

00:21:50.140 --> 00:21:53.280
for messaging? What are the key
considerations for messaging?

00:21:53.810 --> 00:21:56.350
I just wanted to call out the key
scenarios where it makes a

00:21:56.400 --> 00:21:59.380
difference so then you can choose
and decide whether it matters

00:21:59.430 --> 00:22:02.780
or not for your particular case.

00:22:04.210 --> 00:22:08.070
HTTP case, every time you make
a call out, you're going to be

00:22:08.120 --> 00:22:11.480
able to reach one entity. So that's
one endpoint whether it's

00:22:11.530 --> 00:22:13.850
a send endpoint or a receive endpoint.

00:22:14.850 --> 00:22:16.820
You can do one pending operation.

00:22:17.560 --> 00:22:21.540
Just a single send call or
a single receive call.

00:22:22.370 --> 00:22:26.300
And most of the time, the operation
lifetime cannot be more than

00:22:26.350 --> 00:22:30.940
60 seconds or whatever your load
balancer allows for whatever

00:22:31.480 --> 00:22:33.060
provider you're running on.

00:22:34.490 --> 00:22:41.480
So that does bring in sort of
the scenarios where you want

00:22:41.530 --> 00:22:43.390
to talk to multiple end points.

00:22:44.040 --> 00:22:47.590
A lot of times in bi-directional
communication scenarios you're

00:22:47.640 --> 00:22:51.230
going to be sending to a queue and
receiving from a subscription.

00:22:52.080 --> 00:22:55.730
Or also sending to a notification
hub. All of those kinds of

00:22:55.780 --> 00:22:57.060
scenarios may be there.

00:22:57.640 --> 00:23:01.320
With a binary protocol, you actually
can create a single connection,

00:23:01.370 --> 00:23:08.270
a single pipe, a single socket,
and all the other links in the

00:23:08.320 --> 00:23:13.320
AMQP context are links, multiplexed
over that single TCP connection.

00:23:14.500 --> 00:23:18.740
So you get a lot of advantage by
not having to do the handshake

00:23:18.790 --> 00:23:22.680
and not having to establish that socket
and stuff for every single

00:23:22.730 --> 00:23:26.880
entity versus doing... paying that
cost once and then reusing

00:23:26.930 --> 00:23:29.460
that when you're talking
to several entities.

00:23:30.290 --> 00:23:33.900
Keep that scenario in mind. Sometimes
when you write field gateways

00:23:33.950 --> 00:23:37.240
or custom gateways where you're
fronting a lot of devices, this

00:23:37.290 --> 00:23:40.690
will be a very important consideration.

00:23:43.280 --> 00:23:48.250
The other part is long polling.
So there's this constant thing

00:23:48.300 --> 00:23:51.400
about polling on queues, right,
of hey, do I have a message?

00:23:51.450 --> 00:23:55.160
Do I have a message? Do I have
a message? Here because it is

00:23:55.210 --> 00:24:01.040
a connection on the AMQP protocol
we keep the connection alive.

00:24:01.090 --> 00:24:04.370
You don't have to do any operation
other than have a pending

00:24:04.420 --> 00:24:09.120
receive which could be set for a
time out of infinity. You could

00:24:09.170 --> 00:24:12.110
set it for a day, a week. Generally
you will not set it

00:24:12.160 --> 00:24:16.090
for infinity. You will set it for whatever
your shutdown characteristics

00:24:16.140 --> 00:24:19.560
look like, maybe 20 minutes or
something like that. But you

00:24:19.610 --> 00:24:24.920
can have a long-poll pending receive
and not have to worry about

00:24:24.970 --> 00:24:27.640
churning CPU cycles and stuff,

00:24:29.370 --> 00:24:33.080
forgetting about that. We will keep
the connection alive through

00:24:33.130 --> 00:24:37.040
pings or whatever the load balancer
needs, and we will provide

00:24:37.090 --> 00:24:41.640
you the low latency response
whenever a message shows up.

00:24:42.360 --> 00:24:45.820
So this becomes another very important
consideration in terms

00:24:45.870 --> 00:24:50.380
of cost as well as the impact on
your device. So binary protocols

00:24:50.430 --> 00:24:53.310
do make a difference in terms
of your scenarios.

00:24:56.240 --> 00:24:59.820
The other consideration which protocols
brings in are SDKs.

00:24:59.870 --> 00:25:03.520
You want to get productive. You want
to use solid core. You want

00:25:03.570 --> 00:25:08.220
to use solid libraries. So you really
want to be able to choose

00:25:08.270 --> 00:25:11.010
the right protocol with
the right SDK.

00:25:12.880 --> 00:25:13.950
So for Service Bus,

00:25:15.670 --> 00:25:19.750
if you're using .NET, then our
SBMP protocol is the default.

00:25:19.800 --> 00:25:24.130
That's what is used. You can switch
it to AMQP at any time and

00:25:24.180 --> 00:25:25.170
that is fine too.

00:25:25.850 --> 00:25:28.980
There are some feature differences
right now but we're closing

00:25:29.030 --> 00:25:33.730
that gap pretty soon. But if you're
using .NET, then SBMP is

00:25:33.780 --> 00:25:36.010
sort of your default scenario today.

00:25:37.560 --> 00:25:42.400
If you're using HTTP, if that's
the case, we have HTTP wrappers on a lot

00:25:42.450 --> 00:25:46.160
of operating systems available and
a lot of libraries available.

00:25:47.010 --> 00:25:50.510
And then with AMQP you are starting
to see a lot of community

00:25:50.560 --> 00:25:51.700
libraries come up.

00:25:52.940 --> 00:25:59.670
AMQP being an open standard was
designed and developed all with

00:26:00.690 --> 00:26:05.690
keeping in mind efficient, reliable,
and portable sort of data

00:26:05.740 --> 00:26:10.310
representation, and flexibility
in mind. Flexibility in terms

00:26:10.360 --> 00:26:13.470
of whether it's client to client
libraries or client to broker

00:26:13.520 --> 00:26:15.120
or broker to broker libraries.

00:26:16.680 --> 00:26:20.260
So you're starting to see with the AMQP
standardization moving forward...

00:26:20.310 --> 00:26:26.370
by the way, AMQP became an OASIS standard last
October. It just cleared ISO/IEC.

00:26:27.560 --> 00:26:32.950
So now it is an internationally recognized
standard, too. So that's

00:26:33.210 --> 00:26:35.180
fresh off the press.

00:26:36.990 --> 00:26:41.560
But what it means to you is that you
will see a bunch of libraries

00:26:42.230 --> 00:26:47.750
developed by the Apache Qpid project
or the Proton library,

00:26:47.800 --> 00:26:51.010
clients in several different languages.

00:26:51.890 --> 00:26:55.240
C, Java (there's a JMS implementation),

00:26:56.110 --> 00:27:00.670
and PHP. All of these will be available
to you with community

00:27:00.720 --> 00:27:05.970
library open source support for using
and developing and contributing

00:27:06.020 --> 00:27:06.740
to,

00:27:07.970 --> 00:27:12.310
with Service Bus or with any other
provider which supports the

00:27:12.360 --> 00:27:14.070
AMQP protocol.

00:27:14.820 --> 00:27:18.400
So if you're trying to access Service Bus,
you can see the different protocols.

00:27:18.450 --> 00:27:22.940
You have a lot of choice of what
SDKs you use and what libraries

00:27:22.990 --> 00:27:34.850
you use and you don't have to be
limited in any particular way.

00:27:34.900 --> 00:27:36.150
Sync, async, versus batch.

00:27:37.150 --> 00:27:40.650
So now that we understand
the protocol nuances, I think

00:27:40.700 --> 00:27:45.840
we should talk about when we should
write sync code, async

00:27:45.890 --> 00:27:49.170
code, and batch code and what are
the real differences in terms

00:27:49.220 --> 00:27:54.100
of performance that you could see
in these different scenarios.

00:27:55.890 --> 00:27:58.710
Batching clearly increases throughput.

00:27:59.460 --> 00:28:04.620
It is always a very, very good practice
in terms of whether it's

00:28:04.670 --> 00:28:09.260
on the receive side or even on
the send side to use batching.

00:28:09.310 --> 00:28:13.190
The only negative concern for folks
sometimes with that is latency

00:28:13.240 --> 00:28:17.490
and we'll see how that can be
affected but not too much.

00:28:17.540 --> 00:28:18.880
We'll talk about that.
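
The sync, async, and batch trade-offs being described can be sketched with a toy latency model; the round-trip time and the helper names here are illustrative assumptions, not SDK behavior or measured numbers.

```python
import math

RTT = 0.1  # assumed seconds per client-to-service round trip

def sync_send_time(n):
    # Each send waits for its ack before the next one starts.
    return n * RTT

def async_send_time(n, max_in_flight=100):
    # Up to max_in_flight sends overlap, so waves of sends share one RTT.
    return math.ceil(n / max_in_flight) * RTT

def batch_send_time(n, batch_size=100):
    # A whole batch travels in a single round trip.
    return math.ceil(n / batch_size) * RTT

print(sync_send_time(100), async_send_time(100), batch_send_time(100))
```

Under this model, 100 sync sends cost 100 round trips, while the async and batched variants collapse into a handful, which matches the shape of the demo numbers shown later.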

00:28:21.250 --> 00:28:24.830
Async in general is always the
best practice. You always want

00:28:24.880 --> 00:28:28.620
to use it whenever possible. Except
that you do want to bound

00:28:28.670 --> 00:28:31.760
the number of outstanding calls. You
just don't want to have a tight

00:28:31.810 --> 00:28:34.720
loop that makes an infinite number
of calls and we'll see how

00:28:34.770 --> 00:28:37.660
Service Bus helps with that scenario.

00:28:40.160 --> 00:28:44.110
And then finally we see the binary
protocols achieve significantly higher

00:28:44.160 --> 00:28:47.980
throughput, just because these protocols,

00:28:48.030 --> 00:28:54.290
the AMQP protocol was developed
with efficiency in mind. With

00:28:55.260 --> 00:28:58.750
the flow control and all of that
built into the protocol layer

00:28:58.800 --> 00:29:03.950
itself, you see a lot of advantage
showing up. So let me actually

00:29:04.000 --> 00:29:08.550
show you some numbers. Some running
numbers so you can compare

00:29:08.600 --> 00:29:10.090
these for yourself.

00:29:20.030 --> 00:29:24.820
So here I have some code which is
going to try to send messages.

00:29:26.190 --> 00:29:28.970
And you can see I've divided
up into three parts.

00:29:29.850 --> 00:29:32.930
The first one is doing a sync send.

00:29:33.690 --> 00:29:38.660
Here are the key lines. For each
message, use the QueueClient and send

00:29:38.710 --> 00:29:44.060
off the message. This is a fully sync
call. It waits for each one to complete.

00:29:44.110 --> 00:29:48.030
It waits for the acknowledgment to come
back from the server and reach

00:29:48.080 --> 00:29:51.200
back to the client, a full
round trip, and then it moves on.

00:29:52.910 --> 00:29:56.650
The second one does it
in an async manner.

00:29:57.900 --> 00:30:02.780
Where essentially it is creating
Async tasks for all of these

00:30:03.350 --> 00:30:04.470
send operations.

00:30:05.700 --> 00:30:09.150
And then waiting for all of
the tasks to complete.
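
The async pattern just described, one task per send followed by a wait-for-all, can be sketched in Python; `send_async` here is a hypothetical stand-in stub, not a real SDK call.

```python
import asyncio

async def send_async(msg):
    # Stand-in for a real asynchronous send (assumption, not the SDK).
    await asyncio.sleep(0)  # yield as if doing network I/O
    return msg

async def send_all(messages):
    # Start one task per message, then wait for all of them to finish.
    tasks = [asyncio.create_task(send_async(m)) for m in messages]
    return await asyncio.gather(*tasks)

sent = asyncio.run(send_all([f"msg-{i}" for i in range(100)]))
print(len(sent))  # 100
```

`gather` also preserves result order even though the sends overlap, which is the property the batched variant below makes explicit.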

00:30:11.410 --> 00:30:15.170
And then finally, there is a batched
send and I call it ordered

00:30:15.220 --> 00:30:19.430
batch send because with Async, generally
people come up with

00:30:19.480 --> 00:30:22.840
a scenario where they say, hey, with
Async, I lose order. I don't

00:30:22.890 --> 00:30:25.800
know which one will happen first,
which one will happen next.

00:30:26.300 --> 00:30:29.430
And that's why there is batch send
which is sort of superior

00:30:29.480 --> 00:30:32.300
in both cases because it preserves
all the... either the whole

00:30:32.350 --> 00:30:35.920
batch comes through or the whole
batch comes back and you'll

00:30:35.970 --> 00:30:38.910
see how much of a performance
impact this can have.

00:30:40.310 --> 00:30:45.300
So I have all of these pointing to
a simple on message sample queue.

00:30:45.350 --> 00:30:47.900
You can see right now the
queue count is zero.

00:30:48.910 --> 00:30:52.560
And I have set my number of messages
to a small number of 100.

00:30:53.660 --> 00:30:54.780
So let's run this.

00:30:57.310 --> 00:30:59.530
And see how far we get.

00:31:00.250 --> 00:31:04.670
So first it's doing send using
sync. So synchronously making

00:31:04.720 --> 00:31:09.020
100 calls from my laptop all the
way to the service and back.

00:31:09.550 --> 00:31:13.970
Took about ten seconds in terms
of that. And just to show you,

00:31:14.020 --> 00:31:18.360
we can always come back, check on
the message count, and it should

00:31:18.410 --> 00:31:21.860
be at 100 now. All the hundred
messages have made it in here.

00:31:23.160 --> 00:31:26.940
Now let's see what happens when
I do the same thing with Async.

00:31:29.190 --> 00:31:30.590
Same thing with Async.

00:31:31.940 --> 00:31:36.120
And no difference in terms
of the messages because

00:31:37.540 --> 00:31:40.460
the messages have all made it
here. It's 200 messages now.

00:31:41.250 --> 00:31:46.450
It took 0.3 seconds for all those
messages to get in there.

00:31:50.260 --> 00:31:52.620
With batch, it's even faster.

00:31:53.370 --> 00:31:54.990
It's actually even faster.

00:31:56.080 --> 00:31:58.880
And the reason is again, because
under the cover, Service Bus

00:31:58.930 --> 00:32:04.440
is using a binary protocol, so when
you send messages asynchronously,

00:32:04.490 --> 00:32:09.600
we're able to chunk them together and
send them over with implicit batching.

00:32:10.260 --> 00:32:13.630
You get to set that value. The
batch flush interval, which you

00:32:13.680 --> 00:32:17.710
set on the messaging factory, lets
you control that window.

00:32:18.310 --> 00:32:21.010
You can set it to a broader window.
You'll see more latency,

00:32:21.060 --> 00:32:23.690
but you'll see much better
end-to-end throughput. You can

00:32:23.740 --> 00:32:27.310
set it to a much smaller window
and you will see better latency

00:32:27.360 --> 00:32:32.110
and maybe a little less throughput.
But you can see the

00:32:32.160 --> 00:32:36.660
magnitude of difference it makes here
between using sync

00:32:36.710 --> 00:32:38.410
versus Async versus batch.
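
The flush-window trade-off can be made concrete with a small model: messages arrive at a steady rate and the client flushes its buffer once per window, so a wider window means fewer round trips but a longer worst-case wait. All numbers and names are purely illustrative assumptions.

```python
import math

def batching_stats(n_messages, arrival_gap_ms, flush_interval_ms):
    """Return (round_trips, worst_wait_ms) for a given flush window."""
    # Messages buffered during one window travel together.
    per_batch = max(1, flush_interval_ms // arrival_gap_ms)
    round_trips = math.ceil(n_messages / per_batch)
    # The first message buffered in a window waits the whole window.
    return round_trips, flush_interval_ms

print(batching_stats(1000, 1, 20))  # wide window: few round trips
print(batching_stats(1000, 1, 5))   # narrow window: lower latency
```

With a 20 ms window the model makes 50 round trips at up to 20 ms extra wait per message; shrinking the window to 5 ms quadruples the round trips but caps the wait at 5 ms, mirroring the latency-versus-throughput dial described above.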

00:32:45.080 --> 00:32:49.310
So let's quickly see, now that
we have our 300 messages here,

00:32:49.360 --> 00:32:51.110
what can we do on the receive side?

00:33:02.730 --> 00:33:06.700
In receive, note here I'm not
using the on message APIs.

00:33:08.710 --> 00:33:12.460
This is just to show you an apples
to apples comparison of what

00:33:12.510 --> 00:33:15.560
the sync sort of APIs look like
and then I'll show you how the

00:33:15.610 --> 00:33:18.370
on message API does all
of this for you.

00:33:20.100 --> 00:33:23.620
This is a sync receive.

00:33:24.300 --> 00:33:28.740
So I have clearly two calls being
made to the server so this

00:33:28.790 --> 00:33:33.600
is in terms of message processing.
You will never, ever lose

00:33:33.650 --> 00:33:38.280
a message on the wire or in transit
because until you call

00:33:38.330 --> 00:33:41.950
complete on it, we will send
you back the same message.

00:33:43.810 --> 00:33:48.260
The next is async, and here you
can see what I'm doing

00:33:49.430 --> 00:33:56.230
is a task with a ContinueWith to
then call Complete on it.

00:34:01.730 --> 00:34:05.290
And again I will wait for all those
outstanding tasks to complete

00:34:05.340 --> 00:34:07.770
before calling my stopwatch done.

00:34:09.300 --> 00:34:10.660
And finally there's batch.

00:34:11.330 --> 00:34:12.950
Batch is a little more interesting.

00:34:13.890 --> 00:34:19.030
Here, it's even easier because I
do receive batch; note I pass

00:34:19.080 --> 00:34:21.370
a number of messages,
which is 100.

00:34:22.040 --> 00:34:24.860
Now, calling receive batch
with a hundred doesn't mean we

00:34:24.910 --> 00:34:28.830
will give you a hundred messages
back. We will do whatever

00:34:28.880 --> 00:34:32.660
is most optimal for the wire, based
on competing consumers,

00:34:32.710 --> 00:34:35.970
based on how many other nodes you
have pulling messages, to

00:34:36.020 --> 00:34:38.800
build an optimal batch
and send you that.

00:34:39.610 --> 00:34:43.320
And that's why you see I have an
outer loop which keeps calling

00:34:43.370 --> 00:34:47.620
receive batch until I reach
my hundred messages. I want

00:34:47.670 --> 00:34:51.430
to do this batching computation until
I reach a hundred messages.
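
The outer receive loop just described can be sketched as follows; `fake_receive_batch` is a hypothetical stand-in for the real batch receive, which may legitimately hand back fewer messages than requested.

```python
# A fake in-memory queue standing in for the broker's queue.
queue = [f"msg-{i}" for i in range(100)]

def fake_receive_batch(max_count, chunk=37):
    # The broker builds whatever batch is optimal, up to max_count;
    # here we arbitrarily cap it at 37 to force multiple calls.
    k = min(len(queue), max_count, chunk)
    batch = queue[:k]
    del queue[:k]
    return batch

# Outer loop: keep calling until the target message count is reached.
received = []
while len(received) < 100:
    batch = fake_receive_batch(100 - len(received))
    if not batch:
        break
    received.extend(batch)
print(len(received))  # 100
```

Three calls (37, 37, and 26 messages) drain the queue here; the loop shape is the point, not the chunk size.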

00:34:53.920 --> 00:34:59.030
And in the case here, I'm going to
only hold on to its lock token.

00:34:59.080 --> 00:35:01.160
That's all I'm doing on the message.
I don't have to keep

00:35:01.210 --> 00:35:04.440
the whole message. Once I've consumed
the message, I have processed

00:35:04.490 --> 00:35:07.710
it, I only have to keep
the lock token and then call

00:35:07.760 --> 00:35:12.940
CompleteBatchAsync with all
the lock tokens in there.

00:35:14.060 --> 00:35:16.940
And I'm doing this on a batch basis,
so again, I'm not waiting

00:35:16.990 --> 00:35:19.490
all the way till the end to
complete all messages.

00:35:19.500 --> 00:35:21.500
[Indiscernible]


00:35:21.660 --> 00:35:22.750
...subset in there?


00:35:23.510 --> 00:35:24.840
>> Sorry, what was the question?


00:35:24.890 --> 00:35:28.400
>> If you could process some of those
messages, could you complete

00:35:28.450 --> 00:35:30.520
just that subset?

00:35:30.860 --> 00:35:34.510
>> Absolutely. Absolutely.
So complete batch Async.

00:35:35.250 --> 00:35:39.040
You can call it with a single lock token,
two lock tokens, whatever

00:35:39.090 --> 00:35:42.720
the set is. It's just that it will
send all those lock tokens

00:35:42.770 --> 00:35:46.250
in a batch and get you back the
results in a batch. So it's

00:35:46.300 --> 00:35:50.010
saving you that latency and that
roundtrip for doing all of that

00:35:50.060 --> 00:35:52.540
and making it very efficient.
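
The lock-token workflow in this exchange can be sketched with a toy broker; the class and method names are illustrative assumptions, not the SDK's, but they show why completing an arbitrary subset of tokens in one call works.

```python
# Toy broker: received messages are locked under tokens until completed.
class FakeBroker:
    def __init__(self, messages):
        self.locked = {f"token-{i}": m for i, m in enumerate(messages)}
        self.completed = []

    def complete_batch(self, lock_tokens):
        # One call settles the whole set of tokens, any subset you like.
        for t in lock_tokens:
            self.completed.append(self.locked.pop(t))

broker = FakeBroker(["a", "b", "c", "d"])
tokens = list(broker.locked)       # keep only the lock tokens
broker.complete_batch(tokens[:2])  # complete just a subset in one trip
print(len(broker.completed), len(broker.locked))  # 2 2
```

Messages whose tokens are never completed stay locked and, on a real broker, would be redelivered once the lock expires.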

00:35:54.300 --> 00:35:56.070
So let's see what that adds up to.

00:35:58.400 --> 00:36:03.230
So here I have the same case. I'm
first going to use sync and

00:36:03.280 --> 00:36:07.440
try to receive all the hundred...
the first hundred messages

00:36:07.490 --> 00:36:11.190
in there. Now note this will have worse
performance than send because

00:36:11.240 --> 00:36:14.080
it's doing twice the number of operations
so I want to receive

00:36:14.130 --> 00:36:16.460
every message, complete every message.
Receive every message,

00:36:16.510 --> 00:36:20.110
complete every message. And
then go on. So 18 seconds.

00:36:20.160 --> 00:36:24.220
Instead of the ten seconds we had seen
for send, it takes 18 seconds

00:36:24.270 --> 00:36:28.760
to receive those messages and complete
them. So definitely not good.

00:36:30.090 --> 00:36:35.330
With Async because you're doing a bunch
of them in parallel, now you get down to

00:36:35.380 --> 00:36:38.880
about the 2.8 seconds. Now,
these numbers are just...

00:36:39.410 --> 00:36:43.230
take them with a grain of salt,
running on a network here,

00:36:43.940 --> 00:36:47.470
but you can just see the magnitude
of difference. You can see

00:36:47.520 --> 00:36:49.620
how much of an improvement it does.

00:36:50.830 --> 00:36:52.580
And now let's see what
happens with batch.

00:36:55.730 --> 00:37:00.720
We're back. We're able to do almost
the same characteristics:

00:37:00.770 --> 00:37:04.590
0.1 seconds for all the hundred
operations which had taken...

00:37:05.410 --> 00:37:07.930
just because we're using
batch in there.

00:37:11.380 --> 00:37:16.640
Now, not only do you see all these
advantages in here, but Service

00:37:16.690 --> 00:37:21.680
Bus actually makes it very, very easy for
you to write this particular code.

00:37:21.730 --> 00:37:26.700
The code I showed you is not very
complex, but we've actually taken

00:37:26.750 --> 00:37:29.280
it a step further and we
made it even easier.

00:37:30.200 --> 00:37:33.470
So for the... by the way, I just
wanted to show you here on the

00:37:33.520 --> 00:37:37.280
messages you see those 300 messages
there, if we refresh, it

00:37:37.330 --> 00:37:41.920
should go back to zero indicating
I'm not lying. All those 300

00:37:41.970 --> 00:37:43.380
messages were processed.

00:37:47.270 --> 00:37:54.910
Okay. So we'll look at the on message
APIs but in the interest

00:37:54.960 --> 00:37:57.880
of time I'm going to speed
up a little bit here.

00:38:00.480 --> 00:38:04.820
So you saw the difference between
sync, Async, and batch, and

00:38:04.870 --> 00:38:10.330
I hope that [Indiscernible] always use 
batching. The next thing about throughput.

00:38:10.380 --> 00:38:14.100
Partitioned queues and topics.
So we released SDK 2.2.

00:38:15.680 --> 00:38:19.590
Partitioned queues and topics essentially
take one queue and partition

00:38:19.640 --> 00:38:21.830
it across several processing nodes.

00:38:23.240 --> 00:38:26.950
This not only gives you much more
throughput power in terms of

00:38:27.000 --> 00:38:31.900
being able to process more messages but
it gives you more storage capacity.

00:38:32.410 --> 00:38:35.820
It gives you the ability to have
much larger queues. It gives

00:38:35.870 --> 00:38:38.170
you the ability to be more resilient.

00:38:39.270 --> 00:38:42.290
If one partition is unavailable,
another partition can continue

00:38:42.340 --> 00:38:43.580
to process messages.

00:38:44.640 --> 00:38:49.270
So partitioned queues, by far,
for most scenarios will give

00:38:49.320 --> 00:38:52.990
you much, much better throughput,
availability, and resilience

00:38:53.040 --> 00:38:58.570
characteristics out of the box.
It is so easy to create and

00:38:58.620 --> 00:39:02.700
use partitioned queues that the
recommendation is just to

00:39:02.750 --> 00:39:06.470
always use these. Just always use
these. In fact, in the next

00:39:06.520 --> 00:39:11.000
SDK release, we're on track to make
it the default that by default,

00:39:11.050 --> 00:39:13.380
when you create a queue you'll
get a partitioned queue.

00:39:15.690 --> 00:39:20.650
Now, you do have to be cognizant
of what happens when you take

00:39:20.700 --> 00:39:22.590
a queue and you partition it across.

00:39:24.060 --> 00:39:26.530
If you're not using sessions, we'll
talk about sessions a lot

00:39:26.580 --> 00:39:30.340
in detail but if you're not using
sessions then essentially,

00:39:31.060 --> 00:39:33.050
you have to be...

00:39:34.220 --> 00:39:38.380
you have to be aware that your messages
may show up out of order

00:39:38.430 --> 00:39:41.830
now because essentially they can
go into different partitions

00:39:41.880 --> 00:39:46.770
and if a partition is unavailable,
then the messages will show up

00:39:46.820 --> 00:39:47.720
out of order.

00:39:48.460 --> 00:39:51.270
So that's one thing to be aware of
but if you're using sessions,

00:39:51.320 --> 00:39:54.720
which we'll talk about now, then
all your ordering semantics

00:39:54.770 --> 00:39:56.100
are completely preserved.

00:39:57.120 --> 00:40:02.330
And we'll see how. Just to show you
the code here, anytime you're

00:40:02.380 --> 00:40:05.590
creating a queue there's one single
property EnablePartitioning.

00:40:05.640 --> 00:40:08.720
It's today set to false by default.
Like I said in the next

00:40:08.770 --> 00:40:10.040
SDK it will be true.
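
Why partitioning can reorder messages, and why a session ID restores order, can be sketched with a simple partition-assignment model; the partition count and hashing scheme are illustrative assumptions, not the service's actual algorithm.

```python
PARTITIONS = 4

def pick_partition(message_id, session_id=None):
    # With a session ID, every message of the session lands on the same
    # partition, preserving order; without one, messages spread across
    # partitions that are consumed independently.
    key = session_id if session_id is not None else message_id
    return hash(key) % PARTITIONS

spread = {pick_partition(i) for i in range(100)}
pinned = {pick_partition(i, session_id=42) for i in range(100)}
print(len(spread), len(pinned))  # 4 1
```

One hundred sessionless messages scatter over all four partitions, while one hundred messages sharing a session ID land on a single partition, which is the ordering guarantee sessions give back.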

00:40:10.780 --> 00:40:13.750
So you should just set that. By
the way I don't know how you

00:40:13.800 --> 00:40:18.770
folks generally do it, but my philosophy
in general is never copy

00:40:18.820 --> 00:40:20.730
code which you see in a PowerPoint.

00:40:21.330 --> 00:40:24.470
I don't know if that works for
you guys. I never, ever copy

00:40:24.520 --> 00:40:28.150
code which you see in PowerPoint because
it will be the most simplistic

00:40:28.590 --> 00:40:32.710
and basic kind of code which anyone
can put out there. In this

00:40:32.760 --> 00:40:35.500
case it's okay. You're just setting
a property, so that's fine.

00:40:35.550 --> 00:40:38.540
But if I ever showed you code
in PowerPoint, don't copy.

00:40:40.650 --> 00:40:46.660
So connection throughput. We've talked
about senders. We've seen

00:40:46.710 --> 00:40:50.290
how binary connections are really,
really important. There are

00:40:50.340 --> 00:40:55.090
some cases where you might be sending
using a very, very fat pipe.

00:40:55.660 --> 00:40:58.340
Think of it as your back end
trying to enqueue messages.

00:40:58.390 --> 00:41:03.370
You've got a ton of logs you want
to push up and things like that.

00:41:04.400 --> 00:41:08.450
Well at some point, creating more
physical TCP connections may

00:41:08.500 --> 00:41:12.630
actually be a good idea, and you can
easily do that. Each messaging

00:41:12.680 --> 00:41:16.220
factory instance, each class
instance of MessagingFactory,

00:41:16.270 --> 00:41:18.390
corresponds to one TCP connection.

00:41:19.390 --> 00:41:22.550
So the more queue clients
and stuff that you're creating

00:41:22.600 --> 00:41:25.680
off the same factory like I showed
you, you're multiplexing all

00:41:25.730 --> 00:41:31.430
the connections over the same TCP socket.
So create more messaging factories.

00:41:31.480 --> 00:41:33.700
And if you create more messaging
factories, you'll just get more

00:41:33.750 --> 00:41:38.720
pipes and more data can be pushed
through so a key consideration

00:41:38.770 --> 00:41:42.540
for that. Connection level resilience
is built in. So once you

00:41:42.590 --> 00:41:46.140
create a messaging factory, you
never have to discard it.

00:41:46.190 --> 00:41:49.320
If the connection breaks, we'll
rebuild it. If your link breaks

00:41:49.370 --> 00:41:52.740
to a queue, we'll rebuild it. Whatever
it is we will rebuild

00:41:52.790 --> 00:41:54.860
in terms of that so you never
have to do this...

00:41:55.370 --> 00:41:58.030
have to throw this object away
and recreate this object.

00:41:58.310 --> 00:42:02.780
Just create more of them and reuse
them for as much as you need.
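
The one-connection-per-factory advice can be sketched as a round-robin pool; `FakeFactory` is a hypothetical stand-in for a messaging factory, used only to show the spreading pattern.

```python
import itertools

class FakeFactory:
    """Stand-in for a messaging factory: one object, one connection."""
    def __init__(self, name):
        self.name = name
        self.sent = 0

    def send(self, msg):
        self.sent += 1  # pretend the message goes over this connection

# Four factories means four independent pipes; round-robin across them.
factories = [FakeFactory(f"conn-{i}") for i in range(4)]
rr = itertools.cycle(factories)
for i in range(100):
    next(rr).send(f"msg-{i}")
print([f.sent for f in factories])  # [25, 25, 25, 25]
```

Creating all clients off one factory would funnel the same 100 sends through a single pipe; the pool spreads them evenly.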

00:42:05.910 --> 00:42:07.540
So that brings us to sessions.

00:42:08.520 --> 00:42:11.670
Because I'm telling you to take
all these senders, a large number

00:42:11.720 --> 00:42:14.910
of senders and multiplex them all
into a very, very small number

00:42:14.960 --> 00:42:17.850
of queues, how are you going
to actually process this?

00:42:17.900 --> 00:42:21.110
We've seen Orleans kind of framework
and stuff, which are all

00:42:21.160 --> 00:42:23.460
trying to demultiplex the stream,

00:42:24.720 --> 00:42:26.530
demultiplex the stream.

00:42:28.490 --> 00:42:33.070
Sessions is an awesome built in
feature in Service Bus which

00:42:33.120 --> 00:42:37.130
essentially creates subqueues.
So each session you can think

00:42:37.180 --> 00:42:40.290
of as a subqueue within the full queue.

00:42:41.480 --> 00:42:44.860
And the originator just
has to set the session ID.

00:42:44.910 --> 00:42:46.840
That's the one single property
they have to set.

00:42:48.090 --> 00:42:51.240
It's the receiver where the
paradigm really changes.

00:42:52.050 --> 00:42:56.090
The receiver now no longer goes and
says hey, give me the next message.

00:42:56.140 --> 00:42:59.670
The receiver says give me the next
session. Give me the next

00:42:59.720 --> 00:43:02.690
subqueue which has some messages
and I'll go process them in

00:43:02.740 --> 00:43:06.760
order. I'll go process them with
some state, which I might store

00:43:06.810 --> 00:43:10.600
for that receiver. So if you think
about millions of devices,

00:43:10.650 --> 00:43:13.290
now you can think that on a single
queue, you can have all these

00:43:13.340 --> 00:43:18.620
millions of subqueues and store
state per subqueue. So very,

00:43:18.670 --> 00:43:20.410
very powerful in that sense.

00:43:21.050 --> 00:43:24.400
You can do working set pinning,
which means you can say

00:43:24.450 --> 00:43:29.230
receiver one, I want you to handle
devices 1 through 100. So it

00:43:29.280 --> 00:43:32.810
will go and ask for sessions 1
through 100 and will be pinned

00:43:32.860 --> 00:43:33.440
to that.
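
The subqueue idea can be sketched by demultiplexing a mixed stream by session ID; the function name is illustrative, but the grouping is exactly the "give me the next session" model described above.

```python
from collections import defaultdict, deque

def build_subqueues(messages):
    # Group a mixed stream by session ID, keeping per-session order.
    subqueues = defaultdict(deque)
    for session_id, body in messages:
        subqueues[session_id].append(body)
    return subqueues

stream = [("dev-1", "a"), ("dev-2", "x"), ("dev-1", "b"), ("dev-2", "y")]
subqueues = build_subqueues(stream)
# A receiver now asks for the *next session* and drains it in order.
print(list(subqueues["dev-1"]), list(subqueues["dev-2"]))
```

Each device's messages stay in send order inside its own subqueue, even though the combined stream interleaves them.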

00:43:35.000 --> 00:43:39.680
And then of course you can store
state. So I'll show you the

00:43:39.730 --> 00:43:43.490
code for this. Essentially you set
RequiresSession to true when

00:43:43.540 --> 00:43:45.270
you're creating the queue.

00:43:45.790 --> 00:43:49.670
On the send side, you just have to
set one property, the session ID.

00:43:50.530 --> 00:43:55.720
And then on receive side, all the
same kind of parameters apply

00:43:55.770 --> 00:43:59.840
like I showed you, the accept message
session, you can do accept

00:43:59.890 --> 00:44:03.730
message session with an ID or now
what we have just released

00:44:03.780 --> 00:44:08.760
is a very, very simple way
of being able to do

00:44:11.810 --> 00:44:13.010
session receivers.

00:44:14.920 --> 00:44:18.080
So I'll open a session sender.

00:44:18.970 --> 00:44:21.810
We have already realized that batching
is the best way of sending

00:44:21.860 --> 00:44:25.740
so all the sender does is that
for each session ID, it's going

00:44:25.790 --> 00:44:30.240
to send as many messages as the
session ID plus one. So if it's

00:44:30.290 --> 00:44:33.480
session ID one, I'm going to send
two messages. If it's session

00:44:33.530 --> 00:44:36.070
ID two, I'm going to send
three messages and so on.

00:44:37.350 --> 00:44:38.920
So I'll just start the sender.

00:44:39.880 --> 00:44:43.910
And here, if you look at this queue,
on message queue sample,

00:44:44.580 --> 00:44:49.140
when I created this queue, the
only extra property I set

00:44:49.190 --> 00:44:55.090
on it was this RequiresSession property.
That was the only difference.

00:44:55.670 --> 00:44:59.940
So now when you go to this particular
queue, and you see it's

00:44:59.990 --> 00:45:02.440
properties, you'll

00:45:08.230 --> 00:45:09.410
see that...

00:45:11.710 --> 00:45:16.480
requires session property is
false. That's not good.

00:45:16.530 --> 00:45:20.780
Okay. Let me delete this queue then.

00:45:24.670 --> 00:45:34.390
Create on message session sample.

00:45:37.280 --> 00:45:38.780
Require session.

00:45:45.040 --> 00:45:47.020
Going to rerun my sender.

00:45:51.490 --> 00:45:53.840
So this will start sending messages.

00:46:09.430 --> 00:46:18.880
I guess it's not finding that
queue name right now.

00:46:18.890 --> 00:46:20.800
[Indiscernible]

00:46:20.870 --> 00:46:27.580
>> Oh, did I? On message sessions...
oh, there you go.

00:46:29.640 --> 00:46:36.750
Perfect. So let me change that
to on message session sample.

00:46:39.450 --> 00:46:40.630
Thank you so much.

00:46:42.100 --> 00:46:43.360
Now let's run this guy.

00:46:46.770 --> 00:46:49.710
There you go. It's sending all those
messages. Now let me show

00:46:49.760 --> 00:46:54.350
you what the receive code
looks like for this.

00:46:55.510 --> 00:46:59.710
This is the brand new API which
we have just released, the on

00:46:59.760 --> 00:47:02.010
message processing API.

00:47:03.430 --> 00:47:07.500
So in your Azure worker role,
let me change this too.

00:47:10.890 --> 00:47:14.690
In your Azure worker role, on the
on start method you would do

00:47:14.740 --> 00:47:19.540
the same, just check if the queue exists
and you create the QueueClient.

00:47:20.250 --> 00:47:24.120
In the Run method, you notice your
code gets even simpler.

00:47:25.610 --> 00:47:29.270
All you're doing is this
one register call.

00:47:29.900 --> 00:47:32.770
To say register a session handler.

00:47:33.670 --> 00:47:36.500
And that's it. No receive
loops to write.

00:47:37.120 --> 00:47:38.950
No lifetime to manage.

00:47:39.580 --> 00:47:43.920
All of that is taken care of by
the client library for you.

00:47:43.970 --> 00:47:48.540
You just have to encapsulate all
your logic of how you're going

00:47:48.590 --> 00:47:53.790
to process that single stream in one
class called MySessionHandler.

00:47:54.700 --> 00:47:57.450
Let's walk through this class
and see what I'm doing here.

00:47:58.700 --> 00:48:02.660
The first thing is what do I do when
I actually get the message?

00:48:05.430 --> 00:48:09.430
On message, I'm just printing out
that I got the message and

00:48:09.480 --> 00:48:10.870
I'm increasing my count.

00:48:11.610 --> 00:48:15.320
That's all I'm doing in this class.
Count is a private member

00:48:15.370 --> 00:48:19.860
here and we're just saving that value.

00:48:21.090 --> 00:48:22.960
So we set count

00:48:24.710 --> 00:48:28.550
equal to zero and we just keep a count
so that's all my processing is.

00:48:29.270 --> 00:48:34.550
On close session: close session
is called when there are no

00:48:34.600 --> 00:48:38.750
more messages available for that
session or you have reached

00:48:38.800 --> 00:48:42.360
your maximum concurrency count. We talked
about how many concurrently

00:48:42.410 --> 00:48:43.630
you want to process.

00:48:44.260 --> 00:48:48.230
So if you have reached your max
concurrency count of how many

00:48:48.280 --> 00:48:53.040
sessions to process, we will call
close on that session and open

00:48:53.090 --> 00:48:57.610
a new session depending on what messages
are available. So close is

00:48:57.660 --> 00:49:00.700
your opportunity to say that I've
gotten a set of messages, I've

00:49:00.750 --> 00:49:03.900
processed them for this particular
session, and now I should

00:49:03.950 --> 00:49:05.580
save that state.

00:49:07.140 --> 00:49:10.730
And here you can see all I'm doing
is calling set state and get

00:49:10.780 --> 00:49:15.250
state, which are on the session object.
These are essentially streams.

00:49:16.050 --> 00:49:20.770
And storing away the count value
whenever the session is closed.
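
The handler pattern just walked through, one instance per session that counts messages and persists the count on close, can be sketched like this; the dict standing in for broker-side session state and all names are illustrative assumptions, not the SDK's API.

```python
# Per-session state store; in the real service this state lives with
# the session on the broker, here a dict stands in for it.
session_state = {}

class MySessionHandler:
    """One handler instance per session, as in the demo."""
    def __init__(self, session_id):
        self.session_id = session_id
        self.count = session_state.get(session_id, 0)

    def on_message(self, message):
        self.count += 1  # all messages seen here belong to one session

    def on_close_session(self):
        # Close is the chance to persist state for this session.
        session_state[self.session_id] = self.count

handler = MySessionHandler("session-21")
for body in range(22):
    handler.on_message(body)
handler.on_close_session()
print(session_state["session-21"])  # 22
```

Reopening the same session later picks the count back up from the stored state, which is what makes the close/reopen cycle safe.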

00:49:21.780 --> 00:49:26.130
And then the final one is the error
case of when a session is lost.

00:49:27.420 --> 00:49:31.050
Now remember, the reason you're able
to correlate all these messages

00:49:31.100 --> 00:49:34.310
is because we lock a session for
you. We make sure that you're

00:49:34.360 --> 00:49:38.790
the only receiver who is getting
messages for that subqueue,

00:49:38.840 --> 00:49:40.510
that subsession.

00:49:41.190 --> 00:49:43.780
And you can always lose a lock.
A lock would be lost because

00:49:43.830 --> 00:49:47.660
of a failure on the server. It could
be lost because of connection

00:49:47.710 --> 00:49:51.410
problems or maybe your processor
just hung and it lost the lock

00:49:51.460 --> 00:49:55.290
because it didn't do an operation
in time, so you can always get

00:49:55.340 --> 00:49:58.940
this error callback. In this case,
all you will do is abandon the

00:49:58.990 --> 00:50:03.500
local set of changes and move on.

00:50:04.870 --> 00:50:07.150
So let's see what this looks like.

00:50:07.670 --> 00:50:08.790
An actual running.

00:50:10.210 --> 00:50:10.800
Was there a question?

00:50:10.850 --> 00:50:11.930
>> Does it work the same?


00:50:13.740 --> 00:50:17.500
>> So the question was: Will this work the 
same for subscriptions? Hundred percent.

00:50:17.550 --> 00:50:21.170
It's completely symmetric. Whether
you're receiving from a queue

00:50:21.220 --> 00:50:24.130
or you're receiving from a subscription.

00:50:25.440 --> 00:50:28.920
So here's my worker role now. Let
me actually quickly check what

00:50:28.970 --> 00:50:30.850
the queue count looked
like after the

00:50:32.060 --> 00:50:36.390
role was done sending. Looks like
it's got 3,700 messages right

00:50:36.440 --> 00:50:40.610
now, 33, looks like processing
has kicked in.

00:50:41.650 --> 00:50:56.690
Let me jump in... there
we go. It's coming up.

00:50:56.740 --> 00:51:03.350
Good. So right now, my machine is
working through and you

00:51:03.400 --> 00:51:06.090
can see processing thousands
and thousands of messages.

00:51:06.890 --> 00:51:10.740
And the code which I wrote was
very, very simplistic thinking

00:51:10.790 --> 00:51:15.170
about just a simple session, a single
session and I didn't have

00:51:15.220 --> 00:51:18.800
to write a single receive loop. I just
had to register my handler.

00:51:19.200 --> 00:51:23.370
The handler which I showed you is
the simplistic case where you

00:51:23.420 --> 00:51:28.420
can have instances of this created and
you're not doing any initialization.

00:51:28.450 --> 00:51:32.020
We have a handler factory which
is available too. You can do

00:51:32.070 --> 00:51:36.960
a register handler factory and that
way you can control the creation

00:51:37.010 --> 00:51:38.700
semantics of that also.

00:51:40.370 --> 00:51:43.560
But here, you can see persist
state and closed session.

00:51:44.420 --> 00:51:48.340
Let me zoom in here so folks can
clearly see what happened here.

00:51:49.070 --> 00:51:54.740
If you see every session, session
state was 22 for session 21.

00:51:54.790 --> 00:51:57.810
Session state was 46
for session 45.

00:51:58.620 --> 00:52:03.770
Because that class got only messages
which belonged to that session.

00:52:04.200 --> 00:52:08.320
All that demuxing and muxing was
easy and everything was handled

00:52:08.370 --> 00:52:12.410
by Service Bus for you. So when
you think about multiplexing

00:52:12.460 --> 00:52:15.990
a large number of senders into
a small number of queues, know

00:52:16.040 --> 00:52:19.260
that you have not lost the simplicity
of being able to process

00:52:19.310 --> 00:52:23.800
them on the back end using
these individual streams

00:52:30.570 --> 00:52:34.260
in there.

00:52:37.740 --> 00:52:41.000
Session state we talked about.
Correlation using sessions.

00:52:41.350 --> 00:52:46.020
So just to summarize, actually
before we summarize, access.

00:52:46.590 --> 00:52:49.230
So another key consideration is
when you have these very, very

00:52:49.280 --> 00:52:52.530
large number of senders, what is the
auth model? What's the security

00:52:52.580 --> 00:52:55.750
model that you're going to use?
So I would say shared access

00:52:55.800 --> 00:52:58.420
signature is definitely the
recommended auth model.

00:52:59.010 --> 00:53:02.150
There's a lot more detail. In fact
the deck has more detail on

00:53:02.200 --> 00:53:08.190
how to set up shared access signatures.
You can go to the portal

00:53:08.540 --> 00:53:10.040
and manage your queues.

00:53:10.910 --> 00:53:15.270
Here I have the IOT queue which you
guys are using from the website.

00:53:16.050 --> 00:53:18.850
And I can just go here and configure.

00:53:19.420 --> 00:53:23.650
Note that I had Oh,...

00:53:23.660 --> 00:53:23.720
[Indiscernible]

00:53:23.700 --> 00:53:25.290
>> I'm so sorry. I'm so sorry.

00:53:28.330 --> 00:53:33.790
So I jumped into the Azure portal
and I selected my IOT queue.

00:53:34.890 --> 00:53:38.340
And in this, when I go to the configure
tab, you see my shared

00:53:38.390 --> 00:53:42.420
access policy names here. So in
that website example which I

00:53:42.470 --> 00:53:45.240
showed you, if I called receive,
this would actually

00:53:45.290 --> 00:53:49.650
fail because right now, the only
authorization on this is send.

00:53:50.890 --> 00:53:54.310
But I can easily go and add listen
authorization to that rule.

00:53:55.730 --> 00:53:56.440
Hit save.

00:53:57.340 --> 00:53:58.640
It says updating the queue.

00:53:59.190 --> 00:54:00.050
And now any

00:54:01.700 --> 00:54:06.780
token which was generated for this
rule will have the ability

00:54:06.830 --> 00:54:11.480
to do both send and receive. So now
I can go here and click receive,

00:54:12.880 --> 00:54:15.660
well, there you go. Looks like folks
have been sending messages.

00:54:15.710 --> 00:54:18.320
So now you've got a chat session going,
you guys can keep in touch

00:54:18.370 --> 00:54:20.210
with each other online.

00:54:21.490 --> 00:54:24.220
So shared access signatures are a very,
very lightweight, very

00:54:24.270 --> 00:54:28.290
easy-to-use model. If you need to
jump into an STS kind of model,

00:54:28.340 --> 00:54:35.540
ACS is fully supported. ACS is
still the right option there.

00:54:35.590 --> 00:54:37.660
You just saw it with the queues.
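
The shared access signature itself is just an HMAC over the resource URI and an expiry, which is why it is so lightweight for a device to carry. Here is a sketch of the documented token format using only the Python standard library; the namespace, queue, policy name, and key below are placeholders:

```python
import base64, hashlib, hmac, time, urllib.parse

def generate_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    # Service Bus SAS: HMAC-SHA256 over the URL-encoded resource URI plus
    # an expiry timestamp, signed with the key behind a named policy.
    # The policy's rights (Send, Listen, Manage) are what the portal's
    # Configure tab edits.
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest()
    )
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )

# Hypothetical namespace, queue, policy, and key:
token = generate_sas_token(
    "https://contoso.servicebus.windows.net/iotqueue",
    "SendOnlyPolicy", "base64keyfromtheportal")
print(token[:25])  # SharedAccessSignature sr=
```

Any client holding a token minted from the Send-only policy can send but not receive, which is exactly the failure the demo showed before Listen was added to the rule.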


00:54:39.580 --> 00:54:43.390
Just to summarize: we saw protocols,
why they are relevant.

00:54:43.650 --> 00:54:47.970
We went through stream correlation,
why it is not required that

00:54:48.020 --> 00:54:50.860
you create a queue per device.
You don't want to be managing

00:54:50.910 --> 00:54:53.980
a million queues. But you don't
want to be writing code which

00:54:54.030 --> 00:54:57.760
has to be super complex too. So
both of those are very, very

00:54:57.810 --> 00:55:00.840
easily supported with Service
Bus messaging.

00:55:01.900 --> 00:55:05.320
Queues, topics, subscriptions.
Symmetric in all of them.

00:55:05.370 --> 00:55:08.990
Everything I showed you in terms
of what works with queues for

00:55:09.040 --> 00:55:12.280
sessions works the exact same way
with topics and subscriptions

00:55:12.330 --> 00:55:16.290
and filters. When you create a subscription,
you just set RequiresSession

00:55:16.340 --> 00:55:18.360
on it to true or not.


00:55:21.680 --> 00:55:22.910
Scale on receivers.


00:55:27.210 --> 00:55:30.850
Visual Studio had this challenge
where you can launch a ton of

00:55:30.900 --> 00:55:34.520
instances of your IDE, and then
you might go and change your

00:55:34.570 --> 00:55:37.840
profile on one of them and you
want all of them to sync up.

00:55:38.640 --> 00:55:41.980
How are you going to go communicate
to all these instances?

00:55:42.490 --> 00:55:45.600
And these are dynamic instances
too because the number of VS

00:55:45.650 --> 00:55:49.910
instances which you've launched varies
depending on the day of the week.

00:55:49.960 --> 00:55:53.530
We actually have statistics
to show that, by the way.

00:55:53.580 --> 00:55:57.170
People open way more instances on Wednesday
than they open on Friday.

00:55:57.220 --> 00:56:04.740
So productivity is tanking by Friday.
So anyway, the problem can again

00:56:04.790 --> 00:56:07.440
be solved with topics where you
have millions and millions of

00:56:07.490 --> 00:56:11.110
end points. And you want all of
them listening to their own one

00:56:11.160 --> 00:56:14.070
single subscription for messages.

00:56:15.150 --> 00:56:19.140
Whether those messages are generated
by the back-end based

00:56:19.190 --> 00:56:22.840
on some change in the system or
something like you want to send

00:56:22.890 --> 00:56:26.270
a notification to a user, you want
to notify them on a Windows

00:56:26.320 --> 00:56:30.510
7 box where you have no notification
service. You want to notify

00:56:30.560 --> 00:56:34.520
them and say hey there's a new update
available in Visual Studio,

00:56:34.570 --> 00:56:39.970
go download it. Or more importantly
give them a low latency sort

00:56:40.020 --> 00:56:43.890
of pipe where if they make changes
on one VS instance, the other

00:56:43.940 --> 00:56:45.430
VS instances sync up.

00:56:46.140 --> 00:56:48.340
You can use topics and
subscriptions for that.

00:56:49.760 --> 00:56:52.470
So think of it conceptually
as a topic per-VS user.

00:56:53.200 --> 00:56:58.800
You have a subscription per VS instance,
always connected using AMQP.

00:56:58.850 --> 00:57:03.260
So AMQP gives us a lot of connection
efficiency where you can have

00:57:03.310 --> 00:57:07.830
millions and millions of
concurrent connections with very,

00:57:07.880 --> 00:57:12.350
very low overhead, just waiting
for occasional notifications.

00:57:12.380 --> 00:57:14.840
That's the thing about notifications.
They're very occasional

00:57:14.890 --> 00:57:18.080
in nature. How often do folks
change their profile color?

00:57:19.770 --> 00:57:20.260
Once a day?

00:57:20.310 --> 00:57:22.960
Once a week? Hopefully not every day.

00:57:23.790 --> 00:57:25.160
Depends on your mood I guess.

00:57:26.260 --> 00:57:29.100
But doesn't happen very often.
How often do they have updates

00:57:29.150 --> 00:57:33.660
to push out? Not very often. But
you still have this WNS kind

00:57:33.710 --> 00:57:38.290
of infrastructure available for
you where you have connections

00:57:38.340 --> 00:57:41.780
waiting for that notification because
when that notification

00:57:41.830 --> 00:57:45.170
becomes available, you want it in
an instant. You want it right

00:57:45.220 --> 00:57:46.040
then and there.

00:57:51.000 --> 00:57:54.990
So here you need to really think
about and you need to think

00:57:55.040 --> 00:57:59.320
about message flows. Because today
topics allow you up to 2000

00:57:59.370 --> 00:58:03.170
subscriptions and when you're thinking
of scale on the number

00:58:03.220 --> 00:58:07.420
of receivers, 2000 may be enough
or 2000 may not be enough.

00:58:07.980 --> 00:58:10.910
If you think about Visual Studio,
a single person having more

00:58:10.960 --> 00:58:13.700
than 2000 instances of
the IDE running is

00:58:16.030 --> 00:58:20.210
next to impossible. I don't know maybe
it's possible but it doesn't happen.

00:58:20.520 --> 00:58:24.520
So for them, a topic per-VS user
is fine, but for you it may

00:58:24.570 --> 00:58:27.660
be that everyone is listening
to the same feed. You want to

00:58:27.710 --> 00:58:30.790
be able to send everyone
a single message, send it

00:58:30.840 --> 00:58:34.790
broadcast to everyone. Well, then
you want to chain topics together.

00:58:35.250 --> 00:58:38.680
And you do that using auto-forwarding.

00:58:39.850 --> 00:58:43.350
I'm not going to jump into bunch
of these pattern details in

00:58:43.400 --> 00:58:45.280
terms of how to set up filters.

00:58:45.800 --> 00:58:48.520
All of these are samples on MSDN.com.

00:58:49.130 --> 00:58:55.380
This particular sample is called
list. There's a sample called

00:58:55.430 --> 00:58:58.720
publish subscribe. The full code
for these is available.

00:58:58.770 --> 00:59:02.570
Really encourage you guys to go take
a look, but with these topics

00:59:02.620 --> 00:59:06.190
you can really set up these rich
flows where every message

00:59:06.240 --> 00:59:09.930
doesn't have to get routed to all
the 2 million folks and then

00:59:09.980 --> 00:59:14.280
get dropped every time. It can get
routed to one person, to many

00:59:14.330 --> 00:59:18.680
people, or in an addressing kind of case
where you just write addresses.

00:59:18.730 --> 00:59:19.660
Like e-mail.

00:59:20.190 --> 00:59:23.130
In this case it's like saying on
the message I can say first,

00:59:23.180 --> 00:59:24.230
comma, second.

00:59:25.130 --> 00:59:27.850
And I am addressing two devices,
the first device and the second

00:59:27.900 --> 00:59:30.770
device, or two subscriptions,
the first and the second.

00:59:30.820 --> 00:59:35.390
Because they have rules set up like
first and like second in there.
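
That addressing style can be modeled with a toy substring-match router (hypothetical Python; real subscriptions use SQL filter rules such as `To LIKE '%first%'`): a message addressed "first,second" gets delivered to both matching subscriptions.

```python
def route(message_to, subscriptions):
    # Deliver to every subscription whose rule matches the To property,
    # a toy stand-in for SQL filter rules like: To LIKE '%first%'
    return [name for name, needle in subscriptions.items()
            if needle in message_to]

subs = {"sub-first": "first", "sub-second": "second", "sub-third": "third"}
print(route("first,second", subs))  # ['sub-first', 'sub-second']
print(route("second", subs))        # ['sub-second']
```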

00:59:36.390 --> 00:59:40.470
So really, look at these
pub sub (indiscernible)

00:59:42.610 --> 00:59:47.050
auto forwarding. Very, very easy
to use. Essentially you create

00:59:47.100 --> 00:59:52.150
your destination queue first and
then on the source queue, you

00:59:52.200 --> 00:59:55.970
set a single property. The property
is called ForwardTo, and you set it

00:59:57.220 --> 01:00:00.600
to the destination queue, and that's
it. All messages coming

01:00:00.650 --> 01:00:03.280
into the source queue go into
the destination queue.

01:00:03.330 --> 01:00:10.030
Sources can be subscriptions and
queues. Targets can be topics

01:00:10.080 --> 01:00:10.960
and queues.
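
Auto-forwarding can be sketched as a tiny in-memory model (hypothetical Python, not the SDK): create the destination first, point the source's forward-to at it, and every send to the source lands in the destination.

```python
class Queue:
    """Toy queue; forward_to stands in for the ForwardTo property
    you set on the source entity's description."""
    def __init__(self, name, forward_to=None):
        self.name = name
        self.messages = []
        self.forward_to = forward_to

    def send(self, message):
        if self.forward_to is not None:
            # The broker-side hop: senders never see the destination.
            self.forward_to.send(message)
        else:
            self.messages.append(message)

destination = Queue("destination")                # create the destination first
source = Queue("source", forward_to=destination)  # then point the source at it
source.send("telemetry-1")
print(destination.messages)  # ['telemetry-1']
```

Chaining works the same way: a forwarded queue can itself forward, which is how topics get chained past the subscription limit.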

01:00:13.190 --> 01:00:16.800
Completely symmetric; set up as many
topologies as you would

01:00:16.850 --> 01:00:18.810
like with that.

01:00:19.400 --> 01:00:22.540
If you have transient cases where
you have subscriptions that

01:00:22.590 --> 01:00:23.390
are going away,

01:00:24.660 --> 01:00:28.430
you can use auto-delete-on-idle. So
this is also a very neat feature.

01:00:28.480 --> 01:00:32.570
Lets you manage a large number of
transient connections. In fact

01:00:32.620 --> 01:00:35.640
one of the key scenarios where this is
being used is by SignalR and

01:00:35.690 --> 01:00:38.590
Socket.IO. They are very,
very transient in nature.

01:00:38.640 --> 01:00:40.200
Connections come, connections go.

01:00:41.380 --> 01:00:43.700
Nodes get added and nodes get removed.

01:00:44.600 --> 01:00:48.680
So they use Service Bus as a
backplane where essentially they're

01:00:48.730 --> 01:00:52.540
creating a subscription per node whenever
a new node comes

01:00:52.590 --> 01:00:56.160
up, not per connection but per node,
when they

01:00:56.210 --> 01:00:57.260
add servers.

01:00:58.320 --> 01:01:03.210
And then they use topics and subscriptions
as the backplane to

01:01:03.260 --> 01:01:05.970
transfer messages between nodes
and get broader scale.

01:01:07.010 --> 01:01:10.090
And then when a node goes away,
the subscription expires, those

01:01:10.140 --> 01:01:17.490
messages are gone with it. Both
of those implementations are open source.

01:01:17.540 --> 01:01:20.240
Both are available if you want to
scale out SignalR or Socket

01:01:20.290 --> 01:01:24.720
I/O because you need a durable messaging
pipe at the end, Service

01:01:24.770 --> 01:01:27.980
Bus has implementations
for both of those.

01:01:29.920 --> 01:01:33.050
I did want to talk a little bit
about availability so let me

01:01:33.100 --> 01:01:36.780
quickly cover that. I know
we're almost over time

01:01:38.830 --> 01:01:42.440
Code needs to be resilient to operation
failures as well as connectivity

01:01:42.490 --> 01:01:43.470
issues.

01:01:43.990 --> 01:01:46.750
Dead-letter queues, like I told
you, really help you. They help

01:01:46.800 --> 01:01:50.790
you at the application level, where
deserialization of a message or

01:01:50.840 --> 01:01:51.830
something may fail.

01:01:52.980 --> 01:01:57.440
Every exception that Service Bus throws
has an IsTransient property

01:01:57.490 --> 01:01:58.020
on it

01:01:59.480 --> 01:02:02.780
which clearly and simply makes it easy
for you to know whether you

01:02:02.830 --> 01:02:04.350
have to retry it or not.

01:02:05.090 --> 01:02:08.560
By default, transient failures are automatically
retried. So I talked

01:02:08.610 --> 01:02:12.090
about the timeouts, basically operation
timeouts. By default

01:02:12.140 --> 01:02:15.190
operation timeouts are set to
60 seconds, which means if you

01:02:15.240 --> 01:02:19.720
make a send call, it may fail once,
we'll try it again after

01:02:19.770 --> 01:02:22.980
three seconds. It may fail twice. We'll
try it again after ten seconds.

01:02:23.030 --> 01:02:27.840
In that 60 seconds you give us, we will
try to make that call succeed.

01:02:27.890 --> 01:02:29.740
And if not, we will send
it back to you.
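
That retry behavior can be sketched roughly like this (a hypothetical Python model, not the SDK's actual retry policy): retry only transient failures, with growing delays, inside the operation-timeout budget, and otherwise hand the failure back.

```python
import time

class FakeMessagingException(Exception):
    """Stand-in for the SDK's messaging exception and its IsTransient flag."""
    def __init__(self, is_transient):
        super().__init__("send failed")
        self.is_transient = is_transient

def send_with_retry(send_fn, operation_timeout=60.0, delays=(3, 10, 30)):
    # Retry transient failures with growing delays inside the operation
    # timeout you give us; when the budget runs out, surface the error.
    deadline = time.monotonic() + operation_timeout
    for delay in delays + (None,):
        try:
            return send_fn()
        except FakeMessagingException as e:
            if not e.is_transient or delay is None or \
                    time.monotonic() + delay > deadline:
                raise  # non-transient, or out of retries/budget: hand it back
            time.sleep(delay)

# A flaky send that succeeds on the third attempt:
attempts = []
def flaky_send():
    attempts.append(1)
    if len(attempts) < 3:
        raise FakeMessagingException(is_transient=True)
    return "ok"

print(send_with_retry(flaky_send, delays=(0, 0, 0)))  # ok
```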

01:02:31.320 --> 01:02:33.650
If you have some other place
to persist it, that's fine.

01:02:33.700 --> 01:02:36.920
Otherwise check IsTransient and
then resend it, I guess.

01:02:38.160 --> 01:02:42.430
And partitioned queues and topics have
significantly improved availability.

01:02:43.080 --> 01:02:48.230
Order of magnitude improvement. So
highly, highly recommend using

01:02:48.280 --> 01:02:49.710
this feature.

01:02:51.830 --> 01:02:55.280
Retry policy is on by default.
Don't turn it off, please.

01:02:57.200 --> 01:02:59.970
Paired namespaces. The last
thing I'll talk about today.

01:03:00.490 --> 01:03:03.540
If you have a Service Bus namespace,
messages are nicely flowing

01:03:03.590 --> 01:03:08.210
through, and then the entire data
center goes kaput, or at least

01:03:08.260 --> 01:03:13.570
the entire namespace goes kaput.
Then we will use

01:03:13.620 --> 01:03:15.790
the backup namespace. You create
the backup namespace.

01:03:15.840 --> 01:03:19.190
You just provide it to us and we'll
start storing messages in

01:03:19.240 --> 01:03:23.440
the backup namespace. So any
message that fails going into

01:03:24.140 --> 01:03:25.350
the primary will go into the backup.

01:03:26.210 --> 01:03:29.450
At some point messages will start
flowing through. The system

01:03:29.500 --> 01:03:30.340
will come back.

01:03:31.350 --> 01:03:35.150
And at that point we have a siphon
which will take messages from these

01:03:35.200 --> 01:03:39.110
transfer queues and re-enqueue
them to the original queue.

01:03:40.650 --> 01:03:43.590
So for all of this, your sender code
doesn't change, your receiver

01:03:43.640 --> 01:03:46.370
code doesn't change. Your sender
and receiver behave as if they are

01:03:46.420 --> 01:03:48.470
always talking to Service
Bus name space.

01:03:49.240 --> 01:03:54.700
Under the covers, we are creating
the transfer queues, moving

01:03:54.750 --> 01:03:57.870
the messages there and then pulling
them back out for you.

01:03:58.720 --> 01:04:03.160
And this is the only piece of
code that you have to modify.

01:04:03.740 --> 01:04:06.070
This is not the only work you have
to do. We'll talk about the

01:04:06.120 --> 01:04:08.520
considerations but this is the
only piece of code you have to

01:04:08.570 --> 01:04:13.330
modify which is that when you create
a factory, which is your

01:04:13.380 --> 01:04:17.690
runtime send and receive code
class, you pair it with a

01:04:17.740 --> 01:04:21.230
namespace. You say hey, there's a second
factory, a second namespace

01:04:21.280 --> 01:04:24.130
manager with which you want
to be paired.

01:04:24.660 --> 01:04:28.600
And everything else is done on the
client side. No sender change.

01:04:28.650 --> 01:04:31.470
No receiver change. Code
remains the same.
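
The send-availability behavior can be modeled with a toy pair of namespaces (hypothetical Python, not the actual pairing machinery in the SDK): a failed send parks the message in a transfer queue on the secondary, and a siphon re-enqueues it once the primary is back.

```python
from collections import defaultdict

class Namespace:
    """Toy namespace: queues by name, and an up/down switch."""
    def __init__(self, name):
        self.name = name
        self.up = True
        self.queues = defaultdict(list)

    def send(self, queue, message):
        if not self.up:
            raise ConnectionError(self.name + " is down")
        self.queues[queue].append(message)

class PairedSender:
    """Failed sends are parked in a transfer queue on the secondary;
    the siphon later re-enqueues them to the original queue."""
    TRANSFER = "x-transfer/"

    def __init__(self, primary, secondary):
        self.primary, self.secondary = primary, secondary

    def send(self, queue, message):
        try:
            self.primary.send(queue, message)
        except ConnectionError:
            self.secondary.send(self.TRANSFER + queue, message)

    def siphon(self):
        # Run once the primary comes back: drain the transfer queues.
        for tq in [q for q in self.secondary.queues
                   if q.startswith(self.TRANSFER)]:
            for m in self.secondary.queues.pop(tq):
                self.primary.send(tq[len(self.TRANSFER):], m)

primary, secondary = Namespace("primary"), Namespace("secondary")
sender = PairedSender(primary, secondary)
primary.up = False
sender.send("iotqueue", "m1")      # parked in the backup transfer queue
primary.up = True
sender.siphon()
print(primary.queues["iotqueue"])  # ['m1']
```

This also makes the stated caveats visible: receivers see nothing until the siphon runs, so ordering and end-to-end latency suffer while the primary is down.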

01:04:36.210 --> 01:04:41.520
Now, send availability is supported.
As you saw in the diagram,

01:04:41.570 --> 01:04:44.590
the receiver will not get the message
until the original name

01:04:44.640 --> 01:04:45.760
space comes back.

01:04:46.330 --> 01:04:49.340
So this is more for send availability.
Right now that's why

01:04:49.390 --> 01:04:54.000
we call it send availability options.
Ordering may be lost because

01:04:54.050 --> 01:04:57.910
messages which are in the transfer
queue will not show up.

01:04:58.630 --> 01:05:02.360
And then end-to-end receive latency
can of course be high.

01:05:02.410 --> 01:05:06.420
So there are some considerations
but really think of this as

01:05:06.470 --> 01:05:10.730
a key option for disaster
recovery kind

01:05:12.070 --> 01:05:14.770
of scenarios.

01:05:15.810 --> 01:05:18.710
So just to close out, we saw Azure
Service Bus can really scale

01:05:18.760 --> 01:05:21.870
in all dimensions. Lots of senders.
Lots of throughput.

01:05:21.920 --> 01:05:23.080
Lots of receivers.

01:05:23.730 --> 01:05:27.420
And you can improve reliability
both using the new features out

01:05:27.470 --> 01:05:31.950
of the box like partitioned queues
and paired namespaces, or by

01:05:32.000 --> 01:05:37.320
making your code use patterns like
async and batching.

01:05:38.100 --> 01:05:41.750
Tons of links. Tons of resources.
You have access to all of that.

01:05:41.800 --> 01:05:44.130
Thank you so much. I apologize
for running over.

