Nice talk, which covered the high-level use case of this technology.
I'd also like to compliment you on your prior talk on Data Exposed, "SQL Server 2014 In-memory OLTP Logging", shown on 2014-06-11. We need more of those talks. A really detailed walk-through of how the internals work is very hard to find & immensely valuable, especially as we know it is accurate when presented by the person who designed it.
Please do more of the internals talks, especially on SQL Server 2016.
I'm concerned about the inability to control the names of the indexes you create. If I script it to control the index name, I lose the benefit of the Index Wizard. Please consider providing some UI where the DBA has some control over the index that you create, at minimum an "advanced" option where we can a) change the name, and b) turn on a "Unique" property. Clearly a wizard can't recommend a unique index, but a DBA will have that business insight, significantly improving the benefit of the index to the optimiser.
Why does the name matter? Many companies have naming standards and/or run automated metadata/schema documentation tools, which expose the names of indexes & constraints. Predictable names also make it easier to create automated scripts that anticipate the correct index name, and prevent the creation of multiple identical indexes. Those whose database schema is in source code and/or a DB project will need to import these new indexes into their project.
Interesting, but I'm concerned that it doesn't address the very common scheduling / time-period-overlap style of query. Example 1: multiple date columns in a row, matched against some filter period. An Order record has fields: OrderPlacedDate, ManufStartDate, ManufCompleteDate, OrderShippedDate, OrderReceivedDate, OrderPaidDate, etc.
Query 1: At July 1, show the value & count of active orders that had not started manufacture, were being manufactured, were waiting for transport, were in transit, were expecting payment, or were 90 days overdue.
Query 2: List the orders that were in manufacture at some time in the period Mon 4th - Fri 8th May 2015.
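For what it's worth, Query 2 boils down to the standard interval-overlap predicate: an order overlaps the filter period when its start is on or before the period end and its end (if any) is on or after the period start. A minimal sketch, using SQLite via Python purely for illustration; the table and column names follow the example above and the sample rows are made up:

```python
import sqlite3

# Hypothetical Orders table; column names follow the example in the comment above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Orders (
        OrderId INTEGER PRIMARY KEY,
        OrderPlacedDate TEXT,
        ManufStartDate TEXT,
        ManufCompleteDate TEXT
    )""")
conn.executemany(
    "INSERT INTO Orders VALUES (?, ?, ?, ?)",
    [
        (1, "2015-04-20", "2015-05-03", "2015-05-06"),  # in manufacture during 4-8 May
        (2, "2015-04-25", "2015-05-07", None),          # started, not yet complete
        (3, "2015-04-28", None, None),                  # manufacture not started
    ],
)

# "In manufacture sometime during Mon 4th - Fri 8th May 2015":
# start <= period_end AND (end IS NULL OR end >= period_start).
rows = conn.execute(
    """
    SELECT OrderId FROM Orders
    WHERE ManufStartDate <= :pend
      AND (ManufCompleteDate IS NULL OR ManufCompleteDate >= :pstart)
    ORDER BY OrderId
    """,
    {"pstart": "2015-05-04", "pend": "2015-05-08"},
).fetchall()
result = [r[0] for r in rows]
print(result)  # → [1, 2]
```

Order 3 falls out naturally because a NULL ManufStartDate fails the comparison; the open-ended order 2 is kept by the IS NULL branch.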
Example 2: Comparing overlapping periods between two entities. Two tables in a hospital. PatientAdmissions: AdmissionDate, DischargeDate, Condition (Cancer | Burns | etc). DoctorRoster: ShiftStart, ShiftEndDateTime, Specialty (Cancer | Burns | etc).
Query 1: Show a list of Burns patients & Burns specialists in the hospital at noon, 25 Apr 2015. Query 2: Show the count of Burns patients & the count of Burns specialists, grouped by shift, over the past 90 days. The objective is to discover whether we have sufficient clinicians to provide adequate care to a patient type.
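Query 1 here is the point-in-time variant: an instant t falls inside an interval when start <= t < end, with a NULL DischargeDate meaning "still admitted". A hedged sketch of both lookups, again in SQLite with invented sample rows, using the table and column names from the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE PatientAdmissions (
        PatientId INTEGER, AdmissionDate TEXT, DischargeDate TEXT, Condition TEXT);
    CREATE TABLE DoctorRoster (
        DoctorId INTEGER, ShiftStart TEXT, ShiftEndDateTime TEXT, Specialty TEXT);
    INSERT INTO PatientAdmissions VALUES
        (1, '2015-04-24 09:00', '2015-04-27 10:00', 'Burns'),
        (2, '2015-04-25 14:00', NULL,               'Burns'),   -- admitted after noon
        (3, '2015-04-23 08:00', '2015-04-26 11:00', 'Cancer');
    INSERT INTO DoctorRoster VALUES
        (10, '2015-04-25 08:00', '2015-04-25 16:00', 'Burns'),
        (11, '2015-04-25 16:00', '2015-04-26 00:00', 'Burns');
""")

at = "2015-04-25 12:00"  # noon, 25 Apr 2015
burns_patients = conn.execute("""
    SELECT COUNT(*) FROM PatientAdmissions
    WHERE Condition = 'Burns'
      AND AdmissionDate <= :at
      AND (DischargeDate IS NULL OR DischargeDate > :at)
""", {"at": at}).fetchone()[0]
burns_doctors = conn.execute("""
    SELECT COUNT(*) FROM DoctorRoster
    WHERE Specialty = 'Burns'
      AND ShiftStart <= :at AND ShiftEndDateTime > :at
""", {"at": at}).fetchone()[0]
print(burns_patients, burns_doctors)  # → 1 1
```

Query 2 (grouping by shift over 90 days) is the same containment predicate joined against DoctorRoster and aggregated per shift; the concern in the comment stands, since neither shape is what column-store/temporal features optimise for out of the box.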
Nice to see you are catering for the "Justin B" scenario. Inevitably you get a long tail, where most of your customers fit on one or two databases and, say, 10% of your bigger customers require 90% of your servers. Fortunately they often generate 90% of your profits too.
The key to this is getting your primary key definitions correct, especially when it comes to using identity keys.
Hey Daniel, I disagree with your sentiment that environment variables will reduce the need to have alternate Web.config files. Typically when we deploy to the web, the complete VM is constructed from scratch, and dealing with environment variables would be a real pain. Please give some thought to solving this scenario.
The developer's PC. Each developer has their own "local" values. Perhaps some have a local copy of SQL (LocalDB); others may point to SQL in a separate VM. (The DBA devs typically test with much more data.)
Deployment to a DEV or Integration server, as part of a CI process. (Debug build, with many perf counters.)
Deployment to Cloud (UAT). Maybe a Debug build.
Deployment to Cloud Staging & Production. Release build.
Currently this is done with VS build configs and/or SlowCheetah.
Security: config variables often hold email & DB passwords (as we can't use AD). Please ensure we can have a config hook that points to a separate "secure" file.
It would be great if both these processes were first class citizens in the new environment.
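To make the concern concrete, here is one way the layering could work. This is a hypothetical sketch only; none of the names come from ASP.NET or any real config system. It shows base per-environment values, a separate "secure" file for passwords, and environment variables as a last-resort override rather than the primary mechanism:

```python
import os

def resolve_config(base, secure=None, environ=None, prefix="APP_"):
    """Merge base config, optional secure-file overrides, then env-var overrides.
    `base` and `secure` stand in for parsed config files (e.g. per-environment
    Web.config transforms and a protected secrets file kept out of source control)."""
    environ = os.environ if environ is None else environ
    config = dict(base)
    if secure:
        config.update(secure)            # secrets from the separate "secure" file win
    for key in list(config):
        override = environ.get(prefix + key.upper())
        if override is not None:
            config[key] = override       # an environment variable wins if present
    return config

# Hypothetical names/values for illustration only.
cfg = resolve_config(
    base={"db_server": "(localdb)\\v11.0", "db_password": "dev-only"},
    secure={"db_password": "s3cret-from-secure-file"},
    environ={"APP_DB_SERVER": "sql-uat.example.net"},
)
print(cfg)  # → {'db_server': 'sql-uat.example.net', 'db_password': 's3cret-from-secure-file'}
```

The point of the sketch is that the file-based layers carry the per-scenario values (developer PC, CI, UAT, Production), so a freshly built VM needs nothing pre-seeded in its environment.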
Also check out Application Insights on Visual Studio Online. What is currently exposed in the Azure Portal is a tiny fraction of its capability. You can instrument your code to expose any metrics you desire, including A/B testing: exactly what did they click on & how long did they look before clicking, the average price of each order, etc.
It also gives you a complete stack trace & can link it to a specific build of your code, to help you step-through debug your production code even if your developers are working on a much later build.
@Dave: Also, if you do download them & play them in Windows Media Player, it is then possible to change the playback speed. You can either save time by increasing the speed through material you know or are following easily, or really slow it down if you want to type along with it.
Please consider adding a 5th session on this that covers recommendations on efficient ways to read from Azure queues. Some areas:
1. Polling vs event-driven notification?
SQL Server Service Broker will activate a queue reader when a message arrives, ensuring all messages are processed with minimal latency. Is there something similar?
Azure Queues only seem to offer polling, with some guidelines to poll less frequently if the queues are empty. The problem with that is massive latency in off-peak times, especially if the work needs to pass through a number of queues before returning a result to the user.
Another issue is cost. Each poll counts as an Azure Storage I/O. Multiple instances from multiple roles, each polling multiple different queues, can rapidly turn into a noticeable cost, even if few messages were actually processed for most of the night.
(Assume here that the queues are being used to distribute work (load-balance) between multi-instance roles in an Azure cloud solution.)
2. If not events, is it possible to do some "long polling" on the queues to achieve a similar effect?
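Absent server-side notification or long polling, the usual client-side workaround for both the latency and cost concerns above is an exponential back-off poll loop: poll eagerly while messages are flowing, and stretch the interval while the queue stays empty to cap the per-night transaction count. A sketch of the pattern only, in plain Python with a stubbed `receive` function rather than the actual Azure SDK:

```python
import time

def poll_with_backoff(receive, handle, min_wait=0.5, max_wait=30.0, max_idle_polls=None):
    """Repeatedly call receive() (returns a message or None).
    On a hit: process it and reset to the eager polling interval.
    On a miss: sleep, then double the interval up to max_wait.
    max_idle_polls caps consecutive empty polls (here, just so the demo ends)."""
    wait = min_wait
    idle = 0
    polls = 0
    while max_idle_polls is None or idle < max_idle_polls:
        msg = receive()
        polls += 1
        if msg is not None:
            handle(msg)
            wait = min_wait          # traffic resumed: snap back to eager polling
            idle = 0
        else:
            idle += 1
            time.sleep(wait)         # empty poll: wait longer before the next one
            wait = min(wait * 2, max_wait)
    return polls

# Demo with a stubbed in-memory "queue" (hypothetical messages).
backlog = ["msg-1", "msg-2"]
handled = []
polls = poll_with_backoff(
    receive=lambda: backlog.pop(0) if backlog else None,
    handle=handled.append,
    min_wait=0.001, max_idle_polls=3,
)
print(handled, polls)  # → ['msg-1', 'msg-2'] 5
```

This trades worst-case latency (bounded by max_wait) against cost; it doesn't help the multi-hop-queue latency problem in off-peak hours, which is exactly why a Service-Broker-style activation model would be preferable.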