
The Future of SQL Data Services with Nigel Ellis

56 minutes, 32 seconds



I had the pleasure of sitting down with one of SQL Server’s brightest, Nigel Ellis, to discuss the future direction of SQL Data Services. Nigel goes deep on the changes to SDS. If you want to learn what’s really going on behind the scenes, this is a great place to start.

Check out Nigel's MIX session here.

Follow the SQL Data Services team blog here.


Follow the discussion

  • Is there any type of concurrency manager in the cloud?

  • 10 gigs, really? I mean, I have used 2TB databases. I think you will find people will really need at least 100 gigs. Consider something like Exchange backup, where you've got all kinds of emails and attachments being stored. Something like that NEEDS a large database cap. And the cop-out of saying you can create a bunch of little databases doesn't work, as you can't run cross-database queries.
  • Buzza, for concurrency we have the same mechanisms available to you as SQL Server. SQL Server supports optimistic (timestamps or value comparisons) or pessimistic concurrency models. The presence of the Cloud doesn't change the model at all.
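    A minimal T-SQL sketch of the optimistic model Nigel describes, using a rowversion column for the value comparison (table and column names are hypothetical):

    ```sql
    -- Hypothetical table; SQL Server updates the rowversion column automatically
    CREATE TABLE Orders (
        OrderId int IDENTITY PRIMARY KEY,
        Status  nvarchar(20) NOT NULL,
        RowVer  rowversion
    );

    -- Read the row and remember the version we saw
    DECLARE @ver binary(8);
    SELECT @ver = RowVer FROM Orders WHERE OrderId = 1;

    -- Write only if nobody changed the row since we read it
    UPDATE Orders
    SET Status = N'Shipped'
    WHERE OrderId = 1 AND RowVer = @ver;

    IF @@ROWCOUNT = 0
        PRINT 'Concurrency conflict: another writer modified the row.';
    ```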

  • I agree there are cases where large databases are required. Your example of Exchange backup (or another backup scenario) is a case for large blob data. This is something supported using Azure blob storage; we will also be investigating support for the SQL Server RBS interface, which would allow seamless storage of large blobs alongside your structured data. 10GB of structured data is a great deal of information and covers our initial target application segments, where most applications have databases on the order of < 3GB in size.

    Remember, any limits are just a starting point, not an end.


  • William Stacey (staceyw):
    Really good. Thanks Nigel, and great questions Zach. Am very happy to see the move to TDS. I never could really bite into the SOAP SDS thing when I played with it, as simple actually became complicated because the model did not allow enough flex. This new model will be great, and being able to use SSMS makes so much sense; people will just get it. I also love the seamless integration with Astoria, as it gives you a nice remote compute ability and REST. IMHO, large blob storage should be totally abstracted by the system. So a large varbinary(max) should just be stored in blob storage as needed, and the dev should not even know or care - it just appears as a varbinary and is not counted against your 10GB (but maybe as table storage). I think a natural model is pay-to-play in terms of storage above 10GB.

    I also would vote for easy tenant virtualization in terms of applications. So I make a cloud app, but I need to support X different customers. I need each tenant/customer "virtualized" (and separate) without having to update my tables and queries to support tenant IDs. All my queries "route" to the proper virtual db based on login id. This would abstract unneeded complexity, I think.
  • For all of the mortals out there (like myself), here is a link to some information about SQL Server 2008 concurrency, which essentially deals with how SQL Server handles locking.

    Buzza, is there a particular situation unique to the Cloud that you are concerned about related to Concurrency?

  • Nigel,
    10GB of "pure" data may sound like a reasonable starting point, but if the data are moderately indexed, the space would run out much-much faster.

    Are you planning to charge for the total consumed space (data and indexes), or table pages only?

    Are you planning to offer space compression by any chance?

  • One other question. How will (if it will) the filestream type work? Normally this stores data outside the database in normal NTFS. In the cloud, would you have some kind of Azure Blob Storage that would store these files? Would they count toward the 10GB max on SDS?
  • > Are you planning to charge for the total consumed space (data and indexes), or table pages only?
    The exact billing model is still under discussion. The size caps would apply to the physical database size, so it would include all indexes defined.

    > Are you planning to offer space compression by any chance?
    This is not currently in scope for our initial release.
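    As a practical aside on sizing: SQL Server's built-in sp_spaceused reports reserved space including indexes, which is a quick way to see how much of a cap a database would actually consume; since SDS is moving to TDS, a check like this should carry over (the table name below is hypothetical):

    ```sql
    -- Database-wide totals; "reserved" includes data pages and all indexes
    EXEC sp_spaceused;

    -- Per-table breakdown: data pages vs. index pages
    EXEC sp_spaceused N'dbo.Orders';
    ```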


  • >  How will (if it will) the filestream type work.
    FILESTREAM (see http://technet.microsoft.com/en-us/library/bb933993.aspx) will not be supported in SDS v1. There are some unique challenges with supporting this in our SDS cluster environment. We are, however, considering building a SQL RBS provider (see http://msdn.microsoft.com/en-us/library/cc905212.aspx) that would allow you to store blob data within Windows Azure and manage link-level consistency from within SQL Data Services.
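    For contrast, a hedged sketch of the two shapes being discussed here: the on-premises FILESTREAM declaration that SDS v1 will not support, and a link-style alternative that keeps the blob in Azure storage (all table and column names are hypothetical):

    ```sql
    -- On-premises SQL Server 2008 FILESTREAM column (not supported in SDS v1)
    CREATE TABLE Documents (
        DocId   uniqueidentifier ROWGUIDCOL NOT NULL UNIQUE,
        Content varbinary(max) FILESTREAM
    );

    -- A portable alternative for SDS: keep the blob in Azure blob storage
    -- and store only a link alongside the structured data
    CREATE TABLE DocumentLinks (
        DocId   uniqueidentifier NOT NULL PRIMARY KEY,
        BlobUri nvarchar(400) NOT NULL  -- URI of the blob in Azure storage
    );
    ```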



