If I understand correctly, managed disks are billed for the provisioned size, rounded up to the next tier (32, 64, 128, 512 or 1024 GB). Why would you create anything other than the full size you are paying for?
With unmanaged disks you paid for data stored, so it made more sense to restrict disk size in the hope that it compelled some staff to tidy up after themselves.
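The round-up-to-tier billing described above can be sketched as a tiny helper. This is only an illustration: the tier list is just the sizes mentioned in this comment, and the function name is made up; check current Azure pricing for the real tiers.

```python
# Tiers as listed in the comment above; the real Azure tier list may differ.
MANAGED_DISK_TIERS_GB = [32, 64, 128, 512, 1024]

def billed_size_gb(provisioned_gb: int) -> int:
    """Return the tier you are actually billed for: the smallest
    tier that fits the provisioned size."""
    for tier in MANAGED_DISK_TIERS_GB:
        if provisioned_gb <= tier:
            return tier
    raise ValueError(f"{provisioned_gb} GB exceeds the largest tier")

print(billed_size_gb(100))  # 128 -> a 100 GB disk pays the 128 GB rate
```

Which is the point of the comment: a 100 GB disk and a 128 GB disk cost the same, so you may as well provision 128 GB.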
This seems like a huge step backward. The "numbered migrations" approach has all sorts of limitations that Database Projects solved. In particular:
The ability to compensate for "drift" in production systems, e.g. changes made to production outside the release process that cause the new release scripts to fail. (Not supposed to happen in a rigorous DevOps shop, but it does.)
The ability to detect that some changes make older migrations redundant, e.g. migration 10 adds a column, migration 12 indexes the column, migration 31 drops the table as it is no longer required. (With numbered migrations you still have to execute all of them, even though only the last change is needed.)
Using the query parser of a real SQL Server instance to validate & generate the correct SQL for the target (retail SQL Server or Azure SQL Database).
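The redundancy point can be made concrete with a toy squashing pass. The data shape and function name here are hypothetical, not any real tool's format; the point is that a later DROP TABLE makes every earlier migration touching that table dead weight, yet a numbered-migration runner executes them all anyway.

```python
# Each migration: (number, operation, table). Hypothetical shape for illustration.
migrations = [
    (10, "ADD COLUMN", "Widgets"),
    (12, "CREATE INDEX", "Widgets"),
    (31, "DROP TABLE", "Widgets"),
]

def live_migrations(migrations):
    """Filter out migrations on tables that a later migration drops.
    A state-based tool (like a Database Project) gets this for free,
    because it only ever deploys the final desired state."""
    dropped_after = {}  # table -> number of the migration that drops it
    for number, op, table in migrations:
        if op == "DROP TABLE":
            dropped_after[table] = number
    return [m for m in migrations
            if m[1] == "DROP TABLE" or m[0] > dropped_after.get(m[2], -1)]

print(live_migrations(migrations))  # only (31, "DROP TABLE", "Widgets") survives
```

(The sketch assumes the table was created before migration 10; if the table were created inside the migration chain, even the drop would be redundant.)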
Don't get me wrong, Redgate makes some handy tools, but the "migrations" pattern common in most ISV tools is vastly inferior & I'm concerned that MSFT might stop developing Database Projects because the dev team incorrectly assumes they have a new product that fits the DBA DevOps need.
It is handy to see the issues by version, so you can tell how many versions you could upgrade a database through before running into issues.
That said, the complexity often arises when a solution comprises multiple SQL Servers. The upgrade then has dependencies on a mirrored cluster with log shipping, replicated to a reporting / DW server, with some distributed queries & Service Broker in the mix as well. The order in which you upgrade becomes vital if you don't want to rebuild everything.
Personally I find the older-style Azure portal much easier to use & find things in than the "New Look" Azure portal.
I'd also like to have the results export directly to a SQL database table of my choosing, rather than loading into Excel & importing.
I'd also hope you follow the lead of the Best Practices Analyzer in Windows Server's Server Manager tool. When it displays an error, I get a link to a web page that gives me background on the issue & step-by-step instructions on how to fix it, often with either screenshots or PowerShell commands.
I would like this feature to be optimized for bandwidth-sensitive sites too. Would it be possible to skip a backup if nothing has changed?
Example: full backup daily, log backup scheduled every 2 hours. But this is a 9-5, Mon-Fri operation; outside those hours no data modification happens, but maybe an occasional query.
Could you check the LSN against the LSN of the prior backup and just skip the operation if it's not required? Clearly it's not a huge win for log backups, but it would reduce clutter in the tables that track the backups taken, and avoiding the unnecessary backups on Sat & Sun alone could save roughly 28.6% (2 days in 7) of bandwidth & storage.
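The skip logic could be as simple as comparing LSNs. This is a minimal sketch: the function names and integer LSNs are illustrative, not SQL Server's API; on a real instance you might compare something like `last_log_backup_lsn` from `sys.database_recovery_status` against the LSN recorded for the previous backup.

```python
def should_back_up(current_lsn: int, prior_backup_lsn: int) -> bool:
    """Back up only if the LSN has advanced, i.e. at least one log
    record was written since the prior backup."""
    return current_lsn > prior_backup_lsn

def weekend_saving(active_days: int = 5, total_days: int = 7) -> float:
    """Fraction of weekly backup volume avoided by skipping idle days
    entirely (Sat & Sun in the example above)."""
    return (total_days - active_days) / total_days

print(should_back_up(987654, 987654))  # False -> skip Saturday's backup
print(f"{weekend_saving():.1%}")       # 28.6% of weekly backup volume
```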
As much as we all like to get excited about 24x7 systems, there are still a large number of systems that are only active during office hours. Similarly, there are tons of reporting & DW systems that only change at night.
Nice, seems we've come full circle. Like SQL 2000's SQL Notification Services, it uses SQL queries for rapid development, but takes advantage of the Azure framework to overcome the clunky setup that plagued SQL NS.
Constraining the images to be contained in an image library seems quite a limitation. Personal libraries lock you into a single user; public libraries solve that issue but are limited to a single drive, often the C drive, which is often quite full. It would be much more flexible to have an option to point to any file location; that way I could access images from a large, cheap external USB drive or a file server.
Bye Nathan. We'll miss you. I'll look forward to the new enhancements you'll be involved in making to the Azure platform.
PS: It would be REALLY nice if you could get some kind of event notification / event handler feature on Azure Storage queues. Polling queues sucks. SQL's Service Broker is a significantly more efficient model & much easier to program against too.
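For contrast, here is the shape of the loop consumers are forced to write today. Everything here is a made-up sketch (not the Azure SDK): an idle-backoff polling loop either burns billed storage transactions or adds latency while the queue is empty, which is exactly what a push model like Service Broker activation avoids.

```python
import time

def poll_queue(receive_message, handle,
               idle_sleep=1.0, max_sleep=32.0, stop=lambda: False):
    """Polling pattern: check the queue, back off while it is empty.
    Every empty receive is a wasted (billed) request; every sleep adds
    latency before the next message is seen."""
    sleep = idle_sleep
    while not stop():
        msg = receive_message()
        if msg is None:
            time.sleep(sleep)                  # idle: pay in latency...
            sleep = min(sleep * 2, max_sleep)  # ...and back off to limit cost
        else:
            handle(msg)
            sleep = idle_sleep                 # traffic resumed: reset backoff
```

With an event-handler model, `handle` would simply be invoked when a message arrives and the whole loop disappears.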