I bet they could do just that; as was posted, a lot of the stuff on Azure is probably way less than 50 megs.
I know how you feel; I have a DB that will need its old data moved to a second server soon. It's 100 gigs and growing...
They can't do this. The cap is set up because of the way they handle replicating the data for you. Of course, there are other companies and hosting services that could let you put this data up *in the cloud*, but Azure maintains at least three copies of your data and handles a variety of automatic replication and backup services; to do that, they cap the size of the database to ensure they can provide the appropriate level of service to all of their customers.
Database sharding/partitioning is one way to take a multi-TB database and spread it across many 50 GB database instances...
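To make the idea concrete, here is a minimal sketch of one common sharding approach: routing each row to one of several smaller databases by hashing its key. The shard names and the routing function are illustrative, not any Azure API; a real setup would also need to handle rebalancing and cross-shard queries.

```python
import hashlib

# Hypothetical list of smaller database instances, each staying under the cap.
SHARDS = ["customers_shard_0", "customers_shard_1", "customers_shard_2"]

def shard_for(key: str) -> str:
    """Deterministically pick the shard that owns a given row key."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every lookup for the same key always lands on the same shard.
print(shard_for("customer-12345"))
```

The application (or a routing layer in front of it) calls something like `shard_for` on every read and write, so no single database ever has to hold the full multi-TB dataset.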
A question, though: what is in this database that is taking up so much space? If it is just regular records, then sharding/partitioning is probably the route you want to go; but if you are storing FILESTREAM or blob data, you could look at pushing that data into other storage systems (blob storage, for example).
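The blob-offloading pattern is simple enough to sketch: write the large bytes to an external store and keep only a small reference in the relational row. Here a plain dict stands in for a blob service, and the table/column names are made up for illustration; with a real service you would swap the dict for its upload call.

```python
import hashlib

blob_store = {}   # stand-in for an external blob storage service
rows = []         # stand-in for a relational table

def save_document(name: str, payload: bytes) -> None:
    """Store the payload externally; the DB row keeps only a small key."""
    key = hashlib.sha256(payload).hexdigest()
    blob_store[key] = payload                      # big bytes live outside the DB
    rows.append({"name": name, "blob_key": key})   # row stays a few hundred bytes

save_document("invoice.pdf", b"%PDF-1.4 (large file contents...)")
```

With this split, the database only grows with the number of documents, not their size, which makes a 50 GB cap much easier to live with.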