One of the core requirements of being an active DBA is understanding the backup features available for your environments. The need to take backups has existed since relational databases were invented, yet we rarely see enhancements in this capability because it is so fundamental. With SQL Server 2016, I was pleasantly surprised to see a number of interesting enhancements in this space. In this blog, I thought it would be great to bring out some of the salient backup features that have been enhanced.
Extension to Managed Backups
Managed backups were introduced in SQL Server 2014. It was an amazing feature: we could simply point a database at a blob location and SQL Server would automatically take periodic backups there. As a DBA, you could sleep in peace knowing SQL Server was taking the backups behind the scenes. In SQL Server 2016, managed backups get a boost, and at a high level the enhancements include:
- Backup support for system databases
- Backup support for databases using the simple recovery model
- Additional customization options for backup schedules
I personally felt these were powerful additions to the backup toolkit.
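As a rough sketch of how managed backup is enabled, the configuration lives in msdb; the database name, storage account, and retention value below are placeholders, not values from this post, and a SQL credential for the container must already exist:

```sql
-- Hypothetical database and container URL; substitute your own.
USE msdb;
GO
EXEC managed_backup.sp_backup_config_basic
     @enable_backup  = 1,
     @database_name  = 'SalesDB',
     @container_url  = 'https://mystorageaccount.blob.core.windows.net/backups',
     @retention_days = 30;
GO
```

The new schedule customization is exposed through managed_backup.sp_backup_config_schedule, which lets you switch from the system-managed schedule to a custom one.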
Backup to Block Blobs
Before we get into the specifics, it is important to understand what the blob concept is. As per the documentation:
Azure Blob storage is a service for storing large amounts of unstructured data, such as text or binary data, that can be accessed from anywhere in the world via HTTP or HTTPS. You can use Blob storage to expose data publicly to the world, or to store application data privately. The storage service offers two types of blobs, block blobs and page blobs. You specify the blob type when you create the blob. Once the blob has been created, its type cannot be changed, and it can be updated only by using operations appropriate for that blob type, i.e., writing a block or list of blocks to a block blob, and writing pages to a page blob.
SQL Server 2014 introduced the concept of backup to URL, wherein we could take a backup of a database to page blobs on Azure. Though this was quite handy, extending it to block blobs was on the cards. However, a page blob can be as large as 1 TB, whereas a block blob's maximum size is only 200 GB. To work around this limit, SQL Server 2016 lets you stripe a backup across multiple block blobs (up to 64 stripes), and you can select multiple backup files during the restore operation.
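A striped backup to block blobs can be sketched like this; the storage account, container, and database names are placeholders, and a SAS credential for the container is assumed to exist already:

```sql
-- Illustrative only: each URL below becomes one stripe of the backup set.
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe1.bak',
   URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe2.bak',
   URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe3.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB transfer size, commonly used for block blobs
     BLOCKSIZE = 65536;
```

Adding more URLs adds more stripes, which is how backups larger than a single block blob's 200 GB limit become possible.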
In addition, SQL Server 2016 integrates authentication with the Active Directory Authentication Library, moving away from publishing profiles and certificates. Striping also provides the opportunity to achieve more throughput during backup operations: I/O requests can now be spread across multiple files, reducing the chance of being throttled at the Azure Storage layer, which can happen when many I/O requests target a single file.
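Block blob backups in SQL Server 2016 authenticate with a Shared Access Signature (SAS) credential rather than the storage account key. A minimal sketch, with a placeholder account and SAS token:

```sql
-- The credential name must match the container URL exactly.
-- The SECRET below is a placeholder; generate a real SAS token for your account.
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/backups]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<your-sas-token>';
```

When a BACKUP ... TO URL statement targets that container, SQL Server finds the matching credential by name and uses the SAS token to authorize the writes.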
Some of the added advantages include:
- Backup/restore to/from striped backup sets. Backup to URL on page blobs does not support striping; this improvement reintroduces the ability to stripe a backup set, allowing backups of up to 12.8 TB (64 stripes of up to 200 GB each).
- Backup to block blobs is backward compatible: backups that were taken via backup to URL on page blobs can still be restored.
- This backup feature is integrated with the existing SQL XI implementation, except that backups use block blobs instead of page blobs.
- At this point in time, appending to an existing backup set is not supported.
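Restoring from a striped backup set simply lists every stripe; the URLs below are placeholders matching a hypothetical three-stripe backup:

```sql
-- Every stripe that made up the backup set must be listed.
RESTORE DATABASE SalesDB
FROM URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe1.bak',
     URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe2.bak',
     URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB_stripe3.bak'
WITH REPLACE;  -- overwrite the existing database, if present
```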
Though this does not cover all the improvements introduced in SQL Server 2016, I thought it was a good starting point for understanding what is available in the next release. I am interested in knowing how many of you currently use backup to URL or to blobs in your environments. How useful has it been for you? Do you think some of these enhancements will help you? Do let me know via the comments.