In case you don’t already have enough initialisms in your life – IIRC, IRA, PCB, NASDAQ, NFL, NHL, NBA, NASCAR, ad nauseam, ad infinitum – let’s talk some MSSQL! In this post we’ll touch on RDBMS, WAL and ACID. Then we may use SSMS and write some TSQL. All in the service of an introduction to a new SQL 2014 feature, Delayed Durability (yes, it’s all spelled out; you’ve got to draw the line somewhere).
RDBMS stands for Relational Database Management System. You are probably familiar with the concept; SQL Server is an RDBMS. The important thing to note is that the idea of the RDBMS comes out of Dr. E.F. Codd’s work in 1969, after which he went on to set down his rules of RDBMS (commonly called Codd’s 12 rules, though they are numbered 0 through 12, so really 13). These rules help define an RDBMS (in part so no vendor could make false claims about their system being one). It’s an interesting read, but even if you just scan through you’ll note the use of the word transaction. Of course, for any data system there has to be some way to get data in, change data, and get data out, and a transaction is just that – the unit of work that wraps the INSERTs, UPDATEs and DELETEs. Which brings us to…
ACID stands for Atomic, Consistent, Isolated, and Durable, and it is these properties, defined by Jim Gray in the ‘80s, that describe the ideal rules for a unit of work applied against an RDBMS. I’m working on a series of posts that examines how SQL Server complies with, or works around, each of the ACID properties via concurrency control, constraints, error handling, and – for lack of a better term – memory features. Obviously, for this post we’ll be discussing durable, but let’s quickly overview them all:
Atomic – all or nothing; if one part of a transaction fails, the whole thing fails.
Consistent – when a transaction changes data within an RDB, the end state of the data must adhere to the same rules as the original data. For example, you may not update a date column with a street name.
Isolated – in theory, transactions are executed one at a time, serially, so that no two transactions can work on the same data at the same time. In practice, it need only appear this way and that’s where transactional concurrency comes into play.
Durable – once a transaction commits, its changes are set in stone. Here is the Wiki quote:
“…once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.”
The final phrase here – recorded in a non-volatile memory – is the important one. How does SQL Server achieve this Durability? Well, that would be …
WAL stands for Write Ahead Logging. It would be way, way, way too slow (and complicated) to write each change to RDBMS data to disk as it occurred. So the data pages are maintained in the data buffer pool and periodically flushed to disk in a big batch. This is much more efficient, but breaks the durability rule since, if the server crashes or the power goes out, the data changes recorded only in memory at that time are lost. So, to achieve durability, on commit each transaction (to be exact, a description of each transaction) is written to disk in the transaction log file. Crash or power outage before the data in the buffer pool is written to disk? No problem, that change is recoverable since it has been hardened onto disk within the transaction log file. With write ahead logging in effect, durability is achieved. You can see the drawback though: now we have returned to tediously writing to disk for each change as it occurs. And, especially for a database with high transaction rates, that can be a throughput bottleneck.
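You can actually see those per-commit log flushes accumulating in the file I/O stats DMV. A quick sketch (it assumes the conventional log file_id of 2 and reuses the TESTDB name from later in this post):

```sql
-- Writes to the log file are largely the per-commit flushes WAL requires;
-- io_stall_write_ms shows the time spent waiting on them.
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       num_of_writes,
       io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'TESTDB'), 2);
```

Numbers are cumulative since the database came online, so sample before and after a workload to see the delta.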
What if you could save up those WAL writes to the transaction log in memory and dump them to disk in one big batch, similar to what is going on with the data buffer pool? Sounds good, but again, that is trading durability for performance. Definitely not 100% durability… yet still… potentially more durable than without WAL. And what if that durability-for-performance exchange is worth the risk?
In short, this is what Delayed Durability does; it eliminates or reduces the WAL disk-write bottleneck, but with the trade-off that a committed transaction may be lost if something “bad” happens. If you have a highly transactional system with high WRITELOG waits indicating a performance bottleneck on the transaction log, and you are OK with the potential of losing a transaction or two, or three, etc. (or, better yet, you can manage durability at the app layer), then Delayed Durability might just be for you!
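A quick way to check whether WRITELOG waits are a problem on your system is the wait-stats DMV (cumulative since the last restart or stats clear):

```sql
-- High wait_time_ms relative to waiting_tasks_count suggests commits
-- are queuing behind transaction log flushes.
SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'WRITELOG';
```

As with any wait-stats analysis, compare WRITELOG against your other top waits before concluding the log is the bottleneck.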
And more good news! It is a Standard Edition feature.
One cool trick about Delayed Durability is that you enable it at the database level, but you can control it on the more granular level of a transaction itself. To enable, but not turn on, delayed durability for an entire database, use some TSQL like this:
ALTER DATABASE [TESTDB] SET DELAYED_DURABILITY = ALLOWED
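For reference, the other two settings use the same syntax (DISABLED is the default):

```sql
-- Every commit in the database becomes delayed durable, no per-transaction syntax needed:
ALTER DATABASE [TESTDB] SET DELAYED_DURABILITY = FORCED;

-- Back to the fully durable default:
ALTER DATABASE [TESTDB] SET DELAYED_DURABILITY = DISABLED;
```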
You could also force all transactions to use Delayed Durability by setting FORCED. This could be handy for – say – a dedicated loading database that is part of an ETL process. However, sticking with our ALLOWED example above, once enabled, we can commit just specific transactions with delayed durability without compromising the durability of other, perhaps more critical, transactions:
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)
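In context, a delayed-durable commit looks like this (the dbo.EventLog table here is hypothetical, standing in for any low-value, high-volume write):

```sql
BEGIN TRANSACTION;

    -- High-volume, low-value write; losing one on a crash is acceptable.
    INSERT INTO dbo.EventLog (EventTime, Detail)
    VALUES (SYSDATETIME(), N'widget clicked');

-- With delayed durability, this COMMIT returns as soon as the log record
-- is in the in-memory log buffer; it does not wait for the flush to disk.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);
```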
How can we find the needle-in-the-haystack transactions where a Delayed Durability commit might help? I suggest an Extended Events session where the wait type of each transaction can be captured. More on that idea in a future post.
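As a teaser, such a session might hang off the sqlos.wait_info event. One wrinkle: the predicate compares against a numeric map key for WRITELOG that varies by build, so look it up first (the 181 below is a placeholder, not a real value):

```sql
-- Find this build's map key for the WRITELOG wait type:
SELECT map_key
FROM sys.dm_xe_map_values
WHERE name = N'wait_types' AND map_value = N'WRITELOG';

-- Sketch of a session capturing WRITELOG waits with the statement text:
CREATE EVENT SESSION [WritelogWaits] ON SERVER
ADD EVENT sqlos.wait_info (
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE wait_type = 181  -- placeholder: use the map_key returned above
)
ADD TARGET package0.event_file (SET filename = N'WritelogWaits');
```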