SQL Server: In-Memory and Durable. How is That Possible?

By Pinal Dave | In Blog, Database Administration, SQL Server | February 10th, 2015

One of the major myths about SQL Server In-Memory OLTP is that since the data resides in memory, it would be lost once SQL Server is restarted. Well, as we mentioned – it is a myth. In reality, SQL Server guarantees that changes made to durable tables are permanent and can survive a database or SQL Server restart.

To understand the durability of data in SQL Server, we need to recall the concept of transaction logging. Whenever a modification is made to a disk-based table or a durable in-memory table, it generates one or more log records. These log records are hardened to the transaction log file (also known as the T-Log or LDF file) before SQL Server confirms the commit to the client application. Notice that we said “durable” in-memory table. This is because SQL Server In-Memory OLTP also supports non-durable in-memory tables. From this point onwards, “in-memory” should be read as “durable in-memory table,” unless otherwise specified.

There are some differences between disk-based table logging and in-memory table logging. For a disk-based table, both committed and uncommitted changes are logged. For an in-memory table, only committed changes are written to the transaction log, and the format of the log records is different as well.

In-Memory by code

Have a look at a demo to understand the logging performed by In-Memory OLTP. First, create a database that contains an in-memory table.
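The original code listing is not reproduced here, so the following is a minimal sketch of such a script. The database name (InMemoryDemo) and the file paths are placeholder assumptions, not from the original post:

```sql
-- Sketch only: database name and file paths are hypothetical.
CREATE DATABASE InMemoryDemo
ON PRIMARY (
    NAME = N'InMemoryDemo_Data',
    FILENAME = N'C:\Data\InMemoryDemo_Data.mdf'
),
-- The filegroup below is what enables memory-optimized tables.
FILEGROUP InMemoryDemo_FG CONTAINS MEMORY_OPTIMIZED_DATA (
    NAME = N'InMemoryDemo_InMem',
    FILENAME = N'C:\Data\InMemoryDemo_InMem'  -- a folder, not a file
)
LOG ON (
    NAME = N'InMemoryDemo_Log',
    FILENAME = N'C:\Data\InMemoryDemo_Log.ldf'
);
```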

The key to the above code is the In-Memory OLTP filegroup declared with CONTAINS MEMORY_OPTIMIZED_DATA. Now, we will create a table in this database.
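Again, a hedged sketch rather than the original listing; the table and column names are hypothetical. DURABILITY = SCHEMA_AND_DATA (the default) is what makes the table durable:

```sql
USE InMemoryDemo;
GO
-- Memory-optimized tables require a PRIMARY KEY with a
-- NONCLUSTERED or HASH index; bucket count is an assumption.
CREATE TABLE dbo.MyFirstMemoryOptimizedTable (
    ID   INT NOT NULL
         PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Name VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```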

Let’s insert 100 rows into the above table in a single transaction.
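A sketch of the insert loop, assuming the hypothetical table above. The single explicit transaction around the loop is what causes all 100 inserts to land in one log record later:

```sql
DECLARE @i INT = 1;
BEGIN TRANSACTION;
WHILE @i <= 100
BEGIN
    INSERT INTO dbo.MyFirstMemoryOptimizedTable (ID, Name)
    VALUES (@i, CONCAT('Row-', @i));
    SET @i += 1;
END
COMMIT TRANSACTION;
```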

Now, we can read the transaction log to examine the logging that was performed.
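This can be done with the undocumented sys.fn_dblog function (column names below are the ones it commonly returns; exact output varies by SQL Server build):

```sql
-- NULL, NULL means "from the start to the end of the active log".
SELECT [Current LSN], Operation, [Log Record Length]
FROM sys.fn_dblog(NULL, NULL)
ORDER BY [Current LSN];
```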

See operation “LOP_HK” (Hekaton operation) in SQL Server In-Memory

Here we can see the operation “LOP_HK,” in which HK stands for Hekaton. Hekaton was the code name of this project when Microsoft was developing it. If we pick the highlighted LSN, we can break apart the 6380-byte record using the undocumented fn_dblog_xtp function.
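A sketch of that call, with a placeholder LSN (substitute the LSN of your own LOP_HK record; fn_dblog_xtp is undocumented, so its columns may differ across builds):

```sql
-- Both arguments are the same LSN to return just that one record,
-- broken into its constituent in-memory (xtp) operations.
SELECT [Current LSN], operation_desc, tx_end_timestamp, total_size
FROM sys.fn_dblog_xtp('0000001f:00000150:0002',
                      '0000001f:00000150:0002');
```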

See logging of In-Memory OLTP in SQL Server.

We can see the 100 inserts, which we performed in a loop.

By looking at the above demo, we can point out a few salient points about the logging of In-Memory OLTP in SQL Server:

  • Multiple changes in the table are merged into a single log record. We demonstrated this above by showing that 1 log record contained 100 mini records.
  • Log records are not generated for index modifications. Indexes on in-memory tables are rebuilt during database startup. In the demo, we can see that no index modification is written to the transaction log; we only see “HK_LOP_INSERT_ROW”.
  • Transaction log records are written only if the transaction on the in-memory table is committed. This means that no UNDO information is logged. You can verify this by running the same demo with a rollback instead of a commit.
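The third point can be verified with a sketch like the following (same hypothetical table as above); after the rollback, re-reading the log shows no new LOP_HK records for this transaction:

```sql
DECLARE @i INT = 1;
BEGIN TRANSACTION;
WHILE @i <= 100
BEGIN
    INSERT INTO dbo.MyFirstMemoryOptimizedTable (ID, Name)
    VALUES (@i + 1000, CONCAT('Rollback-', @i));
    SET @i += 1;
END
ROLLBACK TRANSACTION;  -- nothing is written to the transaction log

-- Compare the count before and after: it should be unchanged.
SELECT COUNT(*) AS HKRecords
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation LIKE '%LOP_HK%';
```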

For all of the above reasons, logging is more efficient for in-memory tables than for disk-based tables, and since the changes are recorded in the transaction log, they are durable and persisted.

As a closing note, I should point out that in-memory tables can also be made non-durable. This is done by specifying DURABILITY = SCHEMA_ONLY while creating the table. This option is useful in situations where we want lightning-fast inserts: staging tables used in the loading phase of data warehouses, ASP session state databases, or temporary logging tables. In all these cases, the data can be reconstructed and does not need to be stored permanently. In this blog, we saw how In-Memory OLTP is durable when working with SQL Server 2014.
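For completeness, here is a sketch of a non-durable table (names are hypothetical); with SCHEMA_ONLY, the schema survives a restart but the data does not, and no transaction log records are generated for its data:

```sql
CREATE TABLE dbo.SessionStateStaging (
    SessionID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload VARCHAR(8000) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```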

Pinal Dave
Consultant to Datavail
Pinal Dave is a technology enthusiast and an independent consultant. He has authored 11 SQL Server database books and 21 Pluralsight courses, and has written over 3800 articles on database technology on his blog at http://blog.sqlauthority.com. Along with 14+ years of hands-on experience, he holds a Master of Science degree and a number of database certifications.
