With SQL Server 2014, Microsoft introduced In-Memory-OLTP, also known as Hekaton. This technology allows the DBA to configure specific tables to reside in memory full time. Of course, if the data is always in memory, we can count on increased throughput, since we’ll never have to go to disk for that data.
But wait – if your server has enough RAM, your instance may already be running with a good part of the database persistent in memory, so you might be thinking: ‘Hekaton – what’s the big deal?’
In fact, there is a lot more going on here than in a typical in-memory database (IMDB). In-Memory-OLTP is an entirely new SQL engine standing side by side with, and more or less seamlessly integrated with, the ‘old’ standard engine. Rather than pinning a table in the buffer pool (no, it is not just a technology recycle of DBCC PINTABLE – see myth #2) and settling for the performance improvements associated with reduced disk IO, In-Memory-OLTP has, broadly, three ways to help your database perform faster. To get an overview of the feature, and to see just where it can improve performance, let’s take a look at each of these facets and compare and contrast the new In-Memory functionality with the traditional disk-based functionality.
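As a quick orientation before we dig in, here is a minimal sketch of what a memory-optimized table looks like in SQL Server 2014. The database, filegroup, file path, and table names below are hypothetical examples, not anything from this post:

```sql
-- A database needs a memory-optimized filegroup before it can hold
-- in-memory tables (names and the file path here are examples).
ALTER DATABASE SalesDb
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE SalesDb
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'C:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;

-- MEMORY_OPTIMIZED = ON keeps the table resident in memory full time.
CREATE TABLE dbo.ShoppingCart
(
    CartId  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId  INT NOT NULL,
    Created DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Note the hash index on the primary key – memory-optimized tables use hash (or range) indexes rather than B-trees on 8KB pages.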
In a disk-based system, when any change is made to a data page in the buffer pool (update, delete, etc.), SQL Server must carefully ensure the change is durable in case of a system crash. It does this via Write-Ahead Logging, which ensures that the changes are hardened to disk in the transaction log before the transaction is committed. The changed page is then marked as dirty, and the next checkpoint flushes it to disk.
There is a bit more to it though, since each change recorded in the log must also include a corresponding undo record in case the transaction rolls-back or the system crashes before the transaction commits. Furthermore, if the changed data has any associated indexes, those too must be updated and will create additional log records.
Depending on the disk sub-system where the transaction log resides, we might well expect to see some WRITELOG and LOGBUFFER waits.
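If you suspect the log is a bottleneck on your own system, a quick (if rough) check is the wait-statistics DMV:

```sql
-- Cumulative waits since the last instance restart (or stats clear);
-- high WRITELOG wait time points to transaction log write pressure.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('WRITELOG', 'LOGBUFFER');
```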
To understand how In-Memory-OLTP side-steps these potential issues, first set aside your understanding of the buffer pool as a repository of logical 8KB data pages. There is no concept of data pages in In-Memory-OLTP, and there is no buffer pool acting as a logical intermediary between the transaction and the disk. Rather, we have rows addressed in physical memory and located via a hash index. Whereas in a disk-based system SQL Server uses the transaction log to ensure durability in the gap between a data page change and that page being persisted to disk, In-Memory-OLTP simply waits until the transaction commits and the row is changed, then records the change to the transaction log. In-Memory-OLTP maintains durability, but there is no gap and therefore no need for undo information in the transaction log. As for the indexes, they do still get updated, but since In-Memory-OLTP recreates them on every database start-up, it does not bother to record those changes to the transaction log.
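The DURABILITY option makes this trade-off explicit. SCHEMA_AND_DATA (the default) gives the full durability described above; for transient data you can go further and opt out of logging entirely with SCHEMA_ONLY. A sketch, with a hypothetical table name:

```sql
-- SCHEMA_ONLY: the table definition survives a restart but its rows
-- do not, and changes to it generate no transaction log records at all.
CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload   VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```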
Last, but not least, In-Memory-OLTP employs a new method to consolidate log records. Whereas a disk-based system may write (at least) one log record for each row changed, In-Memory-OLTP can consolidate many changes into a single log record.
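You can observe this for yourself with sys.fn_dblog – an undocumented function, so use it only on a test system. Memory-optimized operations surface as Hekaton (LOP_HK) records in the log:

```sql
-- Inspect the active portion of the current database's log;
-- memory-optimized changes appear as LOP_HK records, typically
-- far fewer than the row-by-row records of disk-based logging.
SELECT [Current LSN], Operation, [Log Record Length]
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation LIKE '%HK%';
```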
Taken together, this all results in some serious streamlining of transaction log writes. It’s still best practice to put your transaction log on the fastest write disk available (SSD if possible), especially since there is still only one transaction log per database – In-Memory-OLTP transactions share it with disk-based transactions; both write to the same old transaction log. But given a decent write speed, and an appropriate design for in-memory tables, we might not see those log wait stats around so much anymore.
Keep an eye out here for posts on two additional ways Hekaton helps build faster databases: by reducing latching, locking and blocking and by optimizing CPU time.
To learn more about In-Memory OLTP, you can get started with Microsoft’s data platform SQL blog here, as well as MSDN here.