
Hekaton Helps Build Faster Databases In 3 Ways (Part 2)

Author: Andy McDermid | January 21, 2014

Think back to a time before Google, Wikipedia, and the internet (a time even before computers) and imagine a library. Make it a large research library, full of rare and sensitive documents. It’s an important archive, and management does not allow the public to browse the stacks; you must approach a librarian to make your request.

Typically, using the methods of the day, the librarian must look up the location of the document of interest in a card catalogue; note the floor, section, aisle, and shelf; disappear for some time to scan for and locate the required title; and finally return with the document you requested. Well, in fact, it is only a facsimile of those papers, since no patrons are allowed to handle the genuine articles.

As you may have figured out by now, this library is today’s metaphor for SQL Server and in this post we’ll use it to understand a bit more about SQL Server 2014’s new feature, In Memory OLTP (IM-OLTP).

IM-OLTP has, broadly, three ways to help your database perform faster. To get an overview of the feature, and to see just where it can improve performance, we are taking a look at each of these facets to compare and contrast IM-OLTP with traditional disk-based functionality. In the last post our topic was Log Writes. In this post we’ll look at Latching and Locking.

Now then, getting back to our library….


There is a new standard of service in place, and each librarian must be able to deliver some of the archived information off the top of their head. They are not required to know everything about everything, but they must memorize enough to satisfy the most common requests.

Now imagine you approach the librarian in the hushed lobby and make a request. Instead of consulting her cards, the librarian looks you directly in the eye and, throwing all normal library hush-hush protocol to the wind, loudly and in an instant blurts back everything you wanted to know! Having the information memorized has greatly sped up the service, and a good part of that improvement (to correlate this back to SQL) is due to the fact that our librarian has not needed to take any latches.

In the SQL world, “latches are lightweight synchronization primitives that are used by the SQL Server engine to guarantee consistency of in-memory structures,” including index and data pages as well as internal structures such as non-leaf pages in a B-tree.

Just like the librarian must physically take hold of an index card from the card catalogue and peer at it over their bifocals to know the location of a document, SQL Server must latch a page before it can be read. And just like the librarian might walk to the floor, aisle, shelf and so on to find the correct document, SQL must latch each page of an index as it traverses down to the leaf level.

IM-OLTP? No pages, no latches. (Remember from the previous post that there is no concept of data pages in IM-OLTP; instead, rows are addressed directly in physical memory and located via a hash index.)
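To make that page-free layout concrete, here is a minimal sketch of a SQL Server 2014 memory-optimized table with a hash index. The table name, columns, and bucket count are illustrative, not from this post:

```sql
-- Requires a database with a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SessionState (
    SessionId  INT             NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload    VARBINARY(8000) NULL,
    LastAccess DATETIME2       NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Lookups on SessionId hash straight to a bucket of row pointers in memory; there is no B-tree of pages to latch on the way down.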

The librarian’s “latch” keeps the other librarians from writing to or deleting the index card while she reads it and, similarly, SQL threads might take shared, exclusive, or update latches on a page depending on the operation. Like locks, latch types are not always compatible: no other librarian is going to be able to hold the card while our librarian is writing in a new shelf number. Latch contention increases with the current hardware trend toward ever-higher CPU core counts: more librarians bumping into each other as they access the information.

IM-OLTP? No pages, no latches, no latch contention.

Latches are defined as “short-lived” and “lightweight,” so they may not seem a significant performance consideration, but given a hot table and a high CPU core count, the wait time adds up. Eliminating latch waits via IM-OLTP takes a bottleneck completely out of the picture.
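One way to gauge how much that wait time adds up on your instance today is the wait-statistics DMV; buffer latch waits show up as PAGELATCH_* wait types. A rough sketch:

```sql
-- Aggregate buffer-latch wait time since the last restart
-- (or since the wait stats were last cleared).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'PAGELATCH%'
ORDER BY wait_time_ms DESC;
```

If PAGELATCH waits rank high relative to your other top wait types, latch contention is worth a closer look.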

Which tables in your environment have the most latching and might benefit most from IM-OLTP? Microsoft has you covered there with its new AMR tool, which consists of a set of data collectors and MDW reports to help you evaluate which tables might be candidates for migration to IM-OLTP.
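If you would rather not stand up the full AMR collector, a quick approximation is to look at per-index page-latch waits in the current database. This is a sketch, not the AMR tool itself:

```sql
-- Indexes with the most page-latch contention in the current database.
SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.page_latch_wait_count,
       ios.page_latch_wait_in_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
ORDER BY ios.page_latch_wait_in_ms DESC;
```

Tables at the top of this list are the “hot” tables where eliminating latches would pay off most.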


Remember in the introduction that our librarian only allows patrons to handle a copy of the original document? OK, so the metaphor starts to break down a little here, since we are moving away from the librarian’s memory. But if you allow that phrase, “facsimile of those papers,” as a jumping-off point, we might still get a little more use out of our library metaphor.

Imagine we’re requesting the document so we can make a few edits. The librarian retrieves a facsimile for us and we get started. Meanwhile, anyone else wanting to access those same documents (let’s say it’s Joe) has no problem, since the librarian is able to make another copy for him from the original. Rather than having to say “Sorry, Joe, that document is currently checked out,” library management is betting that whatever we are doing with our version of the document will not conflict with Joe’s copy.

In (simplified) SQL Server terms, here we are looking at optimistic multi-version concurrency control. This term might ring a bell, since it has been around in the form of snapshot isolation and read committed snapshot isolation since SQL Server 2005. Transactions work with copies (versions) of the row, so readers do not block writers and writers do not block readers. However, those disk-based row-versioning implementations use tempdb to maintain the row versions, which, like other disk-based structures, can still suffer waits from latching and latch contention. Additionally, you may still see blocking under disk-based snapshot isolation when there are write-write conflicts, since ultimately the conflicting transactions are attempting to modify the same physical database structure (i.e., the page).
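For reference, this is the disk-based flavor of row versioning being described; it is switched on per database (the database name here is hypothetical):

```sql
-- Disk-based row versioning: old row versions are kept in tempdb.
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;
```

Both settings buy you reader/writer non-blocking, but the version store lives in tempdb, on pages, with all the latching that implies.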

IM-OLTP tables always use optimistic multi-versioning, regardless of isolation level (for reasons we can’t get into in this short post, there are just three: SNAPSHOT, REPEATABLE READ, and SERIALIZABLE). Write-write conflicts cannot block, because IM-OLTP’s flavor of row-versioning has no ultimate physical structure to update. Instead, each new row version becomes the “true” version, with links back to previous versions (which may still be valid for older transactions), and a conflicting transaction fails immediately if its timestamp predates the newest version of the row.
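Here is a sketch of what that conflict looks like in practice, assuming a hypothetical memory-optimized table dbo.SessionState: if two sessions update the same row, the second writer errors out immediately rather than waiting on a lock.

```sql
-- Session 1:
BEGIN TRANSACTION;
UPDATE dbo.SessionState WITH (SNAPSHOT)
SET LastAccess = SYSUTCDATETIME()
WHERE SessionId = 42;
-- (transaction left open)

-- Session 2, running concurrently against the same row:
BEGIN TRANSACTION;
UPDATE dbo.SessionState WITH (SNAPSHOT)
SET LastAccess = SYSUTCDATETIME()
WHERE SessionId = 42;
-- Fails at once with a write-write conflict (error 41302)
-- instead of blocking until session 1 commits.
```

The application is expected to catch the conflict error and retry the transaction, which in practice replaces lock waits with fast retries.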

There is no locking, and by extension no blocking and no deadlocking. To put it concisely, there is no waiting for DML operations in IM-OLTP, making it a powerful potential solution for contentious workloads.

Keep an eye out here for the next and final post on how Hekaton helps build faster databases: by optimizing CPU time.

To learn more about In-Memory OLTP you can get a start reading Microsoft’s data platform blog here.
