
Hekaton Helps Build Faster Databases In 3 Ways (Part 3)

Author: Andy McDermid | March 19, 2014

One nifty thing about Hekaton in SQL2014 is its seamless integration into the existing SQL Server product. Envision the SQL2014 engine as a three-part unit comprising the familiar 'standard' engine, a Columnstore index engine, and an In-Memory OLTP engine. Now consider an incoming query that joins a standard disk-based table, a Columnstore indexed table, and an In-Memory OLTP table: the three-part engine works as a unit to return the data in the most efficient way possible given the source tables. (We are not getting into Columnstore indexes in this post, but I wanted to include them here for completeness.) No special software add-ons or code changes are required. If you have SQL2014, you have Hekaton, right out of the box.

SQL Server's In-Memory OLTP (IM-OLTP) technology has, broadly, three ways to help your database perform faster. To get an overview of the feature, and to see just where it can improve performance, we are taking a look at each of these facets to compare and contrast IM-OLTP with traditional disk-based functionality. In previous posts we looked at Log Writes and Latching & Locking. The focus of this post is CPU, and how IM-OLTP technology can make data access faster by saving significant processor time.

Native vs InterOP

Consider an ad-hoc query that joins a disk-based table and an IM-OLTP table. As mentioned above, under the hood and invisible to the client, the SQL engine is able to access both types of tables and return results. This is termed interoperational access, or interop T-SQL, since it merges the traditional, interpreted T-SQL access methods with the new IM-OLTP operations.
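An interop query is just ordinary T-SQL. As a sketch (the table and column names here are hypothetical), a join between a disk-based table and a memory-optimized table looks no different from any other join:

```sql
-- dbo.Customers: a traditional disk-based table
-- dbo.Orders:    a memory-optimized (IM-OLTP) table
SELECT c.CustomerName, o.OrderId, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
    ON o.CustomerId = c.CustomerId
WHERE o.OrderDate >= '2014-01-01';
```

The engine resolves the appropriate access method for each table behind the scenes; the client sees a single result set.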

But what if the query only needs to access the IM-OLTP table (or tables)? In that case, if we wrap our query up in a stored procedure and create it with native compilation, we access the data via Native T-SQL. The stored procedure DDL is processed and compiled into a DLL. The DLL includes all the usual interpreted T-SQL steps (parse, algebrize, optimize, and compile) pre-packaged into compiled machine code at stored procedure create time rather than at execution time. The DLL itself is saved as a file and loaded into memory. When the stored procedure is called, the CPU is spared all the work that usually goes into interpreted T-SQL access because that work is already completed and saved in the DLL. (Note: IM-OLTP table and index DDL are also compiled and saved as DLL files.)
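A minimal sketch of a natively compiled stored procedure in SQL2014 looks like this (the procedure, table, and column names are hypothetical; the referenced table must be memory-optimized). Note the required elements: NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause, and an ATOMIC block with an isolation level and language specified:

```sql
CREATE PROCEDURE dbo.usp_GetOrder
    @OrderId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- This body is compiled to machine code in a DLL at CREATE time,
    -- so execution skips the parse/algebrize/optimize/compile steps.
    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders
    WHERE OrderId = @OrderId;
END;
```

Because all of the query processing work is baked into the DLL up front, calling the procedure costs only the machine-code execution itself.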

The benefit of the first case is that the interface to IM-OLTP tables is seamless, and good old standard T-SQL works just fine to access that data. A DBA adopting IM-OLTP can move an existing table into IM-OLTP and, without any T-SQL code changes, realize some level of performance improvement (e.g. eliminating latch contention).
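Moving a table into IM-OLTP amounts to recreating it as memory-optimized. A rough sketch, assuming a hypothetical Orders table and a database that already has a memory-optimized filegroup:

```sql
CREATE TABLE dbo.Orders
(
    -- Memory-optimized tables use hash or nonclustered (range) indexes;
    -- a hash index needs a bucket count sized to the expected row count.
    OrderId    INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId INT NOT NULL,
    OrderDate  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

With SCHEMA_AND_DATA durability the table's contents survive a restart; SCHEMA_ONLY trades durability for even less I/O. Existing queries against the table continue to work unchanged via interop T-SQL.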

This is a great method to safely phase in the technology on an existing SQL Server instance: first, identify the tables where IM-OLTP is a good fit and move them into memory; next (and this is where the second case comes in) plan and implement the code and schema modifications required to take full advantage of IM-OLTP Native T-SQL.

Microsoft suggests IM-OLTP adopters can see 5-10X throughput improvements from simply moving key, 'hot' tables into memory. The next step may require some ground work: taking advantage of Native T-SQL data access for IM-OLTP tables may mean reworking configurations and code to account for a number of limitations at the instance, database, and object levels. For successful implementations, however, Microsoft reports throughput increases in the range of 30X. This amazing potential improvement is due simply to saved CPU cycles that would otherwise be spent on query interpretation.

That wraps up the series. To review, Hekaton helps build faster databases by optimizing CPU time, reducing latches and locking, and optimizing transaction log writes.

To learn more about In-Memory OLTP, a good place to start is the In-Memory OLTP and Memory-Optimization post here.
