One nifty thing about Hekaton in SQL2014 is its seamless integration into the existing SQL Server product. Envision the SQL2014 SQL engine as a three-part unit comprising the familiar 'standard' engine, a Columnstore Index engine, and an In-memory-OLTP engine. Now consider an incoming query that joins a standard disk-based table, a Columnstore indexed table, and an In-memory-OLTP table: the three-part engine works as a unit to return the data in the most efficient way possible given the source tables identified. (We are not getting into Columnstore indexes in this post, but I wanted to include them here for completeness.) No special software add-ons or code changes are required. If you have SQL2014, you have Hekaton, right out of the box.
SQL Server's In-memory-OLTP (IM-OLTP) technology has, broadly, three ways to help your database perform faster. To get an overview of the feature, and to see just where it can improve performance, we are taking a look at each of these facets to compare and contrast IM-OLTP with traditional disk-based functionality. In previous posts we looked at Log Writes and Latching & Locking. The focus of this post is CPU, and how IM-OLTP technology can make data access faster by saving significant processor time.
Native vs InterOP
Consider an ad-hoc query that joins a disk-based table and an IM-OLTP table. As mentioned above, under the hood and invisible to the client, the SQL engine is able to access both types of tables and return results. This is termed interoperational access, or InterOP TSQL, since it merges the traditional, interpreted TSQL access methods with the new IM-OLTP operations.
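As a sketch of InterOP access, assume a hypothetical disk-based table dbo.Customer and a hypothetical memory-optimized table dbo.OrderHot: plain, interpreted TSQL joins them with no special syntax at all.

```sql
-- Hypothetical schema: dbo.Customer is a traditional disk-based table,
-- dbo.OrderHot is a memory-optimized (IM-OLTP) table.
-- The same interpreted TSQL works across both storage engines.
SELECT c.CustomerName,
       o.OrderID,
       o.OrderTotal
FROM dbo.Customer AS c                 -- disk-based
JOIN dbo.OrderHot AS o                 -- in-memory
    ON o.CustomerID = c.CustomerID
WHERE o.OrderTotal > 100.00;
```

The engine resolves each table to its own access methods behind the scenes; the client just sees a result set.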
But what if the query only needs to access the IM-OLTP table (or tables)? In that case, if we wrap our query in a stored procedure created with Native Compilation, we access the data via Native TSQL. The stored procedure DDL is processed and compiled into a dll. The dll includes all the usual Interpreted TSQL steps: parse, algebrize, optimize, and compile, pre-packaged into machine code at stored procedure create time as opposed to execution time. The dll itself is saved as a file and loaded into memory. When the stored procedure is called, the CPU is spared all the work that usually goes into Interpreted TSQL access because that work is already completed and saved in the dll. (Note: IM-OLTP table and index DDL are also compiled and saved as dll files.)
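A minimal sketch of a natively compiled procedure, again assuming the hypothetical memory-optimized table dbo.OrderHot. Note the SQL2014 requirements: NATIVE_COMPILATION must be paired with SCHEMABINDING, an EXECUTE AS clause, and a BEGIN ATOMIC block.

```sql
-- Hypothetical natively compiled procedure over the memory-optimized
-- table dbo.OrderHot. It is compiled to machine code (a dll) at
-- CREATE time, not at execution time.
CREATE PROCEDURE dbo.usp_CustomerOrderTotals
    @CustomerID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- Parse/algebrize/optimize/compile were all done at create time;
    -- execution simply invokes the loaded dll.
    SELECT OrderID, OrderTotal
    FROM dbo.OrderHot
    WHERE CustomerID = @CustomerID;
END;
```

Calling it is ordinary TSQL (`EXEC dbo.usp_CustomerOrderTotals @CustomerID = 42;`), but the per-call interpretation cost is gone.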
The benefit of the first case is that the interface to IM-OLTP tables is seamless: good old standard TSQL works just fine to access that data. A DBA adopting IM-OLTP can move an existing table into memory and, without any TSQL code changes, realize some level of performance improvement (e.g. eliminating latch contention).
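Moving a table into memory is, at its simplest, a matter of recreating it with the MEMORY_OPTIMIZED option. A hypothetical sketch (in practice you would also add a memory-optimized filegroup to the database and migrate the existing data):

```sql
-- Hypothetical memory-optimized version of an existing table.
-- SQL2014 memory-optimized tables require at least one index, such as
-- a nonclustered hash index; DURABILITY = SCHEMA_AND_DATA keeps the
-- table's data fully durable across restarts.
CREATE TABLE dbo.OrderHot
(
    OrderID    INT   NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerID INT   NOT NULL,
    OrderTotal MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Existing interpreted TSQL against the old table name continues to work unchanged, which is what makes this first step so low-risk.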
This is a great method to safely phase in the technology on an existing SQL Server instance: first identify the tables where IM-OLTP is a good fit and move those tables into memory; next (and this is where the second case comes in), plan and implement the code and schema modifications required to take full advantage of IM-OLTP Native TSQL.
Microsoft suggests IM-OLTP adopters can see 5-10X throughput improvements from simply moving key 'hot' tables into memory. The next step may require some groundwork: taking advantage of Native TSQL data access to IM-OLTP tables may mean reworking configurations and code to account for a number of limitations at the instance, database, and object levels. For successful implementations, however, Microsoft reports throughput increases in the range of 30X. This remarkable potential improvement comes simply from saved CPU cycles that are otherwise spent on query interpretation.