One of the ‘best practice’ configurations I like to consider putting in place on a SQL Server instance is multiple tempdb data files, to eliminate any chance of tempdb allocation contention. Not that I’ve seen this allocation issue “in the wild” all that often (although Paul Randal mentions it is pretty common in his experience), but having a few right-sized tempdb files (a) doesn’t hurt, (b) surely does help if there is an allocation issue, and (c) brings an instance into what I consider a standard configuration. With what I’d call take-it-or-leave-it configuration options (e.g. optimize for ad hoc workloads), it is sometimes just as well to take it, since at least that indicates someone considered the whole picture of the server instance and applied best practices.
Here is my quick rundown of the tempdb allocation issue: every time tempdb is used, the thread accessing it must first find enough free space to build the temp object. It determines where it can locate this space by reading special allocation pages – GAM, SGAM and PFS – which are evenly spaced throughout every database file and serve as a kind of index to unused, available space. Every database has these pages, but tempdb is a special case because it is a shared resource that potentially any and every user may utilize. Herein lies the trouble: a bottleneck can develop when many threads need tempdb space at once and must therefore take latches on these special pages.
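Because these allocation pages sit at fixed, documented intervals (PFS every 8,088 pages; GAM and SGAM every 511,232 pages), you can tell from a page number alone whether a wait is on one of them – this is the trick the detection queries linked later in this post rely on. A small illustrative sketch (the page number is a placeholder):

```sql
-- Classify a page number within a data file as one of the allocation pages.
-- PFS pages recur every 8,088 pages; GAM/SGAM pages every 511,232 pages.
DECLARE @page_id int = 8088;  -- hypothetical page number to classify

SELECT CASE
         WHEN @page_id = 1 OR @page_id % 8088 = 0       THEN 'PFS'
         WHEN @page_id = 2 OR @page_id % 511232 = 0     THEN 'GAM'
         WHEN @page_id = 3 OR (@page_id - 1) % 511232 = 0 THEN 'SGAM'
         ELSE 'Not an allocation page'
       END AS page_type;
```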
The resolution is simply to distribute the load by adding more tempdb data files. If the need for multiple tempdb files is evident, the current guidance is one file per logical CPU up to 8 logical CPUs; then, if contention still exists, add 4 files at a time (up to the count of logical CPUs). All files should be the same size, because this allows SQL Server’s proportional fill algorithm to allocate space evenly across them. But note, too many files can become a performance problem of its own, since sort memory spills may spend extra time doing round-robin allocations among the many tempdb files. Note: trace flag 1118 forces all allocations, instance-wide, to use uniform extents (not single pages from mixed extents), and so is an option specifically for reducing contention on SGAM pages, including (but not limited to) tempdb’s SGAM pages.
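As a rough sketch of what that reconfiguration looks like, here is the T-SQL for taking tempdb from one data file to four equally sized files – the file names, paths, and sizes below are placeholders you would adjust for your own instance, not prescriptions:

```sql
-- Resize the existing primary tempdb data file, then add three more equally
-- sized files (paths and sizes are placeholders; adjust to your environment).
-- A restart is required for the primary file's new size to fully take effect
-- if it must shrink.
ALTER DATABASE tempdb MODIFY FILE
    (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);

ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb\tempdev3.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb\tempdev4.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);

-- Optionally enable uniform-extent allocations (trace flag 1118) for the
-- running instance; add -T1118 to the startup parameters to persist it.
DBCC TRACEON (1118, -1);
```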
So, how do you know if an instance has this kind of thing going on? Check out the query in this blog post by Robert L Davis aka SQL Soldier, or look back at the query in the SQLskills post mentioned above. Both use sys.dm_os_waiting_tasks to ‘catch’ tempdb PAGELATCH_% waits red-handed. Pretty cool, but one drawback is that you have to go looking for these waits – polling the waiting tasks every so often. If you’d rather just sit back and let those waits come to you – so to speak – check out Kehayias’ SimpleTalk article ‘Optimizing tempdb configuration with SQL Server 2012 Extended Events.’
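For flavor, here is a bare-bones version of the polling approach – a sketch, not the exact query from either of those posts. Allocation contention surfaces as PAGELATCH_% waits (not PAGEIOLATCH_%, which indicates physical I/O) on resources in database 2, such as 2:1:1 (PFS), 2:1:2 (GAM), and 2:1:3 (SGAM):

```sql
-- Poll currently waiting tasks for latch waits on tempdb pages.
-- resource_description is formatted database_id:file_id:page_id,
-- and database_id 2 is tempdb.
SELECT wt.session_id,
       wt.wait_type,
       wt.wait_duration_ms,
       wt.resource_description
FROM   sys.dm_os_waiting_tasks AS wt
WHERE  wt.wait_type LIKE 'PAGELATCH_%'
AND    wt.resource_description LIKE '2:%';
```

You would then check whether the page numbers in resource_description fall on PFS/GAM/SGAM boundaries before concluding the waits are allocation contention rather than ordinary page latching.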
Let’s throw in one more reference link for good measure. This is Adam Machanic’s explanation in the context of his sp_whoisactive tool.
What do you think, friendly blog reader? Is there a set of ‘standard’ configurations you like to put in place on an instance? Perhaps regardless of whether or not their implementation is ‘proven’ as necessary? Or is it just as well to manage a distinct set of configuration options for each instance? If you have instances that are lightly used or non-production, is it worth the effort to standardize them? Or is it better to stamp every instance out of the same template regardless of use? If not, where do you draw the line between ‘must adhere to standards’ instances and ‘no big deal’ instances?
The next blog in this series, “Right-Sizing Multiple tempDB Files,” explains the importance of getting file sizing correct and identifies when multiple tempdb files are necessary. Thanks for reading, and feel free to contact Datavail’s DBA experts or leave a comment below with any questions.