Right-sizing Multiple tempdb Files

By Andy McDermid | March 9, 2015

To follow up on my last post, “TempDB and Instance Standards,” here is a look at how to get the file sizing correct if or when it is determined that multiple tempdb files are a necessity. See that post for a discussion of why multiple tempdb files might be needed; this post just concerns methods to get there. In this scenario we have a single tempdb data file, and the challenge is to split it evenly into a number of same-sized files.

Why would we need the same size tempdb data files?

Because of the way SQL decides which file to use for any given write. SQL uses a proportional fill method to evenly distribute writes across a set of data files within a filegroup. The files with the greater percentage of free space get chosen for writes. Ideally this helps keep a set of – let’s say – four files filling up evenly as SQL ‘round-robins’ its way through the files, selecting each as a write target in turn. It also ensures writes are distributed among the files and that no single file becomes a hot spot for writes.

However, this write-by-percent-free scenario can backfire. If one of the four files is misconfigured as significantly larger and has a larger percentage of free space, that file gets the write every time (at least until the file free space percentages more or less match up; then it’s back to round-robin). In that case we may as well be back to having a single data file, since only one of the four is getting writes. In fact, any lopsidedness in file size or free space may inhibit the ideal round-robin write distribution. In the end, we really want evenly sized files (and, by the way, the same growth settings for each).
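
If you want a quick look at how evenly sized (and evenly used) your current tempdb data files are, a check along these lines can help. It is a simple sketch: the size and SpaceUsed values come back in 8KB pages, so dividing by 128 gives MB.

  USE tempdb;
  SELECT  name,
          size/128 AS size_mb,
          FILEPROPERTY(name, 'SpaceUsed')/128 AS used_mb,
          (size - FILEPROPERTY(name, 'SpaceUsed'))/128 AS free_mb
  FROM    sys.database_files
  WHERE   type_desc = 'ROWS';   -- data files only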

So, if you have one file and you need three more, just add three more files of the same size and let the proportional fill thing do its work, right?

Sure, that could work, but there is more to it. Let’s say our original tempdb data file (the mdf) is 40GB. If we add three more evenly sized files (the ndfs), we immediately have a 160GB tempdb. If we needed a larger tempdb anyway then that works out great, but it is not so great if tempdb is already appropriately sized.

It would be much easier if we could count on shrinking tempdb. Sticking to our example, we might first shrink the tempdb data file to 10GB, then add three more ndf files at 10GB each. But it is not always so easy to shrink tempdb – especially if we have a very specific size in mind (10GB), and especially for a system with lots of tempdb use where the benefits of multiple tempdb files might be realized. In short, we do not want to just shrink the file to whatever; we want to re-size the file to a particular size.

Let’s pause here and take a quick look at the toolset available to re-size SQL Server DB files.

  • DBCC SHRINKFILE (<filename>,<filesize>) – the shrink file operation not only shrinks the file as much as possible, it also resets the recorded data file size, as you can see in the ‘size’ column if you query sys.database_files and/or sys.master_files (see the sketch following these two points).

  • ALTER DATABASE <dbname> MODIFY FILE (NAME = <filename>, SIZE = <filesize>MB) – in contrast to DBCC SHRINKFILE, this option allows you to grow a file by re-sizing it. When you grow a file by this method, it behaves just like the DBCC SHRINKFILE command – the file size is reset to match the resize command. However, if you try to re-size the file to a smaller size, you get an error:

MODIFY FILE failed. Specified size is less than or equal to current size.
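
To make the contrast concrete, here is a rough sketch against a hypothetical user database MyDB with a data file MyDB_data (both names, and the sizes, are made up for illustration):

  USE MyDB;

  -- shrink toward a target size (in MB); this also resets the recorded file size
  DBCC SHRINKFILE (MyDB_data, 5000);

  -- the 'size' column is in 8KB pages; divide by 128 for MB
  SELECT name, size/128 AS size_mb
  FROM sys.database_files;

  -- growing via ALTER DATABASE works, and the new size becomes the recorded size
  ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_data, SIZE = 20000MB);

  -- attempting to re-size a user DB file smaller fails with the error shown above
  ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_data, SIZE = 1000MB);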

And that’s the way it is for user files… but not for tempdb.

You mean to say that tempdb allows a DBA to alter a data file and re-size it smaller than the current size?

Yes. But not just that: we don’t have to apply the shrink immediately. We can defer the tempdb data file downsizing until the next SQL service startup, when the tempdb database file will be created new and at exactly the size we want. This is actually explained in a KB here, but it is not very obvious.

Let’s try it:

Find the current size of the tempdb mdf. For our running example this shows 40000MB (40GB).
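
For example, something like this – assuming the default logical file name tempdev (check sys.database_files for your instance’s actual names):

  SELECT name, size/128 AS size_mb
  FROM tempdb.sys.database_files
  WHERE name = 'tempdev';
  -- size_mb = 40000 in our example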

Re-size to increase the file by 10GB
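
Again assuming the tempdev logical name, the grow might look like:

  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 50000MB);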

Check the file size again – we should see 50000MB
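
Re-running the same check (still against the assumed tempdev file):

  SELECT name, size/128 AS size_mb
  FROM tempdb.sys.database_files
  WHERE name = 'tempdev';
  -- size_mb = 50000: the grow was applied immediately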

Re-size to “shrink” the file to 10GB
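
The same command, this time with a smaller size; for tempdb it completes without error (same assumed logical name):

  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10000MB);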

Check the file size again. We should still see 50000MB, as if the ALTER statement had failed.
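
Checking the current size once more:

  SELECT name, size/128 AS size_mb
  FROM tempdb.sys.database_files
  WHERE name = 'tempdev';
  -- size_mb is still 50000; the running file was not shrunk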

But if we look to sys.master_files to see our most recent modification, we find the size is just what we asked for: 10000MB.
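
For example, a check against sys.master_files (same assumed name):

  SELECT name, size/128 AS size_mb
  FROM sys.master_files
  WHERE database_id = DB_ID('tempdb') AND name = 'tempdev';
  -- size_mb = 10000: the size the file will be created at on the next startup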

You will not see this discrepancy if you use SSMS to size the files; it’s T-SQL only.

Interesting… So, how can we leverage this quirk of tempdb to solve the current issue – namely that we need to divide a single tempdb file evenly among several new tempdb files?

There are two ways to go here. One is very hands-off but requires a bit of extra disk space on the tempdb drive; the other requires almost no additional space on the drive hosting tempdb beyond what tempdb is already taking, but will need some hands-on management after the SQL service restart. Yep, sorry; to shrink and divvy up an existing single tempdb data file into multiple evenly-sized files is going to require a restart. But wait, it might not be as bad as you think. Read on….

Option 1: figure the current tempdb data file size and divide by the number of files which will ultimately be needed. This will be the file size of each of the new files. In our running example, we have a 40GB tempdb and we want four files, so we’ll add three 10GB files. This is where that extra disk space comes in – we need 30GB free to create these files.

Then use the ALTER DATABASE command as described above to downsize the original mdf file to match the others. For our example we’ll re-size the file down from 40GB to 10GB. According to sys.database_files the mdf size is still 40GB, but the size in sys.master_files will be 10GB, and that is the size the file will be created at when tempdb is created at start-up.
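
Sticking with our example numbers, a sketch of Option 1 might look like this. The new logical names, the file path, and the growth settings are placeholders; adjust them for your environment:

  -- add three new 10GB data files (needs ~30GB free on the drive)
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 10000MB, FILEGROWTH = 1024MB);
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 10000MB, FILEGROWTH = 1024MB);
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 10000MB, FILEGROWTH = 1024MB);

  -- "shrink" the original mdf; takes effect at the next service startup
  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10000MB);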

So now for the hands-off part: once this is set up (as long as no one tinkers with our size settings) we can walk away. The next service restart will re-create the mdf at 10GB and the three new ndfs at 10GB each, for a total tempdb size of 40GB. Until then, writes will be distributed around the files, probably somewhat unevenly, but that should not do any harm and it is temporary. And once the SQL service bounces for planned maintenance, a cluster failover, or whatever other reason, our multi-file tempdb will be in good shape.

Option 2: follow the same steps as Option 1 right up to sizing the three new files. In this case we just set all the new ndf files to 1MB. Then “shrink” (re-size) the mdf as described. Now we have three 1MB ndfs and one mdf of 40GB (but remember, it is only 10GB according to sys.master_files). While we wait for the next service restart, writes will still mostly go to the mdf according to the proportional fill rules, so we haven’t really changed much of anything. After the restart, we end up with a 10GB mdf file and three small 1MB ndf files, which we can easily and quickly grow to match the mdf. Although, arguably, it might be just as well to add and size the files after the service restart.
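
And a sketch of Option 2, with the same placeholder names and path as above:

  -- add three placeholder 1MB files now
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1MB);
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1MB);
  ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 1MB);

  -- "shrink" the original mdf; takes effect at the next service startup
  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 10000MB);

  -- after the restart, grow the ndfs to match the mdf
  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 10000MB);
  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev3, SIZE = 10000MB);
  ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev4, SIZE = 10000MB);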

And finally, can you please tl;dr summarize?

To wrap it up: the tempdb database is a bit different when it comes to file sizing. In short, this is because it is recreated on every start-up and must look to the master database for its start-up configuration. Any tempdb file growth via ALTER DATABASE works like a user DB, i.e. the data file grows and the new size becomes the initial start-up size. It’s the same for shrinks via DBCC SHRINKFILE – the shrink operation reduces the file size and the new size becomes the initial start-up size. Where tempdb differs is that it allows the ALTER DATABASE command to re-size a data file to a smaller value without actually changing the current file size. Instead, that new smaller value is held in master as the initial start-up size, and leveraging this detail makes adding files to tempdb more precise and perhaps eases some of the hassle.

If you are interested in learning more, please visit my other recent SQL Server blog posts for Datavail.

 

Andy McDermid
Principal SQL Server DBA
Andy is an MCITP-certified MS SQL DBA who delivers and manages the delivery of DBA services to many diverse clients. He enjoys helping his clients by finding and deploying pragmatic and practical solutions for their database issues. Andy is always working to improve and expand his DBA skills, and he likes to share the experience via writing.


2 thoughts on “Right-sizing Multiple tempdb Files”
  1. Andy, nice blog explanation. When we split tempdb into multiple data files per the CPU-count recommendation, and even use trace flag 1118 so SQL uses all the data files equally, the initial file size is set as part of volume maintenance but the files grow anyway and all the file sizes end up varied. When we shrink or reboot the server it won’t return to the size we initially set. Why does this happen? Why don’t all the files grow equally, even with the trace flag and all the files starting at the same initial size? Can I do anything else to change this behavior? Please let me know … appreciate any comments.

    1. Hi Jay, thanks for your comment.
      To clarify, -T1118 controls page allocations, forcing each new allocation to be a full extent (rather than sharing a mixed extent).
      The other flag to consider here is -T1117, which forces all files in a filegroup to grow at the same time. For what you are trying to do here, I think you may want -T1117, which could solve your issue.
      Even without -T1117 the files should be more or less even over the long term (given the same initial size and same growth settings), but at any given point in time one may be larger (i.e. the most recently grown file). This can be exaggerated by the file growth setting (for example, file growth in percent could start to skew growths). Perhaps something along those lines is what you are seeing?
      In general, to simplify all this, if/when possible, I try to set the size large enough so it never needs to auto-grow.
      For more (better) explanation check out JK’s article here: http://bit.ly/2lBEAwp . Key points: 1. round-robin file growth is based on free space within the file, 2. “potential for hot spotting of a single data file when auto grow occurs”, 3. you can trace file growths in XE to troubleshoot.
      One last note of interest since we are on this topic: -T1117 and -T1118 are not necessary in SQL 2016 http://bit.ly/2lbNWxK
      Good luck,
      Andy