NTFS Fragmentation

 

So what does this have to do with the DBA, you might ask? Well, I mostly make my living from tuning databases and applications. I always start with a hardware audit, as it makes little sense to spend lots of money on a consultant when a simple hardware upgrade might resolve the problems.

It is very important to realise that many SQL Server implementations do not follow best practice; this can be for any number of reasons, many of which are beyond the control of the DBA.

For instance, I was once faced with servers with six disk slots and no external storage. Whichever way I decided to go was going to be a compromise. It's very easy to be the purist, but that doesn't help in the real world.

After showing all that can go wrong, I'll make a few suggestions as to how you may be able to mitigate some of the negative effects of fragmentation.

Remember: don't bring problems, bring solutions!

A quick check with Disk Defragmenter ( right click My Computer and choose Manage ) will give an early indication of anything untoward.

 

You'll hopefully see something like this:

( screenshot: a healthy, largely unfragmented disk )

However, you might see this:

( screenshot: a fragmented disk )

But so far so good: there's 40% free space, so a defrag will be fine.

Except that what has actually happened is that the fragmentation is so interwoven there is no contiguous free space left to defragment the files into. In the example below I have 65% free space!

( screenshot: 65% free space but no usable contiguous free space )

And sometimes you just can't get defrag to run!

( screenshot: Disk Defragmenter refusing to run )

Database files are very prone to fragmentation because of auto grow, and the more databases on your server the more likely the problem. As database files come to occupy a greater percentage of the available disk space, fragmentation becomes more critical still. It is quite possible to have adequate free space overall but no contiguous block large enough: if you have an 80Gb file which is fragmented, you need 80Gb of contiguous free space for a fast and effective defrag. The same problem can occur with fragmentation of the MFT; as these are system files they will not be defragmented, and a badly fragmented MFT really gives you little option other than to reformat the disk. ( There are some third party tools which will deal with MFT fragmentation, I believe. )

 

 

( screenshot: bad file fragmentation report )

( screenshot: fragmented MFT )

So, having established that even with regular maintenance you can still see severe fragmentation, what steps can you take to lessen the effects?

  1. Make sure that auto shrink is not enabled on any of your databases ( a quick check is sketched after this list ).
  2. Make sure you're not running any regular shrink file jobs - this is no different from a controlled auto shrink.
  3. Set the growth of your database files to large amounts - if your database is 10Gb, make the growth step 1Gb; as DBA you need to be in control of growth. Back in the days of SQL 6.x there was no auto grow: growth had to be pre-allocated, and pre-allocated growth lets you control fragmentation much better. Better one 1Gb growth than a hundred 10Mb growths ( example below ).
  4. Set up your system databases, including tempdb, the same way - ideally you want no dynamic growth of these databases at all. I usually set master and msdb to an initial 25Mb with a 10Mb log and growth of 10Mb. The actual values will depend upon your usage of these databases. I have a custom stored procedure which clears old data from msdb, which helps maintain its size - after all, do you really want 5 years of database backup information? ( Example below. )
  5. You might want to consider migrating your larger databases to multiple files ( not filegroups ). It would be easier to defragment eight 10Gb files than one 80Gb file ( example below ).
  6. Use filegroups to separate static tables from dynamic/growing ones - you might also want to use multiple files within your filegroups, again on the basis that smaller files are easier to work on ( example below ).
  7. Add more memory ( configuration example below ). I read so many posts about segmenting databases to avoid "hot spots", but ultimately the easiest way to avoid hot spots is to get the data into cache. Consider a 100Gb database on a server with minimal memory, say 3Gb: at best the data cache will be about 1.25Gb. The actual value will vary, but whichever way you slice and dice your memory it supports no more than around 2% of your data. Taking the memory to, say, 32Gb gives you roughly a 30% data cache. Does it work? Well, I upgraded an 8 way box from 8Gb to 32Gb: cpu dropped by 75%, actual throughput ( transactions etc. ) jumped by around 50%, and i/o was turned around, with minimal read activity compared with before. As with all changes to server hardware, you may well introduce a new bottleneck!
  8. If possible, replace your disks with larger spindles; having more free space makes defragmentation easier. It used to be recommended to leave at least 25% free space on a disk/array; I personally think that for databases this figure should be higher, but again "it just depends". If your databases don't grow then it's not so much of a problem - though if they don't grow you won't get fragmentation of the database files, just of the backups.
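
For points 1 and 2, a quick sketch of how you might check for auto shrink across all databases. This uses DATABASEPROPERTYEX, so it assumes SQL Server 2000 or later, and MyDatabase is just a placeholder name:

    -- list any databases with auto shrink enabled ( 1 = enabled )
    SELECT name,
           DATABASEPROPERTYEX( name, 'IsAutoShrink' ) AS is_auto_shrink
    FROM   master.dbo.sysdatabases
    WHERE  DATABASEPROPERTYEX( name, 'IsAutoShrink' ) = 1

    -- and turn it off for any offender
    ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF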
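
For point 3, setting a sensible fixed growth step is a one-liner per file. The names here are placeholders; run sp_helpfile in the database to find your logical file names:

    -- replace the default 10% growth with a fixed 1Gb step
    ALTER DATABASE MyDatabase
    MODIFY FILE ( NAME = MyDatabase_Data, FILEGROWTH = 1024MB )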
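
For point 4, something along these lines. The sizes are only examples, and tempdev and templog are the default tempdb logical names, so check yours first. The msdb clean-up shown uses the supplied sp_delete_backuphistory rather than my own procedure:

    -- pre-size tempdb so it shouldn't need to auto grow
    ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 100MB )
    ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, SIZE = 256MB, FILEGROWTH = 50MB )

    -- keep msdb trim by removing backup history older than three months
    DECLARE @cutoff datetime
    SET @cutoff = DATEADD( month, -3, GETDATE() )
    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff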
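
For point 5, adding extra files to an existing database is straightforward ( names, path and sizes below are all placeholders ):

    -- add a second data file rather than letting one file grow huge
    ALTER DATABASE MyBigDatabase
    ADD FILE ( NAME = MyBigDatabase_Data2,
               FILENAME = 'D:\Data\MyBigDatabase_Data2.ndf',
               SIZE = 10240MB,
               FILEGROWTH = 1024MB )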
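
For point 6, a sketch of separating static data onto its own filegroup ( again, all the names are placeholders ):

    -- create a filegroup and give it a file
    ALTER DATABASE MyDatabase ADD FILEGROUP StaticData

    ALTER DATABASE MyDatabase
    ADD FILE ( NAME = MyDatabase_Static1,
               FILENAME = 'E:\Data\MyDatabase_Static1.ndf',
               SIZE = 2048MB,
               FILEGROWTH = 256MB )
    TO FILEGROUP StaticData

    -- new static tables can then be placed on it
    CREATE TABLE dbo.CountryCode
    ( Code char(2) NOT NULL PRIMARY KEY,
      CountryName varchar(100) NOT NULL
    ) ON StaticData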
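
And for point 7, once the memory is in the box remember to let SQL Server use it. The 28672 here is just an example value, and on 32 bit systems you may also need AWE enabled:

    -- raise max server memory after a RAM upgrade
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    EXEC sp_configure 'max server memory (MB)', 28672
    RECONFIGURE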

 

 

(c) grumpyolddba.co.uk 2007.

Edited from a previous blog post.