+44 (0)115 784 6434 hello@n4stack.co.uk

Suffering SQL Server Performance Issues? A story of Olympic sprints and handy SQL Server hints!

Usain Bolt, the world’s fastest runner, now boasts his third successive 100m Olympic title. That level of performance, the confidence in success when the pressure is at its greatest, is an amazing achievement. However, the ability to start writing a blog on the performance of Usain Bolt and then draw a very iffy link to SQL Server performance tuning surely has to surpass even this. Some may say it’s just a dirty, low-down, good-for-nothing tactic… Well, yes, but, but, but…

Read on, because when your business and colleagues are looking at you in anticipation, hoping, expecting, needing the database to perform during a critical trading period, wouldn’t it be great to smile confidently and, if the feeling takes you, pull out an impromptu “Lightning Bolt” swagger in the boardroom?

Of course it would, so here are 5 handy hints to help you on the road to achieving stunning SQL Server performance:

1) Review your index usage: Make sure you review missing and redundant indexes, as well as overlapping or duplicate indexes. Duplicates consume resources unnecessarily on every write, while missing indexes can stop the optimiser choosing the most efficient plan.
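As a starting point, the index usage DMVs can highlight candidates for review. A minimal sketch, to be run per database (treat the output as a prompt for investigation, not an automatic drop list):

```sql
-- Sketch: non-clustered indexes that are heavily written but rarely read.
-- High user_updates with low seeks/scans/lookups suggests a review candidate.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON us.object_id   = i.object_id
      AND us.index_id    = i.index_id
      AND us.database_id = DB_ID()
WHERE i.type_desc = 'NONCLUSTERED'
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY us.user_updates DESC;
```

For the other side of the coin, sys.dm_db_missing_index_details lists indexes the optimiser wished it had. Remember these DMVs reset on instance restart, so baseline over a representative period.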

2) Make sure that your IO doesn’t s*ck: Whilst PAGEIOLATCH waits are usually associated with disk IO throughput issues, they can also be caused by excessive T-SQL query parallelism. This is typically seen on systems with many logical cores where max degree of parallelism is left at the default value of 0 (unlimited, across all available cores), so the storage is overwhelmed by too much concurrency. Use Windows Perfmon to determine your read/write latency.
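Alongside Perfmon, SQL Server tracks cumulative IO stalls per file, which gives average latency since the last restart. A minimal sketch:

```sql
-- Sketch: average read/write latency (ms) per database file since the
-- last restart; complements the Perfmon disk latency counters.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY avg_read_ms DESC;
```

As a rough guide, sustained data-file read latency well above single-digit milliseconds on modern storage is worth investigating.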

3) Check tempdb is configured properly – the default may not be the best option: tempdb is not just used for the temporary tables you create; the SQL Server engine also uses it for sorting and joining data, read committed snapshot isolation, availability groups, index rebuilds that sort in tempdb, and so on. It’s important that tempdb is configured properly. Prior to SQL Server 2016 you start, by default, with just one data file, but a good rule of thumb is:

  • If you have 8 or fewer logical processors (with hyperthreading enabled), use the same number of equally sized data files as logical processors;
  • If you have more than 8 logical processors, start with 8.

Baseline first, then check performance after the change by monitoring the Transactions/sec counter for the tempdb database in Windows Perfmon. Trace flags 1117 and 1118 can also help squeeze further performance out of tempdb (from SQL Server 2016 onwards their behaviour is built in for tempdb).
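Applying the rule of thumb above, a sketch for a 4-core box might look like the following (file names, paths and sizes are illustrative, not a prescription):

```sql
-- Sketch: resize the existing tempdb data file, then add files so all
-- four are equally sized with identical growth increments.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 1024MB, FILEGROWTH = 256MB);
-- ...repeat for tempdev3 and tempdev4 so all four files match.
```

Equal sizes matter because SQL Server’s proportional fill algorithm only spreads allocations evenly across files of the same size.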

4) Avoid zero-initialisation of data files: The default file autogrowth setting of 1MB (or 10%) is another one to change. A recommended approach to setting autogrowth would be as follows:

Datafiles for user databases
• Primary should be 20MB, auto-growth 10%, size unlimited (it won’t be used for user data)
• Create a secondary filegroup and make it the default filegroup (size unlimited) for the database with either:
– 1 * 20MB file (increment by 10MB) for a very small database <= 1GB
– 1 * 100MB file (increment by 50MB) for a small database <= 10GB
– 2 * 500MB files (increment by 250MB) for a medium database <= 100GB
– 4 * 1GB files (increment by 500MB) for a large database <= 500GB
– 6 * 2GB files (increment by 500MB) for an extra large database <= 1000GB
– 8 * 3GB files (increment by 500MB) for a super large database <= 1500GB

Transaction logs for user databases
• Create transaction log file with either:
– 1 * 50MB file with a 50MB (4 VLFs) increment for a small database OR a database in SIMPLE recovery mode, size unlimited
– 1 * 500MB file with a 500MB (8 VLFs) increment for everything else, size unlimited

When a data file grows, SQL Server fills the newly allocated part of the file with zeroes, which is quite unnecessary. To skip this part of the operation, enable instant file initialisation, i.e. grant the SQL Server service account the “Perform Volume Maintenance Tasks” privilege. The newly allocated space can then be used immediately. Note this only applies to data files, not transaction log files, which always have to be zeroed out.
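On SQL Server 2016 SP1 and later, you can confirm the privilege took effect without trawling the error log; a minimal sketch:

```sql
-- Sketch: check whether instant file initialisation is enabled for the
-- database engine service account (column available from 2016 SP1).
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```

On earlier versions, the engine logs a message about instant file initialisation at startup instead, so check the error log after granting the privilege and restarting the service.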

5) Schedule maintenance routines for optimal performance: Defragmenting indexes and updating statistics on a regular basis is vital in enabling SQL Server to choose accurate execution plans.
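A sketch of how you might find the indexes worth maintaining, using the common rule of thumb of reorganising between roughly 5% and 30% fragmentation and rebuilding above that (the thresholds and index/table names are illustrative starting points, not hard rules):

```sql
-- Sketch: list fragmented non-heap indexes in the current database.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent AS frag_pct
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND i.name IS NOT NULL;

-- Then, per index (names are hypothetical):
-- ALTER INDEX [IX_Example] ON [dbo].[Orders] REORGANIZE;  -- ~5–30% fragmented
-- ALTER INDEX [IX_Example] ON [dbo].[Orders] REBUILD;     -- > ~30% fragmented
-- UPDATE STATISTICS [dbo].[Orders];
```

In practice most shops wrap this logic in a scheduled job or an established maintenance solution rather than running it by hand.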

Hopefully these tips come in useful, and enjoy the rest of the Olympic Games! If you do need a hand with a particularly difficult database performance issue, then please give N4Stack a call.
