Some Suggested SQL Server 2008 R2 Instance Configuration Settings

Depending on your anticipated workload and which SQL Server components you have installed, there are a number of instance-level settings that you should consider changing from their default values. These include Max Server Memory, optimize for ad-hoc workloads, default backup compression, max degree of parallelism, and the number of TempDB data files for the SQL Server instance.

Max Server Memory

Max Server Memory controls how much memory can be used by the SQL Server buffer pool, which is where cached data is held in memory after it is read in from the disk subsystem. It applies only to the relational Database Engine: it has no effect on other SQL Server components, such as SSIS, SSRS, or SSAS, and it does not govern other Database Engine memory consumers such as Full-Text Search, the CLR, connection memory, etc.

The idea here is to limit how much memory the buffer pool can use in order to avoid starving other SQL Server components or the operating system for memory. The table in Listing 1 has some suggested starting values for a dedicated database server that is only running the Database Engine, with no other SQL Server components running. If you have other components installed, you should adjust the Max Server Memory setting downward. The setting is dynamic, meaning that you can change it and it will take effect immediately, with no restart of SQL Server required.
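
One quick way to judge whether your current setting is leaving enough headroom is to check the operating system's view of memory. Here is a simple sketch, using the sys.dm_os_sys_memory DMV that was added in SQL Server 2008:

-- Check memory at the OS level (SQL Server 2008 and above)
SELECT total_physical_memory_kb/1024 AS [Total Physical Memory (MB)],
       available_physical_memory_kb/1024 AS [Available Memory (MB)],
       system_memory_state_desc
FROM sys.dm_os_sys_memory;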

Optimize for ad-hoc workloads

Optimize for ad-hoc workloads is an instance-level setting, new in SQL Server 2008, that is designed to help control the amount of memory used by single-use, ad-hoc query plans in the procedure cache. It allows SQL Server to store only a small stub of an ad-hoc query plan in the procedure cache the first time the plan is executed, which reduces the memory that plan requires.

With SQL Server 2005, it was very common to see very large amounts of memory being used by single-use, ad-hoc query plans (often in the 6 to 8 GB range). Later builds of SQL Server 2005 included changes that reduced this problem somewhat, but it was still a big issue. Interestingly, one of the biggest offenders generating ad-hoc query plans in SQL Server 2005 was SQL Server Agent! Another big offender was SharePoint.

In my opinion, you should always enable this setting on SQL Server 2008 and above. I really cannot think of a good reason not to do this.
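
If you want to measure how big the problem is on your own instance before you flip the switch, a rough query like this totals the procedure cache memory used by single-use ad-hoc plans:

-- Total memory used by single-use ad-hoc query plans
SELECT SUM(CAST(size_in_bytes AS BIGINT))/1048576 AS [Single-Use Ad-Hoc Plans (MB)],
       COUNT(*) AS [Plan Count]
FROM sys.dm_exec_cached_plans
WHERE objtype = N'Adhoc'
  AND usecounts = 1;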

Default backup compression

If you have a version and edition of SQL Server that supports native backup compression, I believe you should enable this setting, as shown in Listing 1. This simply means that all SQL Server backups will use native backup compression by default. You can always override this setting in your backup T-SQL command. SQL Server 2008 Enterprise Edition supports native backup compression, as does SQL Server 2008 R2 Standard Edition.
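
As a quick illustration, overriding the instance default for a single backup looks something like this (the database name and file path here are placeholders only):

-- Override the instance-level default for one backup
-- (database name and path are placeholders)
BACKUP DATABASE AdventureWorks
TO DISK = N'G:\SQLBackups\AdventureWorks.bak'
WITH NO_COMPRESSION;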

Max degree of parallelism

Max degree of parallelism is an instance-level configuration setting that controls whether, and across how many processor cores, the Query Optimizer will attempt to spread a complex or expensive query to run in parallel. The default setting is zero, which allows SQL Server to parallelize queries across as many cores as it sees fit. This default is usually the best choice for DW and reporting workloads, but it can often be problematic for OLTP workloads. With OLTP workloads, the Query Optimizer will sometimes choose a parallel query plan when a non-parallel plan would actually have been less expensive. If this happens a lot, it can cause a high percentage of CXPACKET waits for the instance.

In the past, many people would advise you to immediately change your max degree of parallelism setting to a value of 1 if you had an OLTP workload and you saw a high percentage of CXPACKET waits in your cumulative wait statistics for the instance. I think you should look a little deeper to try to find the actual source of the problem before you make that change. In my experience, it is quite common for “missing indexes” for important queries to cause the Query Optimizer to choose a parallel plan to try to compensate for the missing index. If SQL Server has to do a very large index or table scan, the Query Optimizer may think that it will be less expensive to parallelize the query.
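
A simple way to check is a cumulative wait statistics query, roughly like the sketch below. Note that it does not filter out benign background waits, so interpret the percentages with some care:

-- Cumulative wait statistics, highest first
-- (look for the relative share of CXPACKET)
SELECT wait_type, wait_time_ms,
       CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER()
            AS DECIMAL(5,2)) AS [Pct of Total Waits]
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;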

If that is the case, it would probably be better to create the missing index to alleviate the source of the problem, instead of setting max degree of parallelism to 1 to treat the symptom. It is certainly worth some investigation. One big exception to this advice is if you are working with SharePoint databases (for either SharePoint 2007 or 2010). In that case, you should definitely set max degree of parallelism to 1 for the instance.
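
Coming back to the missing-index investigation: the missing index DMVs are one place to start. A rough query along these lines surfaces candidates, although you should evaluate each suggestion carefully rather than creating indexes blindly:

-- Missing index suggestions, by estimated impact
-- (treat these as hints to investigate, not commands)
SELECT mid.[statement] AS [Table],
       mid.equality_columns, mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact DESC;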

Number of TempDB data files

By default, SQL Server will create one small TempDB data file (and one log file) in the default location for TempDB on your SQL Server instance. If you ignored my advice about default file locations, the TempDB files will be in a sub-directory on the same drive where your SQL Server binary files are located. This is likely your C: drive on your database server, which is not a good location! If your TempDB is on your C: drive, you need to move it someplace better, which is hopefully a fast, dedicated logical drive.

You also need to create some additional TempDB data files, which should all be the same size. If all of the TempDB data files are the same size, SQL Server will use all of them equally. The reason to have multiple data files is to reduce possible allocation contention, as objects are created and destroyed in TempDB.

The old guidance from Microsoft was to create one TempDB data file per physical processor core. Now, in the days of eight- and twelve-core processors, I think this is excessive. A general consensus seems to be forming that you should start out with perhaps four or eight TempDB data files (making them all the same size), and then look for signs of allocation contention before you consider creating any more.
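
To make this concrete, here is a sketch of moving TempDB and adding a second, equally sized data file. The T: drive, file sizes, and growth increments are example values only; tempdev and templog are the default logical file names, and the file moves take effect only after the instance is restarted:

-- Move the existing TempDB files to a dedicated drive
-- (T: is an example path; requires an instance restart)
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');

-- Add a second data file, sized the same as the first
-- (sizes are examples only)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\TempDB\tempdb2.ndf',
          SIZE = 4096MB, FILEGROWTH = 1024MB);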

Listing 1 shows some queries that let you check your current instance configuration settings, and some examples of how you might change them. These are example settings only; you need to use your own judgment for what values you choose.

 

-- Get configuration values for instance
SELECT name, value, value_in_use, [description] 
FROM sys.configurations
ORDER BY name;

-- Allow advanced options to be changed
-- (max server memory, optimize for ad hoc workloads, and
--  max degree of parallelism are advanced options)
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

-- Set max server memory = 59000MB for the server 
-- (example value only)
EXEC sp_configure 'max server memory (MB)', 59000;
GO
RECONFIGURE;
GO


-- Some suggested Max Server Memory settings
-- Physical RAM     Max Server Memory Setting
--    4GB            3200
--    6GB            4800
--    8GB            6200
--    16GB           13000
--    24GB           20500
--    32GB           28000
--    48GB           44000
--    64GB           59000
--    72GB           67000
--    96GB           90000


-- Enable optimize for ad-hoc workloads 
-- (new in SQL Server 2008)
EXEC sp_configure 'optimize for ad hoc workloads', 1;
GO
RECONFIGURE;
GO

-- Enable backup compression by default 
-- (new in SQL Server 2008 Enterprise Edition)
-- (added to SQL Server 2008 R2 Standard Edition)
EXEC sp_configure 'backup compression default', 1;
GO
RECONFIGURE;
GO

-- Set MAXDOP = 1 for the server
-- Depends on workload and wait stats
EXEC sp_configure 'max degree of parallelism', 1;
GO
RECONFIGURE;
GO

Listing 1: SQL Server Instance Configuration Setting Queries


28 Responses to Some Suggested SQL Server 2008 R2 Instance Configuration Settings

  1. SQLRockstar says:

    Glenn,

    If I have 32GB of RAM, and I allocate 29 to SQL Server, leaving 3 for the O/S, then shouldn’t I expect to see O/S memory consumption of close to 100% most of the time? Also, do those memory settings differ for x32 versus x64 versions, or between win2k3 and win2k8?

    Thanks for the post.

    • Glenn Berry says:

      Thomas,

      Well, it depends on your workload. These are just suggested starting settings for a dedicated server that is only running the database engine. The Max Server Memory setting is dynamic, so it is pretty easy to change. The idea is to have roughly 2-4GB of free memory after the system is under a load for a while. I would use a lower setting for Windows Server 2003 vs. Windows Server 2008 and above. I would also use a lower setting for SQL Server 2005 vs. SQL Server 2008 and above. The newer versions have better memory management, and the OS and SQL Server communicate with each other better.

      I would use the same settings for x86 and x64. AWE memory in x86 can only be used for the buffer pool, which is what we are trying to control here.

  2. Jeff York says:

    Glenn-

    Very nice. Good information presented clearly and succinctly. Thanks!

  3. John M. Couch says:

    Awesome post! Here are the settings I typically use when setting up a new SQL Server Instance, assuming it is the only thing running on the server (meaning single Instance).

    OS Reserved Memory:

    • 3rd Party Applications (i.e. Backup, Virus Scan…etc) – 1-3GB
    • OS – 1-2GB
    • MPA’s – 1GB
    • Worker Threads – See Chart…

    Number of CPUs (Cores)    32-Bit     64-Bit
    <= 4                      128 MB     1 GB
    8                         144 MB     1 GB (rounded down)
    16                        176 MB     1.5 GB (rounded up)
    32                        240 MB     2 GB (rounded up)

    Note: Start with min amount, evaluate, and adjust as necessary. Each server may be different depending on the Number of Workers, MPA’s and 3rd Party Applications required for the box.

    On a server with 32GB of RAM and 16 Cores, I would start with 4-4.5GB free for the OS leaving 27.5-28GB assigned to SQL Server. Adjust higher if necessary. It sounds like a lot to leave for the OS, but I have found it to work perfectly in my environments.

    For TempDB, I have always used the philosophy of starting out with 2 data files, and wait to see if I see any contention. Then add 1 file at a time until the contention subsides. I have rarely seen any reason to go beyond the 2 data files myself, but as we all know, these settings are truly environment/workload specific.

    Just thought I would share what I do, and thanks again for the awesome article.

  4. Glenn, thanks for sharing this info – very useful.

    Do you think the suggested settings will change when dealing with SharePoint? I realize a database server is a database server, regardless of what’s using it, but still…

    • Glenn Berry says:

      Some settings will change when dealing with SharePoint. A good example is max degree of parallelism, which (according to Microsoft and my own experience) should be set to 1, when you have SharePoint databases on your instance.

      • Alex Bransky says:

        This is bad advice as a universal rule, though it may be fine in general. I set my heavily used SharePoint 2010 instance to MAXDOP = 1 and it became unusable. I had to set it back to 4 (obviously not ideal for everybody) to get things working again. We’re storing huge lists in SharePoint though, with many wide columns, which I don’t recommend at all.

  5. Pingback: @GlennAlanBerry posts Some Suggested SQL Server 2008 R2 Instance Configuration Settings | sqlmashup

  6. Rowland Gosling says:

    Thanks for another good article Glenn.

    As I recall, a MAXDOP setting greater than 1 isn’t supported with some 3rd party products (such as SharePoint 2010 and Microsoft CRM) due to locking problems. Nice point on the missing index aspect!

    As to the number of data files in TempDB, 8 seems to be the magic number around here.

  7. Anandan Kanagarajan says:

    Hi Glenn,
    In a recent SQL Server 2005/2008 performance tuning training session provided exclusively to our company staff, the trainer said that increasing the amount of RAM in a 32-bit operating system will lead to a decrease in the buffer pool size. He justified this comment with the formula below for calculating the buffer pool size:
    BPool size = min(min(MSM, RAM), 2GB − (MTL + MpBuffer + 50))
    MSM – Maximum Server Memory
    2GB – user-mode VAS
    MTL – Memory To Leave (outside the BPool)
    I may have missed jotting down some information or part of the formula. I would appreciate answers to the questions below:
    1. Will increasing the RAM in a 32-bit Windows-based server (dedicated to SQL Server) increase or decrease performance?
    2. How is the BPool size calculated, and how does it relate to the Max Server Memory configuration?

  8. Kevin Boles says:

    Good stuff as usual Glenn.

    1) I would mention monitoring for paging and memory issues in the max memory section. I also think your max settings are a tad high, based on my experience.

    2) Consider making a statement about NUMA boxes and setting MAXDOP equal to the number of cores in each node. A valid best practice I think, and NUMA boxes are becoming much more common these days.

  9. Mark Shay says:

    Great suggestions! I have been using the “optimize for ad-hoc workloads” setting in my benchmark testing and have seen some nice performance benefits. I have also used PARAMETERIZATION FORCED on my user database for load testing and have seen some nice results with that as well.

  10. Glenn Berry says:

    Mark,

    Good to hear those results. Using forced parameterization at the database level can be a little more risky in terms of the effect it has on performance, but I have used it myself in many cases with good results.

  11. Pingback: Something for the Weekend – SQL Server Links 04/03/11 | John Sansom - SQL Server DBA in the UK

  12. zahid khan says:

    I have a server that is dedicated to only reporting services. How much maximum memory should I allocate for my SQL server?

  13. zahid khan says:

    Sorry! I forgot to mention that the total memory on the server is 16GB.

    • John Couch says:

      Where I work, we leave 4GB minimum free to the OS regardless of what the instance supports. At max we have left 6GB free. What else is running on the server? What version of SQL Server? x64/x86?

  14. Orlando says:

    The configuration of “cost threshold for parallelism” is also a good item to consider when thinking about changing maxdop.

  15. Traci Dukes says:

    Can someone provide the recommended setting for Windows Server 2008 32-bit?
    Thanks, Traci

  16. wim robbrecht says:

    Hi, a question.

    I have a server which will host our ERP and production databases.
    The server I have is a blade server (HS22), with 24 processors and 96GB of RAM.
    My databases will be on a SAN (NetApp 3240), and the OS and TempDB will be on separate SSD disks.
    => What is the best configuration? (Should I configure a RAM disk of about 20GB for my TempDB?)
    => What MAXDOP should I use (I have 2 physical processors)? Should I set it to 8?
    => Should I divide the TempDB files into 8?
    => My databases are in SQL 2000 compatibility mode (I can’t change it due to support issues with the ERP).

    regards

    • Glenn Berry says:

      I think it is best to start out with MAXDOP set at the default value of zero. You should probably start out with 4 or 8 TempDB data files. Where you place your TempDB data files (and your user database data files and log files) depends on your workload, and what storage resources you have available.

  17. Kevin Boles says:

    I think you need to get some professional help on board to help you analyze your NEEDS, configure and MONITOR performance, and MENTOR you on how to do the same. Anything other than that will be nothing but (educated?) guessing.

  18. Jeff Bennett says:

    Glenn,

    Another page by you to bookmark. Thanks for writing this up. Any rules-of-thumb for VM’s that differ from physical servers?

  19. keelyp says:

    Hi Glenn
    What a great post! Can I ask: if I have a number of instances on a SQL server, should I divide up the max memory figure per instance? Or, if I set them all to the MAX number, is there some SQL algorithm that works it out, or is it likely that one instance could capture too much memory?

    Thanks in advance
    Paul Keely

    • Glenn Berry says:

      Paul,

      If you have multiple instances and set the Max Server Memory to a high enough number, the instances will essentially compete for the available memory. What people typically do in that situation is lower the Max Server Memory for each instance to a low enough number so that even if all of the instances are using that amount, the OS will not be under pressure. Rather than dividing the total equally, sometimes you might want to give one instance priority by setting its Max Server Memory to a higher amount. There is no SQL Server algorithm that takes care of this automatically.

  20. keelyp says:

    Thanks very much for the response Glenn. Just ordered a copy of your SQL hardware book, so I am hoping it’s as good as the blog!

    Best wishes
    Paul

  21. Glenn Berry says:

    Thanks for buying the hardware book, I hope you will enjoy reading it!
