Using Synthetic Benchmarks To Help Evaluate Database Server Hardware Choices

As a DBA, I think it is very important to stay current on trends in x64-based hardware. The DBA is much closer to the hardware than many other IT professionals, and improperly chosen or sized hardware can cause a lot of pain for both the DBA and everyone who depends on the system. Whether you are evaluating existing hardware or looking to purchase new hardware, understanding the differences between processor families, chipsets, memory types, etc. is essential if you want to make intelligent hardware choices.

I am purposely ignoring the I/O subsystem for the purposes of this discussion, which you would not want to do in real life (especially for a database server). I have seen far too many people buy a “big” new database server with multiple multi-core processors and lots of fast RAM, and then try to use only the internal drives on the server for the entire I/O subsystem. This is like a weight lifter who only works on his upper body while ignoring his legs. The end result is not pretty…

One useful, easy-to-use synthetic benchmark is Geekbench, by Primate Labs. You can download and try the 32-bit evaluation version (which works just fine on x64 versions of Windows). It is a cross-platform, processor- and memory-specific benchmark. Here is how Primate Labs describes it:

Geekbench provides a comprehensive set of benchmarks engineered to quickly and accurately measure processor and memory performance. Designed to make benchmarks easy to run and easy to understand, Geekbench takes the guesswork out of producing robust and reliable benchmark results.

The nice thing about this benchmark is that it has no configuration options whatsoever, and it only takes about two to three minutes to run. I like to run it at least three times on an otherwise idle system, and then average the results. This quick-and-dirty test gives me a pretty good indication of the CPU and memory performance of a given system. This is important even for database servers (which are often I/O bound), since some types of queries are very CPU dependent, and with features like data and backup compression in SQL Server 2008 Enterprise Edition, it is nice to be able to trade CPU for reductions in I/O and storage space.
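The run-it-three-times-and-average approach can be sketched in a few lines of Python; the scores below are made-up placeholders, not real results:

```python
from statistics import mean

# Hypothetical Geekbench scores from three runs on an otherwise idle system
scores = [2459, 2431, 2502]

average = mean(scores)
print(f"Average Geekbench score: {average:.0f}")  # prints "Average Geekbench score: 2464"
```

Averaging several runs smooths out background noise from the OS and services, which matters when two systems score within a few percent of each other.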

I have been putting together a performance lab at work, using some retired production servers that are three to five years old. The servers I had to choose from were all Dell eighth generation servers (PowerEdge 1850, 2850, and 6800) that all have Pentium 4-based Xeon processors that support the first generation of Intel hyper-threading (which did not work so well for SQL Server workloads). All of these processors are x64 capable, so I am able to run x64 Windows Server 2008 SP2 or Windows Server 2008 R2 (which is x64 only).

I ran GeekBench 2.1.4 on several of these machines, with the results shown below:

System                                                  GeekBench
(1) 3.2GHz Xeon with 2MB L2 cache, HT enabled                1737
(2) 2.8GHz Xeon with 2MB L2 cache, HT enabled                2047
(2) 3.2GHz Xeon with 2MB L2 cache, HT enabled                2459
(4) 3.4GHz Xeon 7140 with 16MB L3 cache, HT disabled         5282

This shows that these older Xeons do not scale very well as you add a second processor, and they are quite under-powered compared to a modern workstation or server.

By comparison, here are the results for a few more modern systems:

System                                                  GeekBench
(1) 2.83GHz Xeon 5440 quad-core                              4897
(2) 2.83GHz Xeon 5440 quad-core                              7953
(2) 2.66GHz Xeon 5550 quad-core, HT enabled                 12458
(1) 2.66GHz Core i7 920, HT enabled                          7141
(1) 2.8GHz Core i7 860, HT enabled                           6960
(1) 2.83GHz Core2 Quad Q9550                                 5078
(1) 3.0GHz Core2 Duo E8400                                   3411

These modern Core2- and Nehalem-based processors scale much better as you add a second processor, and they perform much better than their predecessors (at least on this benchmark). This type of information is very useful when you are doing capacity planning, server consolidation, or trying to justify new hardware.
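One way to quantify that scaling claim is to divide the two-socket score by the one-socket score for matching systems in the tables above; a quick sketch:

```python
# Geekbench scores taken from the tables above
old_one_socket, old_two_socket = 1737, 2459   # 3.2GHz Pentium 4-era Xeon
new_one_socket, new_two_socket = 4897, 7953   # 2.83GHz Xeon 5440 quad-core

old_speedup = old_two_socket / old_one_socket
new_speedup = new_two_socket / new_one_socket

print(f"Old Xeon, 1 -> 2 sockets: {old_speedup:.2f}x")   # prints "Old Xeon, 1 -> 2 sockets: 1.42x"
print(f"Xeon 5440, 1 -> 2 sockets: {new_speedup:.2f}x")  # prints "Xeon 5440, 1 -> 2 sockets: 1.62x"
```

So the second socket buys only about a 1.42x improvement on the old Xeons, versus roughly 1.62x on the Xeon 5440, which is the scaling difference described above expressed as a number.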

