Three Sessions at Spring 2012 Connections Conference

I will be a busy little beaver today, presenting three sessions at the Spring 2012 Connections Conference, the first one at 8AM! Hopefully enough people will be able to get up by then and join me. It should be a lot of fun!

SQL301: Scaling SQL Server 2012
Glenn Berry
How can you scale SQL Server 2012? Scaling up is relatively easy (but can be expensive), while scaling out requires significant engineering time and effort. If you suggest hardware upgrades you may be accused of simply “throwing hardware at the problem,” and if you try to scale out, you may be thwarted by a lack of development resources or 3rd party software restrictions. As your database server nears its load capacity, what can you do? This session gives you concrete, practical advice on how to deal with this situation. Starting with your present workload, configuration and hardware, we will explore how to find and alleviate bottlenecks, whether they are workload related, configuration related, or hardware related. Next, we will cover how you can decide whether you should scale up or scale out your data tier. Once that decision is made, you will learn how to scale up properly, with nearly zero down-time. If you decide to scale out, you will learn about practical, production-ready techniques such as vertical partitioning, horizontal partitioning, and data dependent routing. We will also cover how to use middle-tier caching and other application techniques to increase your overall scalability.

The deck for Scaling SQL Server is here.
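As a taste of the scale-out techniques mentioned in the abstract, here is a minimal, hypothetical sketch of horizontal (table) partitioning by date range in T-SQL. The table name, columns, and boundary values are illustrative only, not taken from the session materials:

```sql
-- Hypothetical example: horizontally partition an Orders table by year
CREATE PARTITION FUNCTION pf_OrdersByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');

-- Map all partitions to the PRIMARY filegroup (in production you would
-- typically spread partitions across multiple filegroups)
CREATE PARTITION SCHEME ps_OrdersByYear
AS PARTITION pf_OrdersByYear ALL TO ([PRIMARY]);

-- Create the table on the partition scheme, partitioned by OrderDate
CREATE TABLE dbo.Orders
    (OrderID int NOT NULL,
     OrderDate datetime NOT NULL,
     CustomerID int NOT NULL,
     CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID, OrderDate))
ON ps_OrdersByYear (OrderDate);
```

Queries that filter on OrderDate can then benefit from partition elimination, which is one way to keep a growing table manageable before resorting to a full scale-out architecture.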

SQL303: Hardware 301: Diving Deeper into Database Hardware
Glenn Berry
Making the right hardware selection decisions is extremely important for database scalability. Having properly sized and configured hardware can both increase application performance and reduce capital expenses dramatically. Unfortunately, there are so many different choices and options available when it comes to selecting hardware and storage subsystems, it is very easy to make bad choices based on outmoded conventional wisdom. This session will give you a framework for how to pick the right hardware and storage subsystem for your workload type. You will learn how to evaluate and compare key hardware components, such as processors, chipsets, and memory. You will also learn how to evaluate and compare different types of storage subsystems for different database workload types. This session will give you the knowledge you need to make sure you get the best performance and scalability possible from your hardware budget!

The deck for Hardware 301 is here.
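Before evaluating new hardware, it helps to confirm what SQL Server actually sees on the current server. A quick sketch using sys.dm_os_sys_info (column names assumed from the versions covered here; a few columns changed between releases):

```sql
-- What does SQL Server think the hardware looks like?
SELECT cpu_count AS [Logical CPU Count],
       hyperthread_ratio AS [Hyperthread Ratio],
       cpu_count / hyperthread_ratio AS [Physical CPU Count],
       sqlserver_start_time AS [SQL Server Start Time]
FROM sys.dm_os_sys_info OPTION (RECOMPILE);
```

Comparing the logical and physical CPU counts tells you whether hyper-threading is enabled, which is useful context for any processor comparison.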

SQL302: DMV Emergency Room!
Glenn Berry
If you have ever been responsible for a mission critical database, you have probably been faced with a high stress, emergency situation where a database issue is causing unacceptable application performance, resulting in angry users and hovering managers and executives. If this hasn’t happened to you yet, thank your lucky stars, but start getting prepared for your time in the hot seat. This session will show you how to use DMV queries to quickly detect and diagnose the problem, starting at the server and instance level, and then progressing down to the database and object level. Based on the initial assessment of the problem, different types of DMV queries will help you narrow down and identify the problem. This session will show you how to assemble and use an emergency DMV toolkit that you can use to save the day the next time a sick database shows up on your watch in the database ER!

The deck for DMV Emergency Room is here.

The script for DMV Emergency Room is here.
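As one example of the kind of instance-level triage query an emergency DMV toolkit typically starts with, here is a hedged sketch of a cumulative wait statistics query (the filtered wait types below are a common but abbreviated exclusion list, not the exact one from the script):

```sql
-- Top cumulative waits since the last restart (or since wait stats were
-- cleared); a typical first stop when diagnosing instance-level pressure
SELECT TOP (10) wait_type,
       wait_time_ms / 1000.0 AS [Wait Time (sec)],
       waiting_tasks_count AS [Wait Count]
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
                        N'BROKER_TO_FLUSH', N'CHECKPOINT_QUEUE')
ORDER BY wait_time_ms DESC OPTION (RECOMPILE);
```

The benign "system idle" wait types must be filtered out, or they will dominate the results and hide the waits that actually matter.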

This entry was posted in Computer Hardware, Microsoft Training, SQL Server 2008, SQL Server 2008 R2, SQL Server 2012, Storage Subsystems. Bookmark the permalink.

9 Responses to Three Sessions at Spring 2012 Connections Conference

  1. Chris Yates says:

    Hey Glenn, thanks so much for this. I did have one question. I am running the script in a 2008 R2 environment, and when I get to this section, the s.database_id in the SELECT returns a "column doesn't exist" error. I changed that to an r; I want to make sure that is what the intention was. Again, many thanks for providing these types of helpful scripts. Hope you have a great one!

    -- Who is running what at this instant (run multiple times)
    SELECT st.[text] AS [Command Text],[host_name], r.session_id AS [SPID],
    r.[status], db_name(s.database_id) AS [DatabaseName],
    ISNULL(r.wait_type, N'None') AS [Wait Type],
    r.logical_reads, r.cpu_time, r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    INNER JOIN sys.dm_exec_connections AS c
    ON r.session_id = c.session_id
    INNER JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st
    WHERE r.session_id > 50
    AND r.session_id <> @@SPID -- eliminate current connection
    AND s.is_user_process = 1
    ORDER BY r.[status] OPTION (RECOMPILE);

  2. David Washington says:

    Glenn,
    Thank you for all the very useful scripts that you develop for us DBAs. Just wanted to let you know there is a small error in the script for the DMV Emergency Room query: db_name(s.database_id) should be db_name(r.database_id). See below:

    -- Who is running what at this instant (run multiple times)
    SELECT st.[text] AS [Command Text],[host_name], r.session_id AS [SPID],
    r.[status], db_name(r.database_id) AS [DatabaseName],
    ISNULL(r.wait_type, N'None') AS [Wait Type],
    r.logical_reads, r.cpu_time, r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    INNER JOIN sys.dm_exec_connections AS c
    ON r.session_id = c.session_id
    INNER JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st
    WHERE r.session_id > 50
    AND r.session_id <> @@SPID -- eliminate current connection
    AND s.is_user_process = 1
    ORDER BY r.[status] OPTION (RECOMPILE);

    Thanks again for the tremendous work you do for us.

  3. Daniel Imamura says:

    In your script, there is a list of missing indexes sorted by index_advantage. Are the indexes listed there ones that should be considered for creation?

    Thanks!

    • Glenn Berry says:

      They should be considered, but not automatically created. You need to evaluate your existing indexes and your overall workload before you decide to add any new indexes.

      • Daniel Imamura says:

        With regard to index_advantage values, assuming a bigger number means more bang for the buck, is there any rule of thumb regarding those index_advantage values?

        Thanks!
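For context on the index_advantage column discussed above, here is a hedged sketch of how such a metric is commonly derived from the missing index DMVs. The exact expression in the downloadable script may differ; the formula below is a widely used convention, not a quote from it:

```sql
-- Sketch: rank missing index suggestions by an estimated "advantage"
-- (seek count x avg query cost x avg expected improvement percentage)
SELECT CONVERT(decimal(18,2),
       migs.user_seeks * migs.avg_total_user_cost *
       (migs.avg_user_impact * 0.01)) AS [index_advantage],
       mid.[statement] AS [Database.Schema.Table],
       mid.equality_columns, mid.inequality_columns, mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid
ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC OPTION (RECOMPILE);
```

As the reply above notes, a high value flags a candidate worth investigating, not an index to create automatically; always weigh it against existing indexes and the overall workload.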
