Well, we are in the final week of the DMV a Day series for the month of April. I will be posting a recap of the series on May 3 that will have links and a summary for each post in the series. I really appreciate all of the comments and suggestions I have received about the series. Now, let's get back to the series.
The DMV for Day 26 is sys.dm_exec_procedure_stats, which is described by BOL as:
Returns aggregate performance statistics for cached stored procedures. The view contains one row per stored procedure, and the lifetime of the row is as long as the stored procedure remains cached. When a stored procedure is removed from the cache, the corresponding row is eliminated from this view. At that time, a Performance Statistics SQL trace event is raised similar to sys.dm_exec_query_stats.
This DMV allows you to discover a lot of very interesting and important performance information about your cached stored procedures. It is only available in SQL Server 2008 and 2008 R2, and it requires VIEW SERVER STATE permission.
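If you want a non-sysadmin login to be able to run the queries in this post, granting that permission is a one-liner (the login name [MonitorLogin] below is just a placeholder for illustration):

```sql
-- Grant the server-level permission required to query this DMV
-- [MonitorLogin] is a hypothetical login name; substitute your own
GRANT VIEW SERVER STATE TO [MonitorLogin];
```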
-- Top Cached SPs By Total Logical Reads (SQL 2008)
-- Logical reads relate to memory pressure
SELECT TOP(25) p.name AS [SP Name],
       qs.total_logical_reads AS [TotalLogicalReads],
       qs.total_logical_reads/qs.execution_count AS [AvgLogicalReads],
       qs.execution_count,
       ISNULL(qs.execution_count/DATEDIFF(Second, qs.cached_time, GETDATE()), 0) AS [Calls/Second],
       qs.total_elapsed_time,
       qs.total_elapsed_time/qs.execution_count AS [avg_elapsed_time],
       qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_logical_reads DESC;
Depending on which columns you include, and which column you ORDER BY, you can discover which cached stored procedures are the most expensive from several different perspectives. In this case, we are interested in finding out which stored procedures are generating the most total logical reads (which relates to memory pressure). I always like to run this query, but I would especially want to run it if I saw signs of memory pressure, such as a persistently low page life expectancy and/or persistent values above zero for memory grants pending. This query is filtered to the current database, but you can make it instance-wide by removing the WHERE clause.
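As one example of that perspective switch, a minimal variation of the same query (changing only the measured columns and the ORDER BY) ranks the same cached procedures by total CPU cost instead of logical reads; this is the version I would reach for under CPU pressure:

```sql
-- Top Cached SPs By Total Worker Time (relates to CPU pressure)
SELECT TOP(25) p.name AS [SP Name],
       qs.total_worker_time AS [TotalWorkerTime],
       qs.total_worker_time/qs.execution_count AS [AvgWorkerTime],
       qs.execution_count,
       qs.cached_time
FROM sys.procedures AS p
INNER JOIN sys.dm_exec_procedure_stats AS qs
ON p.[object_id] = qs.[object_id]
WHERE qs.database_id = DB_ID()
ORDER BY qs.total_worker_time DESC;
```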
One big caveat for this query is that you need to pay close attention to the qs.cached_time column as you compare rows in the result set. If you have stored procedures that have been cached for different amounts of time, that will skew the results. One easy (but perhaps controversial) solution to that problem is to periodically blow out your procedure cache with a SQL Agent job. That will cause all of your stored procedures to recompile and have their plans go back into the cache the next time they are executed. That means that most of the stored procedures that are run frequently and are part of your normal workload will have a similar cache time, and this query will be much easier to interpret (until and unless they get recompiled again for some other reason).
The controversy may come from the oft-repeated warning to never run DBCC FREEPROCCACHE (or its more selective, undocumented cousin DBCC FLUSHPROCINDB(db_id)) on a production server. Recompiling all of the query plans causes some extra work for your processor(s), but in my experience, modern processors shrug this off with almost no measurable effect. I think the other benefits (such as removing the single-use ad-hoc query plans) far outweigh the very slight amount of extra CPU required to recompile the query plans.
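If you do decide to set up that periodic SQL Agent job, the job step could run either the instance-wide or the single-database flavor; a minimal sketch (the database name below is a placeholder):

```sql
-- Instance-wide: clear the entire procedure cache
DBCC FREEPROCCACHE;

-- More selective (undocumented): clear cached plans for one database only
-- 'YourDatabase' is a hypothetical name; substitute your own
DECLARE @db_id int;
SET @db_id = DB_ID('YourDatabase');
DBCC FLUSHPROCINDB(@db_id);
```

Either way, remember that right after the job runs, the cached_time values will reset and the aggregate columns in sys.dm_exec_procedure_stats will start accumulating from zero again.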