I sincerely hope that everyone enjoyed the recent holidays and downtime with family, and I wish you all peace, happiness and health for 2013.
The goal of this post is to aid in understanding the procedure cache and execution plans, so that our database queries use fewer resources and run optimally. Better means higher throughput, more concurrency and fewer resources, as described by SQL Server MVP Joe Webb during a session at SQLTeach 2009 in Vancouver (my original hometown, photos of the new convention centre below), where I first took a serious look at this subject. Of course, this is simply a quick blog post, so for those of you desiring far greater detail, please see Grant Fritchey's FREE e-book on Execution Plans, now in its 2nd edition.
[Image: Vancouver's New Convention Centre (Dexigner.com)]
In the result set produced by the query below (thanks to the dynamic management views sys.dm_exec_query_stats and sys.dm_exec_query_plan), the last column on the right provides the XML representation of the plan chosen by the Query Optimiser, which SSMS can render graphically. When reading an execution plan, we work from right to left, paying close attention to any statements with a high relative execution cost, and to the details displayed for each operator: physical operation, logical operation, I/O cost, CPU cost, rows and row size. For easier readability, zooming in and out is available too, as well as the Properties window, which I usually have hidden unfortunately... and forget about (to my own disappointment).
-- view all the recent queries, top fifty in this case
SELECT TOP 50 DB_NAME(qp.dbid) AS [Database], qp.number, qt.text AS [QueryText],
       qs.total_logical_reads AS [Reads],
       SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(qt.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS [StatementText],
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.last_execution_time DESC;
For example, say your stored procedure performs recurrent updates, and perhaps you have changed up to 50% of your rows, so the table statistics have changed as a result. The plan that was originally created is now out of date: maybe there is a better index to use, or the underlying data has changed to the point where the stored procedure needs to recompile. In reality, it's best to make your decisions on the Actual plan, and not the Estimated plan. To see what I mean, in SQL Server Management Studio click Query and select Include Actual Execution Plan, or, better yet, to see the full before/after story, compare it with Query > Display Estimated Execution Plan.
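If you suspect a plan has gone stale in exactly this way, you can nudge SQL Server along. A minimal sketch (the procedure, table and parameter names here are hypothetical, not from the post):

```sql
-- mark a stored procedure so its cached plan is rebuilt on next execution
EXEC sp_recompile N'dbo.usp_GetOrders';

-- or recompile one statement on every run, trading CPU for plan freshness
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- and refresh the statistics that drove the bad plan in the first place
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

sp_recompile is the gentler option, since it only invalidates the plan rather than hinting every execution.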
You will want to watch out for clustered index scans, since they are synonymous with a table scan, which should be avoided and usually indicates a problem with the query. If you see a table scan, change the heap to a clustered index, or add an index on the specific column(s) necessary; clustered index seeks are a good sign optimisation-wise. If the tables are of an insignificant size, however, don't bother splitting hairs.
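As a minimal sketch of that fix (the table and column names are hypothetical), turning a heap into a clustered index so the scan can become a seek:

```sql
-- dbo.Customers is a heap being table-scanned on CustomerID lookups;
-- a clustered index lets equality searches become clustered index seeks
CREATE CLUSTERED INDEX CIX_Customers_CustomerID
    ON dbo.Customers (CustomerID);

-- a narrower alternative: a nonclustered index on just the searched column
CREATE NONCLUSTERED INDEX IX_Customers_LastName
    ON dbo.Customers (LastName);
```

Rebuilding a large heap into a clustered index is an offline-ish, log-heavy operation, so schedule it accordingly.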
BTW - alternate methods to produce execution plans are: Profiler, the Performance Dashboard, and SQL Server 2008's Data Collector.
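There is also a purely scripted route when you want the estimated plan without executing the statement at all (the SELECT below is just a placeholder query):

```sql
-- return the estimated plan as XML instead of executing the batch;
-- SET SHOWPLAN_XML must be the only statement in its batch, hence the GOs
SET SHOWPLAN_XML ON;
GO
SELECT * FROM dbo.Customers WHERE CustomerID = 42;
GO
SET SHOWPLAN_XML OFF;
GO
```

Handy for capturing plans from a scheduled script where the SSMS toolbar buttons aren't available.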