The goal of this post is to understand the procedure cache and execution plans so that our queries use fewer resources and run better. Better means higher throughput, more concurrency and fewer resources, as described by MVP Joe Webb during SQLTeach in Vancouver just last month.
The last column returned by querying the procedure cache (see the query below) provides a graphical, text-based or XML-based representation of the data retrieval methods chosen by the Query Optimiser. When reading an execution plan, we read it from right to left, paying close attention to the relative execution costs (especially where they are high) of the individual operators displayed: physical operation, logical operation, I/O cost, CPU cost, rows, and row size. Zooming in and out is available too, as well as the Properties window (which I usually have hidden...and forget about).
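If you prefer to capture that XML representation straight from T-SQL rather than through the SSMS toolbar, here is a minimal sketch; the Person.Contact table from AdventureWorks is only an assumed example, substitute any table of your own:

-- Return the estimated plan as XML, without executing the statement
SET SHOWPLAN_XML ON;
GO
SELECT LastName, FirstName
FROM Person.Contact          -- assumed AdventureWorks table
WHERE LastName = 'Smith';
GO
SET SHOWPLAN_XML OFF;
GO

-- Or execute the statement and return the actual plan as XML alongside the results
SET STATISTICS XML ON;
GO
SELECT LastName, FirstName
FROM Person.Contact
WHERE LastName = 'Smith';
GO
SET STATISTICS XML OFF;
GO

Clicking the XML result in SSMS opens the same graphical plan, read right to left, with the tooltips described above.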
There may be a thousand ways to resolve a query; finding one that is good enough to return the results in the most efficient way is what the query optimiser does. It uses a cost-based algorithm to estimate the best plan, then passes it on to the part of SQL Server that processes the query, where the plan may be changed or re-estimated. For example, suppose your stored procedure performs recurrent updates and you have changed up to 50% of your rows, so the table statistics have changed as a consequence. The plan that was created originally no longer fits the new statistics: maybe there is now a better index to use, or the underlying data has changed to the point where the plan needs to recompile.

In practice, it's best to base your decisions on the Actual plan, not the estimated plan (in SSMS click Query, Include Actual Execution Plan, or to see the full before/after story, compare with Query, Display Estimated Execution Plan). Watch out for clustered index scans, since they are effectively the same as a table scan, and that is to be avoided or indicates there is a problem with the query. If you see a table scan, change the heap to a clustered index, or add an index to the specific column necessary; clustered index seeks are a good sign optimisation-wise. If the tables are of an insignificant size, however, don't bother splitting hairs.
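To make the scan-versus-seek point concrete, here is a hedged sketch on a made-up dbo.Orders heap; the table, column and index names are assumptions, not from any real database:

-- On a heap with no useful index, this filter produces a Table Scan
SELECT OrderID, OrderDate
FROM dbo.Orders               -- hypothetical heap
WHERE CustomerID = 42;

-- Option 1: change the heap into a clustered index on the filtered column,
-- so the optimiser can do a Clustered Index Seek instead of a Table Scan
CREATE CLUSTERED INDEX CIX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- Option 2: add a nonclustered index on the specific column, giving an Index Seek
-- (possibly followed by a lookup to fetch the remaining columns)
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);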
As mentioned by Brad McGehee back in December 2008 at SQLTeach in Montreal, an interesting way to do trial and error is to write different versions of the query in the same batch and let SQL Server give you the estimated/actual plans for the whole batch, so you can compare the relative cost of each version. Furthermore, to quickly read an execution plan's findings, the tooltips are a lot better and give way more detail in 2008 (which even tells you now, correlating with an index-related DMV, if an index needs to be created).
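A minimal sketch of that batch-comparison technique, again using the assumed dbo.Orders table from above; with Include Actual Execution Plan turned on, SSMS shows a "Query cost (relative to the batch)" percentage above each version:

-- Version 1: wrapping the column in a function generally prevents an index seek
SELECT OrderID, OrderDate
FROM dbo.Orders                       -- hypothetical table
WHERE YEAR(OrderDate) = 2008;

-- Version 2: a sargable rewrite of the same filter, which can seek on an OrderDate index if one exists
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate >= '20080101' AND OrderDate < '20090101';

Run both as one batch and compare the relative cost percentages between the two plans.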
The following query allows you to view the most expensive recent queries (top 50 by logical reads in this case) that have run on your instance, as well as their plans, which are stored as XML (I have to double-check this, but I'm pretty sure the tooltips work in the XML view too).

SELECT TOP 50
    DB_NAME(qp.dbid) AS [Database],
    qp.number,
    qt.text AS [QueryText],
    qs.total_logical_reads AS [Reads],
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(qt.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS [StatementText],
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER BY qs.total_logical_reads DESC
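And since the 2008 tooltips correlate with the index-related DMVs mentioned earlier, here is a hedged sketch of that side of the story; these DMVs exist in both 2005 and 2008, but treat the suggestions as hints to investigate, not indexes to create blindly:

-- Index suggestions the optimiser has recorded since the last instance restart
SELECT
    mid.statement AS [Table],
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;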
Happy Canada Day! :)
BTW - alternate methods to produce execution plans are: Profiler, the SQL Server 2005 Performance Dashboard and SQL Server 2008's Data Collector.