The goal of this post is to explain how to take advantage of SQL Server Agent's auto-retry feature and why you might want to use it. I hope to clarify the circumstances in which auto-retry works best and when not to use it. The disclaimer is that every job has its own constraints and requirements and has to be evaluated individually to decide whether auto-retry is appropriate. I will keep this summary short and crisp, but with enough detail to understand auto-retry properly.
Auto-retry can work around some fairly complicated bottlenecks. I once had to keep tempdb usage in check on a dedicated SQL Agent job server running SQL 2000: disk space was scarce, the tempdb data file was constantly full, and jobs scheduled at the same time would occasionally collide over tempdb. Auto-retry came in very handy there. I timed the retry interval so that a tempdb shrink had a chance to run during the wait before the retry, depending on which job was running. Often the failure is simply a deadlock (which does not mean you should instantly reach for NOLOCK as a solution, by the way), and a retry is entirely appropriate for that. Between the retries and a shrink running under SET DEADLOCK_PRIORITY LOW, the tempdb data files came under control. I have since recommended this to several clients, and they are still profiting from it on production systems running SQL 2000.
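The retry count and the wait between attempts are set per job step, either on the Advanced tab of the step in Management Studio or with msdb.dbo.sp_update_jobstep. A minimal sketch, assuming a hypothetical job name:
USE msdb;
GO
-- Give step 1 five retry attempts, waiting 10 minutes between attempts
-- so a tempdb shrink has a chance to run before the step tries again.
EXEC dbo.sp_update_jobstep
     @job_name       = N'Nightly Load'   -- hypothetical job name
    ,@step_id        = 1
    ,@retry_attempts = 5
    ,@retry_interval = 10;               -- minutes between retries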
From a management point of view, if you can get the job to restart itself automatically, why wait until someone gets around to it, especially on a server dedicated to SQL Server Agent jobs? We do have to watch out for the wait-before-retry on jobs with exceptionally long-running steps (such steps should ideally be split up); if a step already takes forever, it may be better to turn the retry option off for it. Before disabling auto-retry for specific steps, though, check what the past errors actually were, because if they are all deadlocks, for example, then auto-retry is a very good idea. On the other hand, if a regular job is failing with validation errors, whether it is on auto-retry or not, the job should be disabled until it is fixed - and not simply rerun with the hope it will be a nicer error next time around :)
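A quick way to check that failure history before deciding is to query the Agent history tables in msdb. A minimal sketch, again with a hypothetical job name:
-- Recent failed step executions for one job, newest first
SELECT   j.name AS job_name
        ,h.step_id
        ,h.step_name
        ,h.run_date
        ,h.run_time
        ,h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE j.name = N'Nightly Load'   -- hypothetical job name
  AND h.run_status = 0           -- 0 = failed
ORDER BY h.run_date DESC, h.run_time DESC;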
For jobs that run every hour and have to be timely, a retry gets the job going again much faster than a human can, and human intervention is far more costly anyway. Most application owners should be monitoring their SQL Server jobs and will see that a retry happened (and hopefully read the error log). In job-intensive environments, application owners have fewer job failures to deal with once the retry option is in use; their work-stress load is lighter and they can get on with more challenging work instead of restarting jobs. In this information-deluge age, when we are overloaded as it is, we should not be doing restart-the-job work that can be automated. One example: bringing a problematic SQL 2000 database down to size with DBCC - it was thanks to an automatic rerun of DBCC SHRINKFILE with EMPTYFILE, several times over, that we regained huge space inside a filegroup, so auto-retry is also very useful when trying to get control of disk space. Shrinking tempdb is yet another example, where DEADLOCK_PRIORITY LOW makes sure the job does not take priority over critical processes and simply waits to rerun later, while critical database operations keep going. All potentially long maintenance jobs on a dedicated job server should set a low deadlock priority when possible - or better yet, combine the job step with TRY/CATCH (see previous blog post).
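As a rough sketch of what such a maintenance step can look like (the file name and target size are hypothetical, and TRY/CATCH requires SQL 2005 or later):
USE tempdb;
GO
SET DEADLOCK_PRIORITY LOW;   -- if this step deadlocks, it is chosen as the victim
BEGIN TRY
    DBCC SHRINKFILE (N'tempdev', 1024);   -- hypothetical file name, target size in MB
END TRY
BEGIN CATCH
    -- Re-raise so the step fails and SQL Agent retries it after the wait interval
    DECLARE @msg nvarchar(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;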
So, to recap: if you are worried about a job that fails frequently, analyse it to see whether auto-retry is effective or not - because of course, there are exceptions.
Wednesday, August 13, 2008
SQL Server Query Hints: the use of WITH (NOLOCK)
Since we all want to see our queries run faster (certainly in most scrum development houses), I am suggesting the use of WITH (NOLOCK), but with plenty of cautionary nuances.
There are certainly times when it can be used in production (closely monitored!) as long as the data you are querying is not at risk of change, because otherwise the database engine will give you dirty reads, and can even return duplicate rows or skip rows. In a data warehouse, for example, you will see a performance improvement (thanks to Itzik Ben-Gan for the tip), since the data should be set to read-only after the daily early-morning load, assuming the data is loaded during non-business hours. The disaster I wish you all to avoid is putting a view into production, deciding to use the query hint, and then finding data-movement errors (page splits occurring because of inserts/updates while you are reading the table). The most important thing to avoid with NOLOCK is its potential to flip cluster nodes (error 7105, with excessive CPU usage) when used on tables that have NTEXT, TEXT or IMAGE datatypes - at least that is what I saw first-hand on a SQL 2000 cluster.
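For reference, the hint is applied per table reference. A minimal sketch against a hypothetical warehouse table:
-- Dirty read of the last day's rows; table and column names are hypothetical
SELECT f.SaleID, f.Amount
FROM dbo.FactSales AS f WITH (NOLOCK)
WHERE f.SaleDate >= DATEADD(DAY, -1, GETDATE());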
Another caveat with respect to NOLOCK use: the Database Engine might still have to acquire locks when reading metadata, even when processing a SELECT with a locking hint that prevents requests for shared locks on the data. For example, a SELECT using the NOLOCK hint does not acquire shared locks when reading data, but it might still request locks when reading a system catalog view. This means it is possible for a SELECT statement using NOLOCK to be blocked! So, if you are feeling iffy about a process or a specific stored procedure, run the following:
-- Get information about the locks held by
-- the transaction.
SELECT *
FROM sys.lock_information
WHERE request_spid = @@SPID
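Note that sys.lock_information is not a view that ships with SQL Server, so if it is not defined on your instance, the same check can be done on SQL Server 2005 and later with the built-in sys.dm_tran_locks DMV. A minimal sketch:
-- Locks held or requested by the current session (SQL Server 2005+)
SELECT  resource_type
       ,resource_database_id
       ,resource_associated_entity_id
       ,request_mode
       ,request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;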
It is okay to use NOLOCK when you know the table is being pounded and you just want to check some data without affecting production. Microsoft's own guidance says, essentially, that because the SQL Server query optimizer typically selects the best execution plan for a query, NOLOCK should be used only as a last resort by experienced developers and database administrators, and only if you do not care about inconsistent data or the data is guaranteed to be static.
Beware the trend in some companies where developers have decided it is good to use it all the time, which, for the reasons mentioned above, is a very bad idea in production. In a data warehouse, on the other hand, read-uncommitted behaviour can be applied to the whole area, since the data is meant to stay static until the next day's or month's load anyway.
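Two ways of doing that at a broader scope than a per-query hint, sketched with a hypothetical database name:
-- Option 1: read uncommitted for the whole session, instead of NOLOCK on every table
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Option 2: mark the warehouse read-only after the load, so readers take no risk at all
ALTER DATABASE SalesDW SET READ_ONLY WITH ROLLBACK IMMEDIATE;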
http://www.sqlservercentral.com/articles/Performance+Tuning/2764/
http://www.sqlservercentral.com/Forums/Topic330889-192-1.aspx
Best practice: UNION instead of CASE, and how to properly populate a temporary table
Best Practice - June 2008
This month, here are two short examples to inspire you when creating your objects/queries.
The first example shows how you can use a UNION select instead of CASE (which I see over-used in many environments).
Transform the CASE...
SELECT CHB_NUMERO,
CASE CHB_ETAGE
WHEN 'RDC' THEN 0
WHEN '1er' THEN 1
WHEN '2e' THEN 2
END AS ETAGE, CHB_COUCHAGE
FROM T_CHAMBRE
ORDER BY ETAGE, CHB_COUCHAGE
...into a UNION
SELECT CHB_NUMERO, 0 AS ETAGE, CHB_COUCHAGE
FROM T_CHAMBRE
WHERE CHB_ETAGE = 'RDC'
UNION SELECT CHB_NUMERO, 1 AS ETAGE, CHB_COUCHAGE
FROM T_CHAMBRE
WHERE CHB_ETAGE = '1er'
UNION SELECT CHB_NUMERO, 2 AS ETAGE, CHB_COUCHAGE
FROM T_CHAMBRE
WHERE CHB_ETAGE = '2e'
ORDER BY ETAGE, CHB_COUCHAGE
Of course, make sure there is an index on the CHB_ETAGE column to get the best performance (unless the table is trivially small). Note also that plain UNION removes duplicates, which adds a sort; since each branch here filters on a different CHB_ETAGE value, UNION ALL would typically be the better match and avoids that cost.
The second example involves populating a temporary table (avoiding SELECT ... INTO #temp as much as possible):
CREATE TABLE #tempZ
(
     col1 DataType
    ,col2 DataType
    ,col3 DataType
);

INSERT #tempZ
(
     col1
    ,col2   -- avoid NULLs with ISNULL(col, 0) in the source SELECT
    ,col3
)
EXECUTE NomBD.dbo.cp_MonProc @param1, @param2;
To avoid stored procedures that run past 500 lines, put the SELECT that matches the INSERT into its own stored procedure and populate the temporary table with INSERT ... EXECUTE.
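Here is a more concrete, self-contained sketch of the same pattern; every name (database, procedure, columns, parameters) is hypothetical:
CREATE TABLE #Commandes
(
     CommandeID  int            NOT NULL
    ,ClientNom   varchar(100)   NOT NULL
    ,Montant     decimal(10,2)  NOT NULL   -- the source SELECT should use ISNULL(Montant, 0)
);

INSERT #Commandes (CommandeID, ClientNom, Montant)
EXECUTE Ventes.dbo.cp_CommandesDuMois @Annee = 2008, @Mois = 6;

SELECT * FROM #Commandes;

DROP TABLE #Commandes;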