The goal of this post is to explain how to take advantage of Auto-Retry and why you will want to use it. I hope to clarify the circumstances in which auto-retry works best and when not to use it. The disclaimer: every job has its own constraints and requirements, and has to be evaluated individually to decide whether auto-retry will work for it. I will try to keep this summary short and crisp, but with enough detail to make the best use of auto-retry.
Auto-retry can work around some pretty complicated bottlenecks. I once had to keep tempdb usage in check on a dedicated SQL Server 2000 agent job server: the occasional job scheduling conflict would have several jobs hammering tempdb at the same time, and auto-retry came in very handy. To put this in perspective, before the change the server regularly ran out of disk space because the tempdb data file was always full. Once I implemented retries, I timed the retry interval so that a tempdb shrink had a chance to run during the wait before the retry, tuned per job, of course. Often the failure is simply a deadlock (which does not mean you should instantly reach for NOLOCK as a solution, by the way), so a retry is very appropriate there. This way the tempdb data files came under control, with the shrink running under SET DEADLOCK_PRIORITY LOW. I have already mentioned this to several clients, and they are still profiting from it on production systems running SQL 2000.
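As a concrete illustration, here is a minimal sketch of how the retry itself is configured, using msdb's sp_update_jobstep (available since SQL 2000). The job name and the attempt/interval numbers are hypothetical, picked so a tempdb shrink has time to run during the wait:

    -- Minimal sketch: give step 1 of a (hypothetical) job three automatic
    -- retries, ten minutes apart, so a tempdb shrink can run in between.
    USE msdb
    GO
    EXEC dbo.sp_update_jobstep
        @job_name       = N'NightlyLoad',   -- hypothetical job name
        @step_id        = 1,
        @retry_attempts = 3,                -- number of automatic reruns
        @retry_interval = 10                -- minutes to wait between attempts
    GO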
From a management point of view, if you can program a job to restart itself automatically, why wait until someone gets around to it, especially on a server dedicated to SQL Server jobs? We do have to watch out for exceptionally long-running steps on a specific job (which should arguably be split up anyway): if a step takes forever, the wait-and-retry cycle stretches so long that it is better to take the retry option off. Before disabling auto-retry for specific steps, though, check what the past errors actually were, because if they are all deadlocks, for example, then auto-retry is a very good idea. On the other hand, if a regular job is hitting validation errors, whether it is on auto-retry or not, the job should be disabled until fixed - and not simply rerun in the hope it will throw a nicer error next time around :)
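Checking those past errors is a quick query against msdb's standard job history tables. A hedged sketch follows; the job name is hypothetical, and error 1205 is the deadlock-victim error:

    -- Sketch: list past failures for one job and check whether they are
    -- transient (deadlock, error 1205) or validation errors a retry won't fix.
    SELECT  h.run_date,
            h.step_id,
            h.sql_message_id,   -- 1205 means the step was a deadlock victim
            h.message
    FROM    msdb.dbo.sysjobhistory AS h
    JOIN    msdb.dbo.sysjobs AS j
            ON j.job_id = h.job_id
    WHERE   j.name = N'NightlyLoad'   -- hypothetical job name
      AND   h.run_status = 0          -- 0 = failed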
For jobs that run every hour and have to be timely, a retry fixes the job much faster than a human can, and human intervention is far more costly too. In any case, application owners should be monitoring their jobs in SQL Server and will usually see that the retry happened (and hopefully read the error log). I'm sure many application owners in job-intensive environments have fewer job failures to deal with since the retry option went in; their work-stress load is lighter, and they can get on with more challenging work instead of restarting jobs. This being the information [deluge] age, we are overloaded as it is, and we shouldn't be doing restart-a-job work that can be automated. For example, when bringing a problematic SQL 2000 database down to size with DBCC, it was thanks to an automatic rerun of DBCC SHRINKFILE with EMPTYFILE, several times over, that we regained huge space inside a filegroup. Auto-retry is very useful when you are trying to get control of your disk space. Shrinking tempdb is another example where the LOW deadlock priority setting makes sure the job does not take priority over critical processes, and instead waits to rerun later - while critical database operations keep going. All potentially long maintenance jobs on a dedicated job server should run at low deadlock priority where possible - or better yet, combine the job step with TRY/CATCH (see previous blog post).
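To tie the two ideas together, here is a sketch of what such a polite maintenance step might look like; the logical file names are hypothetical, and tempdev is simply the default name of tempdb's data file. Running at LOW deadlock priority means the step volunteers to be the deadlock victim, fails, and is then picked up again by the auto-retry:

    -- Sketch of a polite maintenance step: if it deadlocks with critical
    -- work, it loses on purpose and the job's auto-retry reruns it later.
    USE tempdb
    GO
    SET DEADLOCK_PRIORITY LOW
    DBCC SHRINKFILE (N'tempdev', 1024)   -- trim tempdb's data file to 1024 MB
    GO
    -- The same pattern applies to the filegroup cleanup described above,
    -- run in the context of the database that owns the file:
    --   SET DEADLOCK_PRIORITY LOW
    --   DBCC SHRINKFILE (N'BigDB_Data2', EMPTYFILE)  -- hypothetical file name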
So, to recap: if you are worried about a job that fails frequently, analyse it to see whether auto-retry will be effective or not - because, of course, there are exceptions.