Wednesday, September 10, 2014

A Wonderful Weekend in Riyadh, Kingdom of Saudi Arabia, Speaking at SQLGulf #1, Alfaisal University


Thanks to the perseverance of SQL MVP Shehap El-Nagar, who invited me last year, this brave man from the Saudi Ministry of Higher Education managed to organise a great inaugural event. After the longest visa request process of my life (twice, since his original goal was to host the event in December 2013), it finally happened: the 1st SQLGulf event was held in Riyadh, K.S.A. this past Saturday at the beautiful new campus of AlFaisal University. Link to Highlights of the event

As one of six speakers, I gave two Presentations:
1) SQL Server Database Security Practices (on LinkedIn, or on SkyDrive ) with code sample.
AlFaisal University's courtyard (built 2008)
2) Compression for SQL Server (SkyDrive) but for more on compression, please see the updated background blog post for code and details.
For Saudis, the weekend begins on Friday, and thus we faced the challenge of attracting people to give up part of their weekend. It is all a little confusing for outsiders to the Kingdom, since this change is only recent. U.K.-based SQL MVP Satya Shaym Jayanty explained to me how a previous SQLSaturday event where he spoke was actually held on a Thursday (!), since that was then the beginning of the weekend; this time around we were able to hold the event on an actual Saturday (which, to Saudis, is still the last day of rest before the new week begins). This cultural difference proved helpful for us internationally-based speakers, since it only involved a day or two of leave from work, including travel time.

International speakers joined us from Jordan, the United Kingdom, Egypt and the U.S.A., and I was the token Canuck. Here was the full schedule of sessions:

What AlFaisal University will look like once finished (about 60% complete now)
I think this was the first event where I finally broke through as a speaker (based on the feedback), because experience has certainly done its part. I have come a long way since SQLTeach in Vancouver in June 2009, where there was a tough crowd and one man who had seemingly had a rough flight in from the UK and trashed all the speakers - speakers who were not paid and came simply to give back to the community.


More photos of the short but wonderful trip, especially a whole afternoon tour with Ahmed, a great taxi driver originally from Peshawar, Pakistan, who made sure I saw all the beautiful parts of the capital.
The Prince's SkyBridge, for great views.
AlFaisal's ultra-modern Mosque



Saudi Ministry of Higher Education
In Riyadh, for SQL Gulf #1, all sessions were recorded in high definition (as were many of the SQLSaturday Riyadh sessions Shehap has organised before), so I leave it to readers to decide whether the presentation made the grade (link to come). Shehap, who works in the building to the right, was certainly not alone in his efforts (see group shot below) to start this series of events, which we hope will jump from city to city across the Persian Gulf.

The entire SQL Gulf #1 team at the end of the day, AlFaisal University, Saturday the 30th, 2014


After the first presentation, with Mostafa Elmasry, blogger at SQLServer-Performance-Tuning.net

To come: SkyBridge view, and National Library.
Needless to say, I cannot wait for the next SQL GULF!


Selfie, in front of the late King AlFaisal's Palace, which is surrounded by the new University of the same name

Friday, August 22, 2014

Security Updates released in MS14-044, and An Approach to Microsoft’s Incremental Servicing Model

On August 12th, 2014, another infamous 'Patch Tuesday', Microsoft released a series of Security Updates for multiple versions of SQL Server to deal with potential Denial of Service attacks and an exploit in Master Data Services. Having already attempted to make my way through hundreds of instances, and all their respective environments, with a recent applicable Cumulative Update, the release of all these Security Updates has most definitely thrown a wrench into the patching plans. Here are the details for this specific bulletin: https://technet.microsoft.com/en-us/library/security/ms14-044.aspx

The question is, if you're a DBA, how do you make sense of all the Cumulative Updates (CUs, which contain critical on-demand fixes requested by customers), Service Packs (SPs), Security Updates (SUs), General Distribution Releases (GDRs), and the acronym I have only noticed recently - QFE (most have heard of hotfixes, but this one means Quick Fix [from] Engineering) updates? This is where this explanation of Microsoft's Incremental Servicing Model from the SQL Server Team steps in to help. In fact, after 15 years of administering SQL Server, I had not found a page with such an up-to-date description of how SQL Server is patched, and I owe the pointer to a recent visit from Felix Avemore, a Microsoft SQL Server Premier Field Engineer based in Quebec City.

For Microsoft Premier Field Engineers for SQL Server it's clear: your priority is to apply important Security Updates before anything else. But those updates often require a CU or an SP as a pre-requisite, which makes patching a painful affair when you have the daunting task of updating 300-400 servers! That is where clear, up-to-date documentation, system backups, and checklists come in rather handy, along with deeper recommendations from the vendor to validate registry keys if your system is in production and ultra-sensitive. If you ever end up with a corrupt master, attempt a restore, but always remember you can rebuild the instance cleanly with the exact Configuration.ini file found within the Setup Bootstrap folder (please see a previous post on command-line installs for more).
Which updates to apply depends on what build you are at, so for SQL Server 2008 through 2014, here's a quick guide:

SQL Server Version | General Distribution Release (GDR)  | Quick Fix [from] Engineering (QFE)
2014               | RTM (without any CUs)                | CU1 - CU2
2012               | SP1 (without any CUs)                | SP1 CU1 - CU11
2008 R2            | SP2 (without any CUs)                | SP2 CU1 - CU13
2008               | SP3 (without any CUs)                | SP3 CU1 - CU17
Note that if you are on SQL Server 2014 RTM CU3 or SQL Server 2012 SP2, you are already covered at those build levels.
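Before picking which package to apply, it helps to confirm the exact build each instance is on. Here is a minimal T-SQL sketch using standard SERVERPROPERTY values; mapping the returned build number to the GDR or QFE branch is then just a matter of comparing it against the table above:

-- Quick check of the current build / patch level of an instance
SELECT SERVERPROPERTY('ProductVersion') AS BuildNumber,  -- e.g. 11.0.xxxx.x for SQL Server 2012
       SERVERPROPERTY('ProductLevel')   AS PatchLevel,   -- RTM, SP1, SP2, ...
       SERVERPROPERTY('Edition')        AS Edition;
-- Compare BuildNumber against the GDR/QFE ranges above to see which MS14-044 package applies.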

There are clear arguments, as laid out well by Glenn Berry here, that you should apply recent Cumulative Updates to maintain the performance and stability of your SQL Server instances as part of regular maintenance.
Are QFEs cumulative? Judging by their build levels it would appear so, and after reading several definitions I can confirm that they are indeed cumulative hotfixes as well.

Hope this clears up some of the muddy path you'll find when trying to keep up with patches on SQL Server.
Happy patching!


This post was given a mention in DatabaseWeekly.com's edition of September 1st, 2014:




Tuesday, May 13, 2014

How to Fix that Never-Ending Join Dragging Down the Whole DB Server Instance – Pre-Populate the Largest Bugger in Temp with Indexes


Now that I have been blogging away here and on SSC for a good five years, the editors recently thanked us for our work. They also provided valuable feedback that we should describe real-world situations that DBAs encounter. The following targets performance optimisation and is drawn from an actual task that has recurred several times since I first wrote on the subject, in various production environments: an instance bogged down by one massive query within a stored procedure that has to run all the time, yet is so huge, important and/or complex that everyone is afraid or unsure how to resolve it.


In this post I hope to clearly explain how combining data definition language (DDL) for your temporary tables with non-clustered indexes can improve the performance of stored procedures that join data from one or many large tables by up to seventeen times (at least that was the case the previous time I saw this type of query to optimise) - as I have seen on stored procedures that work with tables in the tens of millions of rows.


Temporary tables, if used frequently or in stored procedures, end up consuming significant input/output on disk. To start, one thing we should be aware of is that they are created as a heap by default. As experience has shown, if you are cutting up a very large table via the temporary database, it is best to do your DDL (data definition language) first, before running the rest of your operation against the temporary data, as opposed to SELECT * INTO #temp. Thus, we should avoid SELECT * INTO #temp as much as possible, unless the number of rows is insignificant, because as a single statement it creates great disk contention within the temp database:

(N.B. the assumed pre-requisite is that you have identified the worst query from your plan cache, or have seen the code among the Recent Expensive Queries listed in Activity Monitor, sorted by the worst-performing resource)
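If you have not pinned that query down yet, a quick look at the plan cache usually surfaces it. Below is a minimal sketch using sys.dm_exec_query_stats; ordering by total worker time (CPU) is only one possible definition of "worst-performing resource", so adjust the ORDER BY to reads or duration as needed:

-- Top 10 cached statements by total CPU time, with their text
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.total_logical_reads      AS total_reads,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;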

CREATE TABLE #MyLot  -- you'll see that we only need a few columns to join on in the end
       (
       [LotId] [int] NOT NULL,  -- no IDENTITY here: we copy the existing LotId values from the source table
       [LotCode] [nvarchar](10) NOT NULL
       )

INSERT INTO #MyLot ( LotId, LotCode )
 -- e.g. you can also avoid NULLs by using ISNULL(col,0)
SELECT LotId, LotCode
FROM MyBigLotTable
WHERE ...  -- your matching predicates/variables go here
 -- this is where you slice the massive table up, horizontally and vertically,
 -- before (!) making that big join with the others,
 -- and that is where we obtain the significant performance gains

CREATE NONCLUSTERED INDEX idx_MyLot_LotCode ON #MyLot ([LotCode] ASC)
-- create the index on the matching column used in the 'big' join (in this case a 5-table join),
-- i.e. the glaring ID field
-- then integrate all this preparation of #MyLot into the main, super-slow query
-- @result (a table variable) and @handle (from sp_xml_preparedocument) are assumed to be
-- declared earlier in the stored procedure; this excerpt is abridged/anonymised, so a few
-- referenced tables/columns (e.g. [Loc], [SLocCode], [BLocCode]) come from the full proc
INSERT INTO @result
([Number],[LocId],[BinId],[LotCode],[LotId],[PCode],[PId],[Stock],[StatusCode],[UnitCode])

SELECT
[BIResult].[Number], [Loc].[LocId], [BLoc].[BILocId], [BIResult].[LotCode], #MyLot.[LotId],
[BIResult].[PCode], [P].[PId], [BIResult].[Stock],
ISNULL([BIResult].[StatusCode], ''), [BIResult].[UnitCode]
FROM OPENXML (@handle, N'/Rows/Row', 1)  -- row pattern shown here is a placeholder for the real one
                        WITH
                        (
                              [Number] SMALLINT N'@Number',
                              [LocID] NVARCHAR(10) N'@LocID',
                              [PCode] NVARCHAR(18) N'@PCode',
                              [LotCode] NVARCHAR(4) N'@LotCode',
                              [LotId] NVARCHAR(10) N'@LotId',
                              [Stock] NVARCHAR(MAX) N'@Stock',
                              [StatusCode] NVARCHAR(3) N'@StatusCode',
                              [UnitCode] NVARCHAR(1) N'@UnitCode'
                        ) AS [BIResult]
                JOIN [Pt] ON [Pt].[Number] = [BIResult].[Number]
                LEFT JOIN #MyLot -- [Lot], the huge table, was here before
                            ON #MyLot.[LotCode] = [BIResult].[LotCode]
                JOIN [P] ON [P].[PtId] = [Pt].[PtId]
                 AND [P].[PCode] = [BIResult].[PCode]
                JOIN [SLoc] ON [SLoc].[PtId] = [Pt].[PtId]
                 AND [SLoc].[SLocCode] = [BIResult].[SLocCode]
                JOIN [BLoc] ON [BLoc].[LocId] = [Loc].[LocId]
                 AND [BLoc].[BLocCode] = [BIResult].[BLocCode]
               WHERE CAST([BIResult].[Stock] AS DECIMAL(13)) > 0  -- the original predicate was truncated; adjust to yours

-- always explicitly drop the temp table at the end of the stored proc.
DROP TABLE #MyLot -- its index is dropped along with it


Happy optimising!

Shed the heavy weight of that extra-slow query bogging down your instance.



Tuesday, January 14, 2014

Microsoft Project Migration Template for the move to SQL 2012


For those planning a move to SQL Server 2012 (although this process can apply to many database migrations), perhaps this Microsoft Project Migration Template could help? *
In this plan there are many possible steps; it is better to trim down from too many to just those applicable than to miss steps. As an experienced migrator, you may already know an even better order of tasks for accomplishing a successful migration - by all means share it with us below in the comments. My approach here is to delve well into the domain of project management, as a DBA must/should do from time to time, so that if an official project manager is assigned to you, this document can be handed over to them as a good way of organising the upcoming work.
A quick nota bene at this planning stage of a project: do not skip the time estimations, which in turn lead to the analysis of the critical path. There might be a point where you have to pool resources with another team member or pull in a consultant to ensure project delivery timelines are met, and somewhere along the critical path you may want to take that proactive step to mitigate deadline risk. In this way, whole-project planning with respect to time estimations is a predecessor to accomplishing this task.
And sorry for the notes sometimes being in French; I just tend to mix up the sources/references often enough. This template has a little bit of history: while migrating databases in early 2005 for the Artificial Insemination [of cows] Centre of Quebec (CIAQ), Mathieu Noel (his profile on linkedin.com) helped me out greatly while writing this document. This version has gone through four major revisions so far, the most recent being this one for SQL 2012.
 * To view an MPP file without installing Project itself, you can use this free viewer. Exports of the Migration project plan to PDF and Excel are also available on my SkyDrive.
PS: as with all migrations, one should constantly try to adhere to the keep-it-simple rule (K.I.S.S.). Even this old post about a simple Oracle 10 migration to SQL Server 2008 was no exception. What we did from the very beginning was create insert scripts of all the data in the tables (not a huge database, in the tens of megabytes only), since the schema export had already been done for us by a vendor (I only had to make minor tweaks to it, appreciatively). Before going through each table's insert script one by one to adjust the fully qualified table names, add SET IDENTITY_INSERT ON/OFF statements, and put a quick truncate before the BEGIN TRAN/inserts/COMMIT batch, I had scripted out, in a previous step, all the drop/create foreign key and constraint statements, so that all the data could be brought in quickly without per-table FK/constraint drops and recreations.
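For illustration, here is a minimal sketch of that per-table load pattern; the table and column names (dbo.MyTable, Id, Name) are hypothetical, and it assumes the foreign keys and constraints were already dropped in the earlier scripted step:

-- Per-table data load pattern used during the migration (hypothetical table dbo.MyTable)
TRUNCATE TABLE [dbo].[MyTable];            -- quick truncate before reloading

BEGIN TRAN;
SET IDENTITY_INSERT [dbo].[MyTable] ON;    -- preserve the original identity values

INSERT INTO [dbo].[MyTable] ([Id], [Name])
VALUES (1, N'First row exported from the source database'),
       (2, N'Second row exported from the source database');

SET IDENTITY_INSERT [dbo].[MyTable] OFF;
COMMIT TRAN;
-- foreign keys and constraints, dropped in a previous step, are recreated afterwards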