Friday, December 18, 2009

Congratulations to UK Information Commissioner Christopher Graham for Taking Companies to Task About Protecting Data Privacy



The new Information Commissioner for the United Kingdom is Christopher Graham. Working from Wilmslow, near Manchester, with a team of hundreds, he is bringing accountability back to private companies' [mis]management of customer data - and he has already taken on T-Mobile over a debacle in which employees sold off customer data, which, of course, the company tried to conceal.  He's out to enforce Parliament's Data Protection Act, and he's sending out warrants and private investigators as he goes! Bravo, the Database Cops are coming and they are armed with the DPA - too bad QC couldn't follow along and enforce Bill C-198 amongst government institutions.


Hats off to the UK government's efforts to protect data the way it should be protected and to reinforce data stewardship. For more details, please see: http://www.computerweekly.com/Articles/2009/11/27/239490/ICO-publishes-guide-to-quash-business-ignorance-on-data.htm

Here's an excerpt from the BBC article, which states that T-Mobile's employees were simply 'exploiting data':

"The Ministry of Justice has been consulting on tougher penalties for illegal trade in personal information.

The Data Protection Act bans the selling on of data without prior permission from the customer and a fine of £5,000 can be imposed following a successful prosecution."

It is amazing to see such a brave man publicly speaking out against things that simply have to be stopped - data protection must be taken more seriously by the business community (as well as, sometimes, by the government itself).  Environments that do not adhere to audit compliance, the principle of least privilege, or segregation of duties are a clear place for the Information Commissioner to start, and I will let him know personally :)

Please note that I am currently in transition to a permanent job with Canadian printing giant Transcontinental, and am training on Oracle 11g Release 2, so I shall resume more frequent blogging shortly. The final French version of the DRP article for SQL Server was also submitted to Simple Talk last week.





Troubleshooting SQL Server Database Mirroring Problems: Best Practices, Requirements, and the 64-bit Benefit

As mentioned in previous disaster recovery posts and my article, during my current mandate I have been tasked with what will ultimately be mirroring between redundant data centres. Mirroring was chosen thanks to its easy setup and automatic failover option. The following is an overview of what should be taken care of to ensure a stable mirroring setup.

The first prerequisite for mirroring is to ensure that your database is already optimised, because mirroring an unoptimised database simply doubles the potential problems (in terms of file space).  Expected disk space growth should therefore be analysed thoroughly.   As for the build level, I am just waiting to apply SQL 2005 SP3, CU6(?), before running a mirrored setup in production; I figure having the same, highest-available build level on both partners is the best way to start a mirroring infrastructure.
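As a starting point for that disk space analysis, current file sizes and autogrowth settings can be pulled from sys.master_files; here is a minimal sketch (filter it down to the databases you intend to mirror):

SELECT DB_NAME(database_id) AS database_name,
       name AS logical_file_name,
       size * 8 / 1024 AS current_size_mb,
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' %'
            ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
       END AS growth_setting
FROM sys.master_files
ORDER BY database_id, file_id;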

Initially there were rumours of a maximum number of databases that could be mirrored per instance, but that turned out to apply only to 32-bit systems (a platform you probably wouldn't want heavily used databases on any more anyway), where the limit is ten. On 64-bit systems there is no documented limit, so instances that hold many databases are fine, apart from the typical I/O, network, and processor utilisation concerns.  In SQL Server 2008 there is an improvement in the compression of the log before it is sent to the mirror. After testing with three times the average traffic, using SQL Stress and RedGate's Data Generator to pump three million inserts across along with a bunch of large selects on the worst tables, the maximum wait time to apply the log was only 1.2 seconds, and that was the exception; the Database Mirroring Monitor mostly reported under 200 ms.
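If you want to keep an eye on those send and redo delays outside of the Database Mirroring Monitor, the mirroring counters are also exposed through a DMV. A hedged sketch (the exact counters available depend on your build):

SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Log Send Queue KB', 'Redo Queue KB', 'Transaction Delay');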

Start with a general verification that the ports are open on the remote endpoint by running the following from the Run dialog box or command line:
telnet RemoteServerName 5022

Verifying Port Availability - When you are configuring the network for a database mirroring session, make sure the database mirroring endpoint of each server instance is used by only the database mirroring process. If another process is listening on the port assigned to a database mirroring endpoint, the database mirroring processes of the other server instances cannot connect to the endpoint.

To display all the ports on which a Windows-based server is listening, use the netstat command-prompt utility. To identify the listening ports and the processes that have those ports open, follow these steps (an example follows):

1. Obtain the process ID of the respective instance of SQL Server: connect to that instance and run the following Transact-SQL statement:
SELECT SERVERPROPERTY('ProcessID')

2. Match the process ID with the output of the following netstat command:
netstat -ano
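For example (a sketch, assuming the query above returned a process ID of 1234), you can filter the netstat output to confirm that the SQL Server process is the one holding the mirroring port:

rem filter by the SQL Server process ID (1234 is hypothetical)
netstat -ano | findstr "1234"
rem or check which process is listening on the mirroring port
netstat -ano | findstr ":5022"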

If you ever run into an error like this: Database Mirroring login attempt failed with error: 'Connection handshake failed. There is no compatible encryption algorithm. State 22.'.  [CLIENT: IPaddress...]
This is probably happening because encryption is set up differently on the mirror and the principal. The catalog view sys.database_mirroring_endpoints will show different algorithms - for example, encryption_algorithm_desc = RC4 on the mirror and encryption_algorithm_desc = NONE on the principal.
Therefore it is best to issue a:
DROP ENDPOINT Mirroring

command on both mirroring partners (to start from scratch - beware, this blows away all mirroring on the endpoint), so that when you set up mirroring again you are not stopped by a difference in encryption.  If DROP ENDPOINT is not possible, meaning you already have other database mirroring established with another server, then the other option is to run ALTER ENDPOINT on both instances; a sketch of both approaches follows. For more information regarding mirroring, check out the catalog views listed after the sketch.
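Here is a minimal sketch of both approaches - the endpoint name, port and algorithm are assumptions taken from the error above, so adjust them to match whichever side you want to standardise on:

-- Option 1: after DROP ENDPOINT Mirroring on both partners, recreate it identically on each
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ENCRYPTION = REQUIRED ALGORITHM RC4, ROLE = PARTNER);

-- Option 2: keep the endpoint, and change the algorithm in place on the mismatched partner
ALTER ENDPOINT Mirroring
    FOR DATABASE_MIRRORING (ENCRYPTION = REQUIRED ALGORITHM RC4);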

select * from sys.database_mirroring_endpoints

SELECT e.name, e.protocol_desc, e.type_desc, e.role_desc, e.state_desc, t.port, e.is_encryption_enabled, e.encryption_algorithm_desc, e.connection_auth_desc
FROM sys.database_mirroring_endpoints e JOIN sys.tcp_endpoints t ON e.endpoint_id = t.endpoint_id

select * from sys.endpoints

-- which databases are currently in a mirroring session
select * from sys.database_mirroring where mirroring_state is not null

-- to see who has granted connect on the mirroring endpoint, and the grantee
SELECT EP.name, SP.STATE, CONVERT(nvarchar(38), suser_name(SP.grantor_principal_id)) AS GRANTOR, SP.TYPE AS PERMISSION, CONVERT(nvarchar(46), suser_name(SP.grantee_principal_id)) AS GRANTEE
FROM sys.server_permissions SP, sys.endpoints EP
WHERE SP.major_id = EP.endpoint_id
ORDER BY Permission, grantor, grantee;
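If that last query shows that the service account of the partner instance has not been granted anything on the endpoint, a connect permission along these lines is the usual fix (the login name is a hypothetical example):

-- grant the partner instance's service account permission to connect to the endpoint
GRANT CONNECT ON ENDPOINT::Mirroring TO [DOMAIN\SqlServiceAccount];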

I shall be updating this further once I have had success in production. I hope this helps if you are aiming to take advantage of this great functionality.



Notes on a Migration from Oracle to SQL Server



I have to admit, when asked to perform a migration from Oracle to SQL Server over the past couple of months, I was a bit apprehensive.  After pulling an all-nighter to finish the project off recently, starting at 3pm and finally heading home around 6:30am, there was an unsuccessful attempt by the ‘I don’t use a plan for my project’ person on the other end of the line to get me to turn around to fix a single column size because one report did not work (which, of course, was not noticed in the pre-production/test environment, and was later fixed without my intervention). Luck won at that stage, but as a DBA it is certainly not always that way.


First of all, do not attempt to call someone back to fix a minor thing when they have just worked two shifts in a row (respect?), especially a column-size fix that at least two other people (in this case) could have done - not just because I’m a whinging, grumpy, even zombie-like DBA at 6am, but most importantly because of the high risk of messing something up in production.  There are physical limitations that we must respect for any employee or contractor performing their duties while manipulating a critical production database instance - sleep is not negotiable. This should go without saying, but after repeatedly being asked to forego this basic need, I am blogging it!

Secondly, as with all migrations, one should constantly try to adhere to the keep-it-simple rule (K.I.S.S.) - this migration was no exception. What we did from the very beginning was create insert scripts of all the data in the tables (not a huge database, in the tens of megabytes only), since the schema export had already been done for us by a vendor (and, thankfully, only needed minor tweaks).  Before going through each table's insert script one by one to adjust the fully qualified table names, add SET IDENTITY_INSERT ON/OFF statements, and put a quick truncate before the BEGIN TRAN/inserts/COMMIT batch, I had, in a previous step, scripted out all the drop/create statements for foreign keys and constraints, so that all the data could be brought in quickly without per-table FK/constraint drop and recreation (a rough sketch of the per-table pattern follows).
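As an illustration, each per-table script ended up following a pattern roughly like this (the table and column names here are hypothetical; the real INSERT statements were generated from the Oracle data):

-- pattern repeated for each of the 50+ tables; FKs and constraints were already dropped in a previous step
TRUNCATE TABLE dbo.Customer;

BEGIN TRAN;
SET IDENTITY_INSERT dbo.Customer ON;

INSERT INTO dbo.Customer (CustomerId, CustomerName, CreatedDate)
VALUES (1, 'Example Co.', '2009-01-15');
-- ... thousands more generated INSERT statements ...

SET IDENTITY_INSERT dbo.Customer OFF;
COMMIT TRAN;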

Finally, I created a checklist with all the table row counts printed out for manual check-off as I made progress through the 50+ tables to load into SQL Server 2008.  It took all night, as mentioned before, so running YouTube's links to three long series on Franco-Norman history, around and just after the Invasion, in the background allowed my non-technical historical enrichment to benefit at the same time :)

Now, I can honestly say that I look forward to migrating more databases over from Oracle.

  

A DBA's Role with Respect to Improving Microsoft Dynamics CRM 4.0 Performance

One-line foreword: here's a webinar from Microsoft regarding this subject; the following is a summary of what was required to improve CRM performance.

Microsoft's Customer Relationship Management platform runs on SQL Server and responds to most of the typical performance improvement techniques (and should be regularly checked with tools like Activity Monitor or the server statistics and performance reports), but a few specific things came up for the DBA supporting CRM to focus on. Please note that this work is performed closely with your friendly local CRM application server system administrator.


First, a major server configuration preference we noticed, which may be different from your typical parameters, was setting the maximum degree of parallelism to 1. Memory-wise, CRM is pretty memory intensive, so a typical instance should have at least 12GB to 18GB of physical RAM available if the application is [hopefully, since it is quite good] widely used.
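For reference, that setting is just the standard sp_configure change; a minimal sketch (test the impact on your own workload before applying it in production):

-- 'max degree of parallelism' is an advanced option, so expose advanced options first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;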



As far as the CRM system admin's help is concerned, there are two particular optimisations regarding the AsyncOperationBase table (and its two related tables) that need to be done hand-in-hand with the Microsoft-provided database optimisation script below. They are: http://support.microsoft.com/kb/957871/EN-US/ and http://support.microsoft.com/kb/968755/


Before you run the script included below, co-ordinate with your system admin to make sure they stop the Microsoft CRM Asynchronous Processing Service (run the script during a maintenance window), and take a database backup just before purging the typical hundreds of thousands of records - assuming this is the first time the purge is performed; in future it should be handled by a regular maintenance job.
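Something as simple as the following, run just before the purge, will do (the backup path is a placeholder; the database name matches the script below):

-- one-off full backup before the first large purge of AsyncOperationBase rows
BACKUP DATABASE [ORGNAME_MSCRM]
TO DISK = N'X:\Backup\ORGNAME_MSCRM_PrePurge.bak'
WITH CHECKSUM, STATS = 10;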



USE [master]
GO
ALTER DATABASE [ORGNAME_MSCRM] SET RECOVERY SIMPLE WITH NO_WAIT
GO
-- now run the cleanup of the AsyncOperationBase table  
-- reference http://support.microsoft.com/kb/968520
-- Stop the Microsoft CRM Asynchronous Processing Service while you run this script. 
use [ORGNAME_MSCRM]
go 
IF EXISTS (SELECT name from sys.indexes
                  WHERE name = N'CRM_AsyncOperation_CleanupCompleted')
      DROP Index AsyncOperationBase.CRM_AsyncOperation_CleanupCompleted
GO
CREATE NONCLUSTERED INDEX CRM_AsyncOperation_CleanupCompleted
ON [dbo].[AsyncOperationBase] ([StatusCode],[StateCode],[OperationType])
GO

declare @DeleteRowCount int
Select @DeleteRowCount = 2000
declare @DeletedAsyncRowsTable table (AsyncOperationId uniqueidentifier not null primary key)
declare @continue int, @rowCount int
select @continue = 1
while (@continue = 1)
begin
      begin tran
      insert into @DeletedAsyncRowsTable(AsyncOperationId)
      Select top (@DeleteRowCount) AsyncOperationId
      from AsyncOperationBase
      where OperationType in (1, 9, 12, 25, 27, 10) AND StateCode = 3 AND StatusCode in (30, 32)     
 
      Select @rowCount = 0
      Select @rowCount = count(*) from @DeletedAsyncRowsTable
      select @continue = case when @rowCount <= 0 then 0 else 1 end     
 
        if (@continue = 1)
        begin
            delete WorkflowLogBase from WorkflowLogBase W, @DeletedAsyncRowsTable d
            where W.AsyncOperationId = d.AsyncOperationId
            
            delete BulkDeleteFailureBase From BulkDeleteFailureBase B, @DeletedAsyncRowsTable d
            where B.AsyncOperationId = d.AsyncOperationId
 
            delete AsyncOperationBase From AsyncOperationBase A, @DeletedAsyncRowsTable d
            where A.AsyncOperationId = d.AsyncOperationId            
 
            delete @DeletedAsyncRowsTable
      end
 
      commit
end

--Drop the Index on AsyncOperationBase

DROP INDEX AsyncOperationBase.CRM_AsyncOperation_CleanupCompleted
-- after testing this run in pre-prod, it took 17 hours

-- Rebuild Indexes & Update Statistics on AsyncOperationBase Table 
ALTER INDEX ALL ON AsyncOperationBase REBUILD WITH (FILLFACTOR = 80, ONLINE = OFF,SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = OFF)
GO 
-- Rebuild Indexes & Update Statistics on WorkflowLogBase Table 
ALTER INDEX ALL ON WorkflowLogBase REBUILD WITH (FILLFACTOR = 80, ONLINE = OFF,SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = OFF)
-- final optimisation, although done automatically usually
UPDATE STATISTICS [dbo].[AsyncOperationBase] WITH FULLSCAN
UPDATE STATISTICS [dbo].[DuplicateRecordBase] WITH FULLSCAN
UPDATE STATISTICS [dbo].[BulkDeleteOperationBase] WITH FULLSCAN
UPDATE STATISTICS [dbo].[WorkflowCompletedScopeBase] WITH FULLSCAN
UPDATE STATISTICS [dbo].[WorkflowLogBase] WITH FULLSCAN
UPDATE STATISTICS [dbo].[WorkflowWaitSubscriptionBase] WITH FULLSCAN

-- after everything is finished
USE [master]
GO
ALTER DATABASE [ORGNAME_MSCRM] SET RECOVERY Full WITH NO_WAIT
GO



Photo: Thousand Islands National Park, Canada


For more information regarding a DBA's role in CRM optimisation:
Microsoft Dynamics CRM 4.0 Performance Toolkit from the Team Blog:  http://blogs.msdn.com/crm/archive/2008/02/29/microsoft-dynamicstm-crm-4-0-performance-toolkit.aspx

The performance toolkit itself - http://www.codeplex.com/crmperftoolkit

Wednesday, September 09, 2009

Disaster Recovery for SQL Server Databases

Well, it finally happened: the complete disaster recovery article was reviewed, reviewed again, rewritten, revisited…you get the picture…and published.  Originally, this was my reactionary attempt to better SQL Server replication (with which I had previously lived through nightmares) by means of a more reliable storage replication; now it's on to mirroring in the current mandate, to see how it compares with storage replication.
Thank you so very much to Chris Massey and Andrew Clarke for their great help, and to Brad McGehee for pushing me in the SSC/Simple-Talk.com direction.  
The full English version is finished, and a French translation is already underway (available by the end of September or early October).

Enjoy, and looking forward to your comments!


http://www.simple-talk.com/sql/backup-and-recovery/disaster-recovery-for-sql-server-databases-/ (and currently on the front page of Simple-Talk.com)


Don't be caught without a DRP - it could mean your company's operations end up washed up on the rocks.

Pictured here is one of the first panoramas (albeit only two shots, taken with the old '71 Pentax Spotmatic 2) I had ever taken, at 15 years of age, at this location, the South Coogee cliffs, Sydney - early morning, 1990 - while working as a moving jockey (Aussies say Removals) for Austwide Removals, Gary Buter's old company. Gary was my host in Melbourne at the time, allowing me to share his great flat in St. Kilda while I attended Elwood High School.

Monday, August 31, 2009

Disaster Recovery Scenarios with Hewlett Packard's Data Protector

Let's say it is a late Monday morning, because something always comes up before you are required at your vocation, and by means of unfortunate circumstances, upon arrival you are faced with a clustered server that has provided you with a beautiful stack dump, thanks to an e-mail alert sent from SQL Response (not a pitch, just a real-world example of what I rely on).  The alert indicates that virtual memory at the time was, let's say, three times the physical RAM made available to the specific instance in question - ouch (ah, the morning couldn't have started off easier, eh?).  Then you've logged into the cluster and noticed several databases in Suspect mode. Okay, bring it on then, this is going to be a fun one :)



Restoring local backups is preferable, but you've tried to copy them from your network backup location and you're still stuffed (as in the Scotch phrase). Another solution would be to try to reattach the databases (the files were still there, but an access-denied error prevented me from using them, of course), because for whatever reason the cluster has flipped your instance onto another node and the databases are not available (!) - folder issues, not configured right (arghhh). That didn't work either…so now a direct restore from tape ends up as the only choice.  This is where your disaster recovery solution should kick in - set up, in this case, with Hewlett Packard's Data Protector. It's a decent tool that can help recover the databases from tape, as long as you are thoroughly familiar with how to use it (as Brent Ozar has explained in this great post, please make sure you run through restore drills...so that the real disaster recovery event itself is not a disaster either).
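For what it's worth, the reattach attempt was along these lines; a minimal sketch only, since the database name and file paths here are hypothetical:

-- attach from the existing data and log files, if file-system access allows it
CREATE DATABASE [SuspectDB]
ON (FILENAME = N'D:\SQLData\SuspectDB.mdf'),
   (FILENAME = N'E:\SQLLogs\SuspectDB_log.ldf')
FOR ATTACH;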

If you don't have a backup plan, your company's operations could grind to a halt, or you could end up with a mega-billion-dollar problem like Venice's, having to control the water level in its huge marshy bay.

A prerequisite for working with this tape backup management tool is to ensure that the licensed client components are installed on the server in question: namely Disk, Media/Support, User Interface and MS SQL Integration. If you are going to use Hewlett Packard's Data Protector frequently, which is pretty straightforward, create a SQL backup group (choose your in-house nomenclature) in your HP Data Protector Manager client tool (the currently installed version is A.06.10). Within the business day, those who manage your tape solution should be able to get the specific tape in question to you (if it is not already loaded) - this assumes that you do not have the ideal direct-attached-storage-to-tape solution on that specific server, and that your HP DP is centralised across your SQL Server infrastructure (the gateway, or Cell server, as their technical documentation describes it). After locating the offending server in the backup group, all you have to do is click on the instance, then click on the database to be restored, and there you go (next, finish) - the problem goes away…unless you have to restore onto a database with a different name. Now that's the only real pesky issue (from the point of view of a lazy point-and-click DBA).



What I would consider the worst-case scenario with respect to depending on HP Data Protector is when you must restore a database over the top of another database that has a different name and resides on another server entirely. HP Data Protector will give you a replace error from the DP Manager tool if you try this, and there is no easy workaround via this GUI tool (as you may be used to doing with the SSMS GUI, or by code, here), even when you use the same statement you see in the logs. Thankfully, there is a command-line solution - a quick example follows, with full details below.



Let's say you are doing a database restore where you need to restore over the top of another database, using a backup taken from one with different names; observe this restore command example:

omnir -mssql -barhost SERVERNAME -base DBNAME -session my_session_id -asbase NEWDBNAME -file logical_datafilename Drive:\datafilelocation.mdf -file logicalLogfile_name Drive:\logfilelocation.ldf -replace



Below are more details on how to restore using the command-line interface of HP Data Protector, taken directly from page 53 of their own documentation

( C:\Program Files\OmniBack\Docs\IntegrationMS.pdf ):



Restoring using the Data Protector CLI

From the Data_Protector_home\bin directory, run:

omnir -mssql -barhost ClientName [-destination ClientName]
    [-instance SourceInstanceName] [-destinstance DestinationInstanceName]
    {-base DBName [-session SessionID] [MSSQL_OPTIONS]... |
     -base DBName -datafile GroupName/DataFileName -session SessionID [DATAFILE_OPTIONS]...}

MSSQL_OPTIONS
    -asbase NewDBName {-file LogicalFileName1 PhysicalFileName1
        [-file LogicalFileName2 PhysicalFileName2]...}
    -replace
    -nochain
    -recovery {rec | norec}
    -standby File

DATAFILE_OPTIONS
    -replace
    -nochain
    -recovery {rec | norec}

Provide the Session_ID of the backup session. For object copies, do not use the copy session ID, but the object backup ID (which equals the object backup session ID).

For a description of the CLI options, see the omnir man page or the HP Data Protector command line interface reference.

Examples

To restore the database RONA running on the SQL Server ALMA to the same destination, run:

omnir -mssql -barhost ALMA -base RONA

To restore the data file DATAFILE_01 in the file group FILEGROUP_02 of the database RONA running on the SQL Server ALMA to the same destination, run:

omnir -mssql -barhost ALMA -base RONA -datafile FILEGROUP_02/DATAFILE_01 -session 2008/10/17-3

Restoring to another SQL Server instance or/and another SQL Server

Prerequisites

Both SQL Servers must have the same local settings (code page and sort order). This information is displayed in the session monitor for each backup.

The target SQL Server must be configured and reside in the same Data Protector cell as the original SQL Server.

Hope this helps in deciding which disaster recovery solution you choose to go with.

I'd like to hear from my readers which one is their favourite, if you would be so kind.

P.S. Make sure you have the client tools installed on ALL nodes in your cluster - right-click on the client and choose Check Installation to be sure...then run a single small database backup (basically a live drill to tape) to verify that it is all okay.