SQL Server Login Transfer

I like to use the script below to transfer SQL Server logins to other servers. It preserves each login's SID and password so you don't end up with orphaned logins.

This script is from the ITPro website.

SET NOCOUNT ON

-- Generate one sp_addlogin statement per SQL login, carrying the original
-- password hash (skip_encryption) and SID so the logins are not orphaned.
SELECT 'EXEC sp_addlogin @loginame = ''' + loginname + ''''
, ', @defdb = ''' + dbname + ''''
, ', @deflanguage = ''' + language + ''''
, ', @encryptopt = ''skip_encryption'''
, ', @passwd ='
, CAST(password AS varbinary(256))
, ', @sid ='
, sid
FROM syslogins
WHERE name NOT IN ('sa') -- no need to recreate sa
AND isntname = 0         -- SQL logins only; Windows logins are excluded


Source: SQL Server Login Transfer
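
When you run the query with results to text, each row comes out as a complete sp_addlogin call that you can paste and execute on the destination server. The sketch below shows roughly what a generated statement looks like; the login name, password hash, and SID are made-up placeholder values.

-- Illustrative output only: the hash and SID are placeholders, not real values
EXEC sp_addlogin @loginame = 'app_user', @defdb = 'master', @deflanguage = 'us_english',
    @encryptopt = 'skip_encryption',
    @passwd = 0x0100A1B2C3D4E5F60718293A4B5C6D7E8F9011223344556677,
    @sid = 0x6CAB85A5D6C2444D9462D52A5FB29E71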

Identify SQL Server Indexes With Duplicate Columns

I found this awesome script for finding duplicate indexes here: Identify SQL Server Indexes With Duplicate Columns, but it has a small bug: it orders by index_column_id when it should order by key_ordinal, so some of the column lists are not displayed in the correct order. Other than that, the script is great for finding duplicate indexes.

Here is a corrected version of the script:

select t1.tablename, t1.indexname, t1.columnlist, t2.indexname, t2.columnlist
from
    (select distinct object_name(i.object_id) tablename, i.name indexname,
        (select distinct stuff((select ', ' + c.name
                                from sys.index_columns ic1
                                inner join sys.columns c
                                    on ic1.object_id = c.object_id
                                    and ic1.column_id = c.column_id
                                where ic1.index_id = ic.index_id
                                    and ic1.object_id = i.object_id
                                    and ic1.index_id = i.index_id
                                order by key_ordinal
                                FOR XML PATH('')), 1, 2, '')
         from sys.index_columns ic
         where object_id = i.object_id and index_id = i.index_id) as columnlist
     from sys.indexes i
     inner join sys.index_columns ic
         on i.object_id = ic.object_id and i.index_id = ic.index_id
     inner join sys.objects o
         on i.object_id = o.object_id
     where o.is_ms_shipped = 0) t1
inner join
    (select distinct object_name(i.object_id) tablename, i.name indexname,
        (select distinct stuff((select ', ' + c.name
                                from sys.index_columns ic1
                                inner join sys.columns c
                                    on ic1.object_id = c.object_id
                                    and ic1.column_id = c.column_id
                                where ic1.index_id = ic.index_id
                                    and ic1.object_id = i.object_id
                                    and ic1.index_id = i.index_id
                                order by key_ordinal
                                FOR XML PATH('')), 1, 2, '')
         from sys.index_columns ic
         where object_id = i.object_id and index_id = i.index_id) as columnlist
     from sys.indexes i
     inner join sys.index_columns ic
         on i.object_id = ic.object_id and i.index_id = ic.index_id
     inner join sys.objects o
         on i.object_id = o.object_id
     where o.is_ms_shipped = 0) t2
    on t1.tablename = t2.tablename
    and substring(t2.columnlist, 1, len(t1.columnlist)) = t1.columnlist
    and (t1.columnlist <> t2.columnlist
         or (t1.columnlist = t2.columnlist and t1.indexname <> t2.indexname))
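
The query returns pairs of indexes on the same table where one index's key column list is a leading prefix of (or identical to) the other's, which is what the SUBSTRING comparison at the end checks. Once you have confirmed that the narrower index really is redundant, removing it is a one-liner; the table and index names below are hypothetical.

-- Hypothetical example: drop the redundant duplicate after confirming nothing depends on it
DROP INDEX IX_Orders_CustomerID ON dbo.Orders;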

Big Rig RV Park Campground – Sturgis, SD

Big Rig RV Park Campground is a website that Pure Consulting designed and deployed. It is built on Microsoft ASP.NET MVC 4 and SQL Server 2017. Our plan is to migrate it to .NET Core and SQL Server on Linux.

If you're into Harley-Davidson motorcycles and touring the Black Hills of South Dakota, Big Rig RV Park Campground is a great place to stay for the Sturgis Motorcycle Rally: Big Rig RV Park Campground – Sturgis, SD

Automated script generation with PowerShell and SMO

Recently, I had a requirement to compare SQL Server database schemas across several firewalled environments. That meant I had to export the schemas, pull them down to a central location, and compare them there. I used one of the scripts from a Simple Talk article to help with the script generation: Automated Script-generation with Powershell and SMO – Simple Talk

I modified the script with a foreach loop to iterate over my list of databases.

$Filepath = 'C:\DatabaseSchemas' # local directory to save build-scripts to
$DataSource = '.' # server name and instance

$Databases = "db1", "db2", "db3"

foreach ($Database in $Databases)
{
    # set "Option Explicit" to catch subtle errors
    set-psdebug -strict
    $ErrorActionPreference = "stop" # you can opt to stagger on, bleeding, if an error occurs
    # Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
    $ms = 'Microsoft.SqlServer'
    $v = [System.Reflection.Assembly]::LoadWithPartialName("$ms.SMO")
    if ((($v.FullName.Split(','))[1].Split('='))[1].Split('.')[0] -ne '9') {
        [System.Reflection.Assembly]::LoadWithPartialName("$ms.SMOExtended") | out-null
    }
    $My = "$ms.Management.Smo"
    $s = new-object ("$My.Server") $DataSource
    if ($s.Version -eq $null) { Throw "Can't find the instance $DataSource" }
    $db = $s.Databases[$Database]
    if ($db.name -ne $Database) { Throw "Can't find the database '$Database' in $DataSource" };
    $transfer = new-object ("$My.Transfer") $db
    $CreationScriptOptions = new-object ("$My.ScriptingOptions")
    $CreationScriptOptions.ExtendedProperties = $true # yes, we want these
    $CreationScriptOptions.DRIAll = $true # and all the constraints
    $CreationScriptOptions.Indexes = $true # Yup, these would be nice
    $CreationScriptOptions.Triggers = $true # This should be included when scripting a database
    $CreationScriptOptions.ScriptBatchTerminator = $true # this only goes to the file
    $CreationScriptOptions.IncludeHeaders = $true; # of course
    $CreationScriptOptions.ToFileOnly = $true # no need of string output as well
    $CreationScriptOptions.IncludeIfNotExists = $true # not necessary but it means the script can be more versatile
    $CreationScriptOptions.Filename = "$($FilePath)\$($Database)_schema.sql";
    $transfer = new-object ("$My.Transfer") $s.Databases[$Database]

    $transfer.options = $CreationScriptOptions # tell the transfer object of our preferences
    $transfer.ScriptTransfer()
}

 

Are you running CHECKDB on your Availability Group replicas?

Here is an article by Brent Ozar that discusses why it's expensive to run SQL Server Availability Groups in Azure, and it touches on my one big beef about SQL Server licensing: many DBAs and their managers are annoyed that they have to purchase a SQL Server license for a secondary replica just to run DBCC CHECKDB and RESTORE commands on it. Here is the link: An Expensive Reason To Avoid AGs In Azure – Brent Ozar Unlimited®
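
If you do license a secondary replica and offload the integrity checks there, the command is the same one you would run on the primary. Here is a minimal sketch, assuming a readable secondary and a hypothetical database name.

-- Run on the readable secondary replica (assumes it is licensed for this workload)
DBCC CHECKDB (N'MyUserDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;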