Here are some queries Neal Caselton has collected over the years to identify the highest resource-consuming databases and queries on his SQL Server instances.
SQL Server Compare Users Script is a great blog post about getting the differences between two SQL Server user accounts.
Microsoft SQL Server is a rich database management system, and as a Database Administrator you should know as much about your server environment as possible. You cannot be expected to keep up with the enormous amount of information on these servers, especially when you are not the only person making changes to those systems. These T-SQL commands will help you to get information from SQL Server.
I will list 10 T-SQL statements that are useful for SQL Server database administrators, covering server-level, database-level, backup-level and process-level commands. I came across these commands years ago and publish them here as a helpful reference for users and administrators.
#1 – SQL Server Instance Information
The following T-SQL statement retrieves information including the host, instance, SQL Server edition, service pack, etc., and can be particularly useful for doing a periodic server inventory. "Edition" tells you whether the installation is 64-bit or 32-bit, and "ProductLevel" tells you which service pack your SQL Server is on. It also shows whether the current SQL Server is a clustered server.
SELECT SERVERPROPERTY('MachineName')  AS Host,
       SERVERPROPERTY('InstanceName') AS Instance,
       SERVERPROPERTY('Edition')      AS Edition,       -- shows 32-bit or 64-bit
       SERVERPROPERTY('ProductLevel') AS ProductLevel,  -- RTM, SP1, etc.
       CASE SERVERPROPERTY('IsClustered')
            WHEN 1 THEN 'CLUSTERED'
            ELSE 'STANDALONE'
       END AS ServerType,
       @@VERSION AS VersionNumber;
#2 – Settings Information
Server level configuration controls some of the features and performance settings of SQL Server. It is important for a SQL Server Administrator to know the server level configuration information for each instance. The following SQL Statement will give all of the information related to Server level configuration.
SELECT * FROM sys.configurations ORDER BY name;
If you are using SQL Server 2000, you can execute the following command instead.
SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
SP_CONFIGURE
GO
#3 – User Information
Security is a very important aspect that you should know about. You need to be aware of all user settings, especially any login with the sysadmin or securityadmin server role.
SELECT login.name,
       login.denylogin,
       login.isntname,
       login.isntgroup,
       login.isntuser
FROM master.dbo.syslogins login
WHERE login.sysadmin = 1
   OR login.securityadmin = 1;
#4 – Enabled Trace Flags
Another important bit of information that you need to know is all of the traces that are enabled. The following T-SQL statement will list all of the trace flags that are enabled globally on the server.
The following T-SQL statement will list all the trace flags that are enabled on the current SQL Server connection.
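The statements themselves are not shown above; the standard command for listing trace flags is DBCC TRACESTATUS, so the following is a sketch assuming that is what the text intends:

```sql
-- List trace flags enabled globally on the server
-- (-1 requests global flags; an empty result means none are enabled).
DBCC TRACESTATUS(-1);

-- List trace flags enabled for the current connection/session.
DBCC TRACESTATUS();
```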
#5 – Database List
Getting database-level information is equally as important as server-level information. The following T-SQL statement returns the database names, their compatibility level, their recovery model and their current status. It will help you determine whether any compatibility level updates are required, confirm each database is online, and see whether you need to change the recovery model.
SELECT name, compatibility_level, recovery_model_desc, state_desc FROM sys.databases;
If you are using SQL Server 2000, you could execute the following T-SQL statement instead. Refer to Fig 1.6.
SELECT name,
       cmptlevel,
       DATABASEPROPERTYEX(name, 'Recovery') AS RecoveryModel,
       DATABASEPROPERTYEX(name, 'Status')   AS Status
FROM master.dbo.sysdatabases;
#6 – Database File Information
The next piece of database-related information you need is the location of the database files. The following T-SQL statement provides the logical name and the physical location of the data and log files for all databases on the current SQL Server instance.
SELECT db_name(database_id) AS DatabaseName,
       name,
       type_desc,
       physical_name
FROM sys.master_files;
If you are using SQL Server 2000, you could execute the following T-SQL Statement.
SELECT db_name(dbid) AS DatabaseName,
       name,
       filename
FROM master.dbo.sysaltfiles;
#7 – Database File Groups
A database may contain filegroups beyond just the primary filegroup. The following T-SQL statement is executed against each database on the server and displays its filegroups.
EXEC master.dbo.sp_MSforeachdb @command1 = 'USE [?] SELECT * FROM sys.filegroups'
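Because sp_MSforeachdb returns one result set per database with no indication of which database each set came from, a small variation (a sketch using the same undocumented procedure) embeds the database name as a column:

```sql
-- The '?' placeholder is replaced by each database name;
-- doubling the quotes turns it into a literal column in the output.
EXEC master.dbo.sp_MSforeachdb
    @command1 = 'USE [?]; SELECT ''?'' AS DatabaseName, name, type_desc, is_default
                 FROM sys.filegroups;';
```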
#8 – Backup Status
Performing database backups is the foundation of successful database administration. The following statement lists all of the databases on the server and the day the last successful backup occurred. This will help you check the backup jobs and also help verify backups are scheduled for all the production databases.
SELECT db.name,
       CASE WHEN MAX(b.backup_finish_date) IS NULL THEN 'No Backup'
            ELSE CONVERT(varchar(100), MAX(b.backup_finish_date))
       END AS last_backup_finish_date
FROM sys.databases db
LEFT OUTER JOIN msdb.dbo.backupset b
       ON db.name = b.database_name
      AND b.type = 'D'
WHERE db.database_id NOT IN (2)
GROUP BY db.name
ORDER BY 2 DESC;
#9 – Backup Location
Once you have established that the backups are successful, you need a command that shows the administrator the location of all the backup files. You don’t want the backups to go to the local drive or to the Windows drive, so you need to verify the path and filename of each backup. The following statement gets all the information related to the current backup locations, which is stored in the msdb database.
SELECT Distinct physical_device_name FROM msdb.dbo.backupmediafamily;
#10 – Connections and Commands
To monitor current processes and connections, you need a command that shows you what is currently happening on the server. From the beginning, SQL Server database administrators have used sp_who and sp_who2 to check current users, processes and session information. These procedures also provide information about CPU, memory and blocking for each session. Also, take a look at sp_whoisactive, which is arguably the best tool available for finding this type of information.
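As a minimal illustration, sp_who2 can be called with no arguments or with the 'active' filter:

```sql
-- All sessions, including sleeping ones.
EXEC sp_who2;

-- Only sessions that are currently doing work.
EXEC sp_who2 'active';
```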
If you are a Database Administrator, these are the types of commands you should be using as often as required. You should have them in your arsenal of tools, ready for the day when things go wrong, or even when things just don’t seem normal. Run these commands and keep a baseline of results so you know what normal looks like, and check these settings and statuses as often as required to verify everything is in order.
One of the first steps, if you’re a Database Administrator who has just assumed responsibility for a new database environment, is to create an inventory of existing database objects. This includes servers, databases, tables, stored procedures, triggers, and scheduled jobs. To get a list of existing SQL Server scheduled jobs you can go to the SQL Server Management Studio (SSMS) and use the GUI to navigate to the SQL Server Agent and expand the Jobs folder to see a listing of the currently scheduled jobs. Documenting the details of those job schedules can be an arduous task, especially in a large production environment with many jobs or even few jobs on many server instances. Although you can use SSMS to quickly view or create jobs and schedules, generating a master list usually involves a lot of mouse-clicking followed by cutting and pasting into Excel.
The script below generates a master schedule for all jobs on the selected server. The results include the server name, the job name, the schedule name, whether or not the job is enabled, the frequency, and the interval. The information is presented in an easy-to-read format where possible. You might want to change the column order or date formats to suit your needs, or you could even schedule it as a job and then email the results.
The image shows some sample output that has been condensed for space purposes.
This script should work on an SQL Server 2005 or higher instance.
SET NOCOUNT ON;
SELECT 'ServerName'   = LEFT(@@ServerName, 20),
       'JobName'      = LEFT(S.name, 60),
       'ScheduleName' = LEFT(SS.name, 50),
       'Enabled' = CASE (S.enabled)
                       WHEN 0 THEN 'No'
                       WHEN 1 THEN 'Yes'
                       ELSE 'UNKNOWN'
                   END,
       'Frequency' = CASE (SS.freq_type)
                         WHEN 1 THEN 'Once'
                         WHEN 4 THEN 'Daily'
                         WHEN 8 THEN (CASE WHEN (SS.freq_recurrence_factor > 1)
                                           THEN 'Every ' + CONVERT(varchar(3), SS.freq_recurrence_factor) + ' Weeks'
                                           ELSE 'Weekly' END)
                         WHEN 16 THEN (CASE WHEN (SS.freq_recurrence_factor > 1)
                                            THEN 'Every ' + CONVERT(varchar(3), SS.freq_recurrence_factor) + ' Months'
                                            ELSE 'Monthly' END)
                         WHEN 32 THEN 'Every ' + CONVERT(varchar(3), SS.freq_recurrence_factor) + ' Months' -- relative
                         WHEN 64 THEN 'SQL Startup'
                         WHEN 128 THEN 'SQL Idle'
                         ELSE 'UNKNOWN'
                     END,
       'Interval' = CASE
                        WHEN (freq_type = 1) THEN 'One Time'
                        WHEN (freq_type = 4 AND freq_interval = 1) THEN 'Every Day'
                        WHEN (freq_type = 4 AND freq_interval > 1) THEN 'Every ' + CONVERT(varchar(10), freq_interval) + ' Days'
                        WHEN (freq_type = 8) THEN
                            (SELECT 'Weekly Schedule' = D1 + D2 + D3 + D4 + D5 + D6 + D7
                             FROM (SELECT SS.schedule_id,
                                          freq_interval,
                                          'D1' = CASE WHEN (freq_interval & 1  <> 0) THEN 'Sun ' ELSE '' END,
                                          'D2' = CASE WHEN (freq_interval & 2  <> 0) THEN 'Mon ' ELSE '' END,
                                          'D3' = CASE WHEN (freq_interval & 4  <> 0) THEN 'Tue ' ELSE '' END,
                                          'D4' = CASE WHEN (freq_interval & 8  <> 0) THEN 'Wed ' ELSE '' END,
                                          'D5' = CASE WHEN (freq_interval & 16 <> 0) THEN 'Thu ' ELSE '' END,
                                          'D6' = CASE WHEN (freq_interval & 32 <> 0) THEN 'Fri ' ELSE '' END,
                                          'D7' = CASE WHEN (freq_interval & 64 <> 0) THEN 'Sat ' ELSE '' END
                                   FROM msdb..sysschedules SS
                                   WHERE freq_type = 8) AS F
                             WHERE schedule_id = SJ.schedule_id)
                        WHEN (freq_type = 16) THEN 'Day ' + CONVERT(varchar(2), freq_interval)
                        WHEN (freq_type = 32) THEN
                            (SELECT freq_rel + WDAY
                             FROM (SELECT SS.schedule_id,
                                          'freq_rel' = CASE (freq_relative_interval)
                                                           WHEN 1  THEN 'First'
                                                           WHEN 2  THEN 'Second'
                                                           WHEN 4  THEN 'Third'
                                                           WHEN 8  THEN 'Fourth'
                                                           WHEN 16 THEN 'Last'
                                                           ELSE 'UNKNOWN' END,
                                          'WDAY' = CASE (freq_interval)
                                                       WHEN 1  THEN ' Sun'
                                                       WHEN 2  THEN ' Mon'
                                                       WHEN 3  THEN ' Tue'
                                                       WHEN 4  THEN ' Wed'
                                                       WHEN 5  THEN ' Thu'
                                                       WHEN 6  THEN ' Fri'
                                                       WHEN 7  THEN ' Sat'
                                                       WHEN 8  THEN ' Day'
                                                       WHEN 9  THEN ' Weekday'
                                                       WHEN 10 THEN ' Weekend'
                                                       ELSE 'UNKNOWN' END
                                   FROM msdb..sysschedules SS
                                   WHERE SS.freq_type = 32) AS WS
                             WHERE WS.schedule_id = SS.schedule_id)
                    END,
       'Time' = CASE (freq_subday_type)
                    WHEN 1 THEN LEFT(STUFF((STUFF((REPLICATE('0', 6 - LEN(Active_Start_Time))) + CONVERT(varchar(6), Active_Start_Time), 3, 0, ':')), 6, 0, ':'), 8)
                    WHEN 2 THEN 'Every ' + CONVERT(varchar(10), freq_subday_interval) + ' seconds'
                    WHEN 4 THEN 'Every ' + CONVERT(varchar(10), freq_subday_interval) + ' minutes'
                    WHEN 8 THEN 'Every ' + CONVERT(varchar(10), freq_subday_interval) + ' hours'
                    ELSE 'UNKNOWN'
                END,
       'Next Run Time' = CASE SJ.next_run_date
                             WHEN 0 THEN CAST('N/A' AS char(10))
                             ELSE CONVERT(char(10), CONVERT(datetime, CONVERT(char(8), SJ.next_run_date)), 120)
                                  + ' '
                                  + LEFT(STUFF((STUFF((REPLICATE('0', 6 - LEN(next_run_time))) + CONVERT(varchar(6), next_run_time), 3, 0, ':')), 6, 0, ':'), 8)
                         END
FROM msdb.dbo.sysjobschedules SJ
JOIN msdb.dbo.sysjobs        S  ON S.job_id = SJ.job_id
JOIN msdb.dbo.sysschedules   SS ON SS.schedule_id = SJ.schedule_id
ORDER BY S.name;
In the not too distant past, Windows servers were rebooted much too frequently for a truly stable environment, but we got used to the practice. With more recent versions of Windows, reboots have become less frequent and it isn’t uncommon to see a server that hasn’t been rebooted in more than 90 days. The issue is that SQL Server is using the same log file during that time, with thousands or even millions of rows that take forever to read and review.
Log Cycling Frequency
For most servers or workloads, you will typically want to keep only about one week of data per log. You might prefer to create a weekly job that cycles the SQL Server error log (i.e., terminates the current log and creates a new one in its place). This way, instead of having a single log that stretches over the past 90 days, as in the screenshot below, you’d end up with a new log for each week:
View the SQL Server error log by using SQL Server Management Studio or any text editor. By default, the error log is located at Program Files\Microsoft SQL Server\MSSQL.n\MSSQL\LOG\ERRORLOG and ERRORLOG.n files.
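Rather than opening the file itself, you can also read and filter the log from T-SQL with sp_readerrorlog (undocumented but widely used; the parameters shown are the log number, the log type, and up to two search strings):

```sql
-- Search the current SQL Server error log (log 0, type 1 = SQL Server)
-- for entries containing both 'Error' and 'Severity'.
EXEC sp_readerrorlog 0, 1, 'Error', 'Severity';
```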
On some servers or with some workloads, however, cycling the logs on a nightly or monthly basis might make more sense. There is no single 'best' approach to determining when to cycle the logs, but if you're troubleshooting a production problem, looking to audit access, or trying to use the logs for any other reason, having to wait while records load can be a problem. If you're connecting to the server remotely (a best practice for production servers) and there are too many rows to load, you'll commonly end up with the "SSMS is busy" dialog:
Cycling SQL Server Logs
Once you’ve determined an interval at which to regularly cycle your logs, cycling them is actually simple. Just set up a SQL Server Agent Job that runs weekly, and runs this script to cycle the error log:
-- Error Log:
USE master;
GO
EXEC master.sys.sp_cycle_errorlog;
GO
You can use the same technique to cycle the SQL Server Agent log as well:
-- SQL Server Agent Error Log:
USE msdb;
GO
EXEC dbo.sp_cycle_agent_errorlog;
GO
If you already have some jobs running to clean up backup history or other maintenance data, just add these steps.
Error Log Retention
For security purposes it’s a best practice to keep a fairly large number of error logs on hand. Malicious users trying to cover their tracks will attempt to cycle these same logs: if they’re able to execute sp_cycle_errorlog, they could effectively cycle the log enough to hide when they gained access to your system or did something evil. By default a SQL Server instance keeps 6 error logs on hand, so if you’re dealing with highly sensitive information or an environment where auditing is very important, you might want to push this number up. You can also purchase a third-party log retention product that pushes the logs from the production system to a system dedicated to retaining logs. Otherwise, keeping about 10 logs on hand is typical in most environments (where security and auditing are not critical concerns).
To specify the number of log files retained (i.e., other than the default), you can either edit the registry or let SQL Server Management Studio edit the registry for you. To use SSMS, just right-click on the SQL Server Logs node of the instance in question and click Configure:
Then, you can set whatever options or choices you’d like from the ensuing dialog:
You can use the Script button to extract these actions as a script for review, or to apply the same change on another server.
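If you prefer to set the value directly from T-SQL, the same registry entry can be written with xp_instance_regwrite. This is a sketch: the NumErrorLogs value and registry path shown here are what the SSMS dialog writes on a default installation, but verify them on a non-production instance first.

```sql
USE master;
GO
-- Retain 12 error logs instead of the default 6.
EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     12;
GO
```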
This script returns a list of triggers along with their schemas using sp_MSforeachtable and sp_helptrigger. I recommend you use these scripts in a test environment until you are comfortable with the results.
Given below is the script:
--This script is compatible with SQL Server 2005 and above.
-- Run sp_helptrigger against every table that actually has triggers;
-- the @whereand clause restricts the iteration to those tables.
EXEC sp_MSforeachtable
    @command1 = 'EXEC sp_helptrigger ''?''',
    @whereand = 'And Object_id In (Select parent_id From sys.triggers)';
Often, the most difficult job given to a database administrator is finding out what happened after something terrible has occurred. One example is that someone has dropped a table from your database and you want to track down who did it. Maybe someone has deleted some data from a table, but no one will say who did it or why. We will look at how you can use the transaction log to track down some of this information.
If you want more detailed information on how to read the Transaction Log, I recommend that you read previous articles to understand how the transaction log file works and how it logs all database activity.
This example will use the undocumented function "fn_dblog" to find any unauthorized or unapproved deletes or table drops. It will help you track down any activity that dropped a table or deleted data from a table. Try these examples in a test environment first, and make sure you understand the scripts before you use them in production.
Finding a user who ran a DELETE statement
Before moving ahead, we will create a database and a table on which I will delete some data. Run the below SQL code to create a database and table.
-- Create DB.
USE [master];
GO
CREATE DATABASE ReadingDBLog;
GO

-- Create tables.
USE ReadingDBLog;
GO
CREATE TABLE [Location] (
    [Sr.No] INT IDENTITY,
    [Date]  DATETIME DEFAULT GETDATE(),
    [City]  CHAR(25) DEFAULT 'Bangalore'
);
We have created a database named "ReadingDBLog" and a table 'Location' with three columns. Now we will insert 100 rows into the table.
USE ReadingDBLog;
GO
INSERT INTO Location DEFAULT VALUES;
GO 100
Now go ahead and delete some rows to check who has deleted your data.
USE ReadingDBLog;
GO
DELETE Location WHERE [Sr.No] = 10;
GO
SELECT * FROM Location WHERE [Sr.No] = 10;
GO
You can see in the above screenshot that a row has been deleted from the table “Location”. I also ran a SELECT statement to verify the data has been deleted.
Now we have to search the transaction log file to find the info about the deleted rows. Run the below command to get info about all deleted transactions.
USE ReadingDBLog;
GO
SELECT [Transaction ID],
       Operation,
       Context,
       AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS';
All transactions that executed a DELETE statement are displayed by the above command, as the screenshot shows. Since we are searching for deleted data in the table Location, we look at the last row, where the table name appears in the "AllocUnitName" column. That row shows a DELETE was performed on a HEAP table 'dbo.Location' under transaction ID 0000:000004ce. Capture the transaction ID here for our next command.
We will use the transaction ID found above in the command below to get the transaction SID of the user who deleted the data.
USE ReadingDBLog;
GO
SELECT Operation,
       [Transaction ID],
       [Begin Time],
       [Transaction Name],
       [Transaction SID]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '0000:000004ce'
  AND [Operation] = 'LOP_BEGIN_XACT';
Here we can also see the [Begin Time] of this transaction, which helps narrow down exactly when the data was deleted; you can then filter on that begin time to confirm when the command was executed.
We can read the above output as: "A DELETE statement began at 2013/10/14 12:55:17:630 under transaction ID 0000:000004ce by the user with transaction SID 0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000."
Now our next step is to convert the transaction SID hexadecimal value into text to find the real name of the user.
Now we will figure out who ran the DELETE command. We will copy the hexadecimal value from the transaction SID column for the DELETE transaction and then pass that value into the SUSER_SNAME () function.
USE master;
GO
SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000);
Now we have found the user that did the delete.
Finding a user who ran a DROP statement
Here I am going to drop table Location.
USE ReadingDBLog;
GO
DROP TABLE Location;
Similarly, if you drop any object or perform any operation in your database, it gets logged in the transaction log file and is visible through the fn_dblog function.
Run the below script to display all logs which have been logged under DROPOBJ statement.
USE ReadingDBLog;
GO
SELECT Operation,
       [Transaction Id],
       [Transaction SID],
       [Transaction Name],
       [Begin Time],
       [SPID],
       Description
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = 'DROPOBJ';
GO
Here we can find the transaction SID and all the other information we need to identify the user.
Now we can pass the transaction SID into system function SUSER_SNAME () to get the exact user name.
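As with the DELETE example earlier, the call looks like this (the SID shown is illustrative; substitute the [Transaction SID] value returned by your own query):

```sql
USE master;
GO
-- Replace the hex value with the [Transaction SID] from your DROPOBJ output.
SELECT SUSER_SNAME(0x0105000000000005150000009F11BA296C79F97398D0CF19E8030000) AS UserName;
```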
Once again, we found the user in question.
Use this function to do more research into your transaction log file. It returns more than 100 columns of informative data.
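A simple starting point for that research (a sketch; fn_dblog is undocumented, so the exact set of columns can vary between versions):

```sql
USE ReadingDBLog;
GO
-- Inspect every column fn_dblog exposes, looking at the most recent records.
SELECT TOP (10) *
FROM fn_dblog(NULL, NULL)
ORDER BY [Current LSN] DESC;
```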