1. Stop Shrinking Data Files
Shrinking data files has been regarded as a bad practice for a long time, and it can hurt performance in a couple of ways. Shrinking a data file causes heavy index fragmentation, which means your subsequent queries might suffer. And if you don’t have Instant File Initialization turned on, the file growth that inevitably follows can also hinder performance and potentially cause timeouts. Although there are times when shrinking a file might be necessary, make sure that you know the impact before you try it.
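As a minimal sketch of that effect, the following example assumes a hypothetical database named YourDatabase with a data file named YourDatabase_Data (both placeholder names): after the shrink, sys.dm_db_index_physical_stats will typically show heavy logical fragmentation on the indexes that were moved, and those indexes will usually need to be rebuilt or reorganized afterward.
-- Hypothetical example: YourDatabase and YourDatabase_Data are placeholder names.
USE YourDatabase;
GO
-- Shrink the data file to a 1024 MB target (the operation that causes the fragmentation).
DBCC SHRINKFILE (N'YourDatabase_Data', 1024);
GO
-- Check the logical fragmentation the shrink left behind.
SELECT OBJECT_NAME(ips.[object_id]) AS [Table],
i.[name] AS [Index],
ips.[avg_fragmentation_in_percent] AS [Avg Fragmentation Percent]
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i ON i.[object_id] = ips.[object_id] AND i.[index_id] = ips.[index_id]
ORDER BY ips.[avg_fragmentation_in_percent] DESC;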
2. Find Problem Queries
If you look at all the stored procedures or queries that run against an instance of SQL Server, you will find that it is just a few queries that are responsible for 80 percent of the poor performance that you see throughout the day. If you can identify these problem queries and prioritize tuning them, you can make a significant impact on the overall performance of your server.
One way that you can easily identify expensive statements is by using this code:
SELECT COALESCE(DB_NAME(t.[dbid]),'Unknown') AS [DB Name],
ecp.objtype AS [Object Type],
t.[text] AS [Adhoc Batch or Object Call],
SUBSTRING(t.[text], (qs.[statement_start_offset]/2) + 1,
((CASE qs.[statement_end_offset]
WHEN -1 THEN DATALENGTH(t.[text]) ELSE qs.[statement_end_offset] END
- qs.[statement_start_offset])/2) + 1) AS [Executed Statement],
qs.[execution_count] AS [Counts],
qs.[total_worker_time] AS [Total Worker Time],
(qs.[total_worker_time] / qs.[execution_count]) AS [Avg Worker Time],
qs.[total_physical_reads] AS [Total Physical Reads],
(qs.[total_physical_reads] / qs.[execution_count]) AS [Avg Physical Reads],
qs.[total_logical_writes] AS [Total Logical Writes],
(qs.[total_logical_writes] / qs.[execution_count]) AS [Avg Logical Writes],
qs.[total_logical_reads] AS [Total Logical Reads],
(qs.[total_logical_reads] / qs.[execution_count]) AS [Avg Logical Reads],
qs.[total_clr_time] AS [Total CLR Time],
(qs.[total_clr_time] / qs.[execution_count]) AS [Avg CLR Time],
qs.[total_elapsed_time] AS [Total Elapsed Time],
(qs.[total_elapsed_time] / qs.[execution_count]) AS [Avg Elapsed Time],
qs.[last_execution_time] AS [Last Exec Time], qs.[creation_time] AS [Creation Time]
FROM sys.dm_exec_query_stats AS qs
JOIN sys.dm_exec_cached_plans ecp ON qs.plan_handle = ecp.plan_handle
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
-- ORDER BY [Total Worker Time] DESC
-- ORDER BY [Total Physical Reads] DESC
-- ORDER BY [Total Logical Writes] DESC
-- ORDER BY [Total Logical Reads] DESC
-- ORDER BY [Total CLR Time] DESC
-- ORDER BY [Total Elapsed Time] DESC
ORDER BY [Counts] DESC
The sys.dm_exec_query_stats DMV contains aggregate performance statistics for each cached query plan in the SQL Server instance. The query above lets you order the results in several ways, depending on which type of resource usage you want to review. The key is to weigh the number of times a statement was executed against its average resource usage to determine which statements would have the largest impact on performance if they were better optimized. Do not put too much weight on the total elapsed time or overall duration of a statement, because other factors, such as blocking, can inflate duration. Using this query, you should be able to quickly identify the top offending statements in your system and then prioritize them so that you can tune the most expensive ones first.
3. Locate I/O Bottlenecks
One of the reasons why performance suffers in SQL Server is I/O bottlenecks. You have three relatively easy methods at your disposal to determine whether you have I/O issues:
- Check whether you see high PAGEIOLATCH waits or WRITELOG waits in your wait statistics.
- Use the DMF sys.dm_io_virtual_file_stats() to locate any areas in which you have excessive physical I/O or excessive stalls on that I/O. These issues can show up at the database level or even at the individual file level (see the sketch after this list).
- Use the trusty PerfMon counters. At a minimum, use the Avg. Disk sec/Read and Avg. Disk sec/Write counters to see the latency of the reads and writes. On an OLTP system, you would ideally want log file latency of just a few milliseconds and data file latency of less than 10ms. Remember that these are ideal values; your system might tolerate larger latency and still be fine. Also keep in mind that when the storage subsystem can’t keep up with the current demand, the root cause often isn’t the storage hardware at all; it might instead be that the workload is generating far more I/O than it should.
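Here is a sketch of the first two methods (the PerfMon counters live outside of T-SQL). Both queries read cumulative counters, so the averages reflect everything since the last SQL Server restart; interpret them with that in mind.
-- Wait statistics: cumulative PAGEIOLATCH and WRITELOG waits since the last restart.
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms]
FROM sys.dm_os_wait_stats
WHERE [wait_type] LIKE 'PAGEIOLATCH%' OR [wait_type] = 'WRITELOG'
ORDER BY [wait_time_ms] DESC;

-- Per-file I/O and stalls: average read/write latency for every database file.
SELECT DB_NAME(vfs.[database_id]) AS [DB Name],
mf.[physical_name] AS [File],
mf.[type_desc] AS [File Type],
vfs.[num_of_reads] AS [Reads],
vfs.[num_of_writes] AS [Writes],
CASE WHEN vfs.[num_of_reads] = 0 THEN 0 ELSE vfs.[io_stall_read_ms] / vfs.[num_of_reads] END AS [Avg Read Stall ms],
CASE WHEN vfs.[num_of_writes] = 0 THEN 0 ELSE vfs.[io_stall_write_ms] / vfs.[num_of_writes] END AS [Avg Write Stall ms]
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf ON mf.[database_id] = vfs.[database_id] AND mf.[file_id] = vfs.[file_id]
ORDER BY [Avg Write Stall ms] DESC;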
When you find that you have many physical I/O bottlenecks occurring, your first instinct should be to find the queries that are causing all the physical I/O, and then try to tune them before you add more hardware. One performance aspect that you should never ignore is high latency for log writes. If you start to hold up the writing to the log file, all further DML operations can quickly become impeded, and they’ll remain so until you alleviate the bottleneck. High latency in log writes is a sure way to hinder performance in the database.
4. Separate Data and Log Files
One of the most often disregarded rules for good performance is separating the data files and the log files onto separate physical drive arrays whenever possible. The key principle here is to separate the mostly random access of the data files from the sequential writes to the transaction log. A common pitfall in a SAN environment is that even though you’re presented with different drive letters or LUNs, you can’t be sure that they correspond to different physical drive arrays. Often these apparent drives are carved from the same larger disk array, which defeats the purpose of separating the files in the first place. So make sure that you know what you’re really getting when you ask for storage on a SAN. You’ll be amazed at how much difference this separation can make as your transaction volume increases.
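A quick way to see where your files currently live is to query sys.master_files, as sketched below. Keep in mind that this only shows drive letters and paths; you still need to confirm with your storage administrator that those volumes actually map to separate physical arrays.
-- Current placement of every data and log file on the instance.
SELECT DB_NAME([database_id]) AS [DB Name],
[type_desc] AS [File Type],
[name] AS [Logical Name],
[physical_name] AS [Physical Path]
FROM sys.master_files
ORDER BY [DB Name], [type_desc];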
5. Monitor Index Usage
The sys.dm_db_index_operational_stats() DMF is a widely underutilized source of information. It provides very useful statistics on how often an index has been used, locked, waited on, and so on. An index in this context can be a clustered index, a nonclustered index, a heap, or a partition of any of these. This object gives you a feel for how your indexes are actually being used and exactly how much a specific index is costing you. It does this in part by telling you how often the index is modified at the leaf and non-leaf levels, as well as how often users waited on the object, whether those waits were on SQL Server locks or on latches, including I/O latches.
SELECT object_schema_name(ddios.object_id) + '.' + object_name(ddios.object_id) as objectName,
i.name as index_name,
case when i.is_unique = 1 then 'UNIQUE ' else '' end + i.type_desc as index_type,
ddios.leaf_insert_count + ddios.leaf_update_count + ddios.leaf_delete_count as leaf_modifications,
ddios.row_lock_wait_in_ms + ddios.page_lock_wait_in_ms as lock_wait_ms,
ddios.page_latch_wait_count,
ddios.page_io_latch_wait_count
FROM sys.dm_db_index_operational_stats(db_id(), null, null, null) as ddios
JOIN sys.indexes as i on i.object_id = ddios.object_id
and i.index_id = ddios.index_id
ORDER by ddios.page_latch_wait_count + ddios.page_io_latch_wait_count desc