SQL Server Information Script


As a SQL Server DBA, you need detailed information about your databases: file sizes, locations, growth settings, and so on. This information is easy to extract from a SQL Server instance with the right script, and in this article Tim Ford provides one.

The other thing to consider is that the information returned through these various means doesn't necessarily match in terms of units of measure: some of it is returned as megabytes (mb), some as counts of data pages (which are 8 kilobytes (kb) each). The disparity does not do us much good.
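For example, sys.master_files reports file sizes as a count of 8 kb pages, so you divide by 128 to get megabytes. A minimal illustration:

-- sys.master_files stores size as a count of 8 kb pages;
-- dividing by 128.0 converts that count to megabytes.
SELECT name,
       size         AS size_in_pages,
       size / 128.0 AS size_in_mb
FROM sys.master_files
WHERE database_id = DB_ID();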

All this leads to the need to have a single option to go to in order to return information that provides data points that are valuable to the Database Administrator:

  • Server Name
  • Database Name
  • File Name (both logical and physical)
  • File Type
  • File Size, Consumption, Free Space
  • Growth Units (percent or a fixed mb size)
  • Maximum File Size

Since there is no single option for aggregating this information, I had to build a stored procedure to collect it for me, and it's time to share it with all of you.

Parameters

The underlying code taps into some of those sources I mentioned above: sys.database_files and the FILEPROPERTY() function. The remainder is pure math, converting counts of 8 kb pages (in some cases) into megabytes. The stored procedure accepts two parameters (an example call follows their descriptions):

@granularity: d | NULL – ‘d’ signifies database-level granularity and aggregates all size metrics for the data files (and, separately, the transaction log) into a single row per database.

@database_name: <database name> | NULL – if a database name is specified, only the results for that database are returned; otherwise, results for all databases on the instance are returned.
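For example, calls might look like the following; the procedure name dbo.usp_file_sizes is a placeholder, so substitute whatever name you deploy Tim's procedure under:

-- The procedure name is a placeholder; use the name from Tim's script.
-- Database-level granularity for a single database:
EXEC dbo.usp_file_sizes @granularity = 'd', @database_name = 'AdventureWorks';

-- File-level granularity for every database on the instance:
EXEC dbo.usp_file_sizes @granularity = NULL, @database_name = NULL;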

Returned Results

The following columns are returned when specifying database-level granularity:

  • server
  • database_name
  • db_size_mb
  • db_free_mb
  • db_used_mb
  • data_size_mb
  • data_free_mb
  • data_used_mb
  • data_used_pct
  • log_size_mb
  • log_free_mb
  • log_used_mb
  • log_used_pct

The following columns are returned when specifying file-level granularity (a rough sketch of how such columns can be computed follows the list):

  • server
  • database_name
  • file_name
  • physical_name
  • file_type
  • db_size_mb
  • db_free_mb
  • db_used_mb
  • free_space_pct
  • growth_units
  • max_file_size_mb
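To give a feel for the underlying math, here is a minimal per-file sketch along the lines Tim describes: sys.database_files plus the FILEPROPERTY() function, with 8 kb page counts divided by 128 to get megabytes. It covers only the current database and is an approximation, not Tim's procedure itself:

-- Approximate per-file sizing for the current database only.
-- size, max_size, and (when not percent-based) growth are counts
-- of 8 kb pages; 128 pages = 1 mb.
SELECT
    @@SERVERNAME AS [server],
    DB_NAME()    AS database_name,
    f.name       AS [file_name],
    f.physical_name,
    f.type_desc  AS file_type,
    CAST(f.size / 128.0 AS DECIMAL(18,2)) AS db_size_mb,
    CAST((f.size - FILEPROPERTY(f.name, 'SpaceUsed')) / 128.0 AS DECIMAL(18,2)) AS db_free_mb,
    CAST(FILEPROPERTY(f.name, 'SpaceUsed') / 128.0 AS DECIMAL(18,2)) AS db_used_mb,
    CAST(100.0 * (f.size - FILEPROPERTY(f.name, 'SpaceUsed'))
         / NULLIF(f.size, 0) AS DECIMAL(5,2)) AS free_space_pct,
    CASE WHEN f.is_percent_growth = 1
         THEN CAST(f.growth AS VARCHAR(10)) + ' %'
         ELSE CAST(f.growth / 128 AS VARCHAR(10)) + ' mb'
    END AS growth_units,
    CASE WHEN f.max_size = -1 THEN NULL   -- unlimited growth
         ELSE f.max_size / 128
    END AS max_file_size_mb
FROM sys.database_files AS f;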

Read the entire article, which includes the sample scripts.


Understanding your SQL Server Fragmentation


Database fragmentation can cause all types of performance issues and can prevent you from shrinking your database files. In this article by Slava Murygin, we see some examples of how to explore that fragmentation, including a script that builds a graphical representation of the allocations in your database files.

So, now it's time to see WHY SQL Server can't shrink our file.
I use the following script to see object allocations and fragmentation in a data file:

-- Capture every allocated page in the current database (first partition only;
-- the sizing below assumes a single data file, file_id = 1)
IF Object_ID('tempdb..#Used_Pages_List') Is not Null
 DROP TABLE #Used_Pages_List;
GO
SELECT object_id, allocated_page_page_id as page_id INTO #Used_Pages_List
FROM sys.dm_db_database_page_allocations(DB_ID(), NULL, NULL, NULL, NULL) as a
WHERE partition_id = 1
GO
CREATE CLUSTERED INDEX #CLIX_Used_Pages_List ON #Used_Pages_List(object_id,page_id)
GO
DECLARE @i INT = 0; -- current object_id while looping
DECLARE @m INT = (SELECT MAX(object_id) FROM #Used_Pages_List);
DECLARE @g TABLE(Alloc GEOMETRY, Table_Object SYSNAME, object_id INT); -- one shape per object
DECLARE @s INT = 131072 -- declared in the original script but not used below
-- Each drawn row of the picture represents @LineMegabytes Mb;
-- @d is the row width in 8 Kb pages (128 pages = 1 Mb)
DECLARE @LineMegabytes INT = (SELECT Size/(64000)+1 FROM sys.database_files WHERE file_id = 1);
DECLARE @d INT = 128*@LineMegabytes;
DECLARE @DBSize INT = (SELECT Size FROM sys.database_files WHERE file_id = 1) / @d + 1 -- row count
DECLARE @t TABLE(ID INT, MessageText VARCHAR(100)); -- legend rows

-- Legend explaining the scale of the picture
INSERT INTO @t(ID,MessageText) VALUES
(1,'Scale: 1 Square 10x10 = 1 Page (8 Kb)'),
(2,'Row: ' + CAST(@LineMegabytes as VARCHAR) + ' Mb'),
(3,'Vertical: 100 points = ' + CAST(10*@LineMegabytes as VARCHAR) + ' Mb')

SELECT * FROM @t ORDER BY ID;

-- Loop over every object and build one GEOMETRY per object: each allocated
-- page becomes a 10x10 square at column (page_id % @d) and row (page_id / @d)
WHILE @i < @m
BEGIN
 SELECT @i = MIN(object_id) FROM #Used_Pages_List
 WHERE @i < object_id

 PRINT CONVERT(VARCHAR,@i) -- progress output

 INSERT INTO @g(object_id, Table_Object, Alloc) 
 SELECT @i, '[' + Object_Schema_Name(@i)+ '].[' + Object_Name(@i) + ']'
  , CONVERT(GEOMETRY,'POLYGON(' + SUBSTRING( (
  SELECT 
   ',(' 
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ','
   + CONVERT(VARCHAR, (page_id % @d+1) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ','
   + CONVERT(VARCHAR, (page_id % @d+1) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 - 10) + ','
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 - 10) + ','
   + CONVERT(VARCHAR, (page_id % @d) * 10) + ' ' + CONVERT(VARCHAR, (page_id / -@d) * 10 ) + ')'
  FROM #Used_Pages_List
  WHERE @i = object_id
  FOR XML PATH ('')
 ),2,@@TEXTSIZE) + ')'); -- SUBSTRING(...,2,...) strips the leading comma
END

-- Render every object's shape plus a line marking the end of the data file
SELECT object_id, Table_Object, Alloc FROM @g
UNION ALL
SELECT 0, 'Database Size', CONVERT(GEOMETRY,'LINESTRING(0 ' 
 + CONVERT(VARCHAR, @DBSize * -10) + ', ' + CONVERT(VARCHAR, @d * 10) + ' ' + CONVERT(VARCHAR, @DBSize * -10) + ')')
ORDER BY object_id
GO
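To see how the modulo and division math places a page on the picture, here is a tiny worked example, assuming a file small enough that @LineMegabytes comes out as 1 (so @d = 128 pages per row):

-- Page 300 lands in the second row of the picture:
DECLARE @d INT = 128, @page_id INT = 300;
SELECT (@page_id % @d) * 10  AS x_left, -- 44 * 10 = 440
       (@page_id / -@d) * 10 AS y_top;  -- -2 * 10 = -20 (rows grow downward)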

Here is the result for my database (see the rendered image in the original article):

As you can see, I definitely have a lot of free space, but my data is spread across the whole file, right up to its boundary, so there is no way to make the file smaller.
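Those boundary pages are exactly what blocks a shrink: DBCC SHRINKFILE with TRUNCATEONLY, for example, releases only the unused space at the end of the file without moving any pages, so even a handful of allocated pages near the boundary keeps it from returning much:

-- Releases only the free space at the end of data file 1; moves no pages,
-- so allocations near the file boundary limit what it can reclaim.
DBCC SHRINKFILE (1, TRUNCATEONLY);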

You may need to copy and paste the script into SSMS to see everything, because of the limited formatting options available on this page.