You may want to log an event each time a user logs into your database server. This can easily be done by starting SQL Server Management Studio (SSMS) and running the following Transact-SQL code:
CREATE DATABASE LogonAuditDB /* Creates database for storing audit data */
GO
CREATE TABLE LogonAuditDB.dbo.LogonAuditing /* Creates table for logons inside the new database */
    (EventTime datetime, LoginName varchar(50), ClientHost varchar(50),
     HostName varchar(50), AppName varchar(100))
GO
CREATE TRIGGER [LogonAuditTrigger] /* Creates trigger for logons */
ON ALL SERVER
FOR LOGON
AS
BEGIN
    DECLARE @LogonTriggerData xml
    SET @LogonTriggerData = eventdata()
    INSERT INTO [LogonAuditDB].[dbo].[LogonAuditing]
    VALUES (@LogonTriggerData.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime'),
            @LogonTriggerData.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(50)'),
            @LogonTriggerData.value('(/EVENT_INSTANCE/ClientHost)[1]', 'varchar(50)'),
            HOST_NAME(), APP_NAME())
END
GO
To query the stored information about logons from SSMS, execute the following script:
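A minimal version of that script, assuming the database and table created above:

```sql
-- List recorded logons, most recent first
SELECT LoginName, ClientHost, HostName, AppName, EventTime
FROM LogonAuditDB.dbo.LogonAuditing
ORDER BY EventTime DESC;
```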
This article by Kimberly Tripp is very interesting. Simply put, she says you want the initial size of your transaction logs set to 8 GB, with auto growth set to 8 GB. This should help keep your Virtual Log File (VLF) sizes below 512 MB, improve performance, and make maintenance during backups much faster.
The article, in part, reads:
First, here’s how the log is divided into VLFs. Each “chunk” that is added is divided into VLFs at the time of the log growth (regardless of whether this is a manual or auto-grow addition), and it’s all dependent on the size that is ADDED, not the size of the log itself. So, take a 10MB log that is extended to 50MB; here a 40MB chunk is being added. This 40MB chunk will be divided into 4 VLFs. Here’s the breakdown by chunk size:
chunks up to 64MB = 4 VLFs
chunks larger than 64MB and up to 1GB = 8 VLFs
chunks larger than 1GB = 16 VLFs
And, what this translates into is that a transaction log of 64GB would have 16 VLFs of 4GB each. As a result, the transaction log could only clear at more than 4GB of log information AND that only when it’s completely inactive. To have a more ideally sized VLF, consider creating the transaction log in 8GB chunks (8GB, then extend it to 16GB, then extend it to 24GB and so forth) so that the number (and size) of your VLFs is more reasonable (in this case 512MB).
You should visit Kimberly’s blog entry for more information. You can also get more information about Virtual Log Files here.
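As a rough sketch of that advice (the database and log file names here are hypothetical), you can size the log in 8 GB steps and then count the resulting VLFs:

```sql
-- Grow the log to 8GB with 8GB auto-growth, per the guidance above
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB, FILEGROWTH = 8GB);

-- DBCC LOGINFO returns one row per VLF, so the row count is the VLF count
DBCC LOGINFO ('MyDatabase');
```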
You can build an interesting game with almost any computer language. Building a game using Transact-SQL seems to be the biggest challenge, but it has been done before. I wrote about this subject before, but there is now a new effort by Daniel Janik as described in this article.
To me this project is more than a simple game. This was not only a quest to help my son and others interested in technology discover SQL Server through gaming; it was also a challenge to see whether it could be done. I’ve never heard of a game in SQL, and it’s really a silly thought. Nonetheless, I wanted to see if I could make something, and I did.
There may be some bugs, but I trust that those reading this blog can probably fix them on their own. If you find any, please feel free to report them back to me.
You can read the entire article here.
In a recent post, Adam Machanic asked his followers to send him the items they thought were the worst features of SQL Server. He calls the resulting list the “SQL Hall of Shame”. He put together the following list:
- In-Memory OLTP
- English Query
- Data Quality Services (DQS)
- Master Data Services (MDS)
- Notification Services (SSNS)
- Query Notifications
- Buffer Pool Extension (BPE)
- Management Data Warehouse (MDW) / Data Collector
- Lightweight Pooling / Fiber Mode
- SQL Server Management Studio (SSMS)
- Connect to SSIS from SQL Server Management Studio
- DROP DATABASE IF EXISTS
- Columnsets (and Sparse Columns in general)
- Utility Control Point (UCP)
- Raw Partitions
- Service Broker (SSB)
- Not Freeing Allocated Memory Except Under Pressure
- Database Engine Tuning Advisor (née Index Tuning Wizard)
- DBCC PINTABLE
- Virtual Interface Adaptor (VIA) Network Protocols
- Mirrored Backups
If you read the article by Adam Machanic, you’ll get the details for each item on the list.
Nmap (short for Network Mapper) is a powerful port scanner. This free and open-source tool is the most popular port scanner around, and it allows you to easily perform network discovery and security auditing. Used for a wide range of tasks, Nmap uses raw IP packets to determine the hosts available on a network, the services they offer (along with version details), the operating systems the hosts run, the type of firewall in use, and other information.
Nmap is available for all major platforms including Windows, Linux, and OS X.
We have written about how you can use this simple tool to find SQL Server instances on your network.
Microsoft’s SQL Server database engine has gone through many versions over the years, and it remains one of the most popular platforms for database development. Each version also supports databases created under certain older versions of the engine. This table shows which compatibility levels each SQL Server version supports.
|Version|Supported Compatibility Levels|
|---|---|
|SQL Server 2016|130, 120, 110, 100|
|SQL Server 2014|120, 110, 100|
|SQL Server 2012|110, 100, 90|
|SQL Server 2008 R2|100, 90, 80|
|SQL Server 2008|100, 90, 80|
|SQL Server 2005|90, 80, 70|
|SQL Server 2000|80, 70, 65, 60|
You can get your current version information with this simple query:
SELECT name, compatibility_level FROM sys.databases;
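If you need to move a database to one of the levels shown in the table, you can change it with ALTER DATABASE (the database name here is just a placeholder):

```sql
ALTER DATABASE MyDatabase
SET COMPATIBILITY_LEVEL = 130;  -- 130 corresponds to SQL Server 2016
```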
You can also get end-of-life information here.
Technology is always improving. Microsoft SQL Server 2016 includes many new and improved features that will provide users with greater availability, better performance, and more security. The Microsoft IT Enterprise Services BI team has identified their top eight features and enhancements:
- One programming surface across all editions – With November’s SQL Server 2016 Service Pack 1 (SP1), you can switch from Express to Standard, or Standard to Enterprise, and you don’t have to rework code to take advantage of additional features.
- In-Memory OLTP helps ESBI meet their users’ business requirements for increased agility.
- Columnstore Indexes reduce the amount of time it takes to run and render SSRS reporting data.
- Temporal data reduces the amount of support tickets received from the field due to inaccurate data.
- Row-Level Security provides a more reliable and standardized method to easily control which users can access data.
- Dynamic Data Masking helps limit exposure of sensitive data, preventing users who should not have access to the data from viewing it.
- Query Store provides better insight into the performance differences caused by changes in query plans.
- Live Query Statistics allows a view of active query execution plans and helps identify and fix blocking issues while queries are running.
- Stretch Database helps improve performance for frequently used data while preserving access to archived data.
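As one small illustration of the list above, here is a minimal Dynamic Data Masking sketch; the table and column names are invented for the example:

```sql
CREATE TABLE dbo.Customers
    (CustomerId int IDENTITY(1,1) PRIMARY KEY,
     Email varchar(100) MASKED WITH (FUNCTION = 'email()'),
     Phone varchar(20)  MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)'));
-- Users without the UNMASK permission see masked values on SELECT
```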
You can read additional details of these features here.