How to Create a SQL Server Logon Trigger

Login - @SeniorDBA

You may want to log the event each time a user logs into your database server. This can easily be done by starting SQL Server Management Studio (SSMS) and running the following Transact-SQL code:

CREATE DATABASE LogonAuditDB /* Creates database for storing audit data */
GO
USE [LogonAuditDB]
GO
CREATE TABLE LogonAuditing /* Creates table for logons inside the new database */
(
    SessionId int,
    LogonTime datetime,
    HostName varchar(50),
    ProgramName varchar(500),
    LoginName varchar(50),
    ClientHost varchar(50)
)
GO
CREATE TRIGGER [LogonAuditTrigger] /* Creates trigger for logons */
ON ALL SERVER
FOR LOGON
AS
BEGIN
    DECLARE @LogonTriggerData xml,
            @EventTime datetime,
            @LoginName varchar(50),
            @ClientHost varchar(50),
            @HostName varchar(50),
            @AppName varchar(500)

    /* EVENTDATA() returns the details of the logon event as XML */
    SET @LogonTriggerData = EVENTDATA()

    SET @EventTime  = @LogonTriggerData.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime')
    SET @LoginName  = @LogonTriggerData.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(50)')
    SET @ClientHost = @LogonTriggerData.value('(/EVENT_INSTANCE/ClientHost)[1]', 'varchar(50)')
    SET @HostName   = HOST_NAME()
    SET @AppName    = APP_NAME()

    INSERT INTO [LogonAuditDB].[dbo].[LogonAuditing]
    (
        SessionId,
        LogonTime,
        HostName,
        ProgramName,
        LoginName,
        ClientHost
    )
    SELECT
        @@SPID,
        @EventTime,
        @HostName,
        @AppName,
        @LoginName,
        @ClientHost
END
GO

To query the stored information about logons from SSMS, execute the following script:

SELECT *
   FROM [LogonAuditDB].[dbo].[LogonAuditing]
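
One caution: because a logon trigger fires before a session is fully established, a trigger that fails (for example, when a login lacks INSERT permission on the audit table) can block new connections. If that happens, connect using the Dedicated Administrator Connection (DAC) and disable or drop the trigger:

DISABLE TRIGGER [LogonAuditTrigger] ON ALL SERVER
GO
-- Or remove it entirely:
DROP TRIGGER [LogonAuditTrigger] ON ALL SERVER
GO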

SQL Server 2017 – Windows, Linux, and Docker

SQL Server 2017 - @SeniorDBA

SQL Server 2017 is available as a Release Candidate (RC2, August 2017), following a series of Community Technology Previews (CTPs). It represents a step towards making SQL Server a platform that gives you more choices of development languages, data types, and operating systems by making it available to Linux users, as a Linux-based Docker container, and to traditional Windows users.

New features as described by Microsoft:

SQL Server 2017 Database Engine

SQL Server 2017 includes many new Database Engine features, enhancements, and performance improvements.

    • CLR assemblies can now be added to a whitelist, as a workaround for the clr strict security feature described in CTP 2.0. sp_add_trusted_assembly, sp_drop_trusted_assembly, and sys.trusted_assemblies are added to support the whitelist of trusted assemblies. (RC1)
    • Resumable online index rebuild resumes an online index rebuild operation from where it stopped after a failure (such as a failover to a replica or insufficient disk space), or pauses and later resumes an online index rebuild operation. See ALTER INDEX, Guidelines for online index operations, and the T-SQL sketch after this list. (CTP 2.0)
    • The IDENTITY_CACHE option for ALTER DATABASE SCOPED CONFIGURATION allows you to avoid gaps in the values of identity columns if a server restarts unexpectedly or fails over to a secondary server. See ALTER DATABASE SCOPED CONFIGURATION and the sketch after this list. (CTP 2.0)
    • Automatic database tuning provides insight into potential query performance problems, recommends solutions, and can automatically fix identified problems. See Automatic tuning. (CTP 2.0)
    • New graph database capabilities for modeling many-to-many relationships include new CREATE TABLE syntax for creating node and edge tables, and the keyword MATCH for queries. See Graph Processing with SQL Server 2017 and the sketch after this list. (CTP 2.0)
    • An sp_configure option called clr strict security is enabled by default to enhance the security of CLR assemblies. See CLR strict security. (CTP 2.0)
    • Setup now allows specifying initial tempdb file size up to 256 GB (262,144 MB) per file, with a warning if the file size is set greater than 1GB with IFI not enabled. (CTP 2.0)
    • The modified_extent_page_count column in sys.dm_db_file_space_usage tracks differential changes in each database file, enabling smart backup solutions that perform differential backup or full backup based on percentage of changed pages in the database. (CTP 2.0)
    • SELECT INTO T-SQL syntax now supports loading a table into a FileGroup other than the user’s default by using the ON keyword. (CTP 2.0)
    • Cross-database transactions are now supported among all databases that are part of an Always On Availability Group, including databases that are part of the same instance. See Transactions – Always On Availability Groups and Database Mirroring. (CTP 2.0)
    • New Availability Groups functionality includes clusterless support, Minimum Replica Commit Availability Groups setting, and Windows-Linux cross-OS migrations and testing. (CTP 1.3)
    • New dynamic management views:
      • sys.dm_db_log_stats exposes summary level attributes and information on transaction log files, helpful for monitoring transaction log health. (CTP 2.1)
      • sys.dm_tran_version_store_space_usage tracks version store usage per database, useful for proactively planning tempdb sizing based on the version store usage per database. (CTP 2.0)
      • sys.dm_db_log_info exposes VLF information to monitor, alert, and avert potential transaction log issues. (CTP 2.0)
      • sys.dm_db_stats_histogram is a new dynamic management view for examining statistics. (CTP 1.3)
      • sys.dm_os_host_info provides operating system information for both Windows and Linux. (CTP 1.0)
    • The Database Engine Tuning Advisor (DTA) has additional options and improved performance. (CTP 1.2)
    • In-memory enhancements include support for computed columns in memory-optimized tables, full support for JSON functions in natively compiled modules, and the CROSS APPLY operator in natively compiled modules. (CTP 1.1)
    • New string functions are CONCAT_WS, TRANSLATE, and TRIM, and WITHIN GROUP is now supported for the STRING_AGG function; examples appear in the sketch after this list. (CTP 1.1)
    • There are new bulk access options (BULK INSERT and OPENROWSET(BULK…) ) for CSV and Azure Blob files. (CTP 1.1)
    • Memory-optimized object enhancements include sp_spaceused and elimination of the 8 index limitation for memory-optimized tables, sp_rename for memory-optimized tables and natively compiled T-SQL modules, and CASE and TOP (N) WITH TIES for natively compiled T-SQL modules. Memory-optimized filegroup files can now be stored, backed up and restored on Azure Storage. (CTP 1.0)
    • DATABASE SCOPED CREDENTIAL is a new class of securable, supporting CONTROL, ALTER, REFERENCES, TAKE OWNERSHIP, and VIEW DEFINITION permissions. ADMINISTER DATABASE BULK OPERATIONS is now visible in sys.fn_builtin_permissions. (CTP 1.0)
    • Database COMPATIBILITY_LEVEL 140 is added. (CTP 1.0).
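
A minimal T-SQL sketch of a few of the items above (resumable index rebuilds, IDENTITY_CACHE, graph tables, and the new string functions); the table, index, and column names are hypothetical, not from the Microsoft documentation:

-- Resumable online index rebuild (hypothetical index and table)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (ONLINE = ON, RESUMABLE = ON)

-- From another session, pause and later resume the rebuild
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders PAUSE
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders RESUME

-- Avoid identity gaps after an unexpected restart or failover
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF

-- Graph tables and the MATCH keyword
CREATE TABLE Person (ID int PRIMARY KEY, name varchar(100)) AS NODE
CREATE TABLE friendOf AS EDGE
-- After inserting rows, query the relationship:
SELECT Person2.name
FROM Person Person1, friendOf, Person Person2
WHERE MATCH(Person1-(friendOf)->Person2)
AND Person1.name = 'Alice'

-- New string functions
SELECT CONCAT_WS('-', '2017', '08', '01')   -- returns 2017-08-01
SELECT TRANSLATE('2*[3+4]', '[]', '()')     -- returns 2*(3+4)
SELECT TRIM('   padded   ')                 -- returns 'padded'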

SQL Server 2017 Integration Services (SSIS)

    • The Scale Out feature in SSIS has the following new and changed features.
      • Scale Out Master now supports high availability.
      • The failover handling of the execution logs from Scale Out Workers is improved.
      • The parameter runincluster of the stored procedure [catalog].[create_execution] is renamed to runinscaleout for consistency and readability; a sketch follows this list.
      • The SSIS Catalog has a new global property to specify the default mode for executing SSIS packages.
    • In the new Scale Out for SSIS feature, you can now use the Use32BitRuntime parameter when you trigger execution. (CTP 2.1)
    • SQL Server 2017 Integration Services (SSIS) now supports SQL Server on Linux, and a new package lets you run SSIS packages on Linux from the command line. For more information, see the blog post announcing SSIS support for Linux. (CTP 2.1)
    • The new Scale Out for SSIS feature makes it much easier to run SSIS on multiple machines. See Integration Services Scale Out. (CTP 1.0)
    • OData Source and OData Connection Manager now support connecting to the OData feeds of Microsoft Dynamics AX Online and Microsoft Dynamics CRM Online. (CTP 1.0)
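
A hedged sketch of starting a package execution with the renamed parameter; the folder, project, and package names are hypothetical placeholders:

DECLARE @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution]
    @folder_name = N'MyFolder',          -- hypothetical
    @project_name = N'MyProject',        -- hypothetical
    @package_name = N'MyPackage.dtsx',   -- hypothetical
    @use32bitruntime = 0,
    @runinscaleout = 1,                  -- formerly runincluster
    @execution_id = @execution_id OUTPUT
EXEC [SSISDB].[catalog].[start_execution] @execution_id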

SQL Server 2017 Master Data Services (MDS)

    • Experience and performance are improved when upgrading from SQL Server 2012, SQL Server 2014, and SQL Server 2016 to SQL Server 2017 Master Data Services.
    • You can now view the sorted lists of entities, collections and hierarchies in the Explorer page of the Web application.
    • Performance is improved for staging millions of records using the staging stored procedure.
    • Performance is improved when expanding the Entities folder on the Manage Groups page to assign model permissions. The Manage Groups page is located in the Security section of the Web application.

SQL Server 2017 Analysis Services (SSAS)

SQL Server Analysis Services 2017 introduces many enhancements for tabular models. These include:

    • Tabular mode as the default installation option for Analysis Services. (CTP 2.0)
    • Object-level security to secure the metadata of tabular models. (CTP 2.0)
    • Date relationships to easily create relationships based on date fields. (CTP 2.0)
    • New Get Data (Power Query) data sources, and support for M queries in existing DirectQuery data sources. (CTP 2.0)
    • DAX Editor for SSDT. (CTP 2.0)
    • Encoding hints, an advanced feature for optimizing data refresh of large in-memory tabular models. (CTP 1.3)
    • Support for the 1400 Compatibility level for tabular models. To create new or upgrade existing tabular model projects to the 1400 compatibility level, download and install SQL Server Data Tools (SSDT) 17.0 RC2. (CTP 1.1)
    • A modern Get Data experience for tabular models at the 1400 compatibility level. See the Analysis Services Team Blog. (CTP 1.1)
    • Hide Members property to hide blank members in ragged hierarchies. (CTP 1.1)
    • New Detail Rows end-user action to Show Details for aggregated information. SELECTCOLUMNS and DETAILROWS functions for creating Detail Rows expressions. (CTP 1.1)
    • DAX IN operator for specifying multiple values. (CTP 1.1)

SQL Server 2017 Reporting Services (SSRS)

As of CTP 2.1, SSRS is no longer available to install through SQL Server setup.

    • Comments are now available for reports, to add perspective and collaborate with others. You can also include attachments with comments. (CTP 2.1)
    • In the latest releases of Report Builder and SQL Server Data Tools, you can create native DAX queries against supported SQL Server Analysis Services tabular data models by dragging and dropping desired fields in the query designers.

SQL Server 2017 Machine Learning Services

SQL Server R Services has been renamed SQL Server Machine Learning Services, to reflect support for Python in addition to the R language. You can use Machine Learning Services (In-Database) to run R or Python scripts in SQL Server, or install Microsoft Machine Learning Server (Standalone) to deploy and consume R and Python models that don’t require SQL Server.

  • revoscalepy – This Pythonic version of RevoScaleR includes parallel algorithms for linear and logistic regressions, decision trees, boosted trees, and random forests, as well as a rich set of APIs for data transformation and data movement, remote compute contexts, and data sources.
  • microsoftml – This state-of-the-art package of machine learning algorithms and transforms with Python bindings includes deep neural networks, fast decision trees and decision forests, and optimized algorithms for linear and logistic regressions. You also get pre-trained models based on ResNet models that you can use for image extraction or sentiment analysis.
  • Python operationalization with T-SQL – Deploy Python code easily by using the stored procedure sp_execute_external_script (a sketch follows this list). Get great performance by streaming data from SQL to Python processes and using MPI ring parallelization.
  • Python in SQL Server compute contexts – Data scientists and developers can execute Python code remotely from their development environments to explore data and develop models without moving data around.
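
A minimal sketch of calling Python through sp_execute_external_script; it assumes Machine Learning Services (In-Database) with Python is installed and that external scripts have been enabled:

-- One-time setup: allow external scripts
EXEC sp_configure 'external scripts enabled', 1
RECONFIGURE
GO
-- Pass a result set to Python and return it unchanged
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'OutputDataSet = InputDataSet',
    @input_data_1 = N'SELECT 1 AS col1'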

You can get more information and download the latest “Release Candidate” version here.

Daily Responsibilities of a Successful Database Administrator

Each work day, you have responsibilities as a Database Administrator (DBA). Those responsibilities vary, based on the type of business, type of administrator, and type of databases. Generally speaking, there are specific responsibilities that you should include in your daily activities, which you can customize to your specific environment.

Microsoft Certified Database Administrator

1. Checking on Servers

Whether you’re responsible for the hardware or not, there are plenty of things you’ll want to do to check on your servers each day. Checking the Windows Event Logs, reviewing the SQL Server logs, and reviewing SQL Server Agent activity are daily activities:

  • DBAs and the SQL Server Agent
  • DBAs and SQL Server Logs
  • DBAs and Windows Event Logs (not always accessible to you as a DBA)

In some environments, you may not have enough time as a DBA to review details for every server every day. If so, set up a schedule: review your most important (most mission-critical) servers daily, cycle through the rest weekly, and review non-essential servers as priority allows.
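
A couple of these checks can be scripted and scheduled; a sketch (the search string and job-history filter are only a starting point):

-- Scan the current SQL Server error log
-- (parameters: log number, log type 1 = SQL Server, search string)
EXEC sp_readerrorlog 0, 1, N'error'

-- List failed SQL Server Agent jobs from msdb history
SELECT j.name, h.run_date, h.run_time, h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE h.run_status = 0   -- 0 = failed
AND h.step_id = 0        -- step 0 holds the overall job outcome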

2. Baselines

Performance tuning and throughput will typically be part of your world as a DBA. Without performance baselines and other throughput benchmarks to compare current activity against, you’ll quickly find that you’re unsure of what is normal performance and what is a performance issue. Happily, baselines are pretty easy to set up.
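
One simple way to start, sketched below with a hypothetical table name, is to snapshot a few counters from sys.dm_os_performance_counters on a schedule and compare values over time (note that some counters are cumulative and require delta calculations):

CREATE TABLE dbo.PerfBaseline
(
    CaptureTime   datetime      NOT NULL DEFAULT GETDATE(),
    counter_name  nvarchar(128) NOT NULL,
    instance_name nvarchar(128) NULL,
    cntr_value    bigint        NOT NULL
)
GO
INSERT INTO dbo.PerfBaseline (counter_name, instance_name, cntr_value)
SELECT counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Batch Requests/sec', N'Page life expectancy')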

3. Testing and Validating Backups AND Restores

Absolutely everything you do (short of security) takes a back seat to this major concern. If you’re not regularly testing your backups to make sure you can restore them (under a variety of scenarios or simulated emergencies), you’re not building the skills necessary to respond to a disaster. You’re also running the risk that a change somewhere in your environment has quietly broken your ability to take complete backups, or destroyed your ability to restore the data in the event it is needed. Moreover, regular testing helps verify that you’re staying within your SLAs by ensuring that you’re able to meet your required RPO and RTO.
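
At minimum, RESTORE VERIFYONLY confirms that a backup file is readable, but only an actual restore proves you can meet your RPO and RTO. A sketch, using hypothetical file paths and logical file names:

-- Quick check that the backup media is readable
RESTORE VERIFYONLY FROM DISK = N'X:\Backups\MyDatabase.bak'

-- The real test: restore under a scratch name on a test server
RESTORE DATABASE [MyDatabase_RestoreTest]
FROM DISK = N'X:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase' TO N'X:\Data\MyDatabase_RestoreTest.mdf',
     MOVE N'MyDatabase_log' TO N'X:\Logs\MyDatabase_RestoreTest.ldf',
     CHECKSUM, STATS = 10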

4. Performance Analysis and Tuning

With baselines and a sense of your current workloads, you’ll be able to spot when, where, and how problems occur. From there, you’ll typically spend a decent amount of your time as a DBA tuning those operations (working carefully against production data in some cases) so they consume fewer resources. As a DBA, performance isn’t exactly your primary concern; scalability is. And scalability (the ability to handle more load) is enabled when you don’t have slow-running queries and operations monopolizing hardware resources on your server. Tuning, tweaking, and ensuring proper indexes is what you do as a DBA in almost all database environments.
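
A common starting point for finding tuning candidates is sys.dm_exec_query_stats; this sketch surfaces the top CPU consumers since the last restart:

SELECT TOP (10)
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC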

5. Coordinating Data Pushes and Changes

Depending upon your role, your environment, and your data, you might periodically be helping to push application changes and upgrades (with corresponding tweaks to schema and code), or regularly ensuring that data pushes from staging to production and the like are operating efficiently.

6. Patch Management

This is something you’ll commonly need to schedule, address, and coordinate as well. You must keep your systems updated with the latest vendor updates and security patches.
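
You can confirm an instance’s build and patch level with SERVERPROPERTY before and after applying updates:

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- build number
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- RTM, SPn, or CTP
       SERVERPROPERTY('Edition')        AS Edition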

7. Code Reviews and Interactions with Developers

Some people feel this is their favorite thing to do as a DBA: helping developers improve their skills, better understand SQL Server and its internals, and reinforce the idea that we’re all working on the same team. Teaching other people what you know can be rewarding, and you just might learn something from them in the process.

8. Meetings

Meetings exist to address differences of opinion and focus between developers, DBAs, and management. Even so, you’ll find you spend too much time in them.

9. Capital Improvements and Initiatives

Throughout the year you’ll probably be involved in various initiatives or capital improvements, like making one of your key databases or servers more highly available by adding Clustering, Mirroring, or AlwaysOn Availability Groups into the mix, or improving backups, bolstering security, and so on.

10. Self-Improvement

Finally, while you will have some long days and even longer nights as a DBA, there will likely be times during the year when you could realistically finish everything you truly need to do in about 3-4 hours per day at work. Management will expect you to be available and on call late at night when problems happen or code changes are being pushed, and to work longer hours on various initiatives, yet you won’t run into many organizations that let their DBAs (or other IT folks and developers) have any of that time back when things start to slow down. Fill your slower times with training, reading, and trying new skills or techniques, and make sure you ask for and get the formal training you need to keep your skills current.

Transaction Log File Size and VLF in SQL Server

This article by Kimberly Tripp is very interesting. Simply put, she says you want the initial size of your transaction logs set to 8 GB, with auto-growth set to 8 GB. This should help keep your Virtual Log File (VLF) sizes below 512 MB, improve performance, and make maintenance during backups much faster.

VLF Image

The article, in part, reads:

First, here’s how the log is divided into VLFs. Each “chunk” that is added is divided into VLFs at the time of the log growth (regardless of whether this is a manual or auto-grow addition), and it’s all dependent on the size that is ADDED, not the size of the log itself. So, take a 10MB log that is extended to 50MB: a 40MB chunk is being added. This 40MB chunk will be divided into 4 VLFs. Here’s the breakdown by chunk size:

chunks up to 64MB = 4 VLFs
chunks larger than 64MB and up to 1GB = 8 VLFs
chunks larger than 1GB = 16 VLFs

And, what this translates into is that a transaction log of 64GB would have 16 VLFs of 4GB each. As a result, the transaction log could only clear at more than 4GB of log information AND that only when it’s completely inactive. To have a more ideally sized VLF, consider creating the transaction log in 8GB chunks (8GB, then extend it to 16GB, then extend it to 24GB and so forth) so that the number (and size) of your VLFs is more reasonable (in this case 512MB).

You should visit Kimberly’s blog entry for more information. You can also get more information about Virtual Log Files here.
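
On SQL Server 2017 you can inspect VLF counts and sizes with the sys.dm_db_log_info DMV mentioned earlier (older versions can use DBCC LOGINFO), and grow the log in 8GB steps as Kimberly suggests. The database and logical file names below are hypothetical:

-- VLF count and largest VLF per database
SELECT d.name,
       COUNT(*)            AS vlf_count,
       MAX(li.vlf_size_mb) AS max_vlf_size_mb
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC

-- Grow the log in 8GB steps so each step creates 16 VLFs of 512MB
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8192MB)
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 16384MB)
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 24576MB)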

Building Games Using SQL Server

You can build an interesting game with almost any computer language. Building a game using Transact-SQL seems to be the biggest challenge, but it has been done before. I wrote about this subject before, but there is now a new effort by Daniel Janik as described in this article.

SQL Server Game - @SeniorDBA

The article, in part, reads:

To me this project is more than a simple game. This was not only a quest to help my son and others interested in technology discover SQL Server through gaming; it was also a challenge of whether it could be done. I’ve never heard of a game in SQL, and it’s really a silly thought. Nonetheless, I wanted to see if I could make something, and I did.

There may be some bugs, but I trust those reading this blog can probably fix them on their own. If you find some, please feel free to report them back to me.

You can read the entire article here.

SQL Server Hall of Shame

Shame - @SeniorDBA

In a recent post, Adam Machanic asked his followers to send him the items they thought were the worst features of SQL Server. The resulting list is called the “SQL Hall of Shame”:

  • In-Memory OLTP
  • English Query
  • Data Quality Services (DQS)
  • Master Data Services (MDS)
  • Auto-Shrink
  • Auto-Close
  • Auto-Grow
  • Notification Services (SSNS)
  • Query Notifications
  • Buffer Pool Extension (BPE)
  • Management Data Warehouse (MDW) / Data Collector
  • Lightweight Pooling / Fiber Mode
  • JSON
  • SQL Server Management Studio (SSMS)
  • Connect to SSIS from SQL Server Management Studio
  • DROP DATABASE IF EXISTS
  • Columnsets (and Sparse Columns in general)
  • SQLCLR
  • Utility Control Point (UCP)
  • Raw Partitions
  • Uninstall
  • Service Broker (SSB)
  • Not Freeing Allocated Memory Except Under Pressure
  • Database Engine Tuning Advisor (née Index Tuning Wizard)
  • DBCC PINTABLE
  • Virtual Interface Adaptor (VIA) Network Protocols
  • Mirrored Backups
  • SQL_VARIANT

If you read the article by Adam Machanic, you’ll get the details behind each item on the list.

Economics of the Cloud

Cloud Economics

For most companies, maintaining a large IT presence implies large capital expenditures and a non-trivial amount of accounting and record-keeping to track depreciation, tax considerations, and so forth. When you purchase the hardware and the software, they become yours (in every sense of the word) and your long-term responsibility.  The traditional model of enterprise computing is a capital-intensive function that requires expensive data centers (electricity, air conditioning, servers, networks, storage, etc.) and operations staff (hardware swaps, networks, backups, OS updates, upgrades, etc.) to keep it all running effectively. With an on-premises data center, you must plan and provision for maximum utilization, which is financially inefficient.

Data Center

The appeal of cloud computing includes the ability of enterprises to pay for only what they use. If demand decreases and you no longer need the assigned capacity, you can turn off systems and you are no longer charged for them. Since the cloud is a subscription-based model, it is an “operating expense” model: computing becomes a service for which businesses are billed a monthly charge metered by actual usage. The more compute, network, and storage resources you use, the higher your monthly bill; the less you use, the less you are charged.

Another way cloud operations save money is by freeing your enterprise from the costly tasks of system backups, routine network maintenance, software patches, and the like, because your cloud provider can handle these tasks.

Azure Spend

Most IT organizations find wide variations in system utilization. Some applications are seasonal, and others run for a short period of time before being shut down. You might have other applications that are simply unpredictable, where you can’t apply a cost-saving model.

Building your server infrastructure in a cloud environment can save your business money and allow for greater innovation at a lower cost.