SSMS SQL Server cmdlets

In SQL Server Management Studio 17.0 the SQL Server PowerShell module was moved to the PowerShell Gallery.
 
This means that you may see the following:
image1
image2
When you go to https://www.powershellgallery.com/ you currently see:
image3
I don’t know about you, but it suddenly seems that a simple tweak to SSMS has become a headache for DBAs.
 
Performing a search gives 4 results, and it looks like the one from matteot_msft is the correct one. Having a company package under a user's name doesn't sound like a good idea to me; possibly this should be a team account?
image4
So how do you download it?
 
image5
lol mind blown, KISS appears to have gone out the window :)
 
If you’ve made it this far and have an internet connection, the supplied methods are:
Inspect

Save-Module -Name SqlServer -Path <path>

Install

Install-Module -Name SqlServer

 
Let's give it a try:
 
It’s now requesting NuGet provider version ‘2.8.5.201’ or newer
image6
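If you'd rather install the provider up front than answer the prompt, something like the following should do it from an elevated session (the version matches the one requested):

# install the NuGet package provider that PowerShellGet depends on (run elevated)
Install-PackageProvider -Name NuGet -MinimumVersion 2.8.5.201 -Force
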
Once completed, you'll see the folder and files:
image7_1
For the next step, I received the following message, which requested that -Force be used:
image9
image10
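In other words, the retry that worked for me was along these lines; the -AllowClobber switch is an assumption for the case where the old SQLPS cmdlet names clash:

# retry the install with the switch the message asked for
Install-Module -Name SqlServer -Force
# assumption: if cmdlet name clashes with the old SQLPS module are reported, add -AllowClobber
# Install-Module -Name SqlServer -Force -AllowClobber
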
Once completed, the SSMS error message no longer occurs
image11

Using Tail with PowerShell

Yes, from PowerShell 3.0 onwards there is the ability to use the -Tail parameter with Get-Content.
 
Details can be found here.
 
Say you wanted to look at the errorlog file of a SQL Server 2016 default instance installed on the D drive; you could use the following (the -Wait switch keeps the file open and shows new entries as they are written):

Get-Content "D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log\ERRORLOG" -Tail 10 -Wait

image1

SQL Server Mobile Report Publisher install

If you come across the following error when attempting to install SQL Server Mobile Report Publisher
image1
you'll likely find that the current link doesn't work:
image2
At this time the requirement can be found here. Note: after the install you may be asked to restart the server. You don't have to, but it's a good idea to do so.
 
The install then should go smoothly.
image3
Given that Visual Studio 2017 is now out, we’ll hopefully be seeing this legacy 2013 requirement removed.

View top 5 rows in all views

A few years ago I did a blog post on viewing sample rows in all used tables.
 
Here is another similar query which allows you to peek at the top 5 rows in each view.

SELECT 'SELECT TOP 5 ''[' + SCHEMA_NAME([schema_id]) + '].[' + [name] + ']'' AS view_name, * FROM [' + DB_NAME() + '].[' + SCHEMA_NAME([schema_id])+ '].[' + [name] + '] WITH (NOLOCK);'  
FROM sys.all_views
ORDER BY [name];

image1
If you are using Azure SQL Database, then use the following, which removes the NOLOCK hint:

SELECT 'SELECT TOP 5 ''[' + SCHEMA_NAME([schema_id]) + '].[' + [name] + ']'' AS view_name, * FROM [' + DB_NAME() + '].[' + SCHEMA_NAME([schema_id])+ '].[' + [name] + '];'  
FROM sys.all_views
ORDER BY [name];

Dubious default settings

The installation process of SQL Server is continually evolving.
 
Recently the setup gained tempdb file adjustments, with a default of 4 files, and the ability to grant the Perform Volume Maintenance Tasks privilege.
 
Such a focus on tempdb was a surprise to me, as in the 13+ years I've worked with SQL Server (tackling literally tens of thousands of instances) there was only one occasion when more than 1 data file was required. To put this in perspective, that SQL Server was accessed by all students at a large university, and the tempdb GAM contention was clearly evident. Adding a single additional data file (2 in total) to this high-spec server resolved the issue.
 
Adding additional tempdb files is, to me, an advanced change and should be carefully considered so as to not waste disk space, to keep file growths even, to minimize complexity, etc.
 
In my view it would have been more productive for Microsoft to have focused the setup on the following (a post-install sketch covering the scriptable items follows the list):

  • set memory limits (this has to be done 100% of the time), and don't stop with the engine; include SSAS and SSRS
  • set max degree of parallelism (I believe the default of 0 is incorrect; rather it should be 1, as most instances are not dedicated to hosting a single application, due to licensing costs, consolidation etc)
  • set backup compression (should be defaulted to 1)
  • set optimize for ad hoc workloads (should be defaulted to 1)
  • allow setting Perform Volume Maintenance Tasks (tick)
  • allow setting Lock Pages in Memory
  • allow enabling remote admin connections
  • allow setting the listening port and adding the appropriate firewall entry (this would likely save thousands of can’t connect issues)
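
Most of the engine-side items above can at least be scripted immediately after setup. A minimal post-install sketch via sp_configure follows; the memory value is an illustrative assumption and needs sizing per host, and the SSAS/SSRS limits, Lock Pages in Memory, port and firewall items need their own tooling:

-- minimal post-install sketch; values are illustrative, not one-size-fits-all
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 4096;  -- assumption: size for the host
EXEC sys.sp_configure N'max degree of parallelism', 1;  -- per the list above
EXEC sys.sp_configure N'backup compression default', 1;
EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
EXEC sys.sp_configure N'remote admin connections', 1;
RECONFIGURE;
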

Final service pack for SQL Server 2012

The customary final service pack for SQL Server 2012 is just around the corner, before the product exits mainstream support in July and is sealed to all but security and critical bug fixes.
 
Yep, no version of SQL Server has ever had a Service Pack 5 :)
 
What are the key things SQL Server 2012 will be remembered for? Possibly:

  • introduced AlwaysOn Availability Groups
  • introduced Columnstore indexes
  • the infamous transition to core-based licensing
  • introduced the short-lived Business Intelligence edition
  • introduced (partial) contained databases
  • introduced LocalDB
  • BIDS replaced with SSDT

How to bypass the SQL Server Express database size limit

Yeah, this has been possible for years with no funky sharding etc., but given the memory and CPU limitations of Express it doesn't make much sense in most scenarios.

If on the other hand you have an application that just needs to dump/archive data into an ever growing table and you like working with SQL Server, then sure this could be an option. Now I’m not endorsing this, but yes it is possible.

Of course there is always a catch :)

Below shows that a hard limit is present, i.e. when attempting to exceed 10GB in SQL Server 2016 Express:

SET NOCOUNT ON;
CREATE DATABASE d;
GO
USE d;
ALTER DATABASE d SET RECOVERY SIMPLE WITH NO_WAIT;
-- set the growth increment to 10GB so the first file growth attempt breaches the Express limit
ALTER DATABASE [d] MODIFY FILE ( NAME = N'd', FILEGROWTH = 10GB );
CREATE TABLE t (c1 nchar(4000));
GO
-- each row takes ~8KB; the inserts fill the initial file and force a growth
INSERT INTO t(c1) VALUES ('1');
GO 700

/* delete test database
USE master;
GO
DROP DATABASE d;
*/

image1
Now try the same thing with the master database and, for fun, bump the growth up to 15GB:

SET NOCOUNT ON;
USE [master];
-- system databases are not subject to the 10GB limit, so this growth is allowed
ALTER DATABASE [master] MODIFY FILE ( NAME = N'master', FILEGROWTH = 15GB );
CREATE TABLE t (c1 nchar(4000));
GO
INSERT INTO t(c1) VALUES ('1');
GO 700

image_2
It succeeds without issue and the database is now 15GB in size:

SELECT SERVERPROPERTY('Edition') AS [version],(size*8/1024) AS size_mb
FROM sys.master_files
WHERE 1=1
AND DB_NAME(database_id) = 'master'
AND [type_desc] = 'ROWS';

image3
You can also exceed the limit with model and tempdb, but there is little point, and it will likely introduce issues, as new databases (which inherit their initial size from model) can then no longer be created.
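
To check where every database on an instance currently sits (handy for spotting a system database that has been grown past the user-database limit), the sys.master_files query above can be broadened; the column aliases are just my naming:

SELECT DB_NAME(database_id) AS database_name
,[name] AS logical_file_name
,(size*8/1024) AS size_mb
FROM sys.master_files
WHERE [type_desc] = 'ROWS'
ORDER BY size_mb DESC;
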

Create a test fragmented index

The following script allows you to quickly create a fragmented index for testing:

USE tempdb;
-- drop test table if exists
IF (OBJECT_ID(N'akawn_t',N'U') IS NOT NULL)
BEGIN 
DROP TABLE akawn_t;
END
--create table
CREATE TABLE akawn_t (c1 float PRIMARY KEY, c2 nchar(1000));
GO
--insert 100 records
INSERT INTO akawn_t 
SELECT RAND(), '1';
GO 100
-- delete some records
DELETE FROM akawn_t
WHERE CAST((c1 * 100) AS smallint) % 2  = 1;
GO
-- view fragmentation
SELECT
'[' + OBJECT_SCHEMA_NAME(a.[object_id]) + '].[' + OBJECT_NAME(a.[object_id]) + ']' AS table_name
,b.[name] AS index_name
,a.avg_fragmentation_in_percent
,a.index_type_desc
,a.*
,b.*
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, NULL) a, sys.indexes b
WHERE 1=1
AND a.[object_id] = b.[object_id]
AND a.index_id = b.index_id
AND a.avg_fragmentation_in_percent > 1;

image1
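
To reset the table between tests, rebuilding its indexes removes the fragmentation again; a minimal sketch:

USE tempdb;
-- rebuild every index on the test table, then re-run the fragmentation query above to confirm
ALTER INDEX ALL ON akawn_t REBUILD;
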

Why don’t more DBAs use extended events

I suspect that instead of a keep it simple, stupid (KISS) process it is more slow, long and painful (SLAP).
 
Say you simply want to view live login failures.
 
Let’s start the wizard
image1
We only want to look at one type of event
image2
First issue: no Login Failures event is found, but ‘process_login_finish’ looks to be the correct option, so let's add it to the selected events.
image3
The ‘capture global fields’ page is what I think causes the most pain. If you select all options (because you don't know what you'll find), you'll get a nice big error message right at the very end, so now you have to go back (painful).
image4
image5
The better option would be to alert the poor DBA as soon as the limit is hit (remove the pain). I spent about 5 minutes attempting to find what this limit is on the web before giving up. Through testing, selecting 16 global fields finally let me save without the error.
 
Next came the login failure test:
image6
image7
Nope, not showing.
 
I’ll try again in another couple of months when my optimism returns.
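
For the record, scripting the session avoids the wizard (and its global fields page) entirely. The sketch below is an assumption of a minimal equivalent: rather than the debug channel's process_login_finish event, it captures error 18456 (login failed) via error_reported, and the session name is made up:

-- assumption: a minimal scripted session for watching login failures
CREATE EVENT SESSION [login_failures] ON SERVER
ADD EVENT sqlserver.error_reported
(ACTION (sqlserver.client_hostname, sqlserver.username)
WHERE ([error_number] = (18456)))
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [login_failures] ON SERVER STATE = START;
-- then use 'Watch Live Data' in SSMS, or query the ring_buffer target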