AKAWN® LIMITED

SQL Server specialists

Dubious default settings

The installation process of SQL Server is continually evolving.
 
Recently the setup gained tempdb file adjustments, with a default of 4 data files, and the ability to grant the Perform Volume Maintenance Tasks privilege.
 
Such a focus on tempdb was a surprise to me, as in the 13+ years I’ve worked with SQL Server (tackling literally tens of thousands of instances) there has been only one occasion when more than 1 data file was required. To put this in perspective, that SQL Server was accessed by all students at a large university and the tempdb GAM contention was clear. Adding a single additional data file (2 in total) to this high-spec server resolved the issue.
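 
For reference, the fix was a single ADD FILE; a minimal sketch of the equivalent command (the file name, path and sizes here are illustrative, not the actual values used):

  -- Add a second tempdb data file, sized and grown to match the first
  ALTER DATABASE tempdb
  ADD FILE (NAME = N'tempdev2',                      -- illustrative logical name
            FILENAME = N'T:\tempdb\tempdev2.ndf',    -- illustrative path
            SIZE = 8192MB,
            FILEGROWTH = 1024MB);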
 
To me, adding additional tempdb files is an advanced feature that should be carefully considered, so as to not waste disk space, to keep file growths even, to minimize complexity etc.
 
In my view it would have been more productive for Microsoft to have focused the setup on the following (most of which are quick to set post-install; see the sketch after the list):

  • set memory limits (this has to be done 100% of the time), and don’t stop with the engine: include SSAS and SSRS
  • set max degree of parallelism (I believe the default of 0 is incorrect; rather it should be 1, as most instances are not dedicated to hosting a single application due to licensing costs, consolidation etc)
  • set backup compression (should be defaulted to 1)
  • set optimize for ad hoc workloads (should be defaulted to 1)
  • allow setting Perform Volume Maintenance Tasks (tick)
  • allow setting Lock Pages in Memory
  • allow enabling remote admin connections
  • allow setting the listening port and adding the appropriate firewall entry (this would likely save thousands of can’t connect issues)
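
Most of the engine-side items above are a quick sp_configure pass post-install; a minimal sketch (the memory value is illustrative, and the privilege, Lock Pages in Memory and firewall items are OS-side settings handled elsewhere):

  -- Engine-side settings from the list above
  EXEC sys.sp_configure N'show advanced options', 1;
  RECONFIGURE;
  EXEC sys.sp_configure N'max server memory (MB)', 8192;   -- illustrative limit
  EXEC sys.sp_configure N'max degree of parallelism', 1;
  EXEC sys.sp_configure N'backup compression default', 1;
  EXEC sys.sp_configure N'optimize for ad hoc workloads', 1;
  EXEC sys.sp_configure N'remote admin connections', 1;
  RECONFIGURE;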

Final service pack for SQL Server 2012

The customary final service pack for SQL Server 2012 is just around the corner before the product exits mainstream support in July and is sealed to all but security and critical bug fixes.
 
Yep, no version of SQL Server has ever had a Service Pack 5 :)
 
What are the key things SQL Server 2012 will be remembered for? Possibly:

  • introduced AlwaysOn Availability Groups
  • introduced Columnstore indexes
  • the infamous transition to core based licensing
  • introduced the short lived Business Intelligence edition
  • introduced (partial) contained databases
  • introduced LocalDB
  • BIDS replaced with SSDT

How to bypass the SQL Server Express database size limit

Yeah, this has been possible for years with no funky sharding etc, but given the memory and CPU limitations it doesn’t make much sense in most scenarios.
 
If on the other hand you have an application that just needs to dump/archive data into an ever-growing table and you like working with SQL Server, then sure, this could be an option. Now I’m not endorsing this, but yes, it is possible.
 
Of course there is always a catch :)
 
Below shows that a hard limit is present for user databases, i.e. when attempting to exceed 10GB in SQL Server 2016 Express:

[image1: ALTER DATABASE on a user database fails with the 10GB licensed size limit error]
Now try the same thing with the master database and bump it up to 15GB for fun:

[image2: ALTER DATABASE growing master to 15GB]
It succeeds with no issue and the database is now 15GB in size:

[image3: master database showing a size of 15GB]
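For reference, the test boils down to two MODIFY FILE calls; a minimal sketch (the user database and logical file names are illustrative):

  -- Fails on Express: user databases are held to the licensed 10GB limit
  ALTER DATABASE [UserDb]
      MODIFY FILE (NAME = N'UserDb', SIZE = 15360MB);

  -- Succeeds: master is not checked against the limit
  ALTER DATABASE [master]
      MODIFY FILE (NAME = N'master', SIZE = 15360MB);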
You can also exceed the limit with model and tempdb, but there is little point and it will likely introduce issues, as new databases (which are based on model) can’t be created.
 
Is this a bug? Sure. Will I raise a Connect item? Well, I’m extremely reluctant to, as I’m not sure who you’ve got to sleep with to get things addressed. In my view it’s always been a big push for newer features rather than fixing/enhancing existing ones.
 
In saying that, Connect items do appear to be getting more attention recently, but then again I’ve seen Connect items marked as fixed when they weren’t, or indications that items would be addressed in the next release, only to be re-re-re-disappointed 3 releases later.

Create a test fragmented index

The following script allows you to quickly create a fragmented index for testing:

[image1: T-SQL script that creates a fragmented index]
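The script itself is in the image above; as a stand-in (not the original script), here is a minimal sketch of one common approach: clustering on a random GUID key forces page splits, which fragments the index quickly.

  -- Table clustered on a random GUID so inserts split pages
  CREATE TABLE dbo.FragDemo
  (
      Id  uniqueidentifier NOT NULL DEFAULT NEWID(),
      Pad char(200)        NOT NULL DEFAULT 'x',
      CONSTRAINT PK_FragDemo PRIMARY KEY CLUSTERED (Id)
  );

  -- Load enough rows to generate plenty of page splits
  INSERT dbo.FragDemo (Id)
  SELECT TOP (100000) NEWID()
  FROM sys.all_objects AS a
  CROSS JOIN sys.all_objects AS b;

  -- Confirm the fragmentation level
  SELECT avg_fragmentation_in_percent, page_count
  FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.FragDemo'), NULL, NULL, 'LIMITED');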

Why don’t more DBAs use extended events?

I suspect that instead of a keep it simple stupid (KISS) process, it is more slow, long and painful (SLAP).
 
Say you simply want to view live login failures.
 
Let’s start the wizard
[image1: New Session Wizard]
We only want to look at one type of event
[image2: selecting a single event type]
First issue: no ‘Login Failures’ event is found, but ‘process_login_finish’ looks to be the correct option, so let’s add it to the selected events.
[image3: ‘process_login_finish’ added to the selected events]
The capture global fields page is what I think causes the most pain. If you select all options (because you don’t know what you’ll find), you’ll get a nice big error message right at the very end, so now you have to go back (painful).
[image4: all global fields selected]
[image5: error message at the end of the wizard]
The better option would be to alert the poor DBA when the limit is hit (remove the pain). I spent about 5 minutes attempting to find what this limit is on the web before giving up. Through testing, having 16 global fields finally let me save without the error.
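 
For what it’s worth, scripting the session avoids the wizard (and its late error) entirely; a minimal sketch, assuming the process_login_finish event found above (in the sqlserver package) and just a handful of global fields:

  -- Roughly what the wizard generates, without the back-and-forth
  CREATE EVENT SESSION [LoginFailures] ON SERVER
  ADD EVENT sqlserver.process_login_finish
      (ACTION (sqlserver.client_hostname,
               sqlserver.nt_username,
               sqlserver.session_id))
  ADD TARGET package0.ring_buffer;

  ALTER EVENT SESSION [LoginFailures] ON SERVER STATE = START;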
 
Next came the login failure test:
[image6: login failure test]
[image7: live data viewer showing no events]
Nope, not showing.
 
I’ll try again in another couple of months when my optimism returns.

SQL Server 2017 CTP 2.1 Feature Selection

As mentioned, Reporting Services has now been extracted from the available features that can be installed.
 
Below is a screen shot from CTP 2.1
[image1: CTP 2.1 feature selection, with Reporting Services absent]
Below is a screen shot from CTP 2.0
[image2: CTP 2.0 feature selection, with Reporting Services present]
Again, I suspect this is a backward step. I’d rather have SSMS and RS as part of the installer, and then allow the end user to apply patches to get them up to date. Sending users to all parts of the web to install SQL sounds crazy to me.

SQL Server 2017 auto create/drop indexes

It appears that Azure really is the testing ground for new features.
 
As shown in the previous blog on the addition of auto index maintenance in Azure, we now see new columns in the sys.indexes view in the SQL Server 2017 CTPs.
[image: sys.indexes showing the new auto index columns]
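Assuming an auto_created flag is among the new columns and it makes it into the release, spotting such indexes should be a simple query (a sketch only; the column name is an assumption here):

  -- Assumed column: list indexes created by automatic tuning
  SELECT OBJECT_NAME(object_id) AS table_name,
         name AS index_name,
         auto_created
  FROM sys.indexes
  WHERE auto_created = 1;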

So watch this space.

Azure SQL Database auto create/drop indexes

As per https://docs.microsoft.com/en-us/azure/sql-database/sql-database-advisor-portal, create/drop indexes can now occur automatically.
[image1: Azure portal automation options for index recommendations]
Extracts from https://docs.microsoft.com/en-us/azure/sql-database/sql-database-advisor:
 
Create Index recommendations
Create Index recommendations appear when the SQL Database service detects a missing index that, if created, can benefit your database’s workload (non-clustered indexes only).
 
Drop Index recommendations
Drop Index recommendations appear when the SQL Database service detects duplicate indexes (currently in preview and applies to duplicate indexes only).

 
As it’s currently only non-clustered and duplicate indexes, the potential issues when a change is made should be limited. It is indicated that the recommendation will be reverted if performance was negatively impacted by the change. What is not currently clear is what period of time or amount of monitoring must pass before a negative performance change is reverted.

What SQL Server 2017 should have added

Looking at the current features being added to SQL Server 2017 there doesn’t appear to be anything to get overly excited about.
 
There’s nothing that makes you think ‘wow we need to upgrade NOW!’
 
From my perspective the following should be easy no-brainers to get DBAs motivated to upgrade:
 
1. The ability to undo transactions from the transaction log / log backups. Geez, this has been offered by 3rd parties for years. Surely it’s time to include it, just like native compressed backups (the best thing since sliced bread).
 
2. Writable snapshots. We’ve got read-only snapshots, so we are halfway there (a sketch of the existing read-only half follows this list). Just think of the space savings and the ability to quickly use cloning for test/dev environments.
 
3. The ability to easily fail over/fail back log shipped databases. Man, how long has good old reliable log shipping been around, and still the fail-over test remains a dreaded pain.
 
4. The ability to attach a database to an older version. Surely there could be a simple flag that says: hey, I may need to go back to version x, so don’t give me funky xyz features. Exporting/importing data is just painful.
 
5. Wow, wouldn’t it be great if we had a fail-over solution that didn’t drop the client connection.
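 
On point 2, the read-only half is already a one-liner today; a minimal sketch (the database name, logical file name and path are illustrative):

  -- Existing feature: a read-only database snapshot
  CREATE DATABASE SalesDb_Snap
  ON (NAME = N'SalesDb',                          -- logical file name of the source
      FILENAME = N'S:\Snapshots\SalesDb_Snap.ss')
  AS SNAPSHOT OF SalesDb;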
 
Re-reading the above, possibly Microsoft just needs to focus on making existing features great before pursuing new features and leaving DBAs once again feeling that they could have been better, e.g. JSON.