The installation process of SQL Server is continually evolving.
Recently, the setup added tempdb file adjustments (with a default of 4 data files) and the ability to grant the Perform Volume Maintenance Tasks privilege.
Such a focus on tempdb surprised me, as in the 13+ years I’ve worked with SQL Server (tackling literally tens of thousands of instances) there was only one occasion when more than 1 data file was required. To put this in perspective, that SQL Server was accessed by all students at a large university and the tempdb GAM contention was clearly evident. Adding a single additional data file (2 in total) to this high-spec server resolved the issue.
To me, adding additional tempdb files is an advanced feature and should be carefully considered so as to not waste disk space, to ensure even file growths, to minimize complexity, etc.
In my view it would have been more productive for Microsoft to have focused the setup on the following (see the sketch after this list):
set memory limits (this has to be done 100% of the time), and don’t stop with the engine; include SSAS and SSRS
set max degree of parallelism (I believe the default of 0 is incorrect; rather, it should be 1, as most instances are not dedicated to hosting a single application due to licensing costs, consolidation, etc.)
set backup compression (should be defaulted to 1)
set optimize for ad hoc workloads (should be defaulted to 1)
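For the engine settings, this can of course be scripted post-install. A minimal sketch using sp_configure (the memory value is a placeholder to be sized to the server; SSAS and SSRS memory limits live in their own configuration and aren’t shown):
-- minimal post-install configuration sketch; numeric values are examples only
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- always cap engine memory (example: 4GB; size this to the server)
EXEC sp_configure 'max server memory (MB)', 4096;
-- max degree of parallelism set to 1 rather than the default of 0
EXEC sp_configure 'max degree of parallelism', 1;
-- default backups to compressed
EXEC sp_configure 'backup compression default', 1;
-- cache single-use plans as stubs
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;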
Here’s a quick look at the new CTP 2.1 Reporting Services installer setup which can be found here.
There is only the one option:
The next screen suggests that additional options will be added later.
At this stage Reporting Services is now installed:
Its default is SSRS.
The usual settings are there, including the new Power BI Service (Cloud) option.
The customary final service pack for SQL Server 2012 is just around the corner, before the product exits mainstream support in July and is closed to all but security and critical bug fixes.
Yep, no version of SQL Server has ever had a Service Pack 5 :)
What are the key things SQL Server 2012 will be remembered for? Possibly:
introduced AlwaysOn Availability Groups
introduced Columnstore indexes
the infamous transition to core based licensing
introduced the short lived Business Intelligence edition
Yeah, this has been possible for years with no funky sharding etc., but given the memory and CPU limitations of the Express edition it doesn’t make much sense in most scenarios.
If, on the other hand, you have an application that just needs to dump/archive data into an ever-growing table and you like working with SQL Server, then sure, this could be an option. Now I’m not endorsing this, but yes, it is possible.
Of course there is always a catch :)
The script below shows that a hard limit is present when attempting to exceed 10GB in SQL Server 2016 Express:
SET NOCOUNT ON;
-- create a throwaway test database
CREATE DATABASE d;
GO
USE d;
ALTER DATABASE d SET RECOVERY SIMPLE WITH NO_WAIT;
-- set the data file to grow in a single 10GB step, so the first
-- auto-growth pushes the database past the Express size limit
ALTER DATABASE [d] MODIFY FILE ( NAME = N'd', FILEGROWTH = 10GB );
CREATE TABLE t (c1 nchar(4000));
GO
-- each nchar(4000) row fills an 8KB page; 700 inserts fill the
-- initial file and trigger the 10GB growth, which fails on Express
INSERT INTO t(c1) VALUES ('1');
GO 700
/* delete test database
USE master;
GO
DROP DATABASE d;
*/
Now say you try the same thing with the master database, bumping it up to 15GB for fun:
SET NOCOUNT ON;
USE [master];
-- the same trick against the master system database, with a 15GB growth step
ALTER DATABASE [master] MODIFY FILE ( NAME = N'master', FILEGROWTH = 15GB );
CREATE TABLE t (c1 nchar(4000));
GO
INSERT INTO t(c1) VALUES ('1');
GO 700
It succeeds with no issue and the database is now 15GB in size:
SELECT SERVERPROPERTY('Edition') AS [edition],(size*8/1024) AS size_mb
FROM sys.master_files
WHERE 1=1
AND DB_NAME(database_id) = 'master'
AND [type_desc] = 'ROWS';
You can also exceed the limit with model and tempdb, but there is little point, and doing so will likely introduce issues, as new databases (which are based on model) can no longer be created.
The following script allows you to quickly create a fragmented index for testing:
USE tempdb;
-- drop test table if exists
IF (OBJECT_ID(N'akawn_t',N'U') IS NOT NULL)
BEGIN
DROP TABLE akawn_t;
END
-- create the test table; a random float primary key forces out-of-order inserts
CREATE TABLE akawn_t (c1 float PRIMARY KEY, c2 nchar(1000));
GO
-- insert 100 records one batch at a time; the random keys cause page splits
INSERT INTO akawn_t
SELECT RAND(), '1';
GO 100
-- delete roughly half the records to add further fragmentation
DELETE FROM akawn_t
WHERE CAST((c1 * 100) AS smallint) % 2 = 1;
GO
-- view fragmentation
SELECT
'[' + OBJECT_SCHEMA_NAME(a.[object_id]) + '].[' + OBJECT_NAME(a.[object_id]) + ']' AS table_name
,b.[name] AS index_name
,a.avg_fragmentation_in_percent
,a.index_type_desc
,a.*
,b.*
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, NULL) a, sys.indexes b
WHERE 1=1
AND a.[object_id] = b.[object_id]
AND a.index_id = b.index_id
AND a.avg_fragmentation_in_percent > 1;
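Once fragmented, the index can be used to test maintenance routines. A minimal follow-on sketch (REBUILD could equally be substituted to compare results):
USE tempdb;
-- reorganize all indexes on the test table, then re-run the fragmentation query above to compare
ALTER INDEX ALL ON akawn_t REORGANIZE;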
I suspect that instead of a keep it simple stupid (KISS) process, it is more of a slow, long and painful (SLAP) one.
Say you simply want to view live login failures.
Let’s start the wizard
We only want to look at one type of event
The first issue: no Login Failures event is found, but ‘process_login_finish’ looks to be the correct option, so let’s add it to the selected events.
The capture global fields screen is what I think causes the most pain. If you select all options (because you don’t know what you’ll find), you get a nice big error message right at the very end, so now you have to go back (painful).
The better option would be to alert the poor DBA as soon as the limit is hit (remove the pain). I spent about 5 minutes attempting to find what this limit is on the web before giving up. Through testing, 16 global fields finally let me save without the error.
Next came the login failure test:
Nope, not showing.
I’ll try again in another couple of months when my optimism returns.
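In the meantime, scripting the session directly avoids the wizard entirely. A minimal sketch, using the error_reported event filtered on error 18456 (the classic ‘Login failed for user’ error) as a stand-in, since a dedicated login-failure event proved elusive:
CREATE EVENT SESSION [login_failures] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.client_app_name, sqlserver.username)
    -- 18456 is raised for failed logins
    WHERE ([error_number] = 18456)
)
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [login_failures] ON SERVER STATE = START;
GO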
As mentioned, Reporting Services has now been removed from the available features that can be installed.
Below is a screen shot from CTP 2.1
Below is a screen shot from CTP 2.0
Again, I suspect this is a backward step. I’d rather have SSMS and RS as part of the installer, and then allow the end-user to apply patches to bring them up to date. Sending users to all parts of the web to install SQL Server sounds crazy to me.
It appears that Azure really is the testing ground for new features.
As shown in the previous blog post on the addition of auto index maintenance in Azure, we now see new columns in the sys.indexes view in the SQL Server 2017 CTPs:
SELECT
'[' + OBJECT_SCHEMA_NAME([object_id]) + '].[' + OBJECT_NAME([object_id]) + ']' AS table_name
,[name] AS index_name
,auto_created,is_ignored_in_optimization,suppress_dup_key_messages
FROM sys.indexes;
Create Index recommendations
Create Index recommendations appear when the SQL Database service detects a missing index that, if created, can benefit your database’s workload (non-clustered indexes only).
Drop Index recommendations
Drop Index recommendations appear when the SQL Database service detects duplicate indexes (currently in preview and applies to duplicate indexes only).
As it’s currently only non-clustered and duplicate indexes, the potential for issues when a change is made should be limited. It is indicated that a recommendation will be reverted if performance was negatively impacted by the change. What is not currently clear is what period of time, or amount of monitoring, needs to pass before a negative performance change is reverted.
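If you want to see which indexes on a database were created by these recommendations, the new auto_created column shown earlier should (presumably) make them easy to spot:
SELECT
'[' + OBJECT_SCHEMA_NAME([object_id]) + '].[' + OBJECT_NAME([object_id]) + ']' AS table_name
,[name] AS index_name
FROM sys.indexes
WHERE auto_created = 1;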