SQL Server and Local Storage

To be clear, I’m not a disk subsystem storage expert.
 
Often, though, I get requests to suggest how local or Direct Attached Storage (DAS) should be partitioned for use with a single SQL Server.
 
The key considerations for storage partitioning, as I see them, are redundancy, performance and capacity.
 
The below reviews the various options for local storage with up to 12 available disks.
 
Redundancy
 
The main RAID levels I commonly see are:
 

RAID 0
Key Pros: Fastest overall performance. Allows maximum use of disk capacity.
Key Cons: All data is lost if one drive experiences an issue. Not recommended for SQL Server unless other disk subsystem redundancy is also in place, e.g. Azure Locally Redundant Storage (LRS).

RAID 1
Key Pros: Allows for a single disk failure.
Key Cons: Only 2 disks can be used. Usable disk capacity is halved.

RAID 5
Key Pros: Allows for larger usable disk capacity with storage redundancy. Allows for a single disk failure.
Key Cons: A minimum of 3 disks is required. Performance significantly reduces if a disk drive experiences an issue.

RAID 10
Key Pros: Allows for multiple disk failures. Allows for fast IO.
Key Cons: Usable disk capacity is halved. A minimum of 4 disks is required. An even number of disks is required.

 
Performance
 
To understand the impact of the RAID level on disk IO performance, the following formulas have been used:
 

RAID   Potential Max Disk IO Formula
0      (Reads + Writes) / Number of Disks
1      (Reads + (2 * Writes)) / 2
5      (Reads + (4 * Writes)) / Number of Disks
10     (Reads + (2 * Writes)) / Number of Disks

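If you want to experiment with the formulas, the following T-SQL sketch reproduces the per-disk figures used in the tables below (the workload and disk counts are placeholders to adjust):

-- Per-disk IO for each RAID level, based on the formulas above
DECLARE @Reads decimal(9,2) = 50, @Writes decimal(9,2) = 50, @Disks int = 4;

SELECT 'RAID 0' AS raid_level, (@Reads + @Writes) / @Disks AS io_per_disk
UNION ALL
SELECT 'RAID 1', (@Reads + (2 * @Writes)) / 2
UNION ALL
SELECT 'RAID 5', (@Reads + (4 * @Writes)) / @Disks
UNION ALL
SELECT 'RAID 10', (@Reads + (2 * @Writes)) / @Disks;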
 
Purely as an example, and based on the supplied formulas, the below shows the IO work required per disk for 3 different types of IO workloads.
 
In theory, the lower the IO per disk, the better the performance.
 
Please note that in the real world, it would be preferable to keep 1 or more hot spares to allow for quicker recovery from a disk failure.
 
Balanced workload – A total of 50 read and 50 write IO requests.
 
Notice that for this type of workload, RAID 1 has lower IO requirements per disk than RAID 5 with 3 disks.
 

No. of Disks   RAID 0   RAID 1   RAID 5   RAID 10
1              100      -        -        -
2              50       75       -        -
3              33       -        83       -
4              25       -        63       38
5              20       -        50       -
6              17       -        42       25
7              14       -        36       -
8              13       -        31       19
9              11       -        28       -
10             10       -        25       15
11             9        -        23       -
12             8        -        21       13

 
Higher read workload – A total of 80 read and 20 write IO requests.
 
Notice that for this type of workload:
– RAID 10 does not offer much IO improvement over RAID 5 with 4 or more disks.
– RAID 1 has higher IO requirements per disk than RAID 5 with 3 disks.

 

No. of Disks   RAID 0   RAID 1   RAID 5   RAID 10
1              100      -        -        -
2              50       60       -        -
3              33       -        53       -
4              25       -        40       30
5              20       -        32       -
6              17       -        27       20
7              14       -        23       -
8              13       -        20       15
9              11       -        18       -
10             10       -        16       12
11             9        -        15       -
12             8        -        13       10

 
Higher write workload – A total of 20 read and 80 write IO requests.
 
Notice that for this type of workload, RAID 5 with 4 disks has lower IO requirements per disk than RAID 1.
 

No. of Disks   RAID 0   RAID 1   RAID 5   RAID 10
1              100      -        -        -
2              50       90       -        -
3              33       -        113      -
4              25       -        85       45
5              20       -        68       -
6              17       -        57       30
7              14       -        49       -
8              13       -        43       23
9              11       -        38       -
10             10       -        34       18
11             9        -        31       -
12             8        -        28       15

 
With the above IOPs per drive known, the next question is likely to be about the estimated MB throughput, which depends on the type of disk purchased.
 
The following table is a rough estimate which shows how this can fluctuate:
 

Disk RPM (K)   Approx. IOPs/Sec   Approx. MB/sec (Using 64KB IO)
7.2            75                 4
10             125                7
15             175                10
SSD            6,000              375

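These MB/sec estimates simply multiply the IOPs by the IO size, e.g. a 15K disk at roughly 175 IOPs/sec x 64KB ≈ 11 MB/sec (the table rounds down), while an SSD at 6,000 IOPs/sec x 64KB = 375 MB/sec.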
 
Capacity
 
The disk capacity you could expect when using the common RAID levels would be:
 

RAID   Potential Capacity Formula                               Minimum No. of Disks
0      Disk capacity * Number of Disks                          1
1      (Disk capacity * Number of Disks) / 2                    2*
5      (Disk capacity * Number of Disks) - Capacity of 1 Disk   3
10     (Disk capacity * Number of Disks) / 2                    4

* This is also the maximum number of disks allowed for RAID 1.
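 
As a quick sanity check, the same formulas can be run in T-SQL (the disk size and count are placeholders):

-- Potential usable capacity in GB for each RAID level
DECLARE @DiskGB int = 300, @Disks int = 4;

SELECT 'RAID 0' AS raid_level, @DiskGB * @Disks AS usable_gb
UNION ALL
SELECT 'RAID 1', (@DiskGB * 2) / 2 -- RAID 1 is fixed at 2 disks
UNION ALL
SELECT 'RAID 5', (@DiskGB * @Disks) - @DiskGB
UNION ALL
SELECT 'RAID 10', (@DiskGB * @Disks) / 2;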
 
An indication of what each RAID level would mean in terms of potential storage capacity, using 300GB disks:
 

No. of Disks   RAID 0   RAID 1   RAID 5   RAID 10
1              300      -        -        -
2              600      300      -        -
3              900      -        600      -
4              1,200    -        900      600
5              1,500    -        1,200    -
6              1,800    -        1,500    900
7              2,100    -        1,800    -
8              2,400    -        2,100    1,200
9              2,700    -        2,400    -
10             3,000    -        2,700    1,500
11             3,300    -        3,000    -
12             3,600    -        3,300    1,800

SQL Server configuration in Azure

You may have come across Performance Considerations for SQL Server in Azure Virtual Machines and wonder what the recommendations would look like in practice.
 
Below walks through most of the recommendations.
 
First, log into the Azure portal.
image1
 
Create a storage account.
image2
 
Performance Check: Disable Azure geo-replication on the storage account.
Note: From a security perspective, it would be best not to use your company name in the storage account name.
image3
 
image4
 

Create a cloud service if you don’t have one already.

 
Performance Check: Keep the storage account and SQL Server VM in the same region.
image5
 
image6
 
Both the Storage and Cloud Service will be located in Southeast Asia.
image7
 
Create the SQL Server VM.
image8
 
image9
 
image10
 
Performance Check: Use minimum Standard Tier A2 for SQL Server VMs.
 
If you have purchased SQL Server core licences, you should typically use a 4 core VM, as 4 cores is the minimum number of core licences you can have per VM.
 
Note: By adding extra CPUs, the base virtual machine running costs increase.
 
image11
 
image12
 
Note: The Antimalware extension is still in preview at the time of this blog. It may impact SQL Server performance if exclusions for the SQL Server files are not in place. If it does impact the server, it can easily be removed via the Control Panel in the VM.
 
image13
The virtual machine will be provisioned and started.
 
Notice that the below VM is located in Southeast Asia as selected during the install.
image14
At the time of this blog, the A2 VM supports up to 4 data disks.
image15
Performance Check: Avoid using operating system or temporary disks for database storage or logging.
 
Note: Microsoft has stated ‘Locally redundant storage (LRS) is replicated three times within a single data center. When you write data to a blob, queue, or table, the write operation is performed synchronously across all three replicas. LRS protects your data from normal hardware failures.’ With this in mind, RAID level protection is not required.
image16
 
Performance Check: Avoid using Azure data disk caching options (caching policy = None).
image17
 
Repeat the above and add another 3 disks.
 
Note: If you try to add more than 4 disks to the A2 VM, you’ll receive the following error:
image18
Below shows all 4 disks added.
 
Notice that the C drive uses Read/Write cache, which is not recommended for SQL data/log files.
image19
 
Connect to the new VM.
image20
Performance Check: Stripe multiple Azure data disks to get increased IO throughput.
image21
 
image22
 
image23
 
image24
Microsoft notes: ‘For Standard tier A2, we recommend a configuration of three data disks for data files and tempdb, and one data disk for log files. Depending on whether your workload is data or log sensitive, you can adjust the number of disks dedicated to data or log storage pool accordingly.’
 
‘VMs allowing more than 4 disks to be attached, place the data and log files on separate disks or storage pools’.
 
For production VMs with core licences, you would most likely use at least 4 CPUs, as that is the minimum number of licences allowed per VM, and you would therefore be able to assign more disks to the VM.
image25
image26
Performance Check: Format with documented allocation sizes.
 
Microsoft notes: ‘NTFS allocation unit size: When formatting the data disk, it is recommended that you use a 64-KB allocation unit size for data and log files as well as tempdb.’
image27
image28
image29
The F drive will now show.
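 
If you’d like to confirm the allocation unit size from within SQL Server, a rough sketch (assuming xp_cmdshell has been enabled, which carries its own security considerations):

-- 'Bytes Per Cluster' in the output should report 65536 (64KB)
EXEC xp_cmdshell 'fsutil fsinfo ntfsinfo F:';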
 
Performance Check: Separate data and log file I/O paths to obtain dedicated IOPs for data and log.
image30
image31
image32
 
image33
image34
image35
 
image36
 
For this demo, I’ll use a local account called SQLService for the SQL Server service and SQLAgent for the SQL Server Agent service account. You’ll likely use a domain account instead for your environment.
 
Neither of these accounts is a member of the local Administrators group on the server.
 
Performance Check: Enable instant file initialization for data files.
 
Performance Check:  Enable locked pages.
 
Note: The SQL Server Agent service does not require these privileges.
image37
 
image38
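 
Once SQL Server has been installed and is running under these accounts, a rough way to confirm that locked pages took effect is to search the errorlog (a sketch; the exact message wording can vary by version):

-- SQL Server notes the use of locked pages at startup
EXEC xp_readerrorlog 0, 1, N'locked pages';
-- Instant file initialization can be confirmed by enabling trace flags
-- 3004 and 3605, growing a data file, and checking the errorlog:
-- with it enabled, data files are not zeroed out (log files always are).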
 
I’ve downloaded SQL Server 2014 evaluation edition (http://technet.microsoft.com/en-US/evalcenter/dn205290) to go through the remaining steps.
 
Note: Internet Explorer is heavily locked down by default and you may need to disable ‘IE Enhanced Security Configuration’ to use it successfully.
image39
 
image40
During the install of SQL Server the following will be addressed:
 
Performance Check:  Setup default locations.
 
Performance Check:  Move all databases to data disks, including system databases.
 
Performance Check:  Move SQL Server error log and trace file directories to data disks.
 
As shown below, the Shared features can remain on the C drive, but the Instance root directory should move to the new data drives.
image41
 
Notice the F drive below. This takes care of the SQL Server error log and trace file directories.
image42
 
As shown below, only the log files will use the G drive:
 
Note: The Backup location should reflect your backup strategy.
image43
Once the install is finished:
 
Performance Check:  Apply SQL Server performance fixes.
 
The latest patch info with recommended patch levels can be found here.
 
The following SQL database settings complete the checklist:
 
Performance Check:  Limit or disable autogrow on the database.
image44
 
Performance Check:  Disable autoshrink on the database.
image45
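 
The equivalent T-SQL, sketched with placeholder database and file names:

-- Use a fixed autogrow increment rather than a percentage, or disable autogrow
ALTER DATABASE [MyDatabase] MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 256MB);

-- Disable autoshrink
ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;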
 
Performance Check:  Consider enabling database page compression (Note: Available on Enterprise Edition only.)
image46
 
Or
 
image47
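 
The equivalent T-SQL, sketched with placeholder object names:

-- Rebuild the table (heap or clustered index) with page compression
ALTER TABLE [dbo].[MyTable] REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Non-clustered indexes are compressed separately
ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable] REBUILD WITH (DATA_COMPRESSION = PAGE);
 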
I would also suggest:
 
Review and set a minimum and maximum memory threshold (also for Reporting Services and Analysis services, if used).
 
The following shows the memory settings for an A2 instance which has 3.5GB and no Reporting/Analysis services.
image48
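 
The same settings via T-SQL (the values are examples for a 3.5GB A2 instance, not recommendations):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Leave headroom for the OS; adjust to suit the VM size and any other services
EXEC sp_configure 'min server memory (MB)', 1024;
EXEC sp_configure 'max server memory (MB)', 2560;
RECONFIGURE;
 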
Enable backup compression by default.
image49
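 
Or via T-SQL:

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;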
 
That should be it for now.
 

Event ID 1023 Perflib MSSQLServerOLAPService

image01
 
You may find that granting permissions to the Network Service as described in http://support.microsoft.com/?kbid=912399 doesn’t work for you.
 
What resolved the issue for me was granting the permissions mentioned in the link to the account used by the SQL Server Agent service. The SQL Server Agent service account was responsible for the reported errors, as the service account does not have access to the required Analysis Services files/folders by default.
 
Also, if the SQL Server Agent service account isn’t a local administrator on the server, you may find that adding the service account to the local ‘Performance Log Users’ and ‘Performance Monitor Users’ security groups usually gets around most performance counter issues without granting excessive permissions.

SQL Server 2014 Licensing

At what point should you think about using a different SQL Server 2014 licensing model?

The price indications below are in US$, without software assurance or discounts.

Licence                                 Cost (US$)   Notes
SQL Server 2014                         208          1 x User or Device CAL
SQL Server 2014 Standard                894          1 x server*
SQL Server 2014 Standard Core           3,570        2 x cores**
SQL Server 2014 Enterprise Core         13,690       2 x cores
SQL Server 2014 Business Intelligence   8,555        1 x server*

* You can have up to 50 instances on the server. User and/or Device CALs must also be purchased.
** A maximum of 16 cores is supported.

An absolute minimum of 4 cores is required to be purchased when using core licensing. This is either 4 cores per physical socket or 4 vCPUs in a virtual machine. Additional 2 core licences can be purchased after this initial purchase. Therefore, a starting point for the Standard and Enterprise editions with core licences would have to be:

Licence                           Cost (US$)   Notes
SQL Server 2014 Standard Core     7,140        2 x (2 cores)
SQL Server 2014 Enterprise Core   27,380       2 x (2 cores)

Based on the above information, the thresholds between licences would be:

SQL Server 2014 Standard Cores vs SQL Server 2014 Standard with CALs

Cores   Cost (US$)   Server + CALs   Cost (US$)
4       7,140        30              7,134
6       10,710       47              10,670
8       14,280       64              14,206
10      17,850       81              17,742
12      21,420       98              21,278
14      24,990       115             24,814
16      28,560       133             28,558
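 
For example, at 4 cores the core licensing costs 2 x $3,570 = $7,140, while a server licence plus 30 CALs costs $894 + (30 x $208) = $7,134, so roughly 30 users or devices is the point at which core licensing starts to make sense.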

SQL Server 2014 Standard Cores vs SQL Server 2014 Enterprise Cores

Standard Cores   Cost (US$)   Enterprise Cores   Cost (US$)
16               28,560       4                  27,380

SQL Server 2014 Enterprise Cores vs SQL Server 2014 Business Intelligence with CALs

Cores   Cost (US$)   Server + CALs   Cost (US$)
4       27,380       90              27,275

How does Azure fit into this?

Below is a screen shot from the Azure Pricing Calculator in US$. The selected machines have 4 vCPUs.

image_1

This is a very rough guide, without software assurance considered, but out of interest, how many days would it take before purchasing your own licences would appear to make more sense?
Edition      Azure Days   Azure Cost (US$)   Cost of 4 Cores (US$)
Standard     540          7,128              7,140
Enterprise   543          27,367             27,380

Software assurance has become a real consideration point when purchasing SQL Server 2014 licences as passive secondary SQL servers now require software assurance.

Software assurance has catches, e.g. it only lasts for a certain term and results in ongoing payments to Microsoft. It does, however, have other benefits such as software update rights, DR, extended support etc.

No Business Intelligence Development Studio (BIDS) by default with SQL Server 2014

As per this link, BIDS has been renamed to SQL Server Data Tools for Business Intelligence (SSDT-BI).
 
SQL Server Data Tools for Business Intelligence (SSDT-BI), previously known as Business Intelligence Development Studio (BIDS), is used to create Analysis Services models, Reporting Services reports, and Integration Services packages.
 
SSDT-BI is not included with the SQL Server media, and at the time of this blog, the following SSDT-BI versions can be downloaded:
 
SSDT-BI for Visual Studio 2013 (32bit). (The download is 1GB in size.)
 
SSDT-BI for Visual Studio 2012 (32bit). (This download is 784MB in size.)
 
A catch during the install of SSDT-BI is that you are asked to either perform a new installation or use an existing instance.
 
image1
Unless you have SQL Server 2014 32bit installed, it’s best to select ‘Perform a new installation of SQL Server 2014’, or you’ll likely get the following error if you have an existing 64bit install.
 
image2
You may also be prompted to restart your computer, post install.
 
image3

Do you require a SSD to use the Buffer Pool Extension feature in SQL Server 2014?

It appears that the answer is yes, even though SQL Server 2014 (RTM and CU1 at the time of this blog) allows you to create the Buffer Pool Extension (BPE) file on non-SSD media.
 
The current documentation indicates that SSDs are required to use the Buffer Pool Extension feature in SQL Server 2014 i.e.
 
‘Introduced in SQL Server 2014, the buffer pool extension provides the seamless integration of a nonvolatile random access memory (that is, solid-state drive) extension to the Database Engine buffer pool to significantly improve I/O throughput.’
 
As a simple test, I downloaded AdventureWorks2012-Full Database Backup.zip and restored it onto my local SQL Server 2014 test instance.
 
I ran the below to reduce the memory allocated to my SQL Server 2014 instance and to create a BPE file on my local non-SSD C drive.
 

sp_configure 'show advanced options',1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 512;
GO
RECONFIGURE;
GO
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(FILENAME = 'C:\BPE\20GB.BPE', SIZE = 20 GB);
GO

 
image1
 
We can see that a 20GB file was created.
Note: This file is removed when SQL Server is shut down or BPE is disabled.
 
image2
 
The following DMV also shows the BPE as being enabled:
 

SELECT * FROM sys.dm_os_buffer_pool_extension_configuration

 
image2_1
 
Next I confirmed that the file was not being used:
 

SELECT *
FROM sys.sysperfinfo
WHERE 1=1
AND counter_name LIKE 'Extension%'

 
image3
 
The following query was run against the restored sample AdventureWorks2012 database:
 

USE AdventureWorks2012
GO
EXEC sp_MSforeachtable 'SELECT * FROM ?'
GO

 
image4
 
When the performance counters were rechecked, we can see that the BPE file was still not being used:
 
image3
 
As another test, when the BPE was created on a USB flash drive it was also not used.
 
What is interesting is that when I ran the exact same test with a BPE file on an Azure VM’s C drive and on the temporary storage D drive, the BPE was used, which would indicate that Azure VMs make use of SSDs.
 
image5

Progress Report: Online Index Operation Event Class

SQL Server Profiler offers an Event Class called Progress Report: Online Index Operation.
 
image0
 
Below is a screen shot of the SQL Server Profiler output when a Clustered and a Non-Clustered index are rebuilt online i.e.
 

USE [database_1]
GO
ALTER INDEX ALL ON [dbo].[Table_1]
REBUILD WITH (ONLINE = ON)
GO

 
image1
 
The Stage 1 and Stage 2 references in the EventSubClass column may appear a bit confusing. The stages are explained here, along with the meaning of the other columns.
 
If, for example, only the Non-Clustered index is rebuilt online, then only Stage 1 is shown in the output, as follows:
 

USE [database_1]
GO
ALTER INDEX [nci] ON [dbo].[Table_1] 
REBUILD WITH (ONLINE = ON)
GO

 
image2
 
Unfortunately, Profiler does not include an estimate of when the rebuild will complete, and the below query will always show 0 percent complete. Currently only ALTER INDEX REORGANIZE will show the estimated percent completed.
 

SELECT text,start_time,percent_complete
FROM sys.dm_exec_requests er
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle)
WHERE 1=1
AND command = 'ALTER INDEX'

 
image3
 
If the rebuild is taking a really long time, a simple quick check would be to take a look at the Index Properties and compare against the BigintData1 column in Profiler to gauge how many rows remain to be processed
 
image4
 
or you could use a query like:
 

USE [database_1]
GO
SELECT OBJECT_NAME(id) as table_name,name AS index_name,rowcnt
FROM sys.sysindexes
WHERE 1=1
AND name = 'nci'
GO

 
image5

Invalid Database Compatibility Level on SQL Server 2014

You’ll notice that SQL Server 2005 (90) is a database Compatibility Level option on SQL Server 2014 (RTM & CU1 at the time of this blog).
 
image1
 
You’ll get an error message if you attempt to use it.
 
image2
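 
The same error can be reproduced in T-SQL (a sketch; the database name is a placeholder and the exact message text varies by build):

-- Fails on SQL Server 2014, even though SSMS lists level 90 as an option
ALTER DATABASE [AdventureWorks2012] SET COMPATIBILITY_LEVEL = 90;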
 
A side effect of this option is that a restore or attach of a SQL Server 2005 database onto SQL Server 2014 is possible; however, the database compatibility level will automatically be set to use SQL Server 2008 (100). This saves you the step of having to transfer the database to a SQL Server 2008, or higher, instance before transitioning to SQL Server 2014.
 
You cannot directly attach a SQL Server 2000 database to SQL Server 2014, and as with SQL Server 2012, you’ll see the following error in the SQL Server errorlog.
 
image3

Gaining sysadmin access to your SQL Server

The following steps show how to give a Windows account sysadmin privileges on SQL Server. This method is typically used when you are unable to access SQL Server as a sysadmin by other means.

Prerequisites:

1. A Windows account which can log onto the machine hosting SQL Server and has local administrator privileges.
2. An outage window for shutting down SQL Server.
3. Any applications which may want to connect to the SQL Server have been stopped, if known.

Steps

Stop the SQL Server instance.
image01
image02

Note: You may want to also stop SQL Server Reporting Services, if it is running, as it will want to continually connect to the SQL Server.

Start a command prompt with Run as administrator.
image03

Note: I started an elevated permission command prompt by clicking on the easily accessible PowerShell icon and entering cmd

Navigate to the folder containing sqlservr.exe for the instance and start SQL Server in single user admin mode, e.g. for SQL Server 2012 use:

cd "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn"
 

sqlservr -sMSSQLSERVER -m

 
Note: The -s and -m are lower case. Your path to sqlservr may be different and if you are accessing an instance then use the instance name instead of MSSQLSERVER e.g. for an instance called INST1 you would use -sINST1
 
For some reason, if you directly copy and paste the above syntax it doesn’t work, and you’ll see the following message in the SQL errorlog; manually typing the syntax does work.
 
image1
 
image04

You’ll see something similar to the below.

Note: I hadn’t stopped SQL Server Reporting Services and we can see it attempting to connect to SQL Server.
image05

Open another elevated permission command prompt and run

sqlcmd

Note: If you are connecting to a named instance use -S .\instancename with the S being uppercase e.g. for an instance called INST1 you would use:

sqlcmd -S .\INST1
image06

The following syntax shows me adding the local administrator on a server called TEST to the sysadmin role:

CREATE LOGIN [TEST\Administrator] FROM WINDOWS
GO
ALTER SERVER ROLE sysadmin ADD MEMBER [TEST\Administrator]
GO
EXIT

For pre SQL Server 2012 you would use:

CREATE LOGIN [TEST\Administrator] FROM WINDOWS
GO
sp_addsrvrolemember [TEST\Administrator],[sysadmin]
GO
EXIT

image07

Go back to your original command window and stop SQL Server using CTRL+C and y.
image08

Finally, restart the SQL Server services and you should be able to access SQL Server as a sysadmin.
image09

image10

SQL Server documentation spelling references

Living in New Zealand, there is frequently the conundrum of whether to use American or New Zealand English, or a mixture of both spellings, for documentation.

For my reference (sanity), I’ve attempted to identify spellings which I most commonly find mixed in documents and will lean towards use of the New Zealand spellings to standardise documentation.

Exceptions are made for product names and features, e.g. Datacenter Edition, Query Analyzer etc.

American         New Zealand
Analyze          Analyse
Data center      Data centre
License          Licence
Licensed         Licensed
Licensing        Licensing
Normalize        Normalise
Optimize         Optimise
Parameterized    Parameterised
Prioritize       Prioritise
Standardize      Standardise
Utilize          Utilise
Virtualization   Virtualization
Virtualize       Virtualize

One last anomaly: I often see the word on-premise used when referring to a location, when it should be on-premises.