Thursday, August 30, 2012

Simple Default Option Change in SSIS

Prior to SQL Server 2012, when you import data from a flat file source, the Column names in the first data row option is disabled by default. In SQL Server 2012, this option is enabled by default!


Friday, August 24, 2012

XEvents to Monitor Analysis Services

Extended Events, a.k.a. XEvents, were introduced in SQL Server 2008 as a lightweight monitoring mechanism to support DBAs.

In SQL Server 2012, XEvents has been extended to more subsystems such as SSAS, Replication, and PDW. With this, you now have a lightweight monitoring system for SSAS.

The SQL Server 2012 release includes new trace events to help users troubleshoot processing issues. Locks Acquired, Locks Released, and Locks Waiting are new trace events included in this release.

New and Modified DMVs in SQL Server 2012


A few changes have happened to DMVs. If you compare the number of DMVs with those in SQL Server 2008 R2 SP1, there are 33 new DMVs.

New DMVs

In SQL Server 2012 you have 174 DMVs in total, compared to 141 in SQL Server 2008 R2 SP1. Of those 141 DMVs, 5 were added by SP1 itself, so compared with the R2 RTM release you effectively have 38 new DMVs in SQL Server 2012.

Let us start with those 5 DMVs.


Most of your SQL Server configurations are saved in the Windows registry. So if you want to read those settings, you may need to read the registry.


In previous versions, you had to use the undocumented xp_regread system stored procedure. Since it is undocumented, many DBAs do not want to use it in production environments.

However, with the introduction of the dm_server_registry DMV, you can simply query it with the necessary WHERE clause and get the configuration information you need from the registry.
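A minimal sketch of such a query; the LIKE pattern is just an illustrative choice, and the exact registry keys returned depend on the instance configuration:

```sql
-- List registry-backed settings whose value name mentions a port.
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'%Port%';
```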



How often do you need to capture Windows information in SQL Server? Previously, you may have needed to write PowerShell scripts for this. However, with dm_os_windows_info, you can retrieve the Windows version, Service Pack level, and language of your operating system.
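A minimal sketch of querying it, using the column names this DMV exposes in SQL Server 2012:

```sql
-- OS release, service pack, SKU, and language of the host machine.
SELECT windows_release, windows_service_pack_level,
       windows_sku, os_language_version
FROM sys.dm_os_windows_info;
```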


Insert Image 5528_09_06.png


How often do you need to check whether SQL Server Agent is running? If you execute a job from an application while SQL Server Agent is stopped, the execution will fail. To handle this cleanly, it is better to verify that SQL Server Agent is running first and give an appropriate error message, so that users can attend to the correct issue.

With the dm_server_services DMV, you can check all the SQL Server services and their startup type. Also, for troubleshooting purposes, you can find the last startup time.
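A sketch of the Agent check described above; the LIKE filter is an illustrative way to match the Agent service for any instance name:

```sql
-- Check the SQL Server Agent service before submitting a job.
SELECT servicename, startup_type_desc, status_desc, last_startup_time
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server Agent%';
```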



The dm_server_memory_dumps DMV returns a list of memory dump files generated by the SQL Server Database Engine, along with the creation time and the dump file size.
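A minimal sketch of querying it:

```sql
-- Dump files written by the Database Engine, newest first.
SELECT [filename], creation_time, size_in_bytes
FROM sys.dm_server_memory_dumps
ORDER BY creation_time DESC;
```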


dm_os_volume_stats is a function to which you need to provide the database ID and the file ID of the file whose volume stats you want to check.


This DMF returns the size of the volume, the remaining space on the volume containing that file, the file system type (NTFS or FAT), and a few more details.
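A sketch of calling it; file ID 1 is the primary data file of the current database:

```sql
-- Volume details for the primary data file of the current database.
SELECT volume_mount_point, file_system_type,
       total_bytes / 1048576 AS total_mb,
       available_bytes / 1048576 AS available_mb
FROM sys.dm_os_volume_stats(DB_ID(), 1);
```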


If you are a DBA, you do not want your log files to auto-grow, as that would hamper transaction performance. So you need to monitor log file space. Most of the time, DBAs use SQL Server Agent alerts for this.

However, the problem with an alert is that it is event driven, not time driven. If there is a long-running transaction, the log will not be cleared, and during this time alerts will keep reaching the responders. So DBAs prefer a SQL Server Agent job, which they can schedule according to their requirements.

In earlier versions of SQL Server, you can do this with DBCC SQLPERF, using the following script.
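The script itself was not captured above; the sketch below shows the usual pattern of capturing DBCC SQLPERF(LOGSPACE) output into a temporary table (the 80% threshold is an arbitrary example):

```sql
-- Capture DBCC SQLPERF(LOGSPACE) output and flag databases over a threshold.
CREATE TABLE #LogSpace
(
    DatabaseName     sysname,
    LogSizeMB        float,
    LogSpaceUsedPct  float,
    [Status]         int
    -- On SQL Server 2012, add the extra Principal File Group Name column here;
    -- leave it out on SQL Server 2008 R2 or earlier.
);

INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
FROM #LogSpace
WHERE LogSpaceUsedPct > 80;

DROP TABLE #LogSpace;
```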


DBCC SQLPERF also has a new column called Principal File Group Name added in SQL Server 2012. So if you need to run the above script on SQL Server 2008 R2 or earlier, you need to comment out that column in the temporary table.

Things are easier in SQL Server 2012 with the very simple dm_db_log_space_usage DMV.
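A minimal sketch, assuming SQL Server 2012 or later:

```sql
-- Log space usage for the current database.
SELECT total_log_size_in_bytes / 1048576.0 AS log_size_mb,
       used_log_space_in_bytes / 1048576.0 AS used_log_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```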


Similarly, you can view the used log space from the Disk Usage report available in SSMS.


The report also has a slight but very important change in SQL Server 2012: the data label is now included in the report. Earlier, you did not have the actual value and had to guess it.


The dm_exec_describe_first_result_set DMV takes a T-SQL statement (this can be a procedure call as well) and returns the metadata of the first result set.
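A sketch of calling it; the inner SELECT is just an example batch against the FactFinance table used later in this post:

```sql
-- Column metadata of the first result set of an ad-hoc batch.
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT DateKey, Amount FROM dbo.FactFinance', NULL, 0);
```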


Similarly, the dm_exec_describe_first_result_set_for_object DMV gives you the same result, but the difference is that you need to pass the object ID of a stored procedure or trigger. If you pass the ID of any other object, such as a view, table, function, or CLR procedure, an error will be reported in the error columns of the result.


How many times have you used DBCC IND in production environments? DBCC IND and DBCC PAGE are undocumented, but many still use them in production. dm_db_database_page_allocations is a replacement for DBCC IND and can be used to find the allocated pages of a database object. This function takes five parameters: DatabaseId, TableId, IndexId, PartitionId, and Mode.
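A sketch of calling it; passing NULL for the index and partition parameters covers all indexes and partitions, and 'DETAILED' mode populates the page-type columns:

```sql
-- Page allocations for a table.
SELECT allocated_page_file_id, allocated_page_page_id,
       page_type_desc, is_allocated
FROM sys.dm_db_database_page_allocations(
         DB_ID(), OBJECT_ID('dbo.FactFinance'), NULL, NULL, 'DETAILED');
```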

We discuss further new DMVs in the AlwaysOn, Full-Text Search, Waits, and Contained Database sections.

Modified DMVs

The following DMVs have been changed for better usability.


sys.dm_exec_query_stats is an often-used DMV, typically joined with a few other DMVs to analyze badly behaving queries. Four additional columns have been added to this DMV:


total_rows - Total number of rows returned by the query.

last_rows - Number of rows returned by the last execution of the query.

min_rows - Minimum number of rows returned by the query since it was last compiled.

max_rows - Maximum number of rows returned by the query since it was last compiled.

Following is the query you need to execute to analyze bad performing queries.
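A sketch of such a query, joining sys.dm_exec_query_stats to the statement text; ordering by logical reads is one reasonable choice:

```sql
-- Top queries by logical reads, including the new row-count columns.
SELECT TOP (10)
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset END
                   - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.total_logical_reads, qs.execution_count,
       qs.total_rows, qs.last_rows, qs.min_rows, qs.max_rows
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```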


When testing for performance, DBAs check page reads. However, if page reads are high, it might be a bad query plan, or it might be that a user has requested a large data set. With these four columns, you can resolve that dilemma and decide why you have high page reads.

Following DMVs are changed only by data length.


Column Name

Previous Length

Length in SQL Server 2012









































Saturday, August 18, 2012

Columnstore Index

Traditional indexes are row based: data is grouped and stored row by row, and all rows are joined to complete the index. Columnstore indexes store data column by column and join the columns to complete the index. This type of index is helpful when retrieving data from large tables, such as data warehouse fact tables, for queries that filter, aggregate, or group.

Let us create a columnstore index and compare the results with a standard non-clustered index.

Expand the Indexes node of the table where you want to create the columnstore index. Right-click Indexes and select New Index. As you will see in the image below, there is a new option called Non-Clustered Columnstore Index…


With this, you will be taken to the index creation dialog box. Here, the Date column is used to create the non-clustered columnstore index on the FactFinance table.


Similarly, you can use T-SQL scripts to create columnstore indexes. The following script creates an index on the FactFinance table in the AdventureWorksDWDenali database.

USE [AdventureWorksDWDenali]
GO

-- The index name csi_date matches the query hints used later in this post.
CREATE NONCLUSTERED COLUMNSTORE INDEX [csi_date]
ON [dbo].[FactFinance] ([Date])
GO

To compare the columnstore index with a traditional index, let us create a standard non-clustered index on the same Date column, as shown in the dialog box below.
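The equivalent T-SQL would be a sketch like this, assuming the index name nci_date used in the query hints later in this post:

```sql
-- Standard non-clustered index on the same Date column, for comparison.
CREATE NONCLUSTERED INDEX [nci_date]
ON [dbo].[FactFinance] ([Date]);
```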


So columnstore index creation also has a different interface. In addition, filtered index creation has interface support in SQL Server 2012, which was not there in previous versions.

After creating those indexes you will see following indexes in the SQL Server Management Studio.


So you have a different icon for the Columnstore index.

Let us try to retrieve some data using the above two indexes.

You will see that the same query is executed, but with different indexes. More rows were added to this table so that you can visualize the difference. The table has 2,522,176 rows.

USE [AdventureWorksDWDenali]
GO

-- The original SELECT lists were not captured; a simple aggregate is assumed.
SELECT DateKey, SUM(Amount) AS TotalAmount
FROM dbo.FactFinance WITH ( INDEX = [csi_date])
GROUP BY DateKey
GO

SELECT DateKey, SUM(Amount) AS TotalAmount
FROM dbo.FactFinance WITH ( INDEX = [nci_date])
GROUP BY DateKey
GO

Let us first analyze the execution plans for both queries.


From the above image, you can see that Query 1 (which uses the columnstore index) has a cost of 4%, while Query 2 (which uses the non-clustered index) has a cost of 96%.

Let us now examine these queries with SET STATISTICS IO, TIME ON


                    With Columnstore Index    With Non-Clustered Index
CPU Time            16 ms                     451 ms
Execution Time      35 ms                     2,986 ms
Read Count

The above results tell you that the columnstore index is far better than the standard index. However, you need to analyze this case by case, as in some cases a columnstore index will have a negative impact.



Following are the limitations of Columnstore Indexes.

· A columnstore index has to be a non-clustered index; it can be neither a clustered key nor a primary key.

· You cannot update a table that has a columnstore index; the following error is returned if you try to update such a table.

Msg 35330, Level 15, State 1, Line 1

INSERT statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the INSERT statement, then rebuilding the columnstore index after INSERT is complete.

· You can have only one columnstore index per table. After creating the first columnstore index, the option to create another is disabled, and if you try to create one using a T-SQL script, the following error is returned.

Msg 35339, Level 16, State 1, Line 2

Multiple nonclustered columnstore indexes are not supported.

· Cannot be created with the INCLUDE keyword.

· Cannot be a unique index.

· Cannot be created on a view or indexed view.

· Cannot include a sparse column.

· Cannot have more than 1024 columns.

· If you need that many columns in a columnstore index, then obviously there is something wrong with the design.

· The following data types can be included in a columnstore index.

o Char, varchar, nchar and nvarchar (except varchar(max) and nvarchar(max))

o decimal (and numeric) (Except with precision greater than 18 digits.)

o int, bigint, smallint, and tinyint

o float and real

o bit

o money and smallmoney

o All date and time data types (except datetimeoffset with scale greater than 2)

· The following data types cannot be included in a columnstore index.

o binary and varbinary

o ntext, text, and image

o varchar(max) and nvarchar(max)

o uniqueidentifier

o rowversion (and timestamp)

o sql_variant

o decimal (and numeric) with precision greater than 18 digits

o datetimeoffset with scale greater than 2

o CLR types (hierarchyid and spatial types)

o xml

· Cannot be modified

Simply put, you are not allowed to use the ALTER INDEX syntax against a columnstore index to modify it; drop and re-create the columnstore index instead. Of course, if you want to disable or rebuild the index, you can use ALTER INDEX.

Friday, August 10, 2012

August 2012 Meet-up

We've put together a special meet-up for August 2012 with a special session and different meet-up location, just for a change. We also have a couple of transport options listed out below.

Session 1

TITLE: Using Extended Events

Performance issues? Debugging your stored proc? Deadlocks? How would you investigate and monitor these problems? Extended Events is the new monitoring platform. In this session we'll explore how to use extended events to monitor SQL Server and how it could be used as a potential replacement for SQL profiler.

SPEAKER: Prithiviraj Kulasingham, MVP

Session 2

TITLE: Anything-SQL Lightning Talks - 2nd Edition

A train of twelve 5-minute back-to-back presentations on various SQL Server aspects.


  • Abhinandana de Zoysa
  • Avantha Siriwardana
  • B.Umashanthan
  • Dedunu Dhananjaya
  • Dinesh Karunarathna
  • Hasitha Kanchana
  • Jayani Withanawasam
  • Sanjeewa Jayawickarama
  • Shamil Saleem
  • Shane Carvalho
  • Sriyantha Silva
  • Supun Thrikawala

Free for all.
Snacks and drinks provided.

Date and Time

August 15, 2012 6:00 PM Onwards

Pearson Lanka (formerly eCollege)
Regal Building, Orion City
752, Dr. Danister De Silva Mawatha
Colombo 09
Sri Lanka

Tuesday, August 7, 2012

File Table

A SQL Server FileTable is a special table that allows you to store directories and files in SQL Server. With this option, you can access these files and directories from Windows applications as if they were stored in the file system.

Files stored in a FileTable are exposed to Windows through a Windows share.

If you create or change a file through the Windows share, the command is captured by a SQL Server component and the changes are applied to the corresponding data in the FileTable.

Since the FileTable feature builds on top of SQL Server FILESTREAM technology, you first need to enable your SQL Server instance for FILESTREAM.

To enable the FILESTREAM feature in SQL Server 2012:

Open SQL Server Configuration Manager (Start > Programs > Microsoft SQL Server 2012 > Configuration Tools > SQL Server Configuration Manager).

Navigate to the SQL Server Services node and select the SQL Server instance you want to modify, e.g. SQL Server (MSSQLSERVER).

Click the FILESTREAM tab, select the checkboxes to enable FILESTREAM, and enter a share name for the files as shown in the image below.


Next, enable the filestream access level, which can be done either using T-SQL or from the user interface.

EXEC sys.sp_configure N'filestream access level', N'2'
GO
RECONFIGURE WITH OVERRIDE
GO

If you prefer the user interface, you can enable FILESTREAM by right-clicking the server name in Management Studio, selecting Properties, then the Advanced page, and finally selecting the filestream access level from the options given.


Let us create a database with FILESTREAM enabled.



IF DATABASEPROPERTYEX('FileTableDatabase', 'Version') > 0
    DROP DATABASE FileTableDatabase
GO

-- The filegroup name and FILESTREAM options below are assumed;
-- the original script was only partially captured.
CREATE DATABASE FileTableDatabase
ON PRIMARY
(
    NAME = N'FileTableDatabase_Data',
    FILENAME = N'D:\FileTable\FileTableDatabase.mdf'
),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
(
    NAME = N'FileTableDatabase_FileStream',
    FILENAME = N'D:\FileTable\Data'
)
LOG ON
(
    NAME = N'FileTableDatabase_Log',
    FILENAME = N'D:\FileTable\FileTableDatabase_log.ldf'
)
WITH FILESTREAM
(
    NON_TRANSACTED_ACCESS = FULL,
    DIRECTORY_NAME = N'FileTableDatabase'
)
GO

The next step is to create a FileTable, which will be the container for the files and directories.

USE FileTableDatabase
GO

CREATE TABLE FillTableDocument AS FileTable
WITH
(
    FileTable_Directory = 'FileTableDocument'
)
GO

You do not have a user interface option to create FileTables, hence you need to write T-SQL as shown above.

After creating the FileTable, if you navigate to SQL Server Management Studio and expand the FileTables node under the Tables node, you will see the created FileTable. If you expand Columns, you will see the list of columns.


On the FileTable you created, you can right-click and select Explore FileTable Directory as shown in the image below.


This will take you to the file directory where you can create your files. Now you can place files and create directories in this folder, and they will automatically be visible from the FileTable.

As you can see from the image below, three different types of files were copied to the FileTable directory, and another subfolder was created along with a few files. These files were copied from the operating system, not from SQL Server.


If you query the FileTable, you will see that the manually copied files have been recorded in SQL Server.
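A minimal sketch of such a query against the FileTable created above:

```sql
-- Files and directories currently recorded in the FileTable.
SELECT name, file_type, cached_file_size, is_directory
FROM dbo.FillTableDocument;
```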


If you want to rename files, you can do it at the OS level as well as with a T-SQL query, as shown in the script below. The T-SQL script will update the table and the file name will change accordingly, and vice versa.



UPDATE dbo.FillTableDocument
SET name = 'FileTable4.txt'
WHERE [stream_id] = '2EB5608D-F270-E111-80DB-101F74EE0B01'

Since path_locator has the data type hierarchyid, you can use all the CLR functions on it.

Following will give you the operating system root path for the given File Table.

SELECT FileTableRootPath('dbo.FillTableDocument') as RootPath

For each file you can get the full file path with the following query.

SELECT name,file_stream.GetFileNamespacePath() full_path ,is_directory

FROM dbo.FillTableDocument c

Which has the following output.


Another important point to note here is that these files are not saved inside the SQL Server database. This means that even though you have large files in the operating system, your table and database sizes will not grow large. When you query the table and select the file_stream column, the query will take a little while to run for large files, since it reads the entire file through the file stream.

You can utilize FileTables to handle image libraries, video libraries, and CV management systems.

Triggering a New File

Most of the time, users need to track when a new file arrives in the folder. For example, you might need to execute an SSIS package once a new file is received. If you have the FileTable option enabled, you can simply create an INSERT trigger on the FileTable and execute the SSIS package from within the trigger.
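One way to sketch this is a trigger that starts a SQL Server Agent job, which in turn runs the SSIS package; the trigger and job names here are hypothetical:

```sql
-- Hypothetical trigger: start an Agent job (which runs the SSIS package)
-- whenever a new file row is inserted into the FileTable.
CREATE TRIGGER trg_NewFileArrived
ON dbo.FillTableDocument
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM inserted WHERE is_directory = 0)
        EXEC msdb.dbo.sp_start_job @job_name = N'LoadNewFilesPackage';
END
GO
```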