Monday, December 31, 2012

Too Many VLFs

The transaction log consists of Virtual Log Files (VLFs). When the transaction log grows, virtual log files are created according to the following table. More info here

Log Growth Size                  # of VLFs Added
Up to 64MB                       4
More than 64MB and up to 1GB     8
Larger than 1GB                  16

Too many VLFs lead to problems: backups take longer and replication latency goes up. In fact, I had a bad experience where one of our databases had high replication latency, and the reason turned out to be a high number of VLFs.

In SQL Server 2012 there is a new feature: when a database has more than 10,000 VLFs, a warning is written to the error log at server startup.

During CTP3 of Denali this threshold was set to 1,000, but it was increased at RTM. However, Microsoft forgot to update the warning message. Winking smile

Here is how I recreate this.
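A minimal sketch of the repro, under assumptions of my own (database name and file paths are placeholders): create a database with a tiny log, grow it in small increments so that each growth adds a few VLFs, then count the VLFs with DBCC LOGINFO.

```sql
-- Hypothetical repro: names and paths are placeholders.
CREATE DATABASE VLFTest
ON PRIMARY (NAME = VLFTest_Data, FILENAME = 'D:\Data\VLFTest.mdf')
LOG ON     (NAME = VLFTest_Log,  FILENAME = 'D:\Data\VLFTest.ldf', SIZE = 1MB);
GO

-- Each small growth adds a handful of VLFs; repeat until the
-- count crosses the threshold being tested.
DECLARE @i INT = 1, @size INT = 1;
WHILE @i <= 5000
BEGIN
    SET @size = @size + 1;
    DECLARE @sql NVARCHAR(400) =
        N'ALTER DATABASE VLFTest MODIFY FILE (NAME = VLFTest_Log, SIZE = '
        + CAST(@size AS NVARCHAR(10)) + N'MB);';
    EXEC (@sql);
    SET @i = @i + 1;
END;

-- One row per VLF; the row count is the VLF count.
DBCC LOGINFO('VLFTest');
```

Once the VLF count passes the threshold, restart the instance and check the error log for the warning.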


I verified the number of VLFs with DBCC LOGINFO. There was no warning when the log file reached 1,000 VLFs.

After reaching 10,000 VLFs I saw the warning, but it says the database has more than 1,000 virtual log files.


Friday, December 28, 2012

Top 10 Cloud Stories Of 2012

Here’s a list of the Cloud Computing Exchange’s 10 most-read stories in 2012:

Wednesday, December 26, 2012

How to measure the percentage of online index rebuild


Rebuilding indexes online is time consuming hence many DBAs need to monitor the event. There is a profiler event in SQL Profiler to capture the online index rebuilding.

In SQL Profiler, there is an event Progress Report: Online Index Operation under Progress Report category.


While this profiler trace is running, results can be obtained. Most of the columns are common ones such as application name, database name, SPID, login, etc. However, there are a few uncommon columns, which are shown in the image below.


Index ID – index ID of the index on the table.

Object Name – index name.

BigintData1 – cumulative number of rows inserted.

BigintData2 – 0 = serial plan; otherwise, the thread ID during parallel execution.

In the above example, for the non-clustered index BigintData2 is 0, which means the index rebuild used a serial plan. If you look at BigintData1, it shows the cumulative number of rows rebuilt. Since this table has 20,000 records, the last row of BigintData1 shows that number.

For the clustered index, eight parallel threads were used to rebuild the index. If you sum the maximum value for each thread, it again adds up to 20,000.

Saturday, December 15, 2012

Left Outer Join, What is the Outer Join Table

I was a little surprised when I heard the following statement:

When you use a LEFT OUTER JOIN (or a RIGHT OUTER JOIN), the left-joined table is defined by the ON condition, not by the order of the tables in the OUTER JOIN clause.

This means,

SELECT A.Column1, B.Column2
FROM TableA A
LEFT OUTER JOIN TableB B ON A.Column1 = B.Column1

(Table and column names here are illustrative.) According to the statement, this will give you everything from table A and only the matching rows from table B.

If you rewrite the query (notice that the only change is the ON condition, which is now switched):

SELECT A.Column1, B.Column2
FROM TableA A
LEFT OUTER JOIN TableB B ON B.Column1 = A.Column1

According to the statement, this will give you everything from table B and only the matching rows from table A.

What do you think?

Let us try it,

This is my sample data.


So let me try with two queries,
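A sketch of the whole experiment, with hypothetical sample data (the original data and queries appeared as screenshots):

```sql
-- Hypothetical sample data.
CREATE TABLE TableA (Column1 INT);
CREATE TABLE TableB (Column1 INT, Column2 INT);

INSERT INTO TableA VALUES (1), (2), (3);
INSERT INTO TableB VALUES (2, 20), (3, 30), (4, 40);

-- Query 1: ON A.Column1 = B.Column1
SELECT A.Column1, B.Column2
FROM TableA A
LEFT OUTER JOIN TableB B ON A.Column1 = B.Column1;

-- Query 2: the ON condition switched
SELECT A.Column1, B.Column2
FROM TableA A
LEFT OUTER JOIN TableB B ON B.Column1 = A.Column1;

-- Both queries return the same rows: the preserved table is
-- determined by LEFT OUTER JOIN, not by the order of operands
-- in the ON condition.
```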


Well, there is no truth to it whatsoever.

Saturday, December 8, 2012

Using DBCC PAGE in Production Environment

We normally operate under the understanding that we should never use undocumented commands such as DBCC PAGE or DBCC IND in production. Recently, during a training, I learned that there is no harm in using DBCC PAGE.

The only thing to worry about is that the syntax or output of DBCC PAGE can be changed by Microsoft, WITHOUT supporting backward compatibility.

So it is perfectly safe to use the valuable DBCC PAGE command, but do not build automated tasks on top of it; use it for troubleshooting.
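For reference, a typical invocation (the database name, file id, and page id here are placeholders); trace flag 3604 sends the DBCC output to the client instead of the error log:

```sql
-- Route DBCC output to the query window.
DBCC TRACEON(3604);

-- DBCC PAGE (database, file id, page id, print option)
-- Print option 3 dumps the page header plus per-row detail.
DBCC PAGE ('AdventureWorks', 1, 100, 3);
```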

Friday, December 7, 2012

Data Type Conversion Chart

This illustrates all explicit and implicit data type conversions that are allowed for SQL Server system-supplied data types.


Thursday, November 22, 2012

Automatically Retry A SQL Server Job After A Failed Step

If you run SQL Server jobs with numerous steps, there can be failures due to network glitches or deadlocks. In such a scenario the DBA will typically restart the job from the failed step. However, there is an option in SQL Server Agent to automatically retry a step, which is underutilized by many DBAs. This FAQ shows how to do it.
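The same setting can also be scripted; a sketch assuming a job named 'Nightly Load' whose step 1 should be retried (the job name and step number are placeholders):

```sql
-- Retry step 1 up to 3 times, waiting 5 minutes between attempts.
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'Nightly Load',
    @step_id        = 1,
    @retry_attempts = 3,
    @retry_interval = 5;   -- minutes
```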

Sunday, November 18, 2012

SQL Server 2012 SP1 is released

The latest SP1 has many new and enhanced features.

Few are following,

Selective XML Index
DBCC SHOW_STATISTICS works with SELECT permission
New function returns statistics properties - sys.dm_db_stats_properties


Download the 1GB (!!!) SP1 from here.

Saturday, October 20, 2012

SSMS Report Security Bug

To view SSMS reports there are a few permissions you need to be granted. In SSMS there are three types of reports.

1. Server Reports

2. Database Reports

3. SQL Server Agent Reports

To view Server Reports you need the VIEW SERVER STATE permission. To view database reports, you need db_datareader permissions. However, if you lack these permissions, the following is what you see.


As expected, Standard Reports are disabled, but reports which were opened before are still in the list!! And you can run those reports as well.

Saturday, September 29, 2012

Database Mail has Locked the Domain Account

This incident happened to one of my colleagues, and I was part of the troubleshooting team. Winking smile

He was complaining that his domain account was getting locked daily. Their domain has this security configuration:

1. Passwords expire every three months.

2. Three consecutive invalid login attempts lock the domain account.

Since I had had the same issue before, I thought it might be a service. I had once configured a SQL Server Express service account to log in with my credentials; though I was not using that Express instance, it kept locking my domain account.

So I checked all the services, but it was in vain.

We thought it was something to do with Outlook settings, so we reinstalled Outlook; again, it didn't work.

Then we stopped guessing, decided to approach it in a more planned way, and started to examine the Security Log. (You would ask, "Why the hell didn't you do that before?" Angry smile)

Well, we found a clue there: logon failures for Database Mail.

Then we examined the Database Mail configuration, and yeap! we got it. SMTP authentication was set to Basic authentication, when he could have used Windows Authentication instead.


Some time after Database Mail was configured, his password expired; but since he had used Basic authentication, Database Mail kept trying with the previous password. On his database server there is a job that sends notifications using Database Mail. Every time this job ran, Database Mail attempted the stale credentials, and with that the domain account got locked.

Thursday, September 27, 2012

Wednesday, September 26, 2012

Friday, September 21, 2012

Largest SQL Server Installations in the World

I was watching the 24hrsofPASS yesterday and came across Kevin Cox's PASS presentation: What are the Largest SQL Server Projects in the World?

Following are the statistics for SQL Server worldwide.


Following are the most valued SQL Server clients.


Here are some statistics for bwin.


Sunday, September 16, 2012

Internal Version of SQL Server Databases

SQL Server has an internal version number for its databases which, as a user, you normally don't need to know about. But in some cases, such as attaching or restoring databases between different versions, you will end up with error messages indicating these internal version numbers. So it helps to know them.

Following is the error message I received when trying to attach a database which was built in SQL Server 2012; surprisingly, my target server was also SQL Server 2012.


It says my current SQL Server version is 705 while I am trying to attach a database of version 706. But both are SQL Server 2012!

Though both are SQL Server 2012, 705 stands for SQL Server 2012 RC0 while 706 stands for SQL Server 2012 RTM!

Here are the internal versions of different SQL Server versions.


This blog post describes how to find the version number and other aspects of it.
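One quick way to read the internal version of an attached database is the same DATABASEPROPERTYEX property used later in this blog:

```sql
-- Returns the internal database version
-- (e.g. 706 for a SQL Server 2012 RTM database).
SELECT DATABASEPROPERTYEX('master', 'Version') AS InternalVersion;
```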

Thursday, September 6, 2012

Sunday, September 2, 2012

Query Plan Mysteries

Found a nice article about query plans. In summary, a query plan is cached not only by the query text but also by the SET options. So if your query got cached with particular ANSI_PADDING, CONCAT_NULL_YIELDS_NULL, and QUOTED_IDENTIFIER settings, and you then run the same query with different SET options, the previously cached plan will not be reused.

Also, the default SET options differ depending on your client application:


In particular, the ARITHABORT setting is ON for SSMS while for ADO.NET it is OFF!
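You can see which SET options a cached plan was compiled under; a sketch using sys.dm_exec_cached_plans with sys.dm_exec_plan_attributes:

```sql
-- One row per cached plan, with the bitmask of SET options it was
-- compiled under; different bitmasks mean separate cached plans.
SELECT cp.plan_handle, pa.value AS set_options
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) pa
WHERE pa.attribute = 'set_options';
```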

Read the article, you will find it interesting.

Thursday, August 30, 2012

Simple Default Option Change in SSIS

Prior to SQL Server 2012, when importing data from a flat file source, the Column names in the first data row option is disabled by default. In SQL Server 2012 this option is enabled by default!


Friday, August 24, 2012

XEvents to Monitor Analysis Services

Extended Events, a.k.a. XEvents, were introduced in SQL Server 2008 as a lightweight monitoring mechanism to support DBAs.

In SQL Server 2012, XEvents is expanded to more subsystems such as SSAS, Replication, PDW, etc. With this you have a lightweight monitoring system for SSAS.

The SQL Server 2012 release includes new trace events to help users troubleshoot processing issues: Locks Acquired, Locks Released, and Locks Waiting are new trace events included with this release.

New and Modified DMVs in SQL Server 2012


A few changes have happened to DMVs. If you compare against the SQL Server 2008 R2 SP1 DMVs, there are 33 new DMVs.

New DMVs

In SQL Server 2012 you have 174 DMVs in total, compared to 141 in SQL Server 2008 R2. Out of those 141 DMVs in R2, 5 were added by SP1. So effectively you have 38 new DMVs in SQL Server 2012 relative to R2 RTM.

Let us start with those 5 DMVs.


Most of your SQL Server configuration is saved in the Windows registry. So in case you want to read those settings, you may need to read the registry.


In previous versions you had to use the undocumented xp_regread system stored procedure. Since it is undocumented, many DBAs do not want to use it in production environments.

However, with the introduction of the sys.dm_server_registry DMV, you can simply query it with the necessary WHERE clause to get the configuration information from the registry.
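A sketch of such a query (the filter value is just an example):

```sql
-- Registry keys and values the current instance depends on.
SELECT registry_key, value_name, value_data
FROM sys.dm_server_registry
WHERE value_name LIKE N'%Port%';   -- e.g. find the TCP port settings
```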



How often do you need to capture Windows information in SQL Server? Previously you might have needed PowerShell scripts for that. However, with sys.dm_os_windows_info you can retrieve the Windows version, Service Pack level, and language of your operating system.
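For example:

```sql
-- One row describing the host operating system.
SELECT windows_release, windows_service_pack_level,
       windows_sku, os_language_version
FROM sys.dm_os_windows_info;
```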




How often do you need to check whether a SQL Server Agent job is running? If you execute a job from an application while SQL Server Agent is stopped, it will fail. For a clean implementation, it is better to verify that SQL Server Agent is running and give an appropriate error message so that users can attend to the correct issue.

With the sys.dm_server_services DMV you can verify all the SQL Server services and their startup type. Also, for troubleshooting purposes, you can find the last startup time.
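For example:

```sql
-- Service name, running/stopped status, startup type, and last
-- start time for the Engine, Agent, and Full-Text services.
SELECT servicename, status_desc, startup_type_desc, last_startup_time
FROM sys.dm_server_services;
```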



The sys.dm_server_memory_dumps DMV returns a list of memory dump files generated by the SQL Server Database Engine, along with creation time and dump file size.
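For example:

```sql
-- Most recent dumps first; filename includes the full path.
SELECT [filename], creation_time, size_in_bytes
FROM sys.dm_server_memory_dumps
ORDER BY creation_time DESC;
```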


sys.dm_os_volume_stats is a function to which you provide the database ID and the file ID of the file whose volume stats you want to check.


This DMF returns the size of the volume, the remaining free space on the volume containing that file, the file system type (NTFS or FAT), and a few more details.
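For example:

```sql
-- Volume stats for the current database's primary data file (file_id = 1).
SELECT volume_mount_point, file_system_type,
       total_bytes, available_bytes
FROM sys.dm_os_volume_stats(DB_ID(), 1);
```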


If you are a DBA, you don't want your log files to auto-grow, as that hampers transaction performance. So you need to monitor log file space. Most of the time DBAs use SQL Server Agent alerts.

However, the problem with an alert is that it is event driven, not time driven. If there is a long-running transaction, the log will not be cleared, and during this time alerts keep going out to the responders. So DBAs prefer a SQL Server Agent job, which they can schedule according to the requirement.

In earlier versions of SQL Server you can do this with DBCC SQLPERF, using a script along the following lines.
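A sketch of that approach for SQL Server 2008 R2 and earlier (the temporary table columns follow the four-column DBCC SQLPERF(LOGSPACE) result set of those versions; the 80% threshold is an example):

```sql
-- Capture log space usage for every database.
CREATE TABLE #LogSpace
(
    DatabaseName    SYSNAME,
    LogSizeMB       FLOAT,
    LogSpaceUsedPct FLOAT,
    Status          INT
);

INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF(LOGSPACE)');

SELECT * FROM #LogSpace
WHERE LogSpaceUsedPct > 80;

DROP TABLE #LogSpace;
```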


DBCC SQLPERF also gained a new column, Principal File Group Name, in SQL Server 2012. So if you need to run such a script against SQL Server 2008 R2 or below, you need to leave that column out of the temporary table.

Things are easier in SQL Server 2012 with the very simple sys.dm_db_log_space_usage DMV.
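For example:

```sql
-- Log size and used space for the current database.
SELECT total_log_size_in_bytes,
       used_log_space_in_bytes,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```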


Similarly, you can view the used log space from the Disk Usage report available in SSMS.


The report also has a slight but very important change in SQL Server 2012: now the data label is included in the report. Earlier you didn't have the actual value and had to guess it.


The sys.dm_exec_describe_first_result_set DMF takes a T-SQL statement (this can be a procedure call as well) and returns the metadata of the first result set.


Similarly, the sys.dm_exec_describe_first_result_set_for_object DMF gives you the same result, but the difference is that you pass the object ID of a stored procedure or trigger. If you pass the ID of any other object, such as a view, table, function, or CLR procedure, an error is reported in the error columns of the result.
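A sketch of both forms (the procedure name in the second query is a placeholder):

```sql
-- Describe the first result set of an ad hoc statement...
SELECT name, system_type_name, is_nullable
FROM sys.dm_exec_describe_first_result_set(
         N'SELECT name, object_id FROM sys.objects', NULL, 0);

-- ...and of an existing procedure, by object id.
SELECT name, system_type_name, error_message
FROM sys.dm_exec_describe_first_result_set_for_object(
         OBJECT_ID('dbo.MyProcedure'), 0);
```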


How many times have you used DBCC IND in production environments? DBCC IND and DBCC PAGE are undocumented, but many still use them in production. sys.dm_db_database_page_allocations is a replacement for DBCC IND; it can be used to find the allocated pages of a database object. This function takes five parameters: database ID, table ID, index ID, partition ID, and mode.
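A sketch (the table name is a placeholder; note this function is itself undocumented):

```sql
-- Allocated pages for a table; NULL means "all" for index/partition.
-- 'DETAILED' mode also fills in the per-page detail columns.
SELECT allocated_page_file_id, allocated_page_page_id,
       page_type_desc
FROM sys.dm_db_database_page_allocations(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'DETAILED');
```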

We have discussed furthermore new DMVs in Alwayson, Full Text Search, Waits and Contained database section.

Modified DMVs

The following DMVs were changed for better usability.


sys.dm_exec_query_stats is an often-used DMV, joined with a few other DMVs to analyze badly behaving queries. Four additional columns are added to this DMV:

total_rows – total number of rows returned by the query.

last_rows – number of rows returned by the last execution of the query.

min_rows – minimum number of rows returned by the query since it was compiled.

max_rows – maximum number of rows returned by the query since it was compiled.

Following is the kind of query you can execute to analyze badly performing queries.
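A sketch (ordering by total rows returned; the TOP value and ordering are a matter of taste):

```sql
-- Worst offenders by rows returned, with the query text.
SELECT TOP (10)
       qs.total_rows, qs.last_rows, qs.min_rows, qs.max_rows,
       qs.total_logical_reads, qs.execution_count,
       st.text AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_rows DESC;
```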


When testing for performance, DBAs check page reads. However, if page reads are high, it might be a bad query plan, or it might simply be that the user requested a large data set. With these four columns you can resolve that dilemma and decide why you have high page reads.

Following DMVs are changed only by data length.


Column Name – Previous Length – Length in SQL Server 2012

Saturday, August 18, 2012

Columnstore Index

Traditional indexes are row based: data is grouped and stored row by row, and the rows are joined to make up the index. Columnstore indexes store data column by column and join the columns to complete the index. This type of index is helpful when retrieving data from large tables, such as data warehouse fact tables, for queries that filter, aggregate, and group.

Let us create a Columnstore index and compare the results with standard non clustered index.

Expand the Indexes node inside the table where you want to create the Columnstore index. Right-click Indexes and select New Index. There you will see, as in the image below, a new option called Non-Clustered Columnstore Index…


With this you will be taken to the index creation dialog box. Here, the Date column is used to create a Non-Clustered Columnstore Index on the FactFinance table.


Similarly, you can use T-SQL to create Columnstore indexes. The following script creates such an index on the FactFinance table in the AdventureWorksDWDenali database.

USE [AdventureWorksDWDenali]
GO

-- Index and column names follow the dialog-box example above.
CREATE NONCLUSTERED COLUMNSTORE INDEX [csi_date]
ON [dbo].[FactFinance] ([Date])
GO

To compare the Columnstore index with a traditional index, let us create a standard Non-Clustered index on the same Date column, as shown in the dialog box below.


The index creation dialog also has a different interface. Also, in SQL Server 2012, filtered index creation has interface support, which was not there in previous versions.

After creating those indexes you will see following indexes in the SQL Server Management Studio.


So you have a different icon for the Columnstore index.

Let us try to retrieve some data using the above two indexes.

You will see that the same query is executed, but with different indexes. More rows were added to this table so that you can visualize the difference; the table has 2,522,176 rows.

USE [AdventureWorksDWDenali]
GO

-- The original SELECT list was shown as an image; COUNT(*) stands in here.
-- Query 1: force the Columnstore index.
SELECT COUNT(*)
FROM dbo.FactFinance WITH ( INDEX = [csi_date])
GO

-- Query 2: force the standard Non-Clustered index.
SELECT COUNT(*)
FROM dbo.FactFinance WITH ( INDEX = [nci_date])
GO

Let us first analyze the execution plans for both queries.


From the above image, you can see that Query 1 (which uses the Columnstore index) has a cost of 4% while Query 2 (which uses the Non-Clustered index) has a cost of 96%.

Let us now examine these queries with SET STATISTICS IO, TIME ON


                   With Columnstore Index    With Non-Clustered Index
CPU Time           16 ms                     451 ms
Execution Time     35 ms                     2,986 ms
Read Count

The above results show that the columnstore index is far better than the standard index here. However, you need to analyze this case by case, as in some cases a Columnstore index will have a negative impact.



Following are the limitations of Columnstore Indexes.

· A Columnstore index has to be a Non-Clustered index; it can be neither a clustered key nor a primary key.

· Cannot update tables with a Columnstore index; the following error is returned if you try to update such a table.

Msg 35330, Level 15, State 1, Line 1

INSERT statement failed because data cannot be updated in a table with a columnstore index. Consider disabling the columnstore index before issuing the INSERT statement, then rebuilding the columnstore index after INSERT is complete.

· You can have only one Columnstore index per table. After creating the first Columnstore index, the option to create another is disabled, and if you try to create one using T-SQL the following error is returned.

Msg 35339, Level 16, State 1, Line 2

Multiple nonclustered columnstore indexes are not supported.

· Cannot be created with the INCLUDE keyword.

· Cannot be a unique index.

· Cannot be created on a view or indexed view.

· Cannot include a sparse column.

· Cannot have more than 1024 columns.

· If you need this many columns in a Columnstore index, there is obviously something wrong with the design.

· The following data types can be included in a columnstore index.

o Char, varchar, nchar and nvarchar (except varchar(max) and nvarchar(max))

o decimal (and numeric) (Except with precision greater than 18 digits.)

o int, bigint, smallint, and tinyint

o float and real

o bit

o money and smallmoney

o All date and time data types (except datetimeoffset with scale greater than 2)

· The following data types cannot be included in a columnstore index.

o binary and varbinary

o ntext, text, and image

o varchar(max) and nvarchar(max)

o uniqueidentifier

o rowversion (and timestamp)

o sql_variant

o decimal (and numeric) with precision greater than 18 digits

o datetimeoffset with scale greater than 2

o CLR types (hierarchyid and spatial types)

o xml

· Cannot Modify

Simply, you are not allowed to use ALTER INDEX against a Columnstore index to modify it; drop and re-create the Columnstore index instead. Of course, if you want to disable or re-enable (rebuild) the index, you can use ALTER INDEX.

Friday, August 10, 2012

August 2012 Meet-up

We've put together a special meet-up for August 2012 with a special session and different meet-up location, just for a change. We also have a couple of transport options listed out below.

Session 1

TITLE: Using Extended Events

Performance issues? Debugging your stored proc? Deadlocks? How would you investigate and monitor these problems? Extended Events is the new monitoring platform. In this session we'll explore how to use extended events to monitor SQL Server and how it could be used as a potential replacement for SQL profiler.

SPEAKER: Prithiviraj Kulasingham, MVP

Session 2

TITLE: Anything-SQL Lightning Talks - 2nd Edition

A train of twelve 5-minute back-to-back presentations on various SQL Server aspects.


  • Abhinandana de Zoysa
  • Avantha Siriwardana
  • B.Umashanthan
  • Dedunu Dhananjaya
  • Dinesh Karunarathna
  • Hasitha Kanchana
  • Jayani Withanawasam
  • Sanjeewa Jayawickarama
  • Shamil Saleem
  • Shane Carvalho
  • Sriyantha Silva
  • Supun Thrikawala

Free for all.
Snacks and drinks provided.

Date and Time

August 15, 2012 6:00 PM Onwards

Pearson Lanka (formerly eCollege)
Regal Building, Orion City
752, Dr. Danister De Silva Mawatha
Colombo 09
Sri Lanka

Tuesday, August 7, 2012

File Table

A SQL Server FileTable is a special table that allows storing directories and files in SQL Server. With this option, you can access these files and directories from Windows applications as if they were stored in the file system.

Files stored in the FileTable are exposed to Windows through a Windows share.

If you create or change a file through the Windows share, the command is captured by a SQL Server component and the changes are applied to the corresponding data in the FileTable.

Since the FileTable feature builds on top of SQL Server FILESTREAM technology, you first need to enable your SQL Server instance for FILESTREAM.

To enable FILESTREAM feature on SQL Server 2008:

Open SQL Server Configuration Manager (Start > Programs > Microsoft SQL Server 2008 > Configuration Tools > SQL Server Configuration Manager)

Navigate to the SQL Server Services node and select the SQL Server instance you want to modify, e.g. SQL Server (MSSQLSERVER).

Click the FILESTREAM tab, select the checkboxes to enable FILESTREAM, and enter a share name for the files, as shown in the image below.


Next is to enable the FILESTREAM access level, which can be done either with T-SQL or from the user interface.

EXEC sys.sp_configure N'filestream access level', N'2'
GO
RECONFIGURE WITH OVERRIDE
GO

If you prefer the user interface, you can enable FILESTREAM by right-clicking the server name in Management Studio, selecting Properties, then the Advanced page, and finally setting the FILESTREAM option there.


Let us create a database with filestream set to on.



IF DATABASEPROPERTYEX('FileTableDatabase','Version') > 0
DROP DATABASE FileTableDatabase
GO

-- The filegroup name below is an assumption; only the logical file
-- names and physical paths survived in the original script.
CREATE DATABASE FileTableDatabase
ON PRIMARY
(
NAME = N'FileTableDatabase_Data',
FILENAME = N'D:\FileTable\FileTableDatabase.mdf'
),
FILEGROUP FileStreamGroup CONTAINS FILESTREAM
(
NAME = N'FileTableDatabase_FileStream',
FILENAME= 'D:\FileTable\Data'
)
LOG ON
(
NAME = N'FileTableDatabase_Log',
FILENAME = N'D:\FileTable\FileTableDatabase_log.ldf'
)
WITH FILESTREAM
(
NON_TRANSACTED_ACCESS = FULL,
DIRECTORY_NAME = N'FileTableDatabase'
)
GO

The next step is to create a FileTable, which will be the container for the files and directories.

USE FileTableDatabase
GO

CREATE TABLE FillTableDocument AS FileTable
WITH
(
FileTable_Directory = 'FileTableDocument'
)
GO

There is no user interface option to create FileTables, so you need to write T-SQL as shown above.

After creating the FileTable, if you navigate to SQL Server Management Studio and expand the FileTables node under the Tables node, you will see the created FileTable. If you expand Columns, you will see the list of pre-defined columns.


You can right-click the created FileTable and select Explore FileTable Directory, as shown in the image below.


This takes you to the file directory, where you can create your files. Now you can place files and create directories in this folder, and they automatically appear in the FileTable.

As you can see from the image below, three different types of files were copied to the FileTable directory, and a sub-folder was created with a few more files. These files were copied at the operating system level, not through SQL Server.


If you query the FileTable, you will see that the manually copied files have shown up in SQL Server.


If you want to rename files, you can do it at the OS level as well as with a T-SQL query, as shown in the script below. The T-SQL script updates the table and the file name changes accordingly, and vice versa.



UPDATE dbo.FillTableDocument
SET name = 'FileTable4.txt'
WHERE [stream_id] = '2EB5608D-F270-E111-80DB-101F74EE0B01'

Since path_locator has the hierarchyid data type, you can use all of its CLR methods.

Following will give you the operating system root path for the given File Table.

SELECT FileTableRootPath('dbo.FillTableDocument') as RootPath

For each file you can get the full file path with the following query.

SELECT name,file_stream.GetFileNamespacePath() full_path ,is_directory

FROM dbo.FillTableDocument c

Which has the following output.


Another important point to note here: these files are not saved inside the database's data files. This means that though you have large files in the operating system, your table and database data file sizes will not grow large. When you query the table, if you select the file_stream column, for large files the query takes a while, since it reads the entire file through the file stream.

You can utilize FileTables to handle image libraries, video libraries, and CV management systems.

Triggering a New File

Most of the time users need to track when a new file arrives in the folder; for example, you might need to execute an SSIS package once a new file is received. If you have the FileTable option enabled, you can simply create an INSERT trigger on the FileTable and launch the SSIS package from within the trigger.
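A minimal sketch of such a trigger, under assumptions of my own: the table name follows the examples above, and a SQL Server Agent job named 'LoadNewFiles' (a placeholder) wraps the SSIS package; starting an Agent job is one way to launch a package from T-SQL.

```sql
CREATE TRIGGER trg_NewFileArrived
ON dbo.FillTableDocument
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Only react to files, not directories.
    IF EXISTS (SELECT 1 FROM inserted WHERE is_directory = 0)
        EXEC msdb.dbo.sp_start_job @job_name = N'LoadNewFiles';
END;
```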

Monday, July 30, 2012

Database Snapshots in Index Rebuilding


Yesterday I posted a blog about database snapshots, and this is somewhat an extension of it. Yesterday we checked the behavior of a database snapshot with database backups; now let us check its relation with indexes.

Let us create a database and relevant table along with indexes.


Then I manually inserted data so that my indexes became fragmented, as shown below.


Then I created a snapshot and, as yesterday, verified the file sizes.


Then I did an index rebuild and verified the file sizes again.


The sizes have increased, which means an index rebuild does affect database snapshots.

Saturday, July 28, 2012

Database Snapshot, is it only for Data Modifications?

A database snapshot is a read-only, static view of the database, excluding uncommitted transactions.

If I put all of the above stuff in one picture, it will be like this.


In simple terms, when we create a database snapshot, an empty sparse file is created. As and when data is changed in the source database, the original data pages are copied to the sparse file. So if you read from the snapshot, modified data pages are read from the sparse file while unmodified pages are read from the source database.

Now the question is: is it only data modifications that send pages to the sparse file?

Let us examine this.
Let me create a database and some sample data for it.

Then I will take a full backup and update a few rows, as shown below.


Now then, back to the business end.

Let us create a database snapshot.


After creating it, let us check the size of the snapshot file.


So the file size of the snapshot is 3.06 MB, but the size on disk is 128 KB, which is the actual data stored so far.

Now let me take another backup.


Let us see the file size again.


What! How come? I didn't make any data changes, yet the size on disk has more than doubled without any data modification.


When data changes after a full backup, a bit is set in the page so that a differential backup can identify which pages it needs to include. This bit is reset by a full backup. Since the full backup was taken after the snapshot was created, resetting those bits modified the data pages. So a database snapshot does not capture only data modifications; anything that changes a data page will move that page into the snapshot's sparse file.