
Sunday, March 31, 2013

Singapore NUS buys into Microsoft's big data vision

The National University of Singapore (NUS) today announced the successful implementation of SQL Server 2012 to give employees in its Centre for Instructional Technology (CIT) the ability to run hypotheses and validate IT proposals on a self-service basis.

The implementation of Microsoft's SQL Server 2012 took between three and six months, it was revealed at the briefing. Besides cutting down on time, the Power View dashboard for IVLE usage was significantly improved: it was easily understood, configurable, and managed almost entirely by end users.

The Singapore university is one of the latest in the region to sign up for Microsoft's big data offerings. Arun Ulag, Microsoft's general manager for server & tools division in Asia-Pacific, said in the same briefing session Tuesday that Thailand's Department of Special Investigation (DSI) had implemented a big data system based on SQL Server 2012 and Apache Hadoop software to more effectively retrieve and correlate different sets of information stored in its siloed databases.

Read more here

SET IDENTITY_INSERT

This is the seventh post of SET Statement series.

SET IDENTITY_INSERT is one of the most commonly used SET statements. You set IDENTITY_INSERT to ON when you need to insert a value into the identity column of a table explicitly.

CREATE TABLE Department
(DeptID INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
DepartmentName Varchar(50)
)


INSERT INTO Department
(DepartmentName)
VALUES
('Account'),
('Human Resources')


If you select from the table, you will see that the DeptID column is populated with sequential identity values.

[Screenshot: Department rows with DeptID values 1 and 2]

Now, what if we have a requirement to insert the DeptID explicitly?

INSERT INTO Department
(DeptID,DepartmentName)
VALUES
(5,'Transport')

This will return an error, as shown below.


Msg 544, Level 16, State 1, Line 2
Cannot insert explicit value for identity column in table 'Department' when IDENTITY_INSERT is set to OFF.


So to avoid this issue, you need to set the IDENTITY_INSERT property to ON. This setting is valid only for the session that enabled it.

SET IDENTITY_INSERT Department ON
INSERT INTO Department
(DeptID,DepartmentName)
VALUES
(5,'Transport')
SET IDENTITY_INSERT Department OFF


If the value inserted into the identity column is greater than the current identity value, the identity is reset to the new value after the insert. For example, if the current identity of the table is 5 and you insert 100 into the identity column, the next value generated without the IDENTITY_INSERT option will be 101. However, if you insert a value less than the current identity value, the current identity value remains unchanged.
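A quick way to confirm this behaviour on the Department table above is a small sketch like the following (the department names and values are just illustrative):

-- Current identity value before the explicit insert (e.g. 5)
SELECT IDENT_CURRENT('Department') AS CurrentIdentity;

SET IDENTITY_INSERT Department ON
INSERT INTO Department (DeptID, DepartmentName) VALUES (100, 'Finance');
SET IDENTITY_INSERT Department OFF

-- Identity has jumped to 100, so a normal insert now gets 101
SELECT IDENT_CURRENT('Department') AS CurrentIdentity;
INSERT INTO Department (DepartmentName) VALUES ('Legal');
SELECT DeptID, DepartmentName FROM Department;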


To perform an identity insert you need ALTER permission on the table, not just write permission.
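As a sketch, this is what that permission looks like for a hypothetical user named DataLoader (INSERT is still needed to actually add the rows):

-- ALTER on the table is what SET IDENTITY_INSERT checks for
GRANT ALTER ON dbo.Department TO DataLoader;
GRANT INSERT ON dbo.Department TO DataLoader;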


Previous posts of this series,

SET IMPLICIT_TRANSACTIONS ON

SET NOCOUNT

SET DEADLOCK_PRIORITY

SET CONCAT_NULL_YIELDS_NULL

SET FORCEPLAN


SET PARSEONLY

NoSQL Benchmarking

This article reveals the characteristics of Cassandra, HBase and MongoDB identified through multiple workloads.

Saturday, March 30, 2013

SET PARSEONLY

This is the sixth post of SET Statement series.

When this is set to ON, the SQL Server engine examines the syntax of each Transact-SQL statement and returns any error messages without compiling or executing the statement. By default this is OFF.

USE AdventureWorks2012
SET PARSEONLY ON

SELECT SOH.SalesOrderID, SOH.OrderDate, P.ProductID, P.Name, SOD.OrderQty, SOD.LineTotal
FROM Sales.SalesOrderDetail SOD
INNER JOIN Sales.SalesOrderHeader SOH ON SOD.SalesOrderID = SOH.SalesOrderID
INNER JOIN Production.Product P ON SOD.ProductID = P.ProductID
WHERE SOH.OrderDate = '2005-07-01'
AND P.ProductID = 712

 


When the above query is executed, no results will be displayed.


[Screenshot: the query completes without returning any results]


The obvious question is where this can be used.
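One use is checking the syntax of a batch without running it, for example before deploying a script. A minimal sketch (note that only syntax is checked, not object names, and the exact error text may vary):

SET PARSEONLY ON;
GO
-- Syntax error is reported, but nothing is executed and no data is touched
SELECT FROM Person.Person;
GO
SET PARSEONLY OFF;
GO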


Let us try to set this in a stored procedure.


CREATE PROC InsertProc
AS

SET PARSEONLY ON

SELECT 1

This will fail with the following error.

Msg 1059, Level 15, State 1, Procedure InsertProc, Line 0
Cannot set or reset the 'parseonly' option within a procedure or function.


You cannot use the ‘PARSEONLY’ option in a procedure or function. Although this error refers to procedures and functions, the same error is generated if you try to set this option in a trigger.

Previous posts of this series,

SET IMPLICIT_TRANSACTIONS ON

SET NOCOUNT

SET DEADLOCK_PRIORITY

SET CONCAT_NULL_YIELDS_NULL

SET FORCEPLAN

Thursday, March 28, 2013

Database Restore Error

I was trying to restore a database, but was getting this error.

[Screenshot: restore dialog showing the error]

The error text is, “The database principal is set as the execution context of one or more procedures or functions.”

The error appeared after the restore had reached 100%, and after searching a few web pages it seemed like a security or schema issue. These are a few of those links.

The database principal is set as the execution context of one or more procedures ... - MSDN – Mi

Microsoft SQL Server- The database principal is set as the execution

Drop failed for user

However, none of these links were relevant after I verified all the schemas.

Then the thought came to check the error logs, and I found this error in the SQL Server error log.

Database 'XXXX' was restored, however an error was encountered while replication was being restored/removed. The database has been left offline. See the topic MSSQL_ENG003165 in SQL Server Books Online

This is the BOL link for it. http://msdn.microsoft.com/en-us/library/ms151782(v=sql.90).aspx

This error is raised if a problem occurs restoring a backup of a replicated database:

The error is probably the result of a mismatch between the state of the restored database and one or more system databases that contain replication metadata: msdb, master, or the distribution database.

Execute ALTER DATABASE to bring the database online; for example: ALTER DATABASE AdventureWorks SET ONLINE. For more information, see ALTER DATABASE (Transact-SQL). The Books Online topic then walks through either preserving or removing the replication settings.

You can also restore the database with the KEEP_REPLICATION option.
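A minimal sketch of both options, assuming a hypothetical database name and backup path:

-- Option 1: bring the already-restored database online
ALTER DATABASE MyReplicatedDB SET ONLINE;

-- Option 2: restore again, preserving the replication settings
RESTORE DATABASE MyReplicatedDB
FROM DISK = N'D:\Backups\MyReplicatedDB.bak'
WITH KEEP_REPLICATION, RECOVERY;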

Per-Core Change–Impact on Overall Price

[Image: comparison of the per-core licensing change and its impact on overall price]

I got this from a friend of mine.

Tuesday, March 26, 2013

Common Error in float Data Type

Think about this query.

DECLARE @Cnt FLOAT;
SET @Cnt = 0
WHILE (@Cnt <> 1.0)
BEGIN
SET @Cnt = @Cnt + 0.1
PRINT @Cnt
END

You would expect it to complete after displaying 0.1 to 1.0, but see the output.


[Screenshot: the PRINT output keeps going past 1.0 - the loop never terminates]


How come? What’s wrong here?


The problem here is with the float data type. 0.1 may not be stored exactly as 0.1; instead it may be stored as something like 0.0999999.


How can we correct this? Changing float (an approximate data type) to numeric(5,1) (an exact data type) solves the problem.

DECLARE @Cnt NUMERIC(5,1);
SET @Cnt = 0
WHILE (@Cnt <> 1.0)
BEGIN
SET @Cnt = @Cnt + 0.1
PRINT @Cnt
END

The output for the above query is:


[Screenshot: PRINT output from 0.1 through 1.0, after which the loop stops]



which is what we are looking for.

The attribute key cannot be found error

I got this error during a cube process.

Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'dbo_facttable', Column: 'ColumnSK', Value: '2808'. The attribute is 'Column SK'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute ColumnSK of Dimension: DimDimension from Database: SSAS_Cube, Cube: SSAS_Cube, Measure Group: FactTable, Partition: Fact Table Cases, Record: 1. Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while processing the 'Fact Cases' partition of the 'Fact Cases' measure group for the SSAS_Cube cube from the SSAS_Cube database.

When I analyzed it further, I verified that 2808 does exist in the dimension table. Since the cube storage is MOLAP, the dimensions have to be processed before processing the cube. After processing the dimensions, the cube was processed successfully.
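When this error appears, a query along these lines shows whether the fact table really has keys with no matching dimension member (a sketch only - the table and column names are taken from the error message above and will differ in your schema):

-- Fact rows whose surrogate key has no matching row in the dimension table
SELECT f.ColumnSK, COUNT(*) AS OrphanRows
FROM dbo.FactTable AS f
LEFT JOIN dbo.DimDimension AS d
    ON d.ColumnSK = f.ColumnSK
WHERE d.ColumnSK IS NULL
GROUP BY f.ColumnSK;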

Monday, March 25, 2013

Webinars on MongoDB

Webinar: Best Practices for MongoDB on AWS

Thursday March 28th 11:00am PDT / 2:00pm EDT / 6:00pm UTC

In this session we will look at best practices for administering large MongoDB deployments in the cloud.

Click here for more information.

Webinar: How Telcos Use MongoDB

Tuesday April 2nd 7:00am PDT / 10:00am EDT / 2:00pm UTC

In this webinar, you will learn how operators are increasingly leveraging MongoDB to develop new applications quickly and secure new revenue streams.

Click here for more information.

Webinar: Deployment Best Practices

Tuesday April 9th 11:00am PDT / 2:00pm EDT / 6:00pm GMT

(We suggest registering at least 30 minutes before the start of the webinar.)

The last bugs are fixed, testing is complete, and the business is ready. What do you do next? In this talk we will cover the topics to ensure that you are prepared for a successful launch of your MongoDB-based product.

Click here for more information.

Webinar: How Banks Use MongoDB as a Tick Database

Tuesday April 16th 7:00am PDT / 10:00am EDT / 2:00pm GMT

(We suggest registering at least 30 minutes before the start of the webinar.)

Click here for more information.

Webinar: MongoDB 2.4 Feature Demo and Q&A on Geo Capabilities

Thursday April 18th 11:00am PDT / 2:00pm EDT / 6:00pm UTC

In version 2.4, there have been significant enhancements to the geospatial indexing capabilities in MongoDB, such as polygon intersections, a more accurate spherical model, and better integration with MongoDB's aggregation framework.

Click here for more information.

Webinar: NoSQL as the New Normal

Tuesday April 23rd 11:00am PDT / 2:00pm EDT / 6:00pm GMT

What started as a way for web giants to solve problems of serious scale has become the default way all enterprises manage Big Data. Despite having a catchy, if inaccurate title, there really isn't a coherent "NoSQL" category, nor is there a simple future for the range of NoSQL databases. In this presentation, Matt Asay will outline the reasons for NoSQL's existence and persistence, how the different NoSQL technologies help enterprises get control of Big Data, and will identify the trends that point to a bright future for post-relational databases.

Click here for more information.

Webinar: MongoDB 2.4 Feature Demo and Q&A on Security

Thursday April 25th 11:00am PDT / 2:00pm EDT / 6:00pm UTC

In version 2.4, MongoDB Enterprise includes Kerberos support for integration into existing enterprise security systems, as well as role-based privileges to provide more granular security for your cluster.

Click here for more information.

Webinar: How Banks Manage Risk with MongoDB

Tuesday April 30th 8:00am PDT / 11:00am EDT / 3:00pm GMT

(We suggest registering at least 30 minutes before the start of the webinar.)

Click here for more information.

Webinar: MongoDB 2.4 Feature Demo and Q&A on Hash-based Sharding

Thursday May 2nd 11:00am PDT / 2:00pm EDT / 6:00pm UTC

In version 2.4, MongoDB introduces hash-based sharding, allowing the user to shard based on a randomized shard key to spread documents evenly across shards.

SET FORCEPLAN

Fifth post of SET Statement series.

Let us execute the following query.

USE AdventureWorks2012
GO

SELECT SOH.SalesOrderID, SOH.OrderDate, P.ProductID, P.Name, SOD.OrderQty, SOD.LineTotal
FROM Sales.SalesOrderDetail SOD
INNER JOIN Sales.SalesOrderHeader SOH ON SOD.SalesOrderID = SOH.SalesOrderID
INNER JOIN Production.Product P ON SOD.ProductID = P.ProductID
WHERE SOH.OrderDate = '2005-07-01'
AND P.ProductID = 712

Following will be the query execution plan.


[Screenshot: execution plan chosen by the optimizer - it starts with the Product table]


You can see that the query plan does not depend on the order of the tables in the query. For example, although SalesOrderDetail appears first in the query, the plan picks the Product table first because the optimizer considers that plan the best.


If you set this option to ON, the query plan changes, as shown below.


[Screenshot: execution plan with FORCEPLAN ON - the join order follows the FROM clause]


So when SET FORCEPLAN is set to ON, the SQL Server query optimizer processes joins in the same order as the tables appear in the FROM clause of the query. In addition, setting FORCEPLAN to ON forces the use of nested loop joins unless other join types are required to construct a plan for the query.
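For completeness, here is a sketch of the same query wrapped with the option turned ON and then back OFF (turn it OFF again so later queries in the session are not affected):

SET FORCEPLAN ON;
GO
SELECT SOH.SalesOrderID, SOH.OrderDate, P.ProductID, P.Name, SOD.OrderQty, SOD.LineTotal
FROM Sales.SalesOrderDetail SOD
INNER JOIN Sales.SalesOrderHeader SOH ON SOD.SalesOrderID = SOH.SalesOrderID
INNER JOIN Production.Product P ON SOD.ProductID = P.ProductID
WHERE SOH.OrderDate = '2005-07-01'
  AND P.ProductID = 712;
GO
SET FORCEPLAN OFF;
GO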


However, when using this option, you need to make sure that your query actually performs better after setting it to ON.


Previous posts of this series,

SET IMPLICIT_TRANSACTIONS ON

SET NOCOUNT

SET DEADLOCK_PRIORITY

SET CONCAT_NULL_YIELDS_NULL

Monday, March 18, 2013

How to create scripts for all /selected SQL Server Agent Jobs?

A SQL Server instance may have many jobs, and if you have a requirement to script them, you cannot script them all at once from Object Explorer, as shown in the screen below.

[Screenshot: Object Explorer - SQL Server Agent jobs, with no multi-select scripting option]

Go to View -> Object Explorer Details, and you will be taken to the following screen.

[Screenshot: Object Explorer Details pane listing the jobs]

From the above list you can select all the jobs, or just the jobs you need to script. Then you can generate the script as shown in the screen below.

[Screenshot: Script Job As option for the selected jobs]

Friday, March 15, 2013

SET CONCAT_NULL_YIELDS_NULL

Fourth post of SET Statement series.

When concatenating multiple columns you always need to verify whether your columns contain NULL values. If they do, the final result of the concatenation also becomes NULL, as shown below.

SELECT 'dbfriend' + NULL ;
GO

By default the output of this is NULL. However, you can change this behavior by setting CONCAT_NULL_YIELDS_NULL to OFF (the default setting is ON).

SET CONCAT_NULL_YIELDS_NULL OFF;
GO
SELECT
'dbfriend' + NULL ;
GO

Now the output of the above query is dbfriend, not NULL.


Let us check this when concatenating table columns.

USE AdventureWorks2012
GO

SELECT
Title + ' ' +
FirstName + ' ' +
MiddleName + ' ' +
LastName AS Full_Name
FROM Person.Person

The result for the above query is shown below. You can see a lot of NULL values, because if any one of the columns is NULL, the final result is NULL.


[Screenshot: Full_Name results containing many NULL rows]


Let us run the same query after setting CONCAT_NULL_YIELDS_NULL to OFF.

SET CONCAT_NULL_YIELDS_NULL OFF;
GO
USE AdventureWorks2012
GO

SELECT
Title + ' ' +
FirstName + ' ' +
MiddleName + ' ' +
LastName AS Full_Name
FROM Person.Person

And the result is shown below: the NULL values are gone!


[Screenshot: Full_Name results with no NULLs]


SQL Server 2012 introduces a new function called CONCAT to support this; it treats NULL arguments as empty strings, so in SQL Server 2012 you don’t need to change this setting.
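A sketch of the same query using CONCAT, which returns the name even when some columns are NULL:

USE AdventureWorks2012
GO
SELECT CONCAT(Title, ' ', FirstName, ' ', MiddleName, ' ', LastName) AS Full_Name
FROM Person.Person;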


Previous posts of this series,

SET IMPLICIT_TRANSACTIONS ON

SET NOCOUNT

SET DEADLOCK_PRIORITY

Thursday, March 14, 2013

SET DEADLOCK_PRIORITY

This is the third post of the SET statement series.

You can set DEADLOCK_PRIORITY using labels or numeric values. The numeric range is -10 to 10, while the labels are LOW, NORMAL and HIGH; LOW maps to -5, NORMAL to 0, and HIGH to 5 on the numeric scale.

This option is available from SQL Server 2005.
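Both forms are accepted; a quick sketch:

SET DEADLOCK_PRIORITY LOW;       -- same as -5
SET DEADLOCK_PRIORITY HIGH;      -- same as 5

DECLARE @Priority INT = -10;     -- any value from -10 to 10
SET DEADLOCK_PRIORITY @Priority;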

To create a deadlock for testing purposes you can execute the below TSQL script:

-- 1) Create objects for the deadlock example
USE TempDB
GO
CREATE TABLE foo (col1 INT)
INSERT foo SELECT 1
CREATE TABLE bar (col1 INT)
INSERT bar SELECT 1

-- 2) Run in the first connection (in my server this connection was SPID 59)
BEGIN TRAN
UPDATE foo SET col1 = 1

-- 3) Run in the second connection (in my server this connection was SPID 62)
BEGIN TRAN
UPDATE bar SET col1 = 1
UPDATE foo SET col1 = 1

-- 4) Run in the first connection
UPDATE bar SET col1 = 1

When the above script is executed, as expected a deadlock occurred, and the victim happened to be SPID 59:


Msg 1205, Level 13, State 45, Line 1 Transaction (Process ID 59) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Let us assume that I do not want SPID 59 to become the victim and instead want the other connection (SPID 62) to become the victim. I add the following SET command on the SPID 59 connection:

SET DEADLOCK_PRIORITY HIGH;
GO
UPDATE bar SET col1 = 1

Now when I run the same scripts, SPID 62 becomes the victim while SPID 59 completes:

Msg 1205, Level 13, State 45, Line 3 Transaction (Process ID 62) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Previous articles of this series,

SET IMPLICIT_TRANSACTIONS ON

SET NOCOUNT

SET NOCOUNT

Continuing on the series on SET statements, this time it is SET NOCOUNT.

Let us run the following query.

USE AdventureWorks2012
GO

SELECT * FROM Sales.SalesOrderHeader

Of course you will see the output, and if you look at the Messages tab you will see,


(31465 row(s) affected)


So when a query is executed, a DONE_IN_PROC message is sent to the client. A DONE_IN_PROC token is sent for each executed SQL statement within a stored procedure.


Token stream-specific rules:

TokenType    = BYTE
Status       = USHORT
CurCmd       = USHORT
DoneRowCount = LONG / ULONGLONG;

Details are here for the DONE_IN_PROC token.
When SET NOCOUNT is set to ON (the default value is OFF), this token is not sent. Since the token is sent for each and every statement, setting this to ON can save a lot of network traffic.
SET NOCOUNT ON

USE AdventureWorks2012
GO

SELECT * FROM Sales.SalesOrderHeader

When this is executed, the Messages tab shows the following:


Command(s) completed successfully.


However, @@ROWCOUNT is not affected by the SET NOCOUNT ON setting.
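A small sketch showing that @@ROWCOUNT still reports the row count even though the message is suppressed:

SET NOCOUNT ON
SELECT * FROM Sales.SalesOrderHeader
SELECT @@ROWCOUNT AS RowsReturned   -- same count as shown earlier, even with NOCOUNT ON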


Previous articles of this series,


SET IMPLICIT_TRANSACTIONS ON

Wednesday, March 13, 2013

SET IMPLICIT_TRANSACTIONS ON

 

When SET IMPLICIT_TRANSACTIONS is ON, certain statements automatically start a transaction that must be explicitly finished with COMMIT (or ROLLBACK). The default setting is OFF; when it is OFF, each statement is committed automatically (autocommit mode).

When the setting is set to ON, the following statements will start a transaction implicitly.

 

ALTER TABLE, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN, REVOKE, SELECT, TRUNCATE, UPDATE, MERGE, ALTER SERVER

Transactions that are automatically opened as the result of this setting being ON must be explicitly committed or rolled back by the user at the end of the transaction. Otherwise, the transaction and all of the data changes it contains are rolled back when the user disconnects. After a transaction is committed, executing one of the statements above starts a new transaction.

SET IMPLICIT_TRANSACTIONS ON

INSERT INTO Emp
(Name, AddressI, City)
VALUES
('Name', 'Add', 'C1')

SELECT @@TRANCOUNT

The output of this is 1, which means there is one open transaction, created as a result of setting IMPLICIT_TRANSACTIONS to ON.


The above transaction has to be finished with COMMIT or ROLLBACK.
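For example, finishing the open transaction and checking the count again (a sketch continuing the Emp example above):

COMMIT TRANSACTION
SELECT @@TRANCOUNT AS OpenTransactions   -- back to 0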


You can monitor the trancount from perfmon as well.


[Screenshot: Performance Monitor showing the open transaction count]


 


User Option Setting


There is a setting for implicit transactions in Server Properties, on the Connections page, as shown below.


[Screenshot: Server Properties - Connections page, default connection options]


This setting applies to DB-Library network library connections and has no effect on ODBC or OLE DB connections.


Also, you can set the above setting by running following script.


EXEC sys.sp_configure N'user options', N'2'
GO
RECONFIGURE WITH OVERRIDE
GO
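As a side note, the value 2 is the bit for IMPLICIT_TRANSACTIONS in the user options bitmap. A small sketch to check whether the current session has it on:

-- Bit 2 of @@OPTIONS corresponds to IMPLICIT_TRANSACTIONS
SELECT CASE WHEN @@OPTIONS & 2 = 2 THEN 'ON' ELSE 'OFF' END AS ImplicitTransactions;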

You are not able to set this if the server has contained databases.


Msg 12830, Level 16, State 1, Procedure sp_configure, Line 166
The sp_configure 'user options' setting must be zero if the Database Engine has contained databases.


However, you can run SET IMPLICIT_TRANSACTIONS ON inside a contained database without any issues.

Using Filtered Indexes to Improve Performance

Filtered indexes were introduced in SQL Server 2008, and this article describes the features and usage of filtered indexes.
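A minimal sketch of a filtered index, assuming a hypothetical Orders table where most rows have a NULL ShipDate:

-- Index only the rows that are still unshipped; queries filtering on ShipDate IS NULL can use it
CREATE NONCLUSTERED INDEX IX_Orders_Unshipped
ON dbo.Orders (CustomerID, OrderDate)
WHERE ShipDate IS NULL;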

Monday, March 11, 2013

SET Statements

The behavior of the T-SQL programming language changes with the SET statements you specify. There are around 40 SET statements, and I will be discussing these settings in the coming posts.

Saturday, March 9, 2013

Collation Conflict - 3

I posted about a collation conflict some time back. From the discussions on temp tables and table variables, we realized that a table variable is one workaround for that problem.

However, SQL Server 2012 gives us a much cleaner option: a contained database.

Let us create the Contained database.

CREATE DATABASE [Danish]
CONTAINMENT = PARTIAL
COLLATE Danish_Norwegian_CI_AS
GO


Then let me run the previous script, which failed in a standard database.


USE Danish
GO

CREATE TABLE Table1
(ID INT IDENTITY,
Name VARCHAR(50)
)

INSERT INTO Table1
(Name)
VALUES
('A'),
('B'),
('C')

CREATE TABLE #Temp1
(MaxIDName VARCHAR(50)
)

INSERT INTO #Temp1
SELECT MAX(Name) FROM Table1

SELECT * FROM Table1
WHERE Name IN
(SELECT MaxIDName FROM #Temp1)


This time we don’t have any errors.


[Screenshot: query results returned without a collation conflict error]


The temp table is still created in tempdb, but it does not run into the collation issue that occurs with standard databases.

Friday, March 8, 2013

PASS Virtual Chapter : Video is Available

Date: January 23, 2013

Topic: Resource Governor

Presenter: Dinesh Asanka

Abstract:

Resource Governor is a new technology introduced in SQL Server 2008 that enables you to manage SQL Server workload and resources by specifying limits on resource consumption by incoming requests.

View: ResourceGovenor_01232013.wmv

http://dba.sqlpass.org/MeetingArchive.aspx

Thursday, March 7, 2013

Filtered Indexes with FORCE SEEK

If you closely analyze the query below, you will see that there is a non-clustered filtered index on the ScrapReasonID column with the filter condition ScrapReasonID = 17. The query forces a seek on that index, but its WHERE condition is ScrapReasonID = 30. So there is a contradiction between the index filter and the query filter condition.

[Screenshot: the filtered index definition and the FORCESEEK query]
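The screenshot is not reproduced here, but the scenario looks roughly like this sketch, assuming the Production.WorkOrder table in AdventureWorks (which has a ScrapReasonID column); the index name is hypothetical:

-- Filtered index that only covers ScrapReasonID = 17
CREATE NONCLUSTERED INDEX IX_WorkOrder_ScrapReason17
ON Production.WorkOrder (ScrapReasonID)
WHERE ScrapReasonID = 17;
GO
-- The hint forces a seek on that index, but the predicate asks for 30
SELECT WorkOrderID, ScrapReasonID
FROM Production.WorkOrder WITH (FORCESEEK, INDEX (IX_WorkOrder_ScrapReason17))
WHERE ScrapReasonID = 30;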

So what do you think the output of the above query will be?

You might think that the index hint would simply be ignored. However, this is the output of the above query.

Msg 8622, Level 16, State 1, Line 2
Query processor could not produce a query plan because of the hints defined in this query.
Resubmit the query without specifying any hints and without using SET FORCEPLAN.

Tuesday, March 5, 2013

IEEE ICCSE 2013

I will be presenting at IEEE ICCSE 2013 in the data mining category. I will update with more information when I get it.

[Image: IEEE ICCSE 2013 conference banner]