Tuesday, August 11, 2009

SQL traces: mixing SQL Server 2005 and 2008

Today I was experiencing a problem opening a trace file:

Unable to locate trace definition file Microsoft SQL Server TraceDefinition 10.0.0.xml for open trace file... It took me more than 10 minutes to figure out that the server I was profiling (using the SQL Server 2005 Profiler) was a SQL Server 2008. :-p

So you can trace a SQL Server 2008 instance using the SQL Server 2005 Profiler, but you cannot re-open the resulting trace files with it; you have to use the SQL Server 2008 Profiler instead!

Or, if you don't have the 2008 tools installed on your local machine yet, you can use:

SELECT *
FROM fn_trace_gettable('\\share\...\TestTrc.trc', default)

This works fine from both SQL Server 2005 and SQL Server 2008 SSMS, regardless of whether the trace was taken on SQL Server 2005 or SQL Server 2008.
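
If you want to query or keep the trace data around afterwards, you can also load it into a table first (the target table name is just an example):

SELECT *
INTO dbo.MyTraceTable -- hypothetical target table
FROM fn_trace_gettable('\\share\...\TestTrc.trc', default)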

Friday, August 7, 2009

Fatal error 8646 occurred when performing an "ALTER PARTITION FUNCTION"

Today I was doing some tests to turn an existing table into a partitioned table with a sliding window.

So I created my partition function and partition scheme, and did a DROP INDEX with MOVE TO my partition scheme on the existing table. Everything (selects, inserts, ...) worked well, until I wanted to alter my partition function to do an extra SPLIT RANGE. Then I got a fatal error 8646 message. Fortunately, I remembered some problems with indexes and replication from the past that gave the same 8646 error message. And when looking at the table, I saw there was also an existing non-clustered index. After I dropped the non-clustered index, the problem was solved.
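
For reference, the setup looked roughly like this; a minimal sketch, where all object names, columns and boundary values are illustrative:

-- Minimal sketch of the scenario; names and dates are made up.
CREATE PARTITION FUNCTION pfSliding (datetime)
AS RANGE RIGHT FOR VALUES ('2009-06-01', '2009-07-01')

CREATE PARTITION SCHEME psSliding
AS PARTITION pfSliding ALL TO ([PRIMARY])

-- Move the existing table onto the scheme by dropping its
-- clustered index with the MOVE TO option.
DROP INDEX CIX_MyTable ON dbo.MyTable WITH (MOVE TO psSliding (MyDateCol))

-- Adding the next boundary is what raised fatal error 8646
-- while the old non-clustered index still existed.
ALTER PARTITION FUNCTION pfSliding () SPLIT RANGE ('2009-08-01')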

I tried to reproduce the same error with a smaller table, but didn't get any "success".

Wednesday, August 5, 2009

SQL Server parameter sniffing

We recently had an issue with query execution plan caching in two of our stored procedures. After digging deep into the code base, we found what was causing the problem.

One of the great features in SQL Server is its optimizing query processor. There are, however, times when some of the advanced features of the optimizer can cause problems. Sometimes it simply helps too much. One of those is when it automatically uses the parameter values passed into a stored procedure to tailor the execution plan it will construct for a query. This is called "parameter sniffing" and is normally a good thing. It allows the optimizer to take into account the actual values of stored procedure parameters as it builds plans to service queries within the procedure that make use of them. In other words, if you pass a value of "test" for a parameter named @param, then use @param in the WHERE clause of a query within the procedure, the optimizer is smart enough to use "test" when it estimates how many rows will match the WHERE clause as it compiles a plan for the query.
While parameter sniffing is certainly a powerful feature, it can cause problems when a procedure’s plan happens to have been kicked out of the procedure cache (or was never in it) just prior to the procedure being called with atypical parameter values. This can result in a plan that is skewed toward atypical use, one that is suboptimal (or even downright slow) when called with typical values. Since, once cached, a query plan can be reused for parameter values that vary widely, the ideal situation is to have a plan in the cache that covers the typical usage of the procedure as much as possible. If a plan makes it into the cache that is oriented toward atypical parameter values, it can have a devastating effect on performance.
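
To make this concrete, here is a hypothetical sequence (the procedure name and parameter values are made up) showing how an atypical first call can poison the cache:

-- Test servers only: flush the plan cache so the next call compiles fresh.
DBCC FREEPROCCACHE

-- First call after the flush: the plan is compiled and cached
-- using the atypical value 42 for its cardinality estimates.
EXEC dbo.usp_GetOrders @CustomerId = 42

-- Later calls with typical values silently reuse that skewed plan.
EXEC dbo.usp_GetOrders @CustomerId = 1

-- Force a fresh compilation of the procedure on its next execution.
EXEC sp_recompile 'dbo.usp_GetOrders'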
SQL Server 2005 has some features to help combat this, but there are no built-in features for dealing with it in prior releases. There are three simple tricks that you can use to solve this problem:

- The use of "decoy variables". This means that we filter the query on a local variable, which replaces the parameter that we actually pass into the procedure.

CREATE PROCEDURE usp_avoidSniff1 (@param int)
AS
BEGIN
	-- Copy the parameter into a local variable: the optimizer cannot
	-- sniff local variables, so the plan is built for a generic value
	-- instead of for the sniffed parameter value.
	DECLARE @lclVarParam int
	SET @lclVarParam = @param

	SELECT x, y
	FROM MyTable
	WHERE z = @lclVarParam
END

GO

- Filter the query on a dummy parameter, e.g. @param_def, that we never actually pass into the procedure. We always pass in the filter value we want to use via @param and let @param_def retain its default. The optimizer notices that we're filtering the query using @param_def and sniffs its value (which will always be the default value at compilation time) when creating the query plan.

CREATE PROCEDURE usp_avoidSniff2 (@param int, @param_def int = 1000) -- 1000: an illustrative "typical" value
AS
BEGIN
	-- The optimizer sniffs @param_def's default value at compile time;
	-- at run time we overwrite it with the filter value actually passed in.
	SET @param_def = @param

	SELECT x, y
	FROM MyTable
	WHERE z = @param_def
END

GO

- Use the OPTIMIZE FOR query hint (SQL Server 2005). Adding this to a query makes sure the plan is optimized for the value you specify. And when all else fails, you can even force a particular query plan with the new USE PLAN query hint.

CREATE PROCEDURE usp_avoidSniff3 (@param int)
AS
BEGIN
	SELECT x, y
	FROM MyTable
	WHERE z = @param
	OPTION (OPTIMIZE FOR (@param = 1000)) -- 1000: an illustrative "typical" value
END

GO

Tuesday, July 28, 2009

SQL Server locking and deadlocks

In SQL Server environments with a lot of concurrency, locks, blocking and deadlocks are a very common issue.

To avoid a negative performance impact, here are some best practices and monitoring tools to use:

1) Use of WITH (NOLOCK)


Where applicable, use WITH (NOLOCK) on the tables in the FROM clause and in JOINs. It avoids taking locks on the database while performing read operations.
Even when retrieving values into local variables inside a stored procedure, use the WITH (NOLOCK) hint.

Example:

SELECT x.a
     , y.b
     , …
FROM dbo.MyTableX AS x WITH (NOLOCK)
INNER JOIN dbo.MyTableY AS y WITH (NOLOCK)
    ON …..

The NOLOCK hint instructs SQL Server not to wait until exclusive locks are released. This improves concurrency on the database, but a result set might contain data that has not been committed.

In rare cases we can encounter this scenario:
Query session A starts to change data in table X. Milliseconds later, session B starts to read that data with the WITH (NOLOCK) option. The data as changed by session A is shown to user B. But then user A cancels the action and the transaction is rolled back. At this point, user B sees data on his screen that was never committed.
To avoid problems with uncommitted data (dirty reads), we should not use the WITH (NOLOCK) option when retrieving data that will be used in calculations, batches or reports.
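
A minimal two-session sketch of that dirty read (table, column and values are made up):

-- Session A: change a row but do not commit yet.
BEGIN TRAN
UPDATE dbo.MyTableX SET a = 'changed' WHERE id = 1

-- Session B: reads the uncommitted value because of NOLOCK.
SELECT a FROM dbo.MyTableX WITH (NOLOCK) WHERE id = 1 -- returns 'changed'

-- Session A: changes its mind.
ROLLBACK -- session B has read data that never officially existed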

2) Always check that the columns in your WHERE-clause are covered by an index.


If your statement takes more than one second, use the sp_lock command in SQL Server to check for exclusive (X) locks.
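
For example (the SPID is hypothetical):

-- All locks currently held on the instance:
EXEC sp_lock

-- Locks held by a single session:
EXEC sp_lock 53 -- 53: the SPID you suspect is blocking or blocked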

3) Use of Cursors

Do not use cursors in stored procedures; they are too expensive for database performance. But sometimes cursors are required. Where a cursor only applies a row-by-row change, a set-based rewrite is usually possible, as sketched below.
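
A hypothetical before/after sketch of such a rewrite (table and columns are made up):

-- Cursor version: one statement per row.
DECLARE @id int
DECLARE c CURSOR FOR SELECT id FROM dbo.MyTable WHERE z = 0
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
	UPDATE dbo.MyTable SET z = 1 WHERE id = @id
	FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c

-- Set-based version: one statement for all rows, far cheaper.
UPDATE dbo.MyTable SET z = 1 WHERE z = 0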

4) Keep transactions short

Put only the strictly necessary commands in the same transaction, and split the remaining commands up into as many separate transactions as possible!

5) During transactions, don't allow any user input. Collect it before the transaction begins.

6) Execute inserts/updates on the same tables in the same order to avoid deadlocks (see the sketch below).
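
The classic deadlock pattern this rule prevents (table names and values are illustrative):

-- Session 1:
BEGIN TRAN
UPDATE dbo.TableA SET x = 1 WHERE id = 1
UPDATE dbo.TableB SET y = 1 WHERE id = 1
COMMIT

-- Session 2: locks the same tables in the opposite order,
-- so under load it can deadlock with session 1.
BEGIN TRAN
UPDATE dbo.TableB SET y = 2 WHERE id = 1
UPDATE dbo.TableA SET x = 2 WHERE id = 1
COMMIT

-- Fix: make session 2 update dbo.TableA first, then dbo.TableB.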

7) Choose your clustered index wisely:

a. Choose a very narrow key that is as unique as possible.

b. If the clustered index contains more than one column, place the columns in the right order (most selective first).

c. Avoid lock escalations during updates.

8) Avoid page splits:

a. Use a correct fill factor (see the example below)
b. Again, choose your clustered index wisely!
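
For instance, the fill factor can be set when (re)building an index; the index name, column and the value 80 are illustrative:

-- Leave 20% free space in each leaf page to absorb new rows
-- between index rebuilds, reducing page splits.
CREATE CLUSTERED INDEX CIX_MyTable ON dbo.MyTable (id) WITH (FILLFACTOR = 80)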

9) Use the right monitoring tools when necessary:

a. Profiler traces (Deadlock, Deadlock Chain and Deadlock Graph events, lock timeouts).

b. Trace flags: To help identify which tables or stored procedures are causing deadlock problems, turn on trace flag 1204 (outputs basic trace data) or trace flag 1205 (outputs more detailed trace data).

DBCC TRACEON (3605,1204,-1)

Be sure to turn these trace flags off when you are done, as this tracing can eat up SQL Server's resources unnecessarily, hurting performance.
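
Turning them off again mirrors the TRACEON call:

DBCC TRACEOFF (3605, 1204, -1)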

10) Change the default deadlock behavior when appropriate

You can do this by using the following command:

SET DEADLOCK_PRIORITY { LOW | NORMAL | @deadlock_var }

Where:

LOW tells SQL Server that the current session should be the preferred deadlock victim, not the session that incurs the least amount of rollback resources. The standard deadlock error message 1205 is returned.

NORMAL tells SQL Server to use the default deadlock-handling method.

@deadlock_var is a character variable specifying which deadlock method you want to use: specify "3" for LOW, or "6" for NORMAL.

This command is set at run time for a specific user connection.
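
For example, to volunteer a long-running reporting session as the deadlock victim:

-- Make this session the preferred deadlock victim.
SET DEADLOCK_PRIORITY LOW

-- ... run the long reporting query here ...

SET DEADLOCK_PRIORITY NORMAL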