performance

When were statistics last updated in SQL Server?

T-SQL showing when statistics were last updated, using the STATS_DATE function:

SELECT
        OBJECT_NAME(S.object_id) as object_name,  S.name as stat_name,
        S.auto_created, S.user_created, S.no_recompute,
        STATS_DATE(S.object_id, S.stats_id) as last_updated
FROM
        sys.stats S
        INNER JOIN sys.objects O ON O.object_id = S.object_id AND O.is_ms_shipped = 0
ORDER BY object_name
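
On newer versions (SQL Server 2008 R2 SP2 / 2012 SP1 and later), sys.dm_db_stats_properties can additionally report how many rows were sampled and how many modifications happened since the last update; a minimal sketch:

SELECT
        OBJECT_NAME(S.object_id) as object_name, S.name as stat_name,
        SP.last_updated, SP.rows, SP.rows_sampled, SP.modification_counter
FROM
        sys.stats S
        CROSS APPLY sys.dm_db_stats_properties(S.object_id, S.stats_id) SP
WHERE OBJECTPROPERTY(S.object_id, 'IsMSShipped') = 0
ORDER BY SP.modification_counter DESC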

How to manually update statistics?

  • Force statistics to update for a specific table:
    UPDATE STATISTICS <tableName>;
  • Force a specific statistic to update with full sampling (clause WITH FULLSCAN):
    UPDATE STATISTICS <tableName> (<statisticsOrIndexName>) WITH FULLSCAN;
  • Force statistics to update for the entire database:
    EXEC sp_updatestats;

Count of SQL Server connections by IP address

SELECT
        EC.client_net_address, ES.host_name, ES.program_name, ES.login_name,
        COUNT(EC.session_id) as NoConnections
FROM
        sys.dm_exec_sessions ES
        INNER JOIN sys.dm_exec_connections EC ON ES.session_id = EC.session_id
GROUP BY EC.client_net_address, ES.program_name, ES.host_name, ES.login_name
ORDER BY NoConnections DESC

Avoiding deadlocks

The following are some of the techniques you can use to avoid a deadlock:

Access resources in the same physical order: this way the first transaction will successfully acquire locks on the resources without being blocked by the second transaction.
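
A minimal sketch of this ordering rule (dbo.Accounts and dbo.Orders are hypothetical tables):

-- Session 1
BEGIN TRAN;
    UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountId = 1;
    UPDATE dbo.Orders   SET Status  = 'Paid'       WHERE OrderId   = 10;
COMMIT;

-- Session 2 touches the same tables in the same order (Accounts, then Orders),
-- so at worst it waits for session 1 instead of deadlocking.
BEGIN TRAN;
    UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE AccountId = 2;
    UPDATE dbo.Orders   SET Status  = 'Shipped'    WHERE OrderId   = 20;
COMMIT;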

Decrease the locking
• Convert a nonclustered index to a clustered index.
• Use a covering index for a SELECT statement.

Minimize lock contention
• Implement row versioning through the READ_COMMITTED_SNAPSHOT isolation level or through the SNAPSHOT isolation level (see the sketch after this list).
• Decrease the isolation level
• Use locking hints: NOLOCK or READUNCOMMITTED.
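
For the row-versioning option above, a minimal sketch (MyDb is a placeholder database name; switching READ_COMMITTED_SNAPSHOT on requires exclusive access to the database or a WITH ROLLBACK clause):

-- Readers no longer block on writers under the default READ COMMITTED level
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow sessions to request the SNAPSHOT isolation level explicitly
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- then, in a session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;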

Blocking resolutions

Once you’ve analyzed the cause of a block, the next step is to determine any possible resolutions. Here are a few techniques you can use to do this:

• Use a covering index on the contended data: if the query of one of the processes can be satisfied using a covering index, the process will not need to request locks on the contended resource.
• Optimize the queries executed by blocking and blocked SPIDs: e.g. break UPDATE/DELETE statements into multiple batches using appropriate WHERE clauses (see the sketch after this list). If the individual statements of the batch are executed in separate transactions, fewer locks will be held on the resource within one transaction, and for shorter periods of time.
• Decrease the isolation level: use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED or the WITH (NOLOCK) locking hint.
• Partition the contended data: exclusive locks will then be acquired at partition level, which increases concurrency.
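
For the batching suggestion above, a minimal sketch (dbo.AuditLog and its CreatedDate column are hypothetical):

-- Delete old rows in small batches: each transaction holds fewer locks,
-- and holds them for a shorter time, than one single huge DELETE.
DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.AuditLog
    WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE());

    SET @rows = @@ROWCOUNT;
END;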

Using Indexes Effectively

Adding indexes to a table doesn't guarantee that queries will use them effectively. These are some of the query design rules you should follow to improve the use of indexes:

Avoid nonsargable search conditions
A sargable predicate (Search ARGument ABLE) is one for which an index can be used.

Sargable: =, >, >=, <, <=, BETWEEN, LIKE 'text%'
Nonsargable: <>, !=, NOT EXISTS, NOT IN, OR, LIKE '%text'

Avoid arithmetic operators on the WHERE clause column
For example, assuming an index has been created on the NoWeeks column, the use of the multiplication operator on the column in the first query prevents the optimizer from choosing the index on that column:

-- the index will not be used
WHERE NoWeeks * 7 > @NoDays

-- the index will be used
WHERE NoWeeks > @NoDays / 7

Avoid functions on the WHERE clause column

-- the index will not be taken into account
WHERE SUBSTRING(CustomerName, 1, 1) = 'F' ;

-- the index will be taken into account
WHERE CustomerName LIKE 'F%' ;

How to avoid recompilation

- Don’t interleave DDL and DML statements.

- Avoid recompilation caused by statistics changes: use the KEEPFIXED PLAN query hint (see the sketch below, after the OPTIMIZE FOR example) or disable auto update of statistics on the relevant table.

- Use table variables instead of temporary tables (as statistics are not created for table variables)

- Avoid changing SET options within the stored procedure.

- Use the OPTIMIZE FOR query hint, which compiles the plan for parameter values supplied by you, regardless of the values passed in by the calling application. It is recommended when you have a set of parameter values that are "representative" from a statistics point of view; this way you'll get a plan that works well most of the time.

CREATE PROCEDURE dbo.GetSales
@CustomerId INT
AS
SELECT ...
FROM
       dbo.Sales S
       INNER JOIN ...
WHERE S.CustomerId = @CustomerId
OPTION (OPTIMIZE FOR (@CustomerId = 1)) ;
GO
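
The KEEPFIXED PLAN option mentioned above is applied the same way (table and column names are illustrative):

SELECT ...
FROM
       dbo.Sales S
WHERE S.CustomerId = @CustomerId
OPTION (KEEPFIXED PLAN) ;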

- Use plan guides. A plan guide allows you to apply a query hint or other optimization technique without having to modify the query or procedure text, for example when the query is part of a third-party application and you cannot modify it to include OPTION (OPTIMIZE FOR).
Example of creating a plan guide:

EXEC sp_create_plan_guide
       @name = N'MyGuide',
       @stmt = N'SELECT ...
                           FROM
                                 dbo.Sales S
                                 INNER JOIN ...
                            WHERE S.CustomerId = @CustomerId',
       @type = N'OBJECT',
       @module_or_batch = N'dbo.CustomerList',
       @params = NULL,
       @hints = N'OPTION (OPTIMIZE FOR (@CustomerId = 1))' ;
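
A plan guide created this way can later be disabled or dropped with sp_control_plan_guide:

EXEC sp_control_plan_guide @operation = N'DISABLE', @name = N'MyGuide';
EXEC sp_control_plan_guide @operation = N'DROP', @name = N'MyGuide';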

DMVs used to analyze the execution plan cache

You can obtain a lot of information about the execution plans in the procedure cache by accessing various dynamic management objects.

sys.dm_exec_cached_plans: obtain information about the execution plans in the plan cache.

sys.dm_exec_query_plan(plan_handle): retrieve the XML representation of the plan.

sys.dm_exec_sql_text(plan_handle): retrieve the original query text.

sys.dm_exec_query_stats: get aggregate performance metrics about the cached plan.

sys.dm_exec_requests: see execution plans for queries that are currently executing.
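
For example, a sketch listing the most CPU-intensive cached queries by combining sys.dm_exec_query_stats with the text and plan functions:

SELECT TOP 10
        QS.execution_count,
        QS.total_worker_time as total_cpu_time,
        QS.total_elapsed_time,
        ST.text as sql_statement,
        QP.query_plan
FROM
        sys.dm_exec_query_stats QS
        CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) ST
        CROSS APPLY sys.dm_exec_query_plan(QS.plan_handle) QP
ORDER BY QS.total_worker_time DESC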

Index fragmentation resolutions

Once you have detected fragmentation in an index (for example with sys.dm_db_index_physical_stats, as sketched below), you can resolve it by rearranging the index rows and pages so that their physical and logical order match.
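
A minimal detection sketch for the current database (the 5% threshold below matches the guidelines used for the techniques that follow):

SELECT
        OBJECT_NAME(IPS.object_id) as table_name,
        I.name as index_name,
        IPS.index_type_desc,
        IPS.avg_fragmentation_in_percent,
        IPS.page_count
FROM
        sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') IPS
        INNER JOIN sys.indexes I ON I.object_id = IPS.object_id AND I.index_id = IPS.index_id
WHERE IPS.avg_fragmentation_in_percent > 5
ORDER BY IPS.avg_fragmentation_in_percent DESC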

You achieve this through the following techniques:
1. Dropping and re-creating the index
If the index being dropped is a clustered index, then all the nonclustered indexes on the table have to be rebuilt after the clustered index is dropped.


2. Creating the index with the DROP_EXISTING clause
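
A minimal sketch (index, table, and column names are illustrative; an index with that name must already exist on the table):

CREATE NONCLUSTERED INDEX IX_Sales_CustomerId
ON dbo.Sales (CustomerId)
WITH (DROP_EXISTING = ON);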


3. Executing the ALTER INDEX REBUILD statement on the index
To be performed when the fragmentation is over 30%.

ALTER INDEX <IX_IndexName> ON <TableName>
REBUILD;

or rebuild all indexes of a specified table:

ALTER INDEX ALL ON <TableName>
REBUILD;


4. Executing the ALTER INDEX REORGANIZE statement on the index
To be performed when the index fragmentation is between 5% and 30%.

ALTER INDEX <IX_IndexName> ON <TableName>
REORGANIZE;
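
Since REORGANIZE does not update statistics (see the table below), it is commonly followed by an explicit statistics update:

ALTER INDEX ALL ON <TableName>
REORGANIZE;

UPDATE STATISTICS <TableName>;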


Characteristics of Four Defragmentation Techniques

(values given in the order: Drop and Create Index | Create Index with DROP_EXISTING | ALTER INDEX REBUILD | ALTER INDEX REORGANIZE)

• Blocking: High | High | Medium | Low
• Defragment index with constraints: Highly complex | Moderately complex | Easy | Easy
• Defragment multiple indexes together: No | No | Yes | Yes
• Degree of defragmentation: High | High | High | Moderate to low
• Statistics are updated: Yes | Yes | Yes | No
• Apply new fill factor: Yes | Yes | Yes | No

Finding queries that are using table scans

The following T-SQL script discovers queries that use table scans by searching the cached query plans for the TableScan tag:

SELECT TOP 20
        DB_NAME(ST.dbid) AS database_name
        , ST.text as sql_Statement
        , CP.usecounts
        , QP.query_plan
FROM
        sys.dm_exec_cached_plans CP
        CROSS APPLY sys.dm_exec_sql_text(CP.plan_handle) ST
        CROSS APPLY sys.dm_exec_query_plan(CP.plan_handle) QP
WHERE
        ST.dbid = DB_ID()
        AND CAST(QP.query_plan AS NVARCHAR(MAX)) LIKE '%<TableScan%'
ORDER BY CP.usecounts DESC
