Find missing index warnings in cached query plans

Find missing index warnings for cached plans in the current database:

SELECT TOP 20
        OBJECT_NAME(objectid) as object_name,
        CP.objtype, CP.usecounts,
        QP.query_plan  
FROM
        sys.dm_exec_cached_plans CP
        CROSS APPLY sys.dm_exec_query_plan(CP.plan_handle) QP
WHERE
        CAST(QP.query_plan as NVARCHAR(MAX)) LIKE N'%MissingIndex%'
        AND dbid = DB_ID()
ORDER BY CP.usecounts DESC
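
Besides scanning the plan cache, SQL Server exposes missing-index suggestions directly through the sys.dm_db_missing_index_* DMVs. The following sketch joins them to rank suggestions for the current database (note that these DMVs are cleared on instance restart, and the ranking formula here is just one common heuristic):

SELECT TOP 20
        MID.statement AS table_name,
        MID.equality_columns,
        MID.inequality_columns,
        MID.included_columns,
        MIGS.user_seeks,
        MIGS.avg_user_impact
FROM
        sys.dm_db_missing_index_details MID
        INNER JOIN sys.dm_db_missing_index_groups MIG
                ON MID.index_handle = MIG.index_handle
        INNER JOIN sys.dm_db_missing_index_group_stats MIGS
                ON MIG.index_group_handle = MIGS.group_handle
WHERE MID.database_id = DB_ID()
ORDER BY MIGS.user_seeks * MIGS.avg_user_impact DESC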

Count of SQL Server connections by IP address

Count the number of connections per client IP address:

SELECT
        EC.client_net_address, ES.host_name, ES.program_name, ES.login_name,
        COUNT(EC.session_id) AS NoConnections
FROM
        sys.dm_exec_sessions ES
        INNER JOIN sys.dm_exec_connections EC ON ES.session_id = EC.session_id
GROUP BY EC.client_net_address, ES.program_name, ES.host_name, ES.login_name
ORDER BY NoConnections DESC

Creating read-only views in SQL Server

According to MSDN, views based on simple SELECT statements are automatically updatable: INSERT/UPDATE/DELETE statements issued against the view are applied to the underlying table.

There are several ways to prevent this, and in this article I will present various options for making a view read-only in a SQL Server database:

1. Permissions
You have the option to remove UPDATE/DELETE/INSERT permissions on the view.
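
A minimal sketch of this approach (MyAppUser is a placeholder; substitute your own database user or role):

-- Deny data modification on the view to a hypothetical principal
DENY INSERT, UPDATE, DELETE ON OBJECT::dbo.MySampleView TO MyAppUser;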

2. Using an INSTEAD OF trigger

CREATE TRIGGER dbo.MySampleView_Trigger_OnInsertOrUpdateOrDelete
ON dbo.MySampleView
INSTEAD OF INSERT, UPDATE, DELETE
AS
BEGIN
        RAISERROR ('You are not allowed to update this view!', 16, 1)
END

3. Using the UNION operator
You can append a UNION to the view definition so that SQL Server rejects INSERT/UPDATE/DELETE operations against it. See the example below:

ALTER VIEW dbo.MySampleView
as
SELECT col1, col2 FROM dbo.MySampleTable
UNION
SELECT NULL, NULL WHERE 1 = 0

Remove duplicates

De-duplication of data is a common need, especially when dealing with data-quality issues in environments that end up with duplicate rows because uniqueness is not enforced with constraints. As an example, the following code prepares sample data with duplicate orders in a table called MyItems:

IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'MyItems')
        DROP TABLE dbo.MyItems
GO
               
               
CREATE TABLE dbo.MyItems
(
        IdItem INT IDENTITY(1,1),
        Name VARCHAR(50),
        Description VARCHAR(256)
)
GO

INSERT INTO dbo.MyItems(Name, Description)
VALUES
        ('Item1', 'Description 1'),
        ('Item1', 'Description 1'),
        ('Item2', 'Description 2'),
        ('Item3', 'Description 3'),
        ('Item3', 'Description 3'),
        ('Item3', 'Description 3 - different'),
        ('Item4', 'Description 4')
       
SELECT *
FROM dbo.MyItems
GO

One option is to distinguish the first occurrence of each duplicate group from the rest. Compute both ROW_NUMBER and RANK based on Name + Description ordering: for the first row of each group the two values are equal, while subsequent duplicates get a row number greater than their rank:

SELECT
        IdItem, Name, Description,
        ROW_NUMBER() OVER (ORDER BY Name, Description) as row_num,
        RANK() OVER (ORDER BY Name, Description) as rnk
FROM dbo.MyItems
GO

Suppose that you need to de-duplicate the data, keeping only one occurrence of each unique Name+Description combination.

WITH cte
AS
(
        SELECT
                IdItem, Name, Description,
                ROW_NUMBER() OVER (ORDER BY Name, Description) as row_num,
                RANK() OVER (ORDER BY Name, Description) as rnk
        FROM dbo.MyItems
)
DELETE FROM cte
WHERE row_num != rnk

SELECT IdItem, Name, Description
FROM dbo.MyItems
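
An alternative sketch of the same cleanup uses ROW_NUMBER with PARTITION BY, numbering the rows within each duplicate group and deleting everything after the first occurrence:

WITH cte
AS
(
        SELECT
                IdItem,
                ROW_NUMBER() OVER (PARTITION BY Name, Description
                                   ORDER BY IdItem) AS row_num
        FROM dbo.MyItems
)
DELETE FROM cte
WHERE row_num > 1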

When you’re done, run the following code for cleanup:

IF EXISTS (SELECT 1 FROM sys.tables WHERE name = 'MyItems')
        DROP TABLE dbo.MyItems
GO

Pagination in SQL Server

Solution for SQL Server 2008
The ROW_NUMBER function can be used for paging purposes. You assign row numbers to the result rows based on the desired ordering, and then filter the right range of row numbers based on given page-number and page-size arguments.

DECLARE
        @pagenum AS INT = 2,
        @pagesize AS INT = 10;
       
WITH cte AS
(
        SELECT ROW_NUMBER() OVER( ORDER BY Column1) AS rownum, Column1, Column2, Column3
        FROM dbo.MyTable
)
SELECT Column1, Column2, Column3
FROM cte
WHERE rownum BETWEEN (@pagenum - 1) * @pagesize + 1 AND @pagenum * @pagesize
ORDER BY rownum;

For optimal performance, you want an index defined on the window ordering elements as the index keys (and possibly INCLUDE the rest of the columns that appear in the query, for coverage purposes).
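
A hypothetical index supporting the paging query above could look like this (Column1 is the ordering key; the other columns are included only for coverage):

CREATE NONCLUSTERED INDEX IX_MyTable_Column1
ON dbo.MyTable (Column1)
INCLUDE (Column2, Column3);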

Solution for SQL Server 2012 or SQL Server 2014
An alternative solution to paging is to use the new OFFSET/FETCH filtering option:

DECLARE
        @pagenum AS INT = 2,
        @pagesize AS INT = 10;

SELECT Column1, Column2, Column3
FROM dbo.MyTable
ORDER BY Column1
OFFSET (@pagenum - 1) * @pagesize ROWS FETCH NEXT @pagesize ROWS ONLY;

Islands problem - identifying consecutive ranges

I will present a T-SQL query that uses ROW_NUMBER for identifying consecutive ranges of existing values (also known as the islands problem).
Given the script below, the expected ranges are: [1-2], [11-14], [25], [31-33].

SET NOCOUNT ON;
IF OBJECT_ID('dbo.MyTable') IS NOT NULL
        DROP TABLE dbo.MyTable;
GO

CREATE TABLE dbo.MyTable
(
col1 INT NOT NULL
CONSTRAINT PK_MyTable PRIMARY KEY
);

INSERT INTO dbo.MyTable(col1)
VALUES (1),(2),(11),(12),(13),(14),(25),(31),(32),(33);
GO

Step 1: use the ROW_NUMBER function to calculate row numbers based on col1 ordering

SELECT col1, ROW_NUMBER() OVER (ORDER BY col1) as row_num
FROM dbo.MyTable

Step 2: identify islands
The difference between col1 and its row number is constant within an island and unique for each island, as shown in this query:

SELECT col1, col1 - ROW_NUMBER() OVER (ORDER BY col1) as grp
FROM dbo.MyTable

Step 3: complete solution

;WITH cte
AS
(
        SELECT col1, col1 - ROW_NUMBER() OVER (ORDER BY col1) as grp
        FROM dbo.MyTable
)
SELECT MIN(col1) as start_range, MAX(col1) as end_range
FROM cte
GROUP BY grp
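
In this example col1 is a primary key, so values are unique. If the column could contain duplicates, repeated values would break the ROW_NUMBER trick; a common variation (sketched below) replaces ROW_NUMBER with DENSE_RANK, which assigns the same rank to repeats within an island:

;WITH cte
AS
(
        SELECT col1, col1 - DENSE_RANK() OVER (ORDER BY col1) as grp
        FROM dbo.MyTable
)
SELECT MIN(col1) as start_range, MAX(col1) as end_range
FROM cte
GROUP BY grp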

Avoiding deadlocks

The following are some of the techniques you can use to avoid a deadlock:

Access resources in the same order, so that the first transaction can acquire its locks without being blocked by the second transaction.

Decrease the locking
• Convert a nonclustered index to a clustered index.
• Use a covering index for a SELECT statement.

Minimize lock contention
• Implement row versioning through the READ_COMMITTED_SNAPSHOT isolation level or through the SNAPSHOT isolation level.
• Decrease the isolation level.
• Use locking hints: NOLOCK or READUNCOMMITTED.

Tips to avoid database blocking

In a multiuser database application, you must minimize blocking among concurrent transactions.

Keep transactions short
• Perform the minimum steps/logic within a transaction.
• Do not perform costly external activity within a transaction, such as sending acknowledgment email or performing activities driven by the end user.

Optimize queries using indexes
• Create indexes as required to ensure optimal performance of the queries within the system.
• Avoid a clustered index on frequently updated columns. Updates to clustered index key columns require locks on the clustered index and all nonclustered indexes (since their row locator contains the clustered index key).
• Consider using a covering index to serve the blocked SELECT statements.

Consider partitioning a contended table

Use query timeouts or a resource governor to control runaway queries

Avoid losing control over the scope of the transactions because of poor error-handling routines or application logic
• Use SET XACT_ABORT ON to avoid a transaction being left open on an error condition within the transaction.
• Execute the following SQL statement from a client error handler (TRY/CATCH) after executing a SQL batch or stored procedure containing a transaction:

IF @@TRANCOUNT > 0 ROLLBACK
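
A server-side equivalent is to wrap the transaction in TRY/CATCH so an error can never leave it open. This is a sketch with illustrative statements; THROW requires SQL Server 2012 or later (use RAISERROR on older versions):

BEGIN TRY
        BEGIN TRANSACTION;
        -- ... data modification statements ...
        COMMIT TRANSACTION;
END TRY
BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;
END CATCH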

Use the lowest isolation level required
• Use the default isolation level (Read Committed).
• Consider using row versioning to help reduce contention.

Blocking resolutions

Once you’ve analyzed the cause of a block, the next step is to determine any possible resolutions. Here are a few techniques you can use to do this:

Use a covering index on the contended data
If the query of one of the processes can be satisfied using a covering index, the process won't need to request locks on the contended resource.

Optimize the queries executed by blocking and blocked SPIDs
For example, break UPDATE/DELETE statements into multiple batches using appropriate WHERE clauses. If the individual statements of the batch are executed in separate transactions, fewer locks will be held on the resource within one transaction, and for shorter time periods.

Decrease the isolation level
Use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED or the WITH (NOLOCK) locking hint.

Partition the contended data
Exclusive locks will then be acquired at the partition level, which increases concurrency.
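
The batching technique mentioned above can be sketched as follows (table name, filter, and batch size are illustrative; each iteration runs in its own short transaction and holds fewer locks):

DECLARE @batch INT = 5000;

WHILE 1 = 1
BEGIN
        DELETE TOP (@batch) FROM dbo.BigTable
        WHERE CreatedDate < '2020-01-01';

        IF @@ROWCOUNT < @batch BREAK;
END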

Using Indexes Effectively in SQL Server

Adding indexes to a table doesn't guarantee that queries will use them effectively. These are some of the query design rules you should follow to improve the use of indexes:

Avoid nonsargable search conditions
A sargable (Search ARGument-able) predicate is one for which the optimizer can use an index seek.

Sargable: =, >, >=, <, <=, BETWEEN, LIKE 'text%'
Nonsargable: <>, !=, NOT EXISTS, NOT IN, OR, LIKE '%text'

Avoid arithmetic operators on the WHERE clause column
For example, assuming an index has been created on the NoWeeks column, applying the multiplication operator to the column in the first query prevents the optimizer from choosing the index on that column:

-- the index will not be used
WHERE NoWeeks * 7 > @NoDays

-- the index will be used
WHERE NoWeeks > @NoDays / 7

Avoid functions on the WHERE clause column

-- the index will not be taken into account
WHERE SUBSTRING(CustomerName, 1, 1) = 'F' ;

-- the index will be taken into account
WHERE CustomerName LIKE 'F%' ;
