
Performance Monitor counters to analyze memory pressure

Memory
- Available Bytes: free physical memory. Expected: should not be too low.
- Pages/sec: rate of hard page faults. A page fault occurs when a process requires data that is not in its space in physical memory. If the faulted page is found elsewhere in physical memory, it is called a soft page fault; a hard page fault occurs when the data must be retrieved from disk. A disk access is on the order of milliseconds, whereas a memory access is on the order of nanoseconds. Expected: average value < 50.
- Page Faults/sec: rate of total page faults (soft page faults plus hard page faults). Expected: compare with its baseline value for trend analysis.

SQLServer:Buffer Manager
- Buffer cache hit ratio: percentage of requests served out of the buffer cache. A low value indicates that few requests could be served out of the buffer cache, with the rest being served from disk. Expected: average value >= 90% in an OLTP system.
- Page Life Expectancy: time a page spends in the buffer cache without being referenced. A low value means that pages are being removed from the buffer quickly, lowering the efficiency of the cache and indicating possible memory pressure.
- Checkpoint Pages/sec: pages written to disk by a checkpoint operation. A dirty page is one that is modified while in the buffer; once modified, it is marked as dirty and is written back to disk during the next checkpoint. Expected: average value < 30.
- Lazy writes/sec: dirty aged pages flushed from the buffer. Expected: average value < 20.

SQLServer:Memory Manager
- Memory Grants Pending: number of processes waiting for a memory grant. If this counter is high, SQL Server is short of memory; see the sys.dm_exec_query_memory_grants DMV. Expected: average value = 0.
- Target Server Memory (KB): the total amount of dynamic memory SQL Server is willing to consume. Expected: close to the size of physical memory.
- Total Server Memory (KB): amount of memory currently assigned to SQL Server. Expected: close to Target Server Memory (KB).

Process
- Private Bytes: size, in bytes, of memory that this process has allocated that can't be shared with other processes.
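
The SQL Server counters above can also be sampled from inside the engine via the sys.dm_os_performance_counters DMV, which avoids attaching Performance Monitor. A minimal sketch (counter names as they appear on most installations; object_name is padded, hence LIKE):

```sql
-- Sample key buffer-manager and memory-manager counters directly from T-SQL.
SELECT RTRIM([object_name]) AS [object_name],
       RTRIM(counter_name)  AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Buffer cache hit ratio',
                       N'Page life expectancy',
                       N'Lazy writes/sec',
                       N'Memory Grants Pending')
  AND [object_name] LIKE N'%Buffer Manager%'
   OR [object_name] LIKE N'%Memory Manager%';
```

Note that some counters (ratios, per-second rates) are stored as raw values in this DMV and need interpretation against their base counter or a second sample.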

When out-of-memory conditions occur, they are recorded in the sys.dm_os_ring_buffers DMV:

SELECT *
FROM sys.dm_os_ring_buffers
WHERE ring_buffer_type='RING_BUFFER_OOM';

Index design recommendation - how to choose optimal indexes

Indexes should be considered on all columns that are frequently accessed by the WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.

Clustered index recommendations
- It is highly recommended that every table have at least a clustered index. (For an interesting counterpoint, see the discussion on use-the-index-luke.com.)
- Consider a clustered index when the data retrieval needs to be sorted.
- Avoid creating a clustered index on highly updatable columns, because the row locators of all the nonclustered indexes must be updated accordingly.
- Create the clustered index first: if the clustered index is created later, all nonclustered indexes have to be rebuilt.
- Keep clustered indexes narrow, as this directly impacts index size on disk. This is very important because all nonclustered indexes store the clustered keys as their row locator, or put simply:
nonclustered index row width = nonclustered index column width + clustered index key width

Nonclustered Index Recommendations
- A nonclustered index is most useful when all you want to do is retrieve a small number of rows and columns from a large table.
- Consider using the INCLUDE clause to create covering indexes.
- To improve the performance of a query, SQL Server can use multiple indexes on a table. Therefore, instead of creating wide index keys, consider creating multiple narrow indexes.
- Foreign key columns are good index candidates.
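
As a sketch of the INCLUDE recommendation (table and column names are hypothetical):

```sql
-- The query below filters on CustomerId and returns OrderDate and
-- TotalDue, so both are INCLUDEd to make the index covering
-- (no Key/RID lookup back to the base table is needed).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalDue);

-- Served entirely from the index above:
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerId = 42;
```

Included columns live only at the leaf level, so they widen the index less than adding them to the key would.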

Disk consideration
- Place the table and index on separate disks.
- Take into account index compression i.e. fewer pages and fewer index levels are needed to store the index.

Column order matters in a composite index
Using the most selective column first will help filter the index rows more efficiently.
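
For example (hypothetical names), if queries filter on both columns but CustomerId is far more selective than Status, putting CustomerId first narrows the seek fastest:

```sql
-- CustomerId (high selectivity) leads; Status (low selectivity) follows.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Status
ON dbo.Orders (CustomerId, Status);
```

A query filtering only on Status cannot seek on this index; if that pattern is common, it needs its own index.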

Avoid indexes on columns with low selectivity
As a rule of thumb, if a predicate returns a large fraction of the table (selectivity ratio > 0.85), the optimizer will prefer a table scan over the index.

Take into account column uniqueness
Creating an index on columns with a very low range of possible unique values (such as gender) will not benefit performance, because the query optimizer will not be able to use the index to effectively narrow down the rows to be returned.

The data type of an index matters
Unless they are absolutely necessary, minimize the use of wide data type columns with large sizes in an index. A large index key size increases the number of index pages, thereby increasing the amount of memory and disk activities required for the index.

Indexed or materialized views
- A database view can be materialized on the disk by creating a unique clustered index on the view. After a unique clustered index is created on the view, the view’s result set is materialized immediately and persisted in physical storage in the database, saving the overhead of performing costly operations during query execution.
- Aggregations can be precomputed and persisted in the indexed view to minimize expensive computations during query execution.
- Available only in Enterprise Edition.
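
A minimal sketch of an indexed view (names are hypothetical; note the required SCHEMABINDING, two-part names, and COUNT_BIG(*) in an aggregating view):

```sql
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductId,
       SUM(TotalDue) AS TotalSales,
       COUNT_BIG(*)  AS RowCnt   -- required in an indexed view with GROUP BY
FROM dbo.Orders
GROUP BY ProductId;
GO

-- Creating the unique clustered index materializes the view's
-- result set and persists it in the database:
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
ON dbo.vSalesByProduct (ProductId);
```

After this, the precomputed aggregates can be read directly instead of recomputed at query time.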

Other aspects to be taken into account
- Consider using the Database Engine Tuning Advisor tool provided with SQL Server, which helps you determine the correct indexes in a database for a given SQL workload (a trace file, a table, or, new with SQL Server 2012, the queries that exist in the plan cache).
- ColumnStore indexes. Used to index information by columns rather than by rows.
- Spatial indexes.
- XML indexes.
- Index computed columns.
- Consider using filtered indexes.
- Resolve Key or RID lookups by using a clustered index, a covering index, or an index join (i.e. an index intersection between two or more indexes that fully covers a query).
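
A filtered index, per the recommendation above, can be sketched as follows (hypothetical names; it indexes only the small "active" slice of a large table, keeping the index small and cheap to maintain):

```sql
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
WHERE Status = N'Open';
```

The optimizer can use this index only for queries whose predicate matches the filter (here, Status = N'Open').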

How to create a unique constraint?

ALTER TABLE <tablename>
ADD CONSTRAINT <constraintname> UNIQUE NONCLUSTERED (<columnname>);

When was the database consistency check last performed?

This script will show the last time DBCC CHECKDB completed without errors, in the dbi_dbccLastKnownGood field:

DBCC TRACEON (3604);
DBCC DBINFO (N'DatabaseName');

Counting all the rows in a database

A quick count of all rows in all tables in a database can be done using the sysindexes compatibility view, which contains one row for each index and table in the database.

The following query shows the row count for each table in the database. The list of tables comes from sys.objects (which contains one row for each object created within the database: constraint, default, log, rule, stored procedure, and so on). The 'rows' column in 'sysindexes' holds the number of committed rows in the table.

SELECT SCHEMA_NAME(O.schema_id) as schema_name, O.name as table_name, I.[rows] as no_rows
FROM
    sys.objects O
    INNER JOIN sysindexes I ON O.object_id = I.id
WHERE
        I.indid < 2
        AND O.is_ms_shipped = 0
ORDER BY no_rows DESC;

I added the indid < 2 condition to restrict the query to the clustered index or heap row and ignore subsequent indexes. Note that only the results of committed transactions are held in the rows column (uncommitted transactions are not).
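
On newer versions, the same count can be sketched with the documented sys.partitions catalog view instead of the sysindexes compatibility view:

```sql
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS table_name,
       SUM(p.rows)              AS no_rows
FROM sys.objects o
INNER JOIN sys.partitions p ON p.object_id = o.object_id
WHERE p.index_id < 2          -- heap (0) or clustered index (1) only
  AND o.is_ms_shipped = 0
GROUP BY o.schema_id, o.name
ORDER BY no_rows DESC;
```

SUM(p.rows) is needed here because a partitioned table has one row in sys.partitions per partition.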

Use stored procedures instead of embedded T-SQL

Use stored procedures whenever a client application needs to send Transact-SQL to SQL Server. Stored procedures offer many benefits:

  • Reduced network traffic and latency, boosting application performance.
  • Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.
  • Stored procedures help promote code reuse. While this does not directly boost an application's performance, it can boost the productivity of developers by reducing the amount of code required, along with reducing debugging time.
  • Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients (assuming you keep the parameters the same and don’t remove any result sets columns). This saves developer time.
  • Stored procedures provide better security to your data. If you use stored procedures exclusively, you can remove direct SELECT, INSERT, UPDATE, and DELETE rights from the tables and force developers to use stored procedures as the method for data access.
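
A minimal sketch of the pattern (hypothetical procedure and table names):

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages, trimming network chatter
    SELECT OrderId, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;
GO

-- Clients call the procedure instead of sending ad hoc T-SQL:
EXEC dbo.GetOrdersByCustomer @CustomerId = 42;
```

With table permissions revoked and EXECUTE granted on the procedure, this becomes the only data-access path, as described above.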

How to disable and enable a trigger

Business rules in table triggers often expect your application to update the table one row at a time. Also, some triggers generate an error when the code in the trigger assigns to a local variable the value returned by selecting a column from the inserted virtual table. The assignment fails if you're updating multiple rows because the inserted table contains more than one row, so the subquery returns more than a single value.
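
The failing pattern looks like this (hypothetical audit trigger that assumes single-row updates):

```sql
CREATE TRIGGER dbo.trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    DECLARE @OrderId INT;
    -- Subquery-style assignment: fails with "Subquery returned more
    -- than 1 value" as soon as one statement updates several rows.
    SET @OrderId = (SELECT OrderId FROM inserted);

    INSERT INTO dbo.OrdersAudit (OrderId, AuditDate)
    VALUES (@OrderId, SYSDATETIME());
END;
```

A set-based rewrite (INSERT ... SELECT ... FROM inserted) avoids the problem entirely, but when you cannot change the trigger, the disable/enable approach below is the pragmatic workaround.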

Multirow updates need special handling in such a scenario. Developers often wind up dropping a trigger before multirow updates, then creating them later to resume single-row updates. However, I recommend disabling such a trigger instead:

ALTER TABLE MyTable DISABLE TRIGGER MyTrigger

You can re-enable the trigger once you finish your multirow data updates:

ALTER TABLE MyTable ENABLE TRIGGER MyTrigger

Choosing a recovery model

Recovery model choices:
Simple
Recovery only to the end of the last full backup; no log backups.

Bulk logged
All backup types are permitted; reduced log space for minimally logged bulk operations, but additional risk exposure if the log is damaged, and a limit on point-in-time recovery.

Full
All backup types permitted; point-in-time recovery possible.

Script to modify the database recovery model:

ALTER DATABASE [<dbname>] SET RECOVERY FULL;
GO

Lock granularity

RID A row identifier used to lock a single row within a heap.
KEY A single row is locked - index key for a single row in an index.
PAGE An 8-kilobyte (KB) page in a database, such as data or index pages.
EXTENT A lock on an extent (a contiguous group of 8 pages, such as data or index pages). Used, for example, when an ALTER INDEX REBUILD command is executed on a table and pages may be moved from an existing extent to a new extent.
HoBT A heap-or-B-tree lock. It acts like a table-level lock, but on a partition instead of on the table itself.
TABLE The entire table, including all data and indexes. Because it blocks all write requests on the entire table (including indexes), it can significantly hurt database concurrency.
FILE A database file.
APPLICATION An application-specified resource.
METADATA The table schema definition is locked.
ALLOCATION_UNIT An allocation unit.
DATABASE The entire database.
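
Currently held and requested locks, with their granularity, can be inspected through the sys.dm_tran_locks DMV:

```sql
SELECT resource_type,        -- RID, KEY, PAGE, EXTENT, HOBT, OBJECT, DATABASE, ...
       resource_database_id,
       request_mode,         -- S, X, IS, IX, ...
       request_status,       -- GRANT, WAIT, CONVERT
       request_session_id
FROM sys.dm_tran_locks
ORDER BY request_session_id, resource_type;
```

Note that table-level locks appear in this DMV as resource_type = OBJECT.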

How do I obtain a Query Execution Plan?

Estimated plan (compile):

  1. SET SHOWPLAN_XML
  2. Graphical Showplan or Ctrl-L in SSMS
  3. sys.dm_exec_query_plan, sys.dm_exec_text_query_plan

Actual plan (runtime):

  1. SET STATISTICS XML
  2. Graphical Showplan or Ctrl-M in SSMS
  3. SQL Sentry Plan Explorer (free or pro edition)

On deprecation path:

  • SET SHOWPLAN_TEXT
  • SET SHOWPLAN_ALL
  • SET STATISTICS PROFILE
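
A sketch of the DMV route listed above, pulling cached plans together with their SQL text:

```sql
-- Most-reused plans currently in the plan cache.
SELECT TOP (10)
       cp.usecounts,
       st.[text]     AS sql_text,
       qp.query_plan -- XML showplan, clickable in SSMS
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
ORDER BY cp.usecounts DESC;
```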
