Setting Up Database Monitoring for Azure SQL Server

Database Monitoring provides deep visibility into your Microsoft SQL Server databases by exposing query metrics, query samples, explain plans, database states, failovers, and events.

Complete the following steps to enable Database Monitoring for your database:

  1. Grant the Agent access to the database
  2. Install and configure the Agent
  3. Install the Azure integration

Before you begin

Supported SQL Server versions: 2014, 2016, 2017, 2019, 2022
Supported Agent versions: 7.41.0+
Performance impact
The default Agent configuration for Database Monitoring is conservative, but you can adjust settings such as the collection interval and query sampling rate to better suit your needs. For most workloads, the Agent represents less than one percent of query execution time on the database and less than one percent of CPU.

Database Monitoring runs as an integration on top of the base Agent (see benchmarks).
Proxies, load balancers, and connection poolers
The Datadog Agent must connect directly to the host being monitored. The Agent should not connect to the database through a proxy, load balancer, or connection pooler. If the Agent connects to different hosts while it is running (as in the case of failover, load balancing, and so on), the Agent calculates the difference in statistics between two hosts, producing inaccurate metrics.
Data security considerations
Read about how Database Monitoring handles sensitive information to learn what data the Agent collects from your databases and how to ensure it is secure.

Grant the Agent access

The Datadog Agent requires read-only access to the database server to collect statistics and queries.

Create a read-only login to connect to your server and grant the required Azure SQL Roles:

CREATE LOGIN datadog WITH PASSWORD = '<PASSWORD>';
CREATE USER datadog FOR LOGIN datadog;
ALTER SERVER ROLE ##MS_ServerStateReader## ADD MEMBER datadog;
ALTER SERVER ROLE ##MS_DefinitionReader## ADD MEMBER datadog;
-- If not using either of Log Shipping Monitoring (available in Agent v7.50+) or
-- SQL Server Agent Monitoring (available in Agent v7.57+), comment out the next three lines:
USE msdb;
CREATE USER datadog FOR LOGIN datadog;
GRANT SELECT TO datadog;

Grant the Agent access to each additional Azure SQL Database on this server:

CREATE USER datadog FOR LOGIN datadog;

Note: Microsoft Entra ID managed identity authentication is also supported. See the guide on configuring this for your Azure SQL DB instance.

When configuring the Datadog Agent, specify one check instance for each application database located on a given Azure SQL DB server. Do not include master and other system databases. The Datadog Agent must connect directly to each application database in Azure SQL DB because each database is running in an isolated compute environment. This also means that database_autodiscovery does not work for Azure SQL DB, so it should not be enabled.

Note: Azure SQL Database deploys a database in an isolated network; each database is treated as a single host. This means that if you run Azure SQL Database in an elastic pool, each database in the pool is treated as a separate host.

init_config:
instances:
  - dbm: true
    host: '<SERVER_NAME>.database.windows.net,1433'
    database: '<DATABASE_1>'
    username: datadog
    password: '<PASSWORD>'
    # After adding your instance, configure the Datadog Azure integration to pull additional cloud data such as CPU, Memory, etc.
    azure:
      deployment_type: 'sql_database'
      fully_qualified_domain_name: '<SERVER_NAME>.database.windows.net'

  - dbm: true
    host: '<SERVER_NAME>.database.windows.net,1433'
    database: '<DATABASE_2>'
    username: datadog
    password: '<PASSWORD>'
    # After adding your instance, configure the Datadog Azure integration to pull additional cloud data such as CPU, Memory, etc.
    azure:
      deployment_type: 'sql_database'
      fully_qualified_domain_name: '<SERVER_NAME>.database.windows.net'

See Install the Agent for more detailed instructions on how to install and configure the Datadog Agent.

Create a read-only login to connect to your server and grant the required permissions:

For SQL Server versions 2014+

CREATE LOGIN datadog WITH PASSWORD = '<PASSWORD>';
CREATE USER datadog FOR LOGIN datadog;
GRANT CONNECT ANY DATABASE TO datadog;
GRANT VIEW SERVER STATE TO datadog;
GRANT VIEW ANY DEFINITION TO datadog;
-- If not using either of Log Shipping Monitoring (available in Agent v7.50+) or
-- SQL Server Agent Monitoring (available in Agent v7.57+), comment out the next three lines:
USE msdb;
CREATE USER datadog FOR LOGIN datadog;
GRANT SELECT TO datadog;

Note: Azure managed identity authentication is also supported. See the guide on configuring this for your Azure SQL DB instance.

For SQL Server on an Azure Windows VM, follow the Setting Up Database Monitoring for self-hosted SQL Server documentation to install the Datadog Agent directly on the Windows Server host VM.

Securely store your password

Store your password using secret management software such as Vault. You can then reference this password as ENC[<SECRET_NAME>] in your Agent configuration files: for example, ENC[datadog_user_database_password]. See Secrets Management for more information.

The examples on this page use datadog_user_database_password to refer to the name of the secret where your password is stored. It is possible to reference your password in plain text, but this is not recommended.
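For reference, here is a minimal sketch of how the pieces fit together. The backend script path below is a hypothetical placeholder; see Secrets Management for how to implement and register a secret backend.

# datadog.yaml (main Agent configuration)
secret_backend_command: /usr/local/bin/dd-secret-backend.sh  # hypothetical script that resolves secret names

# conf.d/sqlserver.d/conf.yaml
instances:
  - host: '<HOSTNAME>,<SQL_PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'  # resolved by the secret backend at runtime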

Install and configure the Agent

Because Azure does not grant direct host access, the Datadog Agent must be installed on a separate host that can reach the SQL Server host. There are several options for installing and running the Agent.

To start collecting SQL Server telemetry, first install the Datadog Agent.

Create the SQL Server Agent conf file C:\ProgramData\Datadog\conf.d\sqlserver.d\conf.yaml. See the sample conf file for all available configuration options.

init_config:
instances:
  - dbm: true
    host: '<HOSTNAME>,<SQL_PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    tags:  # Optional
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your instance, configure the Datadog Azure integration to pull additional cloud data such as CPU, Memory, etc.
    azure:
      deployment_type: '<DEPLOYMENT_TYPE>'
      fully_qualified_domain_name: '<AZURE_INSTANCE_ENDPOINT>'

See the SQL Server integration spec for additional information on setting deployment_type and name fields.

To use Windows Authentication, set connection_string: "Trusted_Connection=yes" and omit the username and password fields.
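For example, here is a minimal sketch of an instance using Windows Authentication, assuming the ADO connector shown above (the placeholders are illustrative):

instances:
  - dbm: true
    host: '<HOSTNAME>,<SQL_PORT>'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    connection_string: 'Trusted_Connection=yes'  # no username or password fields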

Use the service and env tags to link your database telemetry to other telemetry through a common tagging scheme. See Unified Service Tagging on how these tags are used throughout Datadog.

Supported Drivers

Microsoft ADO

The recommended ADO provider is the Microsoft OLE DB Driver. Ensure the driver is installed on the host where the Agent is running.

connector: adodbapi
adoprovider: MSOLEDBSQL19  # Replace with MSOLEDBSQL for versions 18 and lower

The other two providers, SQLOLEDB and SQLNCLI, are considered deprecated by Microsoft and should no longer be used.

ODBC

The recommended ODBC driver is the Microsoft ODBC Driver. Starting with Agent 7.51, ODBC Driver 18 for SQL Server is included in the Agent for Linux. For Windows, ensure the driver is installed on the host where the Agent is running.

connector: odbc
driver: '{ODBC Driver 18 for SQL Server}'

Once all Agent configuration is complete, restart the Datadog Agent.
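For example, assuming a default install path, you can restart the Agent from an elevated PowerShell prompt:

& "$env:ProgramFiles\Datadog\Datadog Agent\bin\agent.exe" restart-service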

Validate

Run the Agent’s status subcommand and look for sqlserver under the Checks section. Navigate to the Databases page in Datadog to get started.
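For example, from an elevated PowerShell prompt (assuming a default install path):

& "$env:ProgramFiles\Datadog\Datadog Agent\bin\agent.exe" status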

To start collecting SQL Server telemetry, first install the Datadog Agent.

On Linux, the Datadog Agent also requires an ODBC SQL Server driver to be installed, such as the Microsoft ODBC driver. Once an ODBC SQL Server driver is installed, copy the odbc.ini and odbcinst.ini files into the /opt/datadog-agent/embedded/etc folder.
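For example, assuming the files are in their default /etc location, the copy might look like:

sudo cp /etc/odbc.ini /etc/odbcinst.ini /opt/datadog-agent/embedded/etc/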

Use the odbc connector and specify the proper driver as indicated in the odbcinst.ini file.

Create the SQL Server Agent conf file /etc/datadog-agent/conf.d/sqlserver.d/conf.yaml. See the sample conf file for all available configuration options.

init_config:
instances:
  - dbm: true
    host: '<HOSTNAME>,<SQL_PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: odbc
    driver: '<Driver from the `odbcinst.ini` file>'
    tags:  # Optional
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your instance, configure the Datadog Azure integration to pull additional cloud data such as CPU, Memory, etc.
    azure:
      deployment_type: '<DEPLOYMENT_TYPE>'
      fully_qualified_domain_name: '<AZURE_ENDPOINT_ADDRESS>'

See the SQL Server integration spec for additional information on setting deployment_type and name fields.

Use the service and env tags to link your database telemetry to other telemetry through a common tagging scheme. See Unified Service Tagging on how these tags are used throughout Datadog.

Once all Agent configuration is complete, restart the Datadog Agent.
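For example, on a systemd-based Linux host:

sudo systemctl restart datadog-agent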

Validate

Run the Agent’s status subcommand and look for sqlserver under the Checks section. Navigate to the Databases page in Datadog to get started.
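For example, run the following and check that the sqlserver check reports no errors:

sudo datadog-agent status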

To configure the Database Monitoring Agent running in a Docker container, set the Autodiscovery Integration Templates as Docker labels on your Agent container.

Note: The Agent must have read permission on the Docker socket for Autodiscovery of labels to work.

Replace the values to match your account and environment. See the sample conf file for all available configuration options.

export DD_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
export DD_AGENT_VERSION=7.51.0

docker run -e "DD_API_KEY=${DD_API_KEY}" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -l com.datadoghq.ad.check_names='["sqlserver"]' \
  -l com.datadoghq.ad.init_configs='[{}]' \
  -l com.datadoghq.ad.instances='[{
    "dbm": true,
    "host": "<HOSTNAME>,<SQL_PORT>",
    "connector": "odbc",
    "driver": "ODBC Driver 18 for SQL Server",
    "username": "datadog",
    "password": "<PASSWORD>",
    "tags": [
      "service:<CUSTOM_SERVICE>"
      "env:<CUSTOM_ENV>"
    ],
    "azure": {
      "deployment_type": "<DEPLOYMENT_TYPE>",
      "name": "<AZURE_ENDPOINT_ADDRESS>"
    }
  }]' \
  gcr.io/datadoghq/agent:${DD_AGENT_VERSION}

See the SQL Server integration spec for additional information on setting deployment_type and name fields.

Use the service and env tags to link your database telemetry to other telemetry through a common tagging scheme. See Unified Service Tagging on how these tags are used throughout Datadog.

Validate

Run the Agent’s status subcommand and look for sqlserver under the Checks section. Alternatively, navigate to the Databases page in Datadog to get started.
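For example, assuming your Agent container is named datadog-agent:

docker exec -it datadog-agent agent status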

If you have a Kubernetes cluster, use the Datadog Cluster Agent for Database Monitoring.

If cluster checks are not already enabled in your Kubernetes cluster, follow the instructions to enable cluster checks. You can configure the Cluster Agent either with static files mounted in the Cluster Agent container, or by using Kubernetes service annotations:

Helm

Complete the following steps to install the Datadog Cluster Agent on your Kubernetes cluster. Replace the values to match your account and environment.

  1. Complete the Datadog Agent installation instructions for Helm.

  2. Update your YAML configuration file (datadog-values.yaml in the Cluster Agent installation instructions) to include the following:

    clusterAgent:
      confd:
        sqlserver.yaml: |-
          cluster_check: true
          init_config:
          instances:
          - dbm: true
            host: <HOSTNAME>,1433
            username: datadog
            password: 'ENC[datadog_user_database_password]'
            connector: 'odbc'
            driver: '{ODBC Driver 18 for SQL Server}'
            include_ao_metrics: true  # Optional: For AlwaysOn users
            tags:  # Optional
              - 'service:<CUSTOM_SERVICE>'
              - 'env:<CUSTOM_ENV>'
            azure:
              deployment_type: '<DEPLOYMENT_TYPE>'
              fully_qualified_domain_name: '<AZURE_ENDPOINT_ADDRESS>'      
    
    clusterChecksRunner:
      enabled: true
    
  3. Deploy the Agent with the above configuration file from the command line:

    helm install datadog-agent -f datadog-values.yaml datadog/datadog
    
For Windows, append --set targetSystem=windows to the helm install command.
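For example, the full command from step 3 with the Windows flag added:

helm install datadog-agent -f datadog-values.yaml --set targetSystem=windows datadog/datadog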

Configure with mounted files

To configure a cluster check with a mounted configuration file, mount the configuration file in the Cluster Agent container on the path: /conf.d/sqlserver.yaml:

cluster_check: true  # Make sure to include this flag
init_config:
instances:
  - dbm: true
    host: '<HOSTNAME>,<SQL_PORT>'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: "odbc"
    driver: '{ODBC Driver 18 for SQL Server}'
    tags:  # Optional
      - 'service:<CUSTOM_SERVICE>'
      - 'env:<CUSTOM_ENV>'
    # After adding your instance, configure the Datadog Azure integration to pull additional cloud data such as CPU, Memory, etc.
    azure:
      deployment_type: '<DEPLOYMENT_TYPE>'
      fully_qualified_domain_name: '<AZURE_ENDPOINT_ADDRESS>'

Configure with Kubernetes service annotations

Rather than mounting a file, you can declare the instance configuration as a Kubernetes Service. To configure this check for an Agent running on Kubernetes, create a Service in the same namespace as the Datadog Cluster Agent:

apiVersion: v1
kind: Service
metadata:
  name: sqlserver-datadog-check-instances
  annotations:
    ad.datadoghq.com/service.check_names: '["sqlserver"]'
    ad.datadoghq.com/service.init_configs: '[{}]'
    ad.datadoghq.com/service.instances: |
      [
        {
          "dbm": true,
          "host": "<HOSTNAME>,<SQL_PORT>",
          "username": "datadog",
          "password": "ENC[datadog_user_database_password]",
          "connector": "odbc",
          "driver": "ODBC Driver 18 for SQL Server",
          "tags": ["service:<CUSTOM_SERVICE>", "env:<CUSTOM_ENV>"],  # Optional
          "azure": {
            "deployment_type": "<DEPLOYMENT_TYPE>",
            "fully_qualified_domain_name": "<AZURE_ENDPOINT_ADDRESS>"
          }
        }
      ]      
spec:
  ports:
  - port: 1433
    protocol: TCP
    targetPort: 1433
    name: sqlserver

See the SQL Server integration spec for additional information on setting deployment_type and name fields.

The Cluster Agent automatically registers this configuration and begins running the SQL Server check.

To avoid exposing the datadog user’s password in plain text, use the Agent’s secret management package and declare the password using the ENC[] syntax.

Example Agent Configurations

Connecting with DSN using the ODBC driver on Linux

  1. Locate the odbc.ini and odbcinst.ini files. By default, these are placed in the /etc directory when installing ODBC.

  2. Copy the odbc.ini and odbcinst.ini files into the /opt/datadog-agent/embedded/etc folder.

  3. Configure your DSN settings as follows:

    odbcinst.ini must provide at least one section header and ODBC driver location.

    Example:

    [ODBC Driver 18 for SQL Server]
    Description=Microsoft ODBC Driver 18 for SQL Server
    Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.2.1
    UsageCount=1
    

    odbc.ini must provide a section header and a Driver path that matches odbcinst.ini.

    Example:

    [datadog]
    Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.3.so.2.1
    
  4. Update the /etc/datadog-agent/conf.d/sqlserver.d/conf.yaml file with your DSN information.

    Example:

    instances:
      - dbm: true
        host: 'localhost,1433'
        username: datadog
        password: 'ENC[datadog_user_database_password]'
        connector: 'odbc'
        driver: '{ODBC Driver 18 for SQL Server}' # This is the section header of odbcinst.ini
        dsn: 'datadog' # This is the section header of odbc.ini
    
  5. Restart the Agent.

Using AlwaysOn

Note: For AlwaysOn users, the Agent must be installed on a separate server and connected to the cluster through the listener endpoint. This is because information about Availability Group (AG) secondary replicas is collected from the primary replica. Additionally, installing the Agent in this way helps to keep it up and running in the event of a failover.

instances:
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    include_ao_metrics: true  # If Availability Groups is enabled
    include_fci_metrics: true   # If Failover Clustering is enabled

Monitoring SQL Server Agent Jobs

Starting with Agent v7.57, the Datadog Agent can collect SQL Server Agent job metrics and histories. To enable this feature, set enabled to true in the agent_jobs section of the SQL Server integration configuration file. The collection_interval and history_row_limit fields are optional.

Note: To monitor SQL Server Agent jobs, the Datadog Agent must have access to the [msdb] database. Monitoring of SQL Server Agent jobs is supported on SQL Server versions 2016 and newer.

instances:
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: '<PASSWORD>'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    agent_jobs:
      enabled: true
      collection_interval: 15
      history_row_limit: 10000

Collecting schemas

Starting with Agent v7.56, the Datadog Agent can collect schema information from SQL Server databases. To enable this feature, use the schemas_collection option. Schemas are collected only from databases to which the Agent has CONNECT access.

Note: For schema collection, the datadog user needs explicit CONNECT access to each database on the instance. See Grant the Agent access for more details.

Use the database_autodiscovery option to avoid specifying each logical database. See the sample sqlserver.d/conf.yaml for more details.

init_config:
instances:
  # This instance detects every logical database automatically
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    database_autodiscovery:
      enabled: true
    schemas_collection:
      enabled: true
    # Optional: enable metric collection for indexes
    include_index_usage_metrics: true
  # This instance only collects schemas and index metrics from the `users` database
  - dbm: true
    host: 'shopist-prod,1433'
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    database: users
    schemas_collection:
      enabled: true
    include_index_usage_metrics: true

One Agent connecting to multiple hosts

It is common to configure a single Agent host to connect to multiple remote database instances (see Agent installation architectures for DBM). To connect to multiple hosts, create an entry for each host in the SQL Server integration config. In these cases, Datadog recommends limiting the number of instances per Agent to a maximum of 10 database instances to guarantee reliable performance.

init_config:
instances:
  - dbm: true
    host: 'example-service-primary.example-host.com,1433'
    username: datadog
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
  - dbm: true
    host: 'example-service-replica-1.example-host.com,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
  - dbm: true
    host: 'example-service-replica-2.example-host.com,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    tags:
      - 'env:prod'
      - 'team:team-discovery'
      - 'service:example-service'
    [...]

Running custom queries

To collect custom metrics, use the custom_queries option. See the sample sqlserver.d/conf.yaml for more details.

init_config:
instances:
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    custom_queries:
    - query: SELECT age, salary, hours_worked, name FROM hr.employees;
      columns:
        - name: custom.employee_age
          type: gauge
        - name: custom.employee_salary
          type: gauge
        - name: custom.employee_hours
          type: count
        - name: name
          type: tag
      tags:
        - 'table:employees'

Working with hosts through a remote proxy

If the Agent must connect to a database host through a remote proxy, all telemetry is tagged with the hostname of the proxy rather than the database instance. Use the reported_hostname option to set a custom override of the hostname detected by the Agent.

init_config:
instances:
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    reported_hostname: products-primary
  - dbm: true
    host: 'localhost,1433'
    connector: adodbapi
    adoprovider: MSOLEDBSQL
    username: datadog
    password: 'ENC[datadog_user_database_password]'
    reported_hostname: products-replica-1

Discovering ports automatically

The SQL Server Browser service and named instances can resolve port numbers automatically, so you do not have to hardcode port numbers in connection strings. To have the Agent use one of these services, set the port field to 0.

For example, a Named Instance config:

init_config:
instances:
  - host: '<HOSTNAME>\<INSTANCE_NAME>'
    port: 0

Install the Azure integration

To collect more comprehensive database metrics and logs from Azure, install the Azure integration.

Further reading
