API tests proactively monitor that your most important services are available at any time and from anywhere. Single API tests come in eight subtypes that allow you to launch requests on the different network layers of your systems (HTTP, SSL, DNS, WebSocket, TCP, UDP, ICMP, and gRPC). Multistep API tests enable you to run API tests in sequence to monitor the uptime of key journeys at the API level.
Create a single API test
HTTP tests monitor your API endpoints and alert you when responses are too slow or fail to meet any condition you define, such as an expected HTTP status code, response headers, or response body content.
Add the URL of the endpoint you want to monitor. If you don’t know where to start, you can use https://www.shopist.io/, a test e-commerce web application. Defining the endpoint to test automatically populates the name of your test to Test on www.shopist.io.
You can select Advanced Options to set custom request options, certificates, authentication credentials, and more.
Note: You can create secure global variables to store credentials and create local variables to generate dynamic timestamps to use in your request payload. After creating these variables, type {{ in any relevant field and select the variable to inject its value into your test options.
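For example, assuming you created a secure global variable named AUTH_TOKEN and a local timestamp variable named TIMESTAMP (both names hypothetical), a request body could reference them like this:

```json
{
  "token": "{{ AUTH_TOKEN }}",
  "requested_at": "{{ TIMESTAMP }}"
}
```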
In this example, no specific advanced option is needed.
You can set tags such as env:prod and app:shopist on your test. Tags allow you to keep your test suite organized and quickly find tests you’re interested in on the homepage.
Click Test URL to trigger a sample test run.
Define assertions
Clicking Test URL automatically populates basic assertions about your endpoint’s response. Assertions define what constitutes a successful test run.
In this example, three default assertions populate after triggering the sample test run.
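If you define tests through the Synthetics API rather than the UI, a set of default assertions looks roughly like the following sketch (the targets are illustrative; the actual defaults are derived from the sampled response):

```json
[
  { "type": "statusCode", "operator": "is", "target": 200 },
  { "type": "responseTime", "operator": "lessThan", "target": 2000 },
  { "type": "header", "property": "content-type", "operator": "is", "target": "text/html; charset=utf-8" }
]
```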
Assertions are fully customizable. To add a custom assertion, click elements of the response preview, such as the headers, or click New Assertion to define a new assertion from scratch.
Select locations
Select one or more Managed Locations or Private Locations to run your test from. Datadog’s out-of-the-box managed locations allow you to test public-facing websites and endpoints from regions where your customers are located.
Americas: Canada Central (AWS), Northern California (AWS), Northern Virginia (AWS), Ohio (AWS), Oregon (AWS), São Paulo (AWS), Virginia (Azure)
APAC: Hong Kong (AWS), Mumbai (AWS), Seoul (AWS), Singapore (AWS), Sydney (AWS), Tokyo (AWS), Osaka (AWS), Jakarta (AWS)
EMEA: Cape Town (AWS), Frankfurt (AWS), Ireland (AWS), London (AWS), Paris (AWS), Stockholm (AWS), Milan (AWS), Bahrain (AWS)
The Datadog for Government site (US1-FED) uses the following managed location:
Americas: US-West
The Shopist application is publicly available at https://www.shopist.io/, so you can pick any managed location to execute your test from. To test internal applications or simulate user behavior in discrete geographic regions, use private locations instead.
Specify test frequency
Select the frequency at which you want your test to execute. You can leave the default frequency of 1 minute.
In addition to running your Synthetic tests on a schedule, you can trigger them manually or directly from your CI/CD pipelines.
Define alert conditions
Define alert conditions to ensure your test does not trigger on things like a sporadic network blip, so that you are only alerted in case of real issues with your endpoint.
You can specify the number of consecutive failures that should happen before considering a location failed:
Retry test 2 times after 300 ms in case of failure
You can also configure your test to trigger a notification only when your endpoint goes down for a certain amount of time, across a certain number of locations. In the example below, the alerting rule is set to send a notification if the test fails for three minutes on two different locations:
An alert is triggered if your test fails for 3 minutes from any 2 of 13 locations
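If you manage tests through the Synthetics API or Terraform instead of the UI, these retry and alerting settings map to test options along these lines (a sketch; tick_every and min_failure_duration are in seconds, the retry interval in milliseconds, and tick_every reflects the frequency chosen in the previous section):

```json
{
  "tick_every": 60,
  "retry": { "count": 2, "interval": 300 },
  "min_failure_duration": 180,
  "min_location_failed": 2
}
```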
Configure the test monitor
Design your alert message and add any email addresses you want your test to send alerts to. You can also use notification integrations such as Slack, PagerDuty, Microsoft Teams, and webhooks. To route Synthetic alerts to these notification tools, you first need to set up the corresponding integration.
When you’re ready to save your test configuration and monitor, click Create.
Create a multistep API test
Multistep API tests allow you to monitor key business transactions at the API level.
Similar to single API tests, multistep API tests alert you when your endpoints become too slow or fail to meet any condition you define. You can create variables from individual step responses and re-inject their values in subsequent steps, chaining steps together in a way that mimics the behavior of your application or service.
The example test below demonstrates the creation of a multistep API test that monitors the addition of an item to a cart. This test contains three steps:
Getting a cart
Getting a product
Adding the product to the cart
If you don’t know which API endpoints to create your multistep API test on, use the example endpoints below.
To create a new multistep API test, click New Test > Multistep API test. Add a test name such as Add product to cart, include tags, and select locations.
Get a cart
In Define steps, click Create Your First Step.
Add a name to your step, for example: Get a cart.
Specify the HTTP method and the URL you want to query. You can enter POST and https://api.shopist.io/carts.
Click Test URL. This creates a cart item in the Shopist application’s backend.
Leave the default assertions or modify them.
Optionally, define execution parameters.
Selecting Continue with test if this step fails is helpful when you want a whole endpoint collection to be tested, or a final cleanup step to run, regardless of whether previous steps succeed or fail. The Retry step feature is handy in situations where you know your API endpoint may take some time before responding.
In this example, no specific execution parameter is needed.
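For reference, if you define multistep tests through the Synthetics API, a step’s execution parameters correspond to fields such as allowFailure, isCritical, and retry; a sketch of a step configured to tolerate failure and retry up to three times might include:

```json
{
  "allowFailure": true,
  "isCritical": false,
  "retry": { "count": 3, "interval": 1000 }
}
```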
To create a variable out of the value of the cart ID located at the end of the location header:
Click Extract a variable from response content.
Name your variable CART_ID.
In the Response Header, select location.
In the Parsing Regex field, add a regular expression such as (?:[^\\/](?!(\\|/)))+$, which captures only the final path segment of the header value, that is, the cart ID.
Click Save Variable.
When you’re done creating this test step, click Save Step.
Get a product
In Define another step, click Add Another Step. By default, you can create up to ten steps.
Add a name to your step, for example: Get a product.
Specify the HTTP method and the URL you want to query. Here, you can add: GET and https://api.shopist.io/products.json.
Click Test URL. This retrieves a list of products available in the Shopist application.
Leave the default assertions or modify them.
Optionally, define execution parameters. In this example, no specific execution parameter is needed.
To create a variable out of the product ID located in the response body:
Click Extract a variable from response content.
Name your variable PRODUCT_ID.
Click the Response Body tab.
Click the $oid key of any product to generate a JSON path such as $[0].id['$oid'].
Click Save Variable.
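For reference, each product in the products.json response nests its identifier under an $oid key, roughly as in the sketch below (field names other than id and $oid, and all values, are illustrative), which is why the generated JSON path reads $[0].id['$oid']:

```json
[
  {
    "id": { "$oid": "5e31f38ce7d4f93d7f237d7f" },
    "name": "Sturdy Table",
    "price": 300
  }
]
```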
When you’re done creating this test step, click Save Step.
Add product to cart
Click Add Another Step to add the final step, the addition of a product into your cart.
Add a name to your step, for example: Add product to cart.
Specify the HTTP method and the URL you want to query. Here, you can add: POST and https://api.shopist.io/add_item.json.
In the Request Body tab, choose the application/json body type and insert a payload that re-injects the variables extracted in the previous steps. The sketch below shows the general shape (field names other than the two variable references are assumptions about the Shopist demo API; the key point is referencing {{ CART_ID }} and {{ PRODUCT_ID }}):
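```json
{
  "cart_id": "{{ CART_ID }}",
  "product_id": "{{ PRODUCT_ID }}",
  "amount_paid": 500,
  "city": "New York"
}
```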
Click Test URL. This adds the product you extracted in Step 2 to the cart you created in Step 1 and returns a checkout URL.
In Add assertions (optional), click Response Body and click the url key to have your test assert that the journey finished with a response containing the checkout URL.
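If the step succeeds, the response body contains the checkout URL, along these lines (structure and value illustrative):

```json
{
  "url": "https://www.shopist.io/checkout_success"
}
```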
No execution parameters or variable extractions are needed in this last step.
When you’re done creating this test step, click Save Step.
You can then configure the rest of your test, such as its frequency and alert conditions, as well as the test monitor. When you’re ready to save your test configuration and monitor, click Create.
Explore your test results

The API test and Multistep API test detail pages display an overview of the test configuration, the global uptime associated with the tested endpoints by location, graphs about response time and network timings, and a list of test results and events.
To troubleshoot a failed test, scroll down to Test Results and click on a failing test result. Review failed assertions and response details such as status code, response time, and associated headers and body to diagnose the issue.
With Datadog’s APM integration with Synthetic Monitoring, you can access the root cause of a failed test run by looking at the trace generated from that run in the Traces tab.