Index
- Version
- Overview
- Code Samples
- Request Headers
- Authentication
- User Accounts: API Access and Data Permissions
- Response Codes
- The Response Object
- API Endpoints
- Linked Resources
- Named Fields
- Field Formulas
Examples and Sample Code
- Users
- Devices (cdt_device)
- The cdt_device Object
- Example: Configuring a device as ping-only
- Example: Returning details on all ping-only devices
- Example: Requesting ping metrics for a device
- Example: Return ping availability percentage for a group of devices, during business hours last month
- Example: Deleting multiple devices via IP address
- Interfaces (cdt_port)
- The cdt_port Object
- Example: Turning off an interface
- Example: Requesting timeseries data from all interfaces on a device
- Example: Return the 10 most congested interfaces (inbound) over the last 30mins
- Example: Return the 10 busiest interfaces, according to their 90th percentile values, over the last hour
- Example: Return the inbound traffic 90th percentile for the last hour
- Example: Return traffic rate metrics for an interface, during business hours on a specific date
- Groups
- Using ‘group_by’ for Data Aggregation
- Syslog
- Device and Interface Event Records (event_record)
Data Dictionary
- Resource-Level Endpoints Reference
- Timeseries Data: Stats, Formats, Aggregation Formats and Options
- Event Formats
Version
This guide is specific to the latest version of the Statseeker RESTful API, available with Statseeker version 25.2.
Overview
The Statseeker API (Application Programming Interface) provides access to view and edit Statseeker configuration settings, as well as to retrieve and manipulate timeseries data via an interface other than that provided by Statseeker’s web GUI (Graphical User Interface). Most of the processes and objects that you are used to interacting with in Statseeker (network discovery, users, groups, devices, interfaces, device events, thresholds, reporting data, etc.) are exposed as resources from the API.
The API adheres to basic RESTful API principles:
- Implements resource-oriented URLs
- Uses HTTP response codes to indicate API errors
- Uses HTTP authentication – Basic & JSON Web Token (JWT)
- Accepts HTTP verbs (GET, PUT, POST, and DELETE)
Code Samples – URL encoding and nested quotes
URL Encoding
While some implementations will incorporate libraries that handle any required URL encoding, others may not, so the sample code throughout this guide demonstrates the use of both Unicode and URL (percent) encoding. You may need to modify this aspect of the samples to suit the requirements of your environment.
Nested Quotes
All cURL examples provided in this guide are formatted for a Unix-based system, utilizing both single quotes (‘) and double quotes (“) for nested data objects. A Windows-based cURL implementation may not allow the use of single quotes; in this instance, use double quotes and escape any inner pairs with a backslash (\).
Example:
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X POST \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d '{"data":[{"name":"Group1"}]}'
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X POST \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d "{\"data\":[{\"name\":\"Group1\"}]}"
URL Encoding Nested Quotes
Some code samples use nested quotes, and (depending on your development environment) you may need to encode inner quote pairs so that they are interpreted correctly. In some examples, we have URL-encoded inner single-quotes and they are presented as %27. In other examples, we have used the Unicode \u0027 for the same purpose – the variation is simply to highlight that there may be specific encoding requirements depending on your environment.
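For example, Python's standard library can produce the percent-encoded form used in some of the samples; the following is a minimal sketch (the field name and filter value are illustrative, not taken from a specific example) rather than an excerpt from the guide's own code:
# encode a filter value containing nested single quotes before adding it to a query string
from urllib.parse import quote

raw_filter = "='off'"                 # the raw SQL-style filter value
encoded = quote(raw_filter, safe='')  # percent-encode: ' becomes %27, = becomes %3D
print(encoded)                        # %3D%27off%27

# append the encoded value as a field filter (illustrative field name)
query = 'api/v2.1/cdt_device/?fields=name,snmp_poll&snmp_poll_filter=' + encoded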
Request Headers
All requests should include the following HTTP headers:
Header | Mandatory | Value | Notes |
Accept | no | application/json | This specifies the media type accepted in the response |
Content-Type | no | application/json | This specifies the media type of the content being sent to the API |
Authorization | yes | Bearer {authentication_token} | The authentication token returned from a POST request to the Statseeker authentication end-point (/ss-auth) |
Authorization | yes | Basic {base-64 encoded username:password} | (Deprecated) The Basic keyword indicates the authentication method being used and the base-64 encoded username:password pair provides the credentials needed to access the API. See Authentication for further details. |
Authentication
Statseeker uses token-based authentication for user access to the server. With a default configuration, this includes API access. The API has an endpoint which handles authentication requests; successful authentication returns an access token which must be included with all subsequent API requests.
The API can also be set to use Basic Access Authentication instead of token authentication. This setting only applies to authentication with the API and does not impact user authentication via the web interface (GUI).
Set the API Authentication Method
To specify the API authentication method used:
- Select Admin Tool > Statseeker Administration > Web Server Configuration
- Click Edit (top-right)
- Set RESTful API Authentication Mode as needed and click Save
See Web Server Configuration for more information.
Token-Based Authentication
The Statseeker server has a resource, located at https://your.statseeker.server/ss-auth, which handles authentication requests.
- Submit a POST request to https://your.statseeker.server/ss-auth, containing the username and password of an authorized user account (see User Accounts: API Access and Data Permissions) in the body of the request
- Successful authentication will return an access token which must be included with all subsequent requests
The following sample code demonstrates this process:
curl \
-D - \
-X POST \
-H "Content-type: application/x-www-form-urlencoded" \
-d "user=user_name&password=user_password" \
"https://your.statseeker.server/ss-auth"
A subsequent request to access the API:
curl \
-X GET \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
"https://your.statseeker.server/api/v2.1"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# request auth token
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
# If auth was successful, include my token and make my API request
if resp.status_code == 200:
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'your_api_user'
pword = 'your_user_password'
# API root endpoint
query = 'api/v2.1'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
# output the response to my API request
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
Example: Token-based Auth with a Fallback to Basic Auth
In this example we attempt token authentication and, if this process fails, we fall back to basic authentication.
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
Basic Authentication (Deprecated)
The Statseeker API uses the same HTTP Basic Access (BA) authentication used by the Statseeker web interface (i.e. username:password).
This basic authentication must be included in the HTTP Authorization header for every request sent to the API, see below for examples of supplying this authorization.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-u user:pword \
"https://your.statseeker.server/api/v2.1"
# import requests for handling connection and encoding
import requests, json
# specify the api endpoint to be used
url = "https://your.statseeker.server/api/v2.1/"
# credentials
user = "username"
pword = "password"
# headers
headers = {"Accept":"application/json", "Content-Type":"application/json"}
# send request
r = requests.get(url, headers=headers, auth=(user, pword))
print(r.status_code)
print(r.json())
User Accounts: API Access and Data Permissions
In addition to authentication, API access requires that the user account initiating the communication has been assigned permission to interact with the API. The default admin user account (created during the Statseeker installation process) is configured with API read/write access by default, but API access must be assigned when creating all other user accounts:
- Via the web interface, by setting the API Access field, see User Accounts for details
- Via the API, by setting api_access = rw for read-write access (alternatively r or w for read-only and write-only access respectively)
Independent of a user account’s ability to access the API are the account’s data access permissions, which are used to enable/restrict access to specific elements of your network infrastructure. These permissions are managed through user-group associations, typically configured during user account creation. If you are not familiar with this process, see Users & Grouping.
You may want to consider creating a dedicated API user account:
- Assign the account API read/write access
- To enable access to all data, assign the account ‘All Groups’ permission (see Users & Grouping for details)
- Reference this account in all API scripts
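As a sketch of that setup (the account name and password are placeholders; the name, password, and api fields follow the /api/v2.1/user/describe output shown later in this guide, and the rw value follows the note above, so confirm both against your server), the dedicated account can be created via the API with read/write API access. The ‘All Groups’ permission is then assigned through user-group associations as described in Users & Grouping.
# create a dedicated API user account with read/write API access
import requests, json

url = "https://your.statseeker.server/api/v2.1/user/?indent=3"
headers = {
    "Accept": "application/json",
    "Content-Type": "application/json",
    "Authorization": "Bearer <YOUR_AUTH_TOKEN>",
}
# "name" is required; "api" holds the API access permission
payload = json.dumps({"data": [{"name": "api_user", "password": "user_password", "api": "rw"}]})
resp = requests.post(url, headers=headers, data=payload, verify=False)
print(resp.status_code, resp.json())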
Response Codes
Statseeker uses standard HTTP response codes to indicate the success or failure of API requests.
Code | Message | Notes |
200 | Success | Successful API request. Note: when referencing deprecated objects/endpoints the response will return a Status: 200 OK, but the response header will contain a Warning 299 – Deprecated API. In this instance, refer to the Resource Reference for the resource being requested. |
400 | Bad Request | Malformed request, check request syntax and supplied parameters |
401 | Unauthorized | Authorization error, check the supplied username and password |
500 | Server Error | Something has gone wrong on Statseeker’s end |
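As noted for the 200 response, deprecation is flagged in the response headers rather than the status code. A minimal sketch of checking for it (the resource name here is illustrative):
import requests

headers = {"Accept": "application/json", "Authorization": "Bearer <YOUR_AUTH_TOKEN>"}
resp = requests.get("https://your.statseeker.server/api/v2.1/some_resource/", headers=headers, verify=False)
if resp.status_code == 200 and "Warning" in resp.headers:
    # e.g. a 299 Deprecated API warning; check the Resource Reference for the replacement
    print("Deprecation warning:", resp.headers["Warning"])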
Example Request: we are requesting a user with an ID of not_a_valid_user_id.
https://your.statseeker.server/api/v2.1/user/not_a_valid_user_id/?links=none
Example Response: the response returns a data object containing all the data retrieved for the specified user; in this case an empty array, with a corresponding data_total of 0.
{
"version": "2.1",
"revision": "16",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1575850381,
"objects": [
{
"type": "user",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 0,
"data": []
}
]
},
"links": []
}
The Response Object
The API responds to all requests with a JSON object containing the following key:value pairs:
- “version” – the API version (string); this only changes when major architectural changes to the API are released, e.g. “2.1”
- “revision” – the API revision (string), typically incremented with every Statseeker release; the data available, and the format in which that data is returned, may change between revisions, e.g. “16”
- “info” – identifies the service providing the response, i.e. “The Statseeker RESTful API”
- “data” – an object containing the data returned in response to the request
- “links” – an object containing links (API requests) to data related to the content of the “data” node. This node:
- Also contains pagination links where necessary, see The ‘limit’ Parameter & Pagination for details
- Can be excluded from the response by setting links=none in the request
Disambiguation: Links – there are multiple references to links:
- The “links” node in the response object
- The links parameter which can be set to hide the “links” node in the response object
- Resource links – a defined relationship between resources allowing requests performed against one resource to return data from a linked resource. E.g. a resource link exists between interfaces (cdt_port) and devices (cdt_device), allowing requests made against the interface to return data from the parent device.
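To illustrate the structure in practice, here is a minimal sketch (assuming resp is a successful response from one of the GET examples in this guide) of unpacking the response object down to the result rows:
body = resp.json()
print(body["version"], body["revision"])     # API version and revision strings
obj = body["data"]["objects"][0]             # the first object block in the data node
print(obj["type"], obj["data_total"])        # resource type and number of rows returned
for row in obj["data"]:                      # the result rows themselves
    print(row)
# body["links"] holds related requests and any pagination links (omit with links=none)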
The ‘limit’ Parameter & Pagination
The API automatically limits the response when large data sets are returned; by default, the API returns 50 results per response.
The number of results returned in a single response can be modified from this default by setting the limit parameter on the request.
The following example HTTP request will return 200 results per response:
https://your.statseeker.server/api/v2.1/cdt_port/?limit=200
When a request returns more rows than the defined limit, the links object in the response contains pagination links (next, previous, etc.) in which the request URL is altered to append an offset request parameter, allowing you to page through the data.
Following on from the example above, with limit=200 and setting offset=100, the request will return the 101st to 300th result:
https://your.statseeker.server/api/v2.1/cdt_port/?limit=200&offset=100
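A minimal sketch of paging through an entire result set this way (assuming a headers dictionary carrying the Authorization header described above; the resource and field are illustrative):
import requests

server = "your.statseeker.server"
limit, offset, rows = 200, 0, []
while True:
    url = f"https://{server}/api/v2.1/cdt_port/?fields=name&limit={limit}&offset={offset}&links=none"
    resp = requests.get(url, headers=headers, verify=False)
    page = resp.json()["data"]["objects"][0]["data"]
    rows.extend(page)
    if len(page) < limit:   # a short page means there are no further results
        break
    offset += limit
print(len(rows), "interfaces retrieved")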
Response Size Limits
The API has configurable limits on the size of the data being returned from any single API query. The default value of these limits are:
- Event-based data (syslog, threshold events) is limited to 1 million rows per request
- Timeseries data is limited to 1GB of data returned per request
These default values are intentionally set very high and should not impact typical use of the API; they exist to prevent a single API query from consuming enough system resources to severely impact the performance of your Statseeker server. The limit values can be modified to suit your Statseeker server resources; please contact Statseeker Technical Support for assistance.
API Endpoints
The API allows you to add, retrieve, and modify data relating to a large number of data-object/resource types. These resources are related to either the network hardware that your Statseeker server is monitoring, or various configuration settings (user accounts, thresholds and alerts, automated processes, network configuration details, etc.) applied to your Statseeker server.
Most APIs feature a static selection of object-focused endpoints, which are available to everyone. The Statseeker API is a little different: while many resources are available through the default installation, some are available as modular Custom Data Type (CDT) packages that can be added and removed as needed. This feature, combined with the fact that Statseeker doesn’t force users to upgrade their installation, means that the resource-level endpoints available on one Statseeker deployment may be quite different from those available on another.
- For information on installing CDT packages, see Custom Data Types
The remainder of this section provides a basic overview of the types of endpoints available through the API and the types of requests that can be made against those endpoint types. These endpoint types and their available actions are universal across deployments, but the data objects available at the resource-level may differ between Statseeker installations.
Endpoint | URL | Description |
Statseeker Endpoints |  |  |
Authentication | /ss-auth (https://your.statseeker.server/ss-auth) | The authentication endpoint authenticates users and generates authentication tokens |
API Endpoints |  |  |
API Root | /api/{api_version}/ (https://your.statseeker.server/api/v2.1/) | The root endpoint for the API |
Resource | /api/{api_version}/{resource} (https://your.statseeker.server/api/v2.1/user) | The resource endpoint, for running get, add, update, and delete queries on specific resources. This is the endpoint type used to retrieve your timeseries and configuration data from your Statseeker server. |
Describe | /api/{api_version}/{resource}/describe (https://your.statseeker.server/api/v2.1/user/describe) | The describe endpoint, for running describe queries on specific resources |
Execute | /api/{api_version}/{resource}/execute (https://your.statseeker.server/api/v2.1/discover/execute) | The execute endpoint, for running execute queries on specific resources |
ID | /api/{api_version}/{resource}/{id} (https://your.statseeker.server/api/v2.1/user/1234) | The ID endpoint, for running get, update, and delete queries on a specific entry within a resource |
Field | /api/{api_version}/{resource}/{id}/{field} (https://your.statseeker.server/api/v2.1/user/1234/name) | The field endpoint, for running update queries on a field of a specific entry within a resource |
Authentication Endpoint (/ss-auth)
This endpoint is used to authenticate users and generate authentication tokens for use in API requests. For details on this endpoint and examples of authenticating users, see Authentication.
Root Endpoint (/api/v2.1)
The base (root) endpoint for the API is:
- https://your.statseeker.server/api/[api_version], where:
- https://your.statseeker.server is the URL of your Statseeker server
- [api_version] is one of the accepted version references, see below
Version References | Description |
v2, v2.0 | Version 2 of the API, the initial implementation of the Statseeker RESTful Read-Write API |
v2.1 | Version 2.1 of the API, added access to all timeseries data (device\interface metrics) collected by Statseeker |
latest | Currently, Version 2.1 of the API |
Versioning
The base endpoint of /api/latest will return the latest version of the API installed on your Statseeker server. This version is identified in every response from the API in the version key.
{
"info": "The Statseeker RESTful API",
"version": "2.1",
"data": {response_data},
}
You can access a specific API version by altering your base URL to specify the version you want to use.
E.g. /api/v2 will return API v2.0.
There may be multiple revisions released for a given API version; these revisions address issues relating to bugs within the API itself, and issues relating to Custom Data Type (CDT) packages that the API can interact with.
For more information on the relationship between the API and CDT packages, see API Endpoints.
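To confirm which version and revision a script is talking to, the version and revision keys can be read from the root endpoint response; a minimal sketch, assuming a headers dictionary carrying the Authorization header described above:
import requests

resp = requests.get("https://your.statseeker.server/api/latest/?links=none", headers=headers, verify=False)
print(resp.json()["version"], resp.json()["revision"])   # e.g. 2.1 16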
GET
A GET request to the root endpoint will return all resource endpoints available on your Statseeker server.
The following parameters may be passed when sending a GET request:
Parameters | Type/Valid Values | Description |
links |  | Modifies how the links contained within the response object are presented |
indent | positive integer | Number of characters to indent when formatting code blocks for display |
Resource Endpoints (/api/v2.1/{resource})
The resource-level endpoints are where you will spend the vast majority of your time when interacting with the Statseeker API. In order to retrieve reporting data on your network infrastructure, you will be routinely sending GET requests to resource-level endpoints. These requests will contain:
- The resource-level endpoint to specify what type of data objects you are reporting on (device, interface, CPU, memory, etc.)
- filters to specify which of these data objects to return
- fields and formats parameters to specify what data is to be returned (average outbound traffic, maximum aggregated CPU load, etc.)
- timefilters to specify the reporting period
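As a sketch of how those components combine into a single request (the group name, field, and timefilter values here are illustrative; the supported timefilter syntax is covered under Timeseries Data: Stats, Formats, Aggregation Formats and Options), a resource-level GET can be built up in the same query-building style used by the examples later in this guide:
from urllib.parse import quote

query = 'api/v2.1'
query += '/cdt_port'                            # resource-level endpoint: interfaces
query += '/?fields=name,RxUtil'                 # what data to return
query += '&groups=Routers'                      # which objects to report on
query += '&RxUtil_formats=avg,max'              # formats for the timeseries field
query += '&RxUtil_timefilter=' + quote('range = now - 1h to now')   # the reporting period
# pass query to the do_request() helper defined in the Authentication examples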
The Statseeker RESTful API contains many resources allowing you to access and interact with both the data that Statseeker has collected on your network, and the configuration options used by Statseeker to collect and manage that data. Some of the object types available at the resource-level endpoint include:
- Users (/api/v2.1/user)
- Groups (/api/v2.1/group)
- Devices (/api/v2.1/cdt_device)
- Interfaces (/api/v2.1/cdt_port)
As of Statseeker v5.6.2, the API contained 332 resource-level endpoints; all aspects of these endpoints are detailed in the Resource Reference. This list can be expanded upon by adding CDT packages and additional modules for new data types. For a complete list of all resources available from your Statseeker installation:
- Send a GET request to the root endpoint – this will return a list of all available resource-level endpoints
- Refer to the Resource Reference for details on each of those resources
GET
Send a GET request to a resource endpoint to return details on instances of the specified resource.
The following parameters may be passed when sending a GET request:
Parameters | Type/Valid Values | Description |
fields | A comma-separated list of field names, e.g. fields=id,name,location | The list of fields that will be returned in the response. This parameter will be ignored if fields_adv is also specified. |
fields_adv | A JSON string detailing the fields to be returned, e.g. fields_adv={"Device Name":{"field":"name"},"IP":{"field":"ipaddress"},"Location":{"field":"sysLocation"},"SNMP Polling":{"field":"snmp_poll","filter":{"query":"=%27off%27"}}} | The list of fields that will be returned in the response |
filter | An SQL filter string, e.g. "SNMP Polling":{"field":"snmp_poll","filter":{"query":"=%27off%27"}} | A filter to be applied to the response data inside of fields_adv |
groups | A comma-separated list of group names or group IDs | A list of groups to be used to filter the response data; only data associated with members of the specified groups will be returned |
grouping_mode |  | The mode to use when processing multiple groups |
group_by |  | Use when you want to aggregate data across multiple entities, such as total traffic across multiple interfaces. For examples of using group_by, see Using ‘group_by’ for Data Aggregation. When grouping by groups, an extra group with the name/ID None/null may be returned, containing any rows that weren’t in any of the groups. |
interval | positive integer, e.g. interval=300 | The polling interval to be applied to all timeseries metrics specified in the fields parameter. When not specified, the default polling interval of 60 seconds is used. |
limit | positive integer, e.g. limit=100 | The number of items to return per ‘page’ of the response, see Pagination for details. The API will automatically paginate response data with a default limit=50. |
offset | positive integer, e.g. offset=100 | The number of result items to skip, see Pagination for details. The API will automatically paginate response data with a default offset={limit}. |
precision | positive integer, e.g. precision=5 | The number of decimal places to return for all timeseries metrics specified in the fields parameter. The default precision value varies between fields; supply this parameter when you want to enforce a specific level of precision across all timeseries fields. |
sortmode | string | Specify how the response should handle null values. Default = novals_small; this setting is used in all instances where sortmode is not set. |
timefmt | string; the syntax for specifying the time format matches that used by STRFTIME, e.g. timefmt=%A %H:%M:%S (%y-%m-%d) will return a timestamp in the format Wednesday 15:44:48 (19-07-17) | The format in which to return timestamps when value_time is set |
value_time | One of: none, start, mid, end, all; e.g. value_time=all | Optionally return a timestamp for returned timeseries data points. Timestamps are returned in epoch time unless timefmt is also specified. |
When the fields parameter has been specified, the following additional parameters may be used.
Parameters | Type/Valid Values | Description |
<USER_DEFINED_FIELD>_{field} | string | A user-defined variable name followed by a valid field name for the resource, e.g. yesterday_RxUtil indicates that the field yesterday is an instance of RxUtil. See Named Fields for more information and examples. |
{field}_formats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The formats to request from the API for the given field, required for timeseries data fields. Note: a global formats key (formats=) may also be used to apply the same values to all timeseries metrics specified in fields. |
{field}_formula | string | Specify an SQL-style formula to be used to populate the field; see Field Formulas for details, including the syntax for referencing other fields within the formula. |
{field}_filter | string | The filter to apply to the given field. These filters can be either SQL-style filters or extended RegEx filters; for details and examples of applying filters, see the worked examples later in this guide. |
{field}_filter_format | string | The format to use for the filter, required for timeseries data fields |
{field}_interval | integer | The polling interval (in seconds) to use for the specified field. When used, a field-specific timefilter for the specified field must also be used. Note: when not specified, the default interval of 60 seconds is used. A global interval key (interval=) may also be used to apply the same values to all timeseries metrics specified in fields. |
{field}_link | string | Required in certain circumstances; see Multiple Links to the same resource for details. |
{field}_object | string | Required in certain circumstances; see Multiple Links to the same resource for details. |
{field}_post_filter | string | The filter to apply to the given field after data aggregation has occurred. This parameter works exactly the same as {field}_filter, except the filter is applied after data aggregation. |
{field}_post_formula | string | The formula to apply to the given field after data aggregation has occurred. This parameter works exactly the same as {field}_formula, except the formula is applied after data aggregation. |
{field}_precision | positive integer, e.g. {field}_precision=5 | The number of decimal places to return for the specified timeseries metric. The default precision value varies between fields; supply this parameter when you want to enforce a specific level of precision. Note: a global precision key (precision=) may also be used to apply the same value to all timeseries metrics specified in fields. |
{field}_timefilter | string | The timefilter to use for the given field, required for timeseries data fields. Note: a global timefilter key (timefilter=) may also be used to apply the same values to all timeseries metrics specified in fields. |
{field}_tz | string | An alternate timezone to use for the {field}_timefilter. All timefilters use the Statseeker server’s timezone unless an override is specified by supplying {field}_tz. Note: a global timezone key (tz=) may also be used to apply the same values to all timeseries metrics specified in fields. |
{field}_sort | Comma-separated list | List specifying the sort hierarchy and direction, in the format {field}_sort={rank},{direction} and, for timeseries data, {field}_sort={rank},{direction},{format}. E.g. name_sort=1,asc or RxUtil_sort=1,desc,avg |
{field}_stats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The stats to use for the given field |
{field}_aggregation_format | One of the aggregation formats listed under Timeseries Data: Stats, Formats, Aggregation Formats and Options | The aggregation format to use for the specified field when aggregating data |
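To illustrate a few of these per-field parameters together (the field and timefilter values are illustrative), the following builds on the query-building style used in the examples below, sorting interfaces by their average inbound utilization and limiting both the precision and the number of rows returned:
from urllib.parse import quote

query = 'api/v2.1/cdt_port'
query += '/?fields=name,RxUtil'
query += '&RxUtil_formats=avg'
query += '&RxUtil_timefilter=' + quote('range = now - 1h to now')
query += '&RxUtil_sort=1,desc,avg'     # sort by the avg format, highest first
query += '&RxUtil_precision=2'         # two decimal places for this metric
query += '&limit=10'                   # return the top 10 rows only
# pass query to the do_request() helper defined in the Authentication examples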
GET Example: Retrieving Details on Multiple Devices
Return specified details on all devices in a specified group; the fields being returned are:
- name
- id
- community
- ipaddress
- snmp_version
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/?fields=name,id,community,ipaddress,snmp_version&groups=<GROUP NAME>&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# specify fields to be returned
query += '/?fields=name,id,community,ipaddress,snmp_version'
# Filters
query += '&groups=<GROUP NAME>'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"objects": [
{
"status": {
"errcode": 0,
"success": true
},
"data": [
{
"snmp_version": 2,
"ipaddress": "10.116.4.172",
"name": "Brisbane-Server1",
"community": "public",
"id": 551
},
{
"snmp_version": 2,
"ipaddress": "10.116.4.180",
"name": "Brisbane-Server2",
"community": "public",
"id": 552
},
{
"snmp_version": 2,
"ipaddress": "10.116.4.186",
"name": "Sydney-Server1",
"community": "public",
"id": 555
},
{
"snmp_version": 2,
"ipaddress": "10.116.4.187",
"name": "Sydney-Server2",
"community": "public",
"id": 556
},
{
"snmp_version": 2,
"ipaddress": "10.116.4.194",
"name": "Melbourne-Server1",
"community": "public",
"id": 557
}
],
"type": "cdt_device",
"data_total": 5
}
],
"errmsg": "ok",
"success": true,
"time": 1496188135
},
"links": [],
"api_version": "2.1"
}
POST
Use the POST request to create new instances of the specified resource. The data object included in a POST request must be a JSON string with a single data key. Some resource types will require additional keys within the data object; use the /describe endpoint to view the requirements for a given resource.
Example: Creating Multiple Groups
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X POST \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d '{"data":[{"name":"<GROUP NAME 1>"},{"name":"<GROUP NAME 2>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.post(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.post(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"name":"<GROUP NAME 1>"},{"name":"<GROUP NAME 2>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"objects": [
{
"status": {
"errcode": 0,
"success": true
},
"data": [
{
"name": "<GROUP NAME 1>",
"id": 46458
},
{
"name": "<GROUP NAME 2>",
"id": 46459
}
],
"type": "group",
"data_total": 2
}
],
"errmsg": "ok",
"success": true,
"time": 1496190459
},
"api_version": "2.1"
}
PUT
Use the PUT request to update an existing instance of the specified resource. The data object (request payload) included must be a JSON string with both a fields key (to identify the resource/s to be updated) and a data key (to specify the data to be updated). Some resource types will require additional keys within the data object; use the /describe endpoint to view the requirements for a given resource.
Example: Updating a Group Name
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d '{"fields":{"name":{"field":"name","filter":{"query":"=\u0027<OLD GROUP NAME>\u0027"}}},"data":[{"name":"<NEW GROUP NAME>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"fields":{"name":{"field":"name","filter":{"query":"='<OLD GROUP NAME>'"}}},"data":[{"name":"<NEW GROUP NAME>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"errmsg": "ok",
"success": true,
"time": 1496190511
},
"api_version": "2.1"
}
DELETE
Use the DELETE request to delete an existing instance of the specified resource. The data object included must be a JSON string with a single fields key. Some resource types will require additional keys within the data object; use the /describe endpoint to view the requirements for a given resource.
Example: Deleting Multiple Groups
We will be deleting all groups with a name matching a specified filter string.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X DELETE \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d '{"fields":{"name":{"field":"name","filter":{"query":"LIKE \u0027<FILTER STRING>\u0027"}}}}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.delete(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.delete(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"fields":{"name":{"field":"name","filter":{"query":"LIKE '<FILTER STRING>'"}}}})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623638287
}
}
The Describe Endpoint (/api/v2.1/{resource}/describe)
The /describe endpoint is particularly useful, as it allows you to query a specified resource for details on that resource.
The response from a GET request targeting the {resource}/describe endpoint will contain:
- Resource name, title and description
- All fields pertaining to the resource
- All data formats that can be applied to each field
- All request methods that can be applied to the resource
- All fields and data that can be used with each request method
- Details on all links between the selected resource and any other resources
- Whether or not the resource allows the use of custom formula fields
- An info block presenting data structure details, such as resource inheritance relationships and data functionality options (e.g. whether or not the resource can be grouped or reported on)
The various HTTP request types are mapped to commands within the API.
HTTP Request Method | API Command |
GET | get |
POST | add |
PUT | update |
DELETE | delete |
The /describe endpoint can be applied to any resources returned from a GET request applied to the root endpoint.
- https://your.statseeker.server/api/v2.1/ will return all resources available to the Statseeker installation
- https://your.statseeker.server/api/v2.1/{some_resource}/describe will return details on the resource
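A minimal sketch that ties this to the command mapping above (assuming a headers dictionary carrying the Authorization header described earlier): query a resource's /describe endpoint and list the commands it supports, along with the fields accepted by its get command.
import requests

resp = requests.get("https://your.statseeker.server/api/v2.1/user/describe/?links=none",
                    headers=headers, verify=False)
desc = resp.json()["data"]["objects"][0]
print("commands:", sorted(desc["commands"]))                          # add, delete, describe, get, update
print("get fields:", sorted(desc["commands"]["get"]["valid_fields"])) # fields accepted by a GET request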
GET
The following parameters may be passed when sending a GET request:
Parameters | Type/Valid Values | Description |
links |  | Modifies how the links contained within the response object are presented |
indent | positive integer | Number of characters to indent when formatting code blocks for display |
Example: Requesting a /describe on the User Resource
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server1/api/v2.1/user/describe/?indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/user/describe'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1646798321,
"objects": [
{
"title": "User",
"type": "user",
"description": "Statseeker Users",
"commands": {
"describe": {
"valid_fields": null,
"valid_data": null
},
"add": {
"valid_data": {
"name": {
"required": true
},
"tz": {
"required": false
},
"top_n": {
"required": false
},
"email": {
"required": false
},
"reportRowSpacing": {
"required": false
},
"exportDateFormat": {
"required": false
},
"auth_ttl": {
"required": false
},
"api": {
"required": false
},
"auth": {
"required": false
},
"auth_refresh": {
"required": false
},
"password": {
"required": false
},
"is_admin": {
"required": false
}
},
"valid_fields": null
},
"update": {
"valid_fields": {
"auth": {
"required": false
},
"auth_refresh": {
"required": false
},
"id": {
"required": false
},
"is_admin": {
"required": false
},
"password": {
"required": false
},
"name": {
"required": false
},
"tz": {
"required": false
},
"top_n": {
"required": false
},
"email": {
"required": false
},
"reportRowSpacing": {
"required": false
},
"api": {
"required": false
},
"auth_ttl": {
"required": false
},
"exportDateFormat": {
"required": false
}
},
"valid_data": {
"reportRowSpacing": {
"required": false
},
"api": {
"required": false
},
"exportDateFormat": {
"required": false
},
"auth_ttl": {
"required": false
},
"tz": {
"required": false
},
"top_n": {
"required": false
},
"email": {
"required": false
},
"is_admin": {
"required": false
},
"password": {
"required": false
},
"auth_refresh": {
"required": false
},
"auth": {
"required": false
}
}
},
"get": {
"valid_fields": {
"email": {
"required": false
},
"tz": {
"required": false
},
"top_n": {
"required": false
},
"name": {
"required": false
},
"api": {
"required": false
},
"auth_ttl": {
"required": false
},
"exportDateFormat": {
"required": false
},
"reportRowSpacing": {
"required": false
},
"auth_refresh": {
"required": false
},
"auth": {
"required": false
},
"is_admin": {
"required": false
},
"password": {
"required": false
},
"id": {
"required": false
}
},
"valid_data": null
},
"delete": {
"valid_fields": {
"email": {
"required": false
},
"name": {
"required": false
},
"tz": {
"required": false
},
"top_n": {
"required": false
},
"exportDateFormat": {
"required": false
},
"auth_ttl": {
"required": false
},
"api": {
"required": false
},
"reportRowSpacing": {
"required": false
},
"auth": {
"required": false
},
"auth_refresh": {
"required": false
},
"id": {
"required": false
},
"password": {
"required": false
},
"is_admin": {
"required": false
}
},
"valid_data": null
}
},
"fields": {
"email": {
"title": "Email",
"description": "User email address",
"datatype": "string"
},
"name": {
"description": "User name",
"datatype": "string",
"title": "Name"
},
"tz": {
"description": "User time zone",
"datatype": "string",
"title": "Time Zone"
},
"top_n": {
"description": "The default Top N number for the user",
"datatype": "integer",
"title": "Top N"
},
"exportDateFormat": {
"description": "User specified Date Format",
"datatype": "string",
"title": "Date Format"
},
"auth_ttl": {
"description": "The TTL for authentication tokens (in seconds)",
"datatype": "integer",
"title": "Authentication TTL"
},
"api": {
"description": "User API access permission",
"datatype": "string",
"title": "API Access"
},
"reportRowSpacing": {
"datatype": "string",
"description": "The report row spacing preference for the user",
"title": "Report Row Spacing"
},
"auth": {
"datatype": "string",
"description": "User authentication method",
"title": "Authentication method"
},
"auth_refresh": {
"description": "The time allowed after a token has expired that it will be refreshed (in seconds)",
"datatype": "integer",
"title": "Authentication Refresh"
},
"id": {
"title": "ID",
"description": "User Identifier",
"datatype": "integer"
},
"password": {
"title": "Password",
"description": "User password",
"datatype": "string"
},
"is_admin": {
"title": "Is Admin",
"datatype": "integer",
"description": "Whether the user has admin access"
}
},
"info": {
"allow_grouping": 1
},
"allow_formula_fields": true,
"links": {},
"global_field_options": {
"aggregation_format": {
"description": "Aggregation formats to apply when group_by option is provided",
"values": {
"first": {
"title": "First",
"description": "First value in the group (default)"
},
"last": {
"title": "Last",
"description": "Last value in the group"
},
"avg": {
"title": "Average",
"description": "Average of the values in the group"
},
"count": {
"title": "Count",
"description": "Number on non-null values in the group"
},
"count_all": {
"title": "Count (include NULL)",
"description": "Number of values in the group (including null values)"
},
"count_unique": {
"title": "Unique count",
"description": "Number of unique non-null values in the group"
},
"count_unique_all": {
"title": "Unique count (include NULL)",
"description": "Number of unique values in the group (including null values)"
},
"cat": {
"title": "Concatenate",
"description": "Concatenation of the values in the group"
},
"list": {
"title": "List",
"description": "comma=separated concatenation of the values in the group"
},
"list_unique": {
"title": "Unique List",
"description": "comma=separated concatenation of the unique values in the group"
},
"min": {
"title": "Minimum",
"description": "Minimum of the values in the group"
},
"max": {
"title": "Maximum",
"description": "Maximum of the values in the group"
},
"sum": {
"title": "Sum",
"description": "Sum of all values in the group (null if no valid values)"
},
"total": {
"title": "Total",
"description": "Sum of all values in the group (0 if no valid values)"
},
"median": {
"title": "Median",
"description": "Median of the values in the group"
},
"95th": {
"title": "95th percentile",
"description": "95th percentile of the values in the group"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the values in the group"
}
}
}
}
}
]
},
"links": [
{
"link": "/api/v2.1/user/describe",
"rel": "self"
},
{
"link": "/api/v2.1",
"rel": "base"
},
{
"link": "/api/v2.1/user",
"rel": "collection"
}
]
}
Example: Requesting a /describe on the Device Resource (cdt_device)
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server1/api/v2.1/cdt_device/describe/?indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device/describe'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1646798484,
"objects": [
{
"type": "cdt_device",
"title": "Device",
"description": "The custom data entities for the device table",
"fields": {
"id": {
"title": "ID",
"description": "The entity identifier",
"datatype": "integer"
},
"name": {
"title": "Name",
"description": "The entity name",
"datatype": "string"
},
"deviceid": {
"title": "Device ID",
"description": "The ID of the parent device",
"datatype": "integer"
},
"idx": {
"title": "Index",
"description": "The base SNMP index for this entity",
"datatype": "string"
},
"table": {
"title": "Table",
"description": "The table to which the entity belongs",
"datatype": "string"
},
"poll": {
"title": "Poll State",
"description": "The poll state of the entity",
"datatype": "string"
},
"auth_method": {
"title": "SNMPv3 Authentication Method",
"description": "Authentication method for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"auth_pass": {
"title": "SNMPv3 Authentication Password",
"description": "Authentication password for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"auth_user": {
"title": "SNMPv3 Authentication Username",
"description": "Authentication user for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"community": {
"title": "Community",
"description": "The community string status of the device",
"polltype": "cfg",
"datatype": "string"
},
"context": {
"title": "SNMPv3 Context",
"description": "Context for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"custom_data_details": {
"title": "Custom Data Details",
"description": "A JSON string object indicating which custom entities have been found during a discovery",
"polltype": "cfg",
"datatype": "string"
},
"discover_getNext": {
"title": "Use GetNext",
"description": "Walk this device using SNMP getNext instead of getBulk",
"polltype": "cfg",
"datatype": "string"
},
"discover_minimal": {
"title": "Use Minimal Walk",
"description": "Walk this device using a minimal set of oids",
"polltype": "cfg",
"datatype": "string"
},
"discover_snmpv1": {
"title": "Use SNMPv1",
"description": "Walk this device using SNMPv1",
"polltype": "cfg",
"datatype": "string"
},
"hostname": {
"title": "Hostname",
"description": "The hostname of the device",
"polltype": "cfg",
"datatype": "string"
},
"ipaddress": {
"title": "IP Address",
"description": "The IP address of the device",
"polltype": "cfg",
"datatype": "string"
},
"latitude": {
"title": "Latitude",
"description": "The user defined latitude of the device's location",
"polltype": "cfg",
"datatype": "float"
},
"longitude": {
"title": "Longitude",
"description": "The user defined longitude of the device's location",
"polltype": "cfg",
"datatype": "float"
},
"manual_name": {
"title": "User Defined Name",
"description": "The user defined name of the device",
"polltype": "cfg",
"datatype": "string"
},
"memorySize": {
"title": "Memory Size",
"description": "The amount of physical read-write memory contained by the entity",
"polltype": "cfg",
"units": "bytes",
"symbol": "B",
"datatype": "integer"
},
"mis": {
"title": "MAC/IP/Switch Collection State",
"description": "Include this device in the MIS report calculations",
"polltype": "cfg",
"datatype": "string"
},
"ping_dup": {
"title": "Ping Duplicate",
"description": "Number of duplicate ping responses received",
"polltype": "tsc",
"interval": 3600,
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"values": [
true,
false
],
"title": "Is Cumulative",
"description": "whether to use cumulative forecasts",
"default": false
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"title": "Is Cumulative",
"values": [
true,
false
],
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"description": "whether to use cumulative trendline"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_lost1": {
"title": "Ping Lost 1",
"description": "Number of times that a single ping request is lost",
"polltype": "tsc",
"interval": 3600,
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"values": [
true,
false
],
"title": "Is Cumulative",
"description": "whether to use cumulative forecasts",
"default": false
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"title": "Is Cumulative",
"values": [
true,
false
],
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"description": "whether to use cumulative trendline"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_lost2": {
"title": "Ping Lost 2",
"description": "Number of times that two ping requests in a row have been lost",
"polltype": "tsc",
"interval": 3600,
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"values": [
true,
false
],
"title": "Is Cumulative",
"description": "whether to use cumulative forecasts",
"default": false
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"title": "Is Cumulative",
"values": [
true,
false
],
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"description": "whether to use cumulative trendline"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_lost3": {
"title": "Ping Lost 3",
"description": "Number of times that three ping requests in a row have been lost",
"polltype": "tsc",
"interval": 3600,
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"values": [
true,
false
],
"title": "Is Cumulative",
"description": "whether to use cumulative forecasts",
"default": false
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"title": "Is Cumulative",
"values": [
true,
false
],
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"description": "whether to use cumulative trendline"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_lost4": {
"title": "Ping Lost 4",
"description": "Number of times that four ping requests in a row have been lost",
"polltype": "tsc",
"interval": 3600,
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"values": [
true,
false
],
"title": "Is Cumulative",
"description": "whether to use cumulative forecasts",
"default": false
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"title": "Is Cumulative",
"values": [
true,
false
],
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"description": "whether to use cumulative trendline"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_outage": {
"title": "Ping Outage",
"description": "Number of seconds to wait before a device is considered to be down",
"polltype": "cfg",
"datatype": "integer"
},
"ping_poll": {
"title": "Ping Poll",
"description": "The ping polling status of the device",
"polltype": "cfg",
"datatype": "string"
},
"ping_rtt": {
"title": "Ping RTT",
"description": "The current ping state of the device",
"polltype": "tsg",
"interval": 60,
"units": "milliseconds",
"symbol": "ms",
"scale": "time",
"datatype": "integer",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"description": "Whether to return the sum of forecast values",
"values": [
true,
false
],
"title": "Is Cumulative"
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"description": "Whether to fit a trendline to the sum of the data values",
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"values": [
true,
false
],
"title": "Is Cumulative"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_rtt_ms": {
"title": "Ping RTT(High Precision)",
"description": "The current ping state of the device in ms to 3 decimal places",
"polltype": "tsg",
"interval": 60,
"units": "milliseconds",
"symbol": "ms",
"scale": "time",
"datatype": "float",
"options": {
"value_time": {
"description": "The timestamp for a timeseries event",
"values": {
"none": "Do not return a timestamp",
"mid": "The midpoint of the start and end timestamps",
"end": "The timestamp at which the event finished",
"start": "The timestamp at which the event began",
"all": "An array of start, mid and end"
}
},
"formats": {
"description": "The valid data formats",
"values": {
"median": {
"description": "Median of the data",
"title": "Median"
},
"trendline_daily_change": {
"title": "Trendline daily change",
"description": "Slope of the trendline (units/day)"
},
"anomaly_metric": {
"description": "Metric from -100 to 100 indicating whether requested timeseries values are unusually large or small",
"title": "Anomaly metric"
},
"trendline_upr": {
"datatype": "array",
"description": "Trendline confidence interval values (upper)",
"title": "Trendline upper confidence interval"
},
"count": {
"description": "Number of non-null data points",
"title": "Count"
},
"baseline": {
"title": "Baseline",
"description": "List of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"datatype": "array"
},
"forecast_min": {
"title": "Forecast minimum",
"description": "Lower boundary of feasible forecast value range"
},
"trendline_strength": {
"title": "Trendline strength",
"description": "Coefficient of determination (R-squared) of the trendline"
},
"forecast_boundary_time": {
"datatype": "time",
"description": "Time forecast exceeds feasible value range if before end of timefilter range",
"title": "Forecast boundary time"
},
"95th": {
"description": "95th percentile of the data",
"title": "95th percentile"
},
"forecast_daily_change": {
"description": "The average daily change of forecast values",
"title": "Forecast daily change"
},
"min": {
"description": "Minimum data value",
"title": "Minimum"
},
"forecast_max": {
"title": "Forecast maximum",
"description": "Upper boundary of feasible forecast value range"
},
"current": {
"title": "Current",
"description": "The current value"
},
"baseline_percent_compare": {
"description": "Percentage difference between data average and expected baseline calculated from historical data",
"title": "Baseline Percent Comparison"
},
"start_tz_offset": {
"datatype": "integer",
"title": "Start TZ offset",
"description": "The timezone offset of the first data point"
},
"forecast_vals": {
"datatype": "array",
"title": "Forecast values",
"description": "An array of forecast values with periodic deviations at peak and offpeak times"
},
"max": {
"title": "Maximum",
"description": "Maximum data value"
},
"trendline_lwr": {
"datatype": "array",
"description": "Trendline confidence interval values (lower)",
"title": "Trendline lower confidence interval"
},
"anomaly_strength": {
"description": "Metric from 0 to 100 indicating whether requested timeseries values are extreme or unusual",
"title": "Anomaly strength"
},
"forecast_predict_offpeak": {
"description": "Long term prediction value calculated from historical data at offpeak times",
"title": "Forecast prediction offpeak"
},
"start_time": {
"datatype": "time",
"title": "Start time",
"description": "Time of the first data point"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the data"
},
"baseline_lwr": {
"description": "List of baseline lower confidence interval values. Corresponds to -95% anomaly metric.",
"title": "Baseline Lower Bound",
"datatype": "array"
},
"trendline_percent_change": {
"description": "Trendline change scaled by the average of data over requested time filter",
"title": "Trendline percent change"
},
"vals": {
"description": "Timeseries data values",
"title": "Values",
"datatype": "array"
},
"cvals": {
"datatype": "array",
"description": "Cumulative data values",
"title": "Cumulative Values"
},
"last": {
"title": "Last",
"description": "The last value in the data"
},
"trendline_start": {
"title": "Trendline Start",
"description": "Value of the first trendline data point"
},
"forecast_fit": {
"datatype": "array",
"title": "Forecast fit",
"description": "An array of forecast values without periodic deviations at peak and offpeak times"
},
"baseline_compare": {
"description": "Difference between data average and expected baseline calculated from historical data",
"title": "Baseline Comparison"
},
"first": {
"title": "First",
"description": "The first value in the data"
},
"percentile": {
"title": "Percentile",
"description": "Custom percentile of the data"
},
"trendline_predict": {
"title": "Trendline Predict",
"description": "Prediction value from the trendline"
},
"trendline_change": {
"description": "Trendline change over the requested timefilter range",
"title": "Trendline change"
},
"baseline_upr": {
"title": "Baseline Upper Bound",
"description": "List of baseline upper confidence interval values. Corresponds to 95% anomaly metric.",
"datatype": "array"
},
"avg": {
"title": "Average",
"description": "Average of the data"
},
"baseline_avg": {
"description": "Average of expected baseline values calculated from historical data. Corresponds to 0% anomaly metric.",
"title": "Baseline Average"
},
"trendline_finish": {
"title": "Trendline Finish",
"description": "Value of the last trendline data point"
},
"total": {
"description": "Sum of the data",
"title": "Total"
},
"forecast_predict": {
"description": "Long term prediction value calculated from historical data",
"title": "Forecast prediction"
},
"forecast_predict_peak": {
"description": "Long term prediction value calculated from historical data at peak times",
"title": "Forecast prediction peak"
},
"trendline_fit": {
"description": "Trendline data values",
"title": "Trendline fit",
"datatype": "array"
}
}
},
"stats": {
"values": {
"forecast": {
"values": {
"min": {
"description": "Manually set lower boundary of feasible forecast value range",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"title": "Minimum"
},
"data_range": {
"title": "Baseline History",
"default": "range = now - 180d to now",
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"description": "Timefilter for the historical data range used to calculate forecasts. Default is last 180 days.",
"values": null
},
"predict_time": {
"values": null,
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak"
],
"description": "Time to use for the forecast_predict, forecast_predict_peak or forecast_predict_offpeak values (e.g. now + 10d)",
"title": "Predict Time"
},
"max": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_slope",
"forecast_fit",
"forecast_vals"
],
"values": null,
"description": "Manually set upper boundary of feasible forecast value range",
"title": "Maximum"
},
"is_cumulative": {
"valid_formats": [
"forecast_predict",
"forecast_predict_peak",
"forecast_predict_offpeak",
"forecast_min",
"forecast_max",
"forecast_boundary_time",
"forecast_daily_change",
"forecast_fit",
"forecast_vals"
],
"description": "Whether to return the sum of forecast values",
"values": [
true,
false
],
"title": "Is Cumulative"
}
},
"description": "Forecast related input options"
},
"percentile": {
"title": "Percentile",
"values": null,
"valid_formats": [
"percentile"
],
"description": "The value to use for the percentile format (between 0 and 100)"
},
"baseline": {
"description": "Baseline and anomaly metric related input options",
"values": {
"percentile": {
"description": "Baseline percentile to output",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline"
],
"default": 50,
"values": null,
"title": "Percentile"
},
"data_range": {
"title": "Baseline History",
"description": "Timefilter for the historical data range used to calculate baseline and anomaly metric. Default is last 180 days.",
"valid_formats": [
"baseline_avg",
"baseline_compare",
"baseline_percent_compare",
"baseline",
"baseline_lwr",
"baseline_upr",
"anomaly_metric",
"anomaly_strength"
],
"default": "range = now - 180d to now",
"values": null
},
"lwr_percentile": {
"title": "Lower Percentile",
"description": "Baseline percentile to output for baseline lower bound",
"valid_formats": [
"baseline_lwr"
],
"default": 2.5,
"values": null
},
"upr_percentile": {
"valid_formats": [
"baseline_upr"
],
"default": 97.5,
"values": null,
"description": "Baseline percentile to output for baseline upper bound",
"title": "Upper Percentile"
}
}
},
"trendline": {
"description": "Trendline related input options",
"values": {
"lwr_stddev": {
"valid_formats": [
"trendline_lwr"
],
"description": "The number of standard deviations to use for the trendline_lwr format",
"values": null,
"title": "Lower Standard Deviation"
},
"upr_stddev": {
"title": "Upper Standard Deviation",
"valid_formats": [
"trendline_upr"
],
"description": "The number of standard deviations to use for the trendline_upr format",
"values": null
},
"is_cumulative": {
"description": "Whether to fit a trendline to the sum of the data values",
"valid_formats": [
"trendline_fit",
"trendline_predict"
],
"values": [
true,
false
],
"title": "Is Cumulative"
},
"predict_time": {
"title": "Predict Time",
"valid_formats": [
"trendline_predict"
],
"values": null,
"description": "The time to use for the trendline_predict value"
},
"constrained": {
"valid_formats": [
"trendline_start",
"trendline_finish",
"trendline_daily_change",
"trendline_change",
"trendline_percent_change",
"trendline_predict"
],
"values": [
true,
false
],
"default": true,
"description": "Whether to use a constrained trendline",
"title": "Constrained"
}
},
"title": "Trendline"
}
},
"description": "Additional input options for certain formats"
},
"timefmt": {
"description": "Format a timestamp into a human readable string. Format specifier is identical to strftime(3)",
"values": null
}
}
},
"ping_state": {
"title": "Ping State",
"description": "The current ping state of the device",
"polltype": "evt",
"interval": 60,
"datatype": "string",
"enum": {
"1": {
"label": "up",
"description": "Device is up"
},
"2": {
"label": "down",
"description": "Device is down"
}
},
"options": {
"formats": {
"description": "The valid data formats",
"values": {
"avl_outTransitions": {
"title": "Transitions out of states",
"datatype": "integer",
"description": "The number of transitions from a state that wasn't request to a requested state"
},
"avl_inTransitions": {
"description": "The number of transitions from one of the requested states to a state not requested",
"datatype": "integer",
"title": "Transitions into states"
},
"avl_outTime": {
"title": "Time not in states",
"description": "The time the event has not been in any of the requested states",
"datatype": "integer"
},
"avl_inTime": {
"title": "Time in states",
"datatype": "float",
"description": "The time the event has been in any of the requested states"
},
"state_inTime": {
"description": "The number of seconds that the current state has been active",
"datatype": "integer",
"title": "Time in current state"
},
"avl_outPercent": {
"title": "Percent not in states",
"description": "The percent of time the event has not been in any of the states",
"datatype": "float"
},
"avl_inPercent": {
"datatype": "float",
"description": "The percent of time the event has been in any of the requested states",
"title": "Percent in states"
},
"state_id": {
"description": "The current state identifier of the event",
"datatype": "integer",
"title": "State ID"
},
"state_delta": {
"title": "Time in previous state",
"description": "The number of seconds to the previous record",
"datatype": "integer"
},
"avl_totalTransitions": {
"title": "Total transitions",
"datatype": "integer",
"description": "The total number of transition between all of the states"
},
"state_time": {
"datatype": "time",
"description": "The time of the last transition",
"title": "Time of last transition"
},
"poll": {
"title": "Poll",
"description": "Whether polling is enabled for the event",
"datatype": "string"
},
"state": {
"datatype": "string",
"description": "The current state of the event",
"title": "State"
}
}
},
"states": {
"values": null,
"description": "Array of state names to use for the calculations"
}
}
},
"priv_method": {
"title": "SNMPv3 Privacy Method",
"description": "Privacy method for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"priv_pass": {
"title": "SNMPv3 Privacy Password",
"description": "Privacy password for SNMPv3 devices",
"polltype": "cfg",
"datatype": "string"
},
"retired": {
"title": "Retired",
"description": "The device has been Retired",
"polltype": "cfg",
"datatype": "string"
},
"snmpEngineID": {
"title": "SNMP Engine ID",
"description": "An SNMP engine's administratively-unique identifier",
"polltype": "cfg",
"datatype": "string"
},
"snmp_maxoid": {
"title": "SNMP Max OID",
"description": "Maximum number of oids to poll in a single request",
"polltype": "cfg",
"datatype": "integer"
},
"snmp_poll": {
"title": "SNMP Poll",
"description": "The SNMP polling status of the device",
"polltype": "cfg",
"datatype": "string"
},
"snmp_version": {
"title": "SNMP Version",
"description": "The SNMP version of the device",
"polltype": "cfg",
"datatype": "integer"
},
"sysContact": {
"title": "Contact",
"description": "The textual identification of the contact person for the entity",
"polltype": "cfg",
"datatype": "string"
},
"sysDescr": {
"title": "System Description",
"description": "A textual description of the entity",
"polltype": "cfg",
"datatype": "string"
},
"sysLocation": {
"title": "Location",
"description": "The physical location of the entity",
"polltype": "cfg",
"datatype": "string"
},
"sysName": {
"title": "System Name",
"description": "An administratively-assigned name for the entity",
"polltype": "cfg",
"datatype": "string"
},
"sysObjectID": {
"title": "Object ID",
"description": "The vendor's authoritative identification of the network management subsystem contained in the entity",
"polltype": "cfg",
"datatype": "string"
},
"sysServices": {
"title": "Services",
"description": "A value which indicates the set of services that the entity may potentially offer",
"polltype": "cfg",
"datatype": "integer"
},
"vendor": {
"title": "Vendor",
"description": "The vendor name for the device",
"polltype": "cfg",
"datatype": "string"
}
},
"commands": {
"add": {
"valid_fields": null,
"valid_data": {
"ipaddress": {
"required": true
},
"community": {
"required": false
},
"hostname": {
"required": false
},
"snmp_version": {
"required": false
},
"auth_method": {
"required": false
},
"auth_user": {
"required": false
},
"auth_pass": {
"required": false
},
"priv_method": {
"required": false
},
"priv_pass": {
"required": false
},
"context": {
"required": false
},
"snmp_poll": {
"required": false
},
"ping_poll": {
"required": false
},
"longitude": {
"required": false
},
"latitude": {
"required": false
}
}
},
"update": {
"valid_fields": {
"id": {
"required": false
},
"name": {
"required": false
},
"deviceid": {
"required": false
},
"idx": {
"required": false
},
"table": {
"required": false
},
"poll": {
"required": false
},
"auth_method": {
"required": false
},
"auth_pass": {
"required": false
},
"auth_user": {
"required": false
},
"community": {
"required": false
},
"context": {
"required": false
},
"custom_data_details": {
"required": false
},
"discover_getNext": {
"required": false
},
"discover_minimal": {
"required": false
},
"discover_snmpv1": {
"required": false
},
"hostname": {
"required": false
},
"ipaddress": {
"required": false
},
"latitude": {
"required": false
},
"longitude": {
"required": false
},
"manual_name": {
"required": false
},
"memorySize": {
"required": false
},
"mis": {
"required": false
},
"ping_dup": {
"required": false
},
"ping_lost1": {
"required": false
},
"ping_lost2": {
"required": false
},
"ping_lost3": {
"required": false
},
"ping_lost4": {
"required": false
},
"ping_outage": {
"required": false
},
"ping_poll": {
"required": false
},
"ping_rtt": {
"required": false
},
"ping_rtt_ms": {
"required": false
},
"ping_state": {
"required": false
},
"priv_method": {
"required": false
},
"priv_pass": {
"required": false
},
"retired": {
"required": false
},
"snmpEngineID": {
"required": false
},
"snmp_maxoid": {
"required": false
},
"snmp_poll": {
"required": false
},
"snmp_version": {
"required": false
},
"sysContact": {
"required": false
},
"sysDescr": {
"required": false
},
"sysLocation": {
"required": false
},
"sysName": {
"required": false
},
"sysObjectID": {
"required": false
},
"sysServices": {
"required": false
},
"vendor": {
"required": false
}
},
"valid_data": {
"poll": {
"required": false
},
"auth_method": {
"required": false
},
"auth_pass": {
"required": false
},
"auth_user": {
"required": false
},
"community": {
"required": false
},
"context": {
"required": false
},
"custom_data_details": {
"required": false
},
"discover_getNext": {
"required": false
},
"discover_minimal": {
"required": false
},
"discover_snmpv1": {
"required": false
},
"hostname": {
"required": false
},
"ipaddress": {
"required": false
},
"latitude": {
"required": false
},
"longitude": {
"required": false
},
"manual_name": {
"required": false
},
"mis": {
"required": false
},
"ping_outage": {
"required": false
},
"ping_poll": {
"required": false
},
"ping_state": {
"required": false
},
"priv_method": {
"required": false
},
"priv_pass": {
"required": false
},
"snmpEngineID": {
"required": false
},
"snmp_maxoid": {
"required": false
},
"snmp_poll": {
"required": false
},
"snmp_version": {
"required": false
},
"sysContact": {
"required": false
},
"sysDescr": {
"required": false
},
"sysLocation": {
"required": false
},
"sysName": {
"required": false
},
"sysObjectID": {
"required": false
},
"sysServices": {
"required": false
}
}
},
"delete": {
"valid_fields": {
"id": {
"required": false
},
"name": {
"required": false
},
"deviceid": {
"required": false
},
"idx": {
"required": false
},
"table": {
"required": false
},
"poll": {
"required": false
},
"auth_method": {
"required": false
},
"auth_pass": {
"required": false
},
"auth_user": {
"required": false
},
"community": {
"required": false
},
"context": {
"required": false
},
"custom_data_details": {
"required": false
},
"discover_getNext": {
"required": false
},
"discover_minimal": {
"required": false
},
"discover_snmpv1": {
"required": false
},
"hostname": {
"required": false
},
"ipaddress": {
"required": false
},
"latitude": {
"required": false
},
"longitude": {
"required": false
},
"manual_name": {
"required": false
},
"memorySize": {
"required": false
},
"mis": {
"required": false
},
"ping_dup": {
"required": false
},
"ping_lost1": {
"required": false
},
"ping_lost2": {
"required": false
},
"ping_lost3": {
"required": false
},
"ping_lost4": {
"required": false
},
"ping_outage": {
"required": false
},
"ping_poll": {
"required": false
},
"ping_rtt": {
"required": false
},
"ping_rtt_ms": {
"required": false
},
"ping_state": {
"required": false
},
"priv_method": {
"required": false
},
"priv_pass": {
"required": false
},
"retired": {
"required": false
},
"snmpEngineID": {
"required": false
},
"snmp_maxoid": {
"required": false
},
"snmp_poll": {
"required": false
},
"snmp_version": {
"required": false
},
"sysContact": {
"required": false
},
"sysDescr": {
"required": false
},
"sysLocation": {
"required": false
},
"sysName": {
"required": false
},
"sysObjectID": {
"required": false
},
"sysServices": {
"required": false
},
"vendor": {
"required": false
}
},
"valid_data": null
},
"get": {
"valid_fields": {
"id": {
"required": false
},
"name": {
"required": false
},
"deviceid": {
"required": false
},
"idx": {
"required": false
},
"table": {
"required": false
},
"poll": {
"required": false
},
"auth_method": {
"required": false
},
"auth_pass": {
"required": false
},
"auth_user": {
"required": false
},
"community": {
"required": false
},
"context": {
"required": false
},
"custom_data_details": {
"required": false
},
"discover_getNext": {
"required": false
},
"discover_minimal": {
"required": false
},
"discover_snmpv1": {
"required": false
},
"hostname": {
"required": false
},
"ipaddress": {
"required": false
},
"latitude": {
"required": false
},
"longitude": {
"required": false
},
"manual_name": {
"required": false
},
"memorySize": {
"required": false
},
"mis": {
"required": false
},
"ping_dup": {
"required": false
},
"ping_lost1": {
"required": false
},
"ping_lost2": {
"required": false
},
"ping_lost3": {
"required": false
},
"ping_lost4": {
"required": false
},
"ping_outage": {
"required": false
},
"ping_poll": {
"required": false
},
"ping_rtt": {
"required": false
},
"ping_rtt_ms": {
"required": false
},
"ping_state": {
"required": false
},
"priv_method": {
"required": false
},
"priv_pass": {
"required": false
},
"retired": {
"required": false
},
"snmpEngineID": {
"required": false
},
"snmp_maxoid": {
"required": false
},
"snmp_poll": {
"required": false
},
"snmp_version": {
"required": false
},
"sysContact": {
"required": false
},
"sysDescr": {
"required": false
},
"sysLocation": {
"required": false
},
"sysName": {
"required": false
},
"sysObjectID": {
"required": false
},
"sysServices": {
"required": false
},
"vendor": {
"required": false
}
},
"valid_data": null
},
"describe": {
"valid_fields": null,
"valid_data": null
}
},
"info": {
"tags": [],
"licenced": false,
"vendor": "Standard",
"category": "Device",
"inherited_by": [
"cdt_device_apic",
"cdt_device_freebsd"
],
"inherits": null,
"allow_grouping": 1,
"allow_reporting": 1,
"allow_discovery": 0,
"poller": "snmp"
},
"allow_formula_fields": true,
"links": {
"macLink": {
"title": "Link to MAC",
"object": "mac",
"default": 1
}
},
"global_field_options": {
"aggregation_format": {
"description": "Aggregation formats to apply when group_by option is provided",
"values": {
"first": {
"title": "First",
"description": "First value in the group (default)"
},
"last": {
"title": "Last",
"description": "Last value in the group"
},
"avg": {
"title": "Average",
"description": "Average of the values in the group"
},
"count": {
"title": "Count",
"description": "Number on non-null values in the group"
},
"count_all": {
"title": "Count (include NULL)",
"description": "Number of values in the group (including null values)"
},
"count_unique": {
"title": "Unique count",
"description": "Number of unique non-null values in the group"
},
"count_unique_all": {
"title": "Unique count (include NULL)",
"description": "Number of unique values in the group (including null values)"
},
"cat": {
"title": "Concatenate",
"description": "Concatenation of the values in the group"
},
"list": {
"title": "List",
"description": "comma=separated concatenation of the values in the group"
},
"list_unique": {
"title": "Unique List",
"description": "comma=separated concatenation of the unique values in the group"
},
"min": {
"title": "Minimum",
"description": "Minimum of the values in the group"
},
"max": {
"title": "Maximum",
"description": "Maximum of the values in the group"
},
"sum": {
"title": "Sum",
"description": "Sum of all values in the group (null if no valid values)"
},
"total": {
"title": "Total",
"description": "Sum of all values in the group (0 if no valid values)"
},
"median": {
"title": "Median",
"description": "Median of the values in the group"
},
"95th": {
"title": "95th percentile",
"description": "95th percentile of the values in the group"
},
"stddev": {
"title": "Standard deviation",
"description": "Standard deviation of the values in the group"
}
}
}
}
}
]
},
"links": [
{
"link": "/api/v2.1/cdt_device/describe",
"rel": "self"
},
{
"link": "/api/v2.1",
"rel": "base"
},
{
"link": "/api/v2.1/cdt_device",
"rel": "collection"
}
]
}
The Execute Endpoint (/api/v2.1/{resource}/execute)
The execute endpoint is used to run an execute command on a specific resource. Currently, this functionality is limited to:
- the discover resource (/api/v2.1/discover/execute)
- the config_build resource (/api/v2.1/config_build/execute)
GET
A GET request to a {resource}/execute endpoint will run the command associated with that resource.
The additional parameters that can be included in an execute request are specific to the target resource and are presented in the Options table for that endpoint.
Example: Running Network Discovery referencing multiple IP Address Ranges and SNMP Credentials
We pass in a discover_config object as a parameter; this object specifies the IP ranges targeted by the discovery and the SNMP credentials to use in each range. The discover_config being used is:
{
"iftype": [
"ethernetCsmacd",
"fastEther",
"gigabitEthernet"
],
"ping_count": 2,
"ping_skip": 256,
"ping_rate": 512,
"sysdescr": [
"include Cisco",
"include Dell",
"include HP",
"include Juniper",
"include Linux"
],
"ranges": {
"exclude": [
"22.200.44.1",
"22.200.44.254",
"22.200.44.10",
"22.200.45.1"
],
"include": [
"22.200.44.0/24",
"22.200.45.0/24"
]
},
"ip_range_configurations": [
{
"ip_range": {
"exclude": [],
"include": [
"22.200.46.0/24"
]
},
"name": "test range",
"snmp_credentials": [
2
]
},
{
"ip_range": {
"exclude": [],
"include": [
"22.200.46.[250-255]"
]
},
"name": "test range",
"snmp_credentials": [
1
]
}
],
"snmp_credentials": [
{
"id": 1,
"version": 2,
"community": "public"
},
{
"id": 2,
"version": 2,
"community": "notpublic"
}
]
}
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/discover/execute?mode=Discover&discover_config=%7B%22iftype%22:%20%5B%22ethernetCsmacd%22,%20%22fastEther%22,%20%22gigabitEthernet%22%5D,%20%22ping_count%22:%202,%20%22ping_skip%22:%20256,%20%22ping_rate%22:%20512,%20%22sysdescr%22:%20%5B%22include%20Cisco%22,%20%22include%20Dell%22,%20%22include%20HP%22,%20%22include%20Juniper%22,%20%22include%20Linux%22%5D,%20%22ranges%22:%20%7B%22exclude%22:%20%5B%2210.100.44.1%22,%20%2210.100.44.254%22,%20%2210.100.44.10%22,%20%2210.100.45.1%22%5D,%20%22include%22:%20%5B%2210.100.44.0/24%22,%20%2210.100.45.0/24%22%5D%7D,%20%22ip_range_configurations%22:%20%5B%7B%22ip_range%22:%20%7B%22exclude%22:%20%5B%5D,%20%22include%22:%20%5B%2210.100.46.0/24%22%5D%7D,%20%22name%22:%20%22test%20range%22,%20%22snmp_credentials%22:%20%5B2%5D%7D,%20%7B%22ip_range%22:%20%7B%22exclude%22:%20%5B%5D,%20%22include%22:%20%5B%2210.100.46.%5B250-255%5D%22%5D%7D,%20%22name%22:%20%22test%20range%22,%20%22snmp_credentials%22:%20%5B1%5D%7D%5D,%20%22snmp_credentials%22:%20%5B%7B%22id%22:%201,%20%22version%22:%202,%20%22community%22:%20%22public%22%7D,%20%7B%22id%22:%202,%20%22version%22:%202,%20%22community%22:%20%22notpublic%22%7D%5D%7D"
#!/usr/bin/python
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData = None):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False, data=reqData)
else:
#resp.status_code != 200
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False, data=reqData)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/discover/execute?mode=Discover&discover_config='
# discover config
query += json.dumps({
"iftype": [
"ethernetCsmacd",
"fastEther",
"gigabitEthernet"
],
"ping_count": 2,
"ping_skip": 256,
"ping_rate": 512,
"sysdescr": [
"include Cisco",
"include Dell",
"include HP",
"include Juniper",
"include Linux"
],
"ranges": {
"exclude": [
"22.200.44.1",
"22.200.44.254",
"22.200.44.10",
"22.200.45.1"
],
"include": [
"22.200.44.0/24",
"22.200.45.0/24"
]
},
"ip_range_configurations": [
{
"ip_range": {
"exclude": [],
"include": [
"22.200.46.0/24"
]
},
"name": "test range",
"snmp_credentials": [
2
]
},
{
"ip_range": {
"exclude": [],
"include": [
"22.200.46.[250-255]"
]
},
"name": "test range",
"snmp_credentials": [
1
]
}
],
"snmp_credentials": [
{
"id": 1,
"version": 2,
"community": "public"
},
{
"id": 2,
"version": 2,
"community": "notpublic"
}
]
})
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"16",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1724647376,
"objects":[
{
"type":"discover",
"sequence":29,
"status":{
"success":true,
"errcode":0
},
"data_total":0,
"data":[]
}
]
}
}
ID Endpoint (/api/v2.1/{resource}/{id})
The ID endpoint is used to run queries against a specific instance of a resource, identified by its ID.
GET
The following parameters may be passed when sending a GET request.
Parameters | Type/Valid Values | Description
fields | A comma-separated list of field names, e.g. fields=id,name,location | The list of fields that will be returned in the response. This parameter will be ignored if fields_adv is also specified.
fields_adv | A JSON string detailing the fields to be returned, e.g. fields_adv={"Device Name":{"field":"name"},"IP":{"field":"ipaddress"},"Location":{"field":"sysLocation"},"SNMP Polling":{"field":"snmp_poll","filter":{"query":"=%27off%27"}}} | The list of fields that will be returned in the response
filter | An SQL filter string, e.g. "SNMP Polling":{"field":"snmp_poll","filter":{"query":"=%27off%27"}} | A filter to be applied to the response data inside of fields_adv
groups | A comma-separated list of group names or group IDs | A list of groups used to filter the response data; only data associated with members of the specified groups will be returned
grouping_mode | | The mode to use when processing multiple group parameters
group_by | See Using ‘group_by’ for Data Aggregation for valid values and examples | Use when you want to aggregate data across multiple entities, such as total traffic across multiple interfaces. When grouping by groups, an extra group with the name/ID None/null may be returned, containing any rows that weren’t in any of the groups.
interval | positive integer, e.g. interval=300 | The polling interval to be applied to all timeseries metrics specified in the fields parameter. When not specified, the default polling interval of 60 seconds is used.
limit | positive integer, e.g. limit=100 | The number of items to return per ‘page’ of the response, see Pagination for details. The API will automatically paginate response data with a default limit=50.
offset | positive integer, e.g. offset=100 | The number of result items to skip, see Pagination for details. The API will automatically paginate response data with a default offset={limit}.
precision | positive integer, e.g. precision=5 | The number of decimal places to return for all timeseries metrics specified in the fields parameter. The default precision value varies between fields; supply this parameter when you want to enforce a specific level of precision across all timeseries fields.
sortmode | string | Specify how the response should handle null values. Default = novals_small; this setting is used in all instances where sortmode is not set.
timefmt | string; the syntax for specifying the time format matches that used by STRFTIME, e.g. timefmt=%A %H:%M:%S (%y-%m-%d) will return a timestamp in the format Wednesday 15:44:48 (19-07-17) | The format in which to return timestamps when value_time is set
value_time | One of: none, start, mid, end, all; e.g. value_time=all | Optionally return a timestamp for returned timeseries data points. Timestamps are returned in epoch time unless timefmt is also specified.
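For example, the following is a minimal sketch of the fields_adv parameter against the ID endpoint; the <DEVICE ID> placeholder and the custom column labels ("Device Name", "IP") are assumptions, and the JSON braces and quotes are percent-encoded in the URL. It returns the device name and IP address under those custom column names:
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/<DEVICE ID>/?fields_adv=%7B%22Device%20Name%22:%7B%22field%22:%22name%22%7D,%22IP%22:%7B%22field%22:%22ipaddress%22%7D%7D&indent=3"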
When the fields parameter has been specified, the following additional parameters may be used.
Parameters | Type/Valid Values | Description
{field}_field | string | A user-defined variable name followed by a valid field name for the resource, e.g. yesterday_RxUtil. See Named Fields for more information and examples.
{field}_formats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The formats to request from the API for the given field, required for timeseries data fields. Note: a global formats key (formats=) may also be used to apply the same values to all timeseries metrics specified in fields.
{field}_formula | string | Specify an SQL-style formula to be used to populate the field; see Field Formulas for details, including the syntax for referencing other fields within the formula.
{field}_filter | string | The filter to apply to the given field. These filters can be either SQL-style filters or extended RegEx filters.
{field}_filter_format | string | The format to use for the filter, required for timeseries data fields
{field}_interval | integer | The polling interval (in seconds) to use for the specified field. When used, a field-specific timefilter for the specified field must also be used. Note: when not specified, the default interval of 60 seconds is used. A global interval key (interval=) may also be used to apply the same value to all timeseries metrics specified in fields.
{field}_link | string | Specifies which link to use when the target resource has multiple links to the same resource; required when referencing a non-default link. See Multiple Links to the same resource for details.
{field}_object | string | Specifies the resource type being linked to; required when referencing a non-default link to a resource. See Multiple Links to the same resource for details.
{field}_post_filter | string | The filter to apply to the given field after data aggregation has occurred. This parameter works exactly the same as {field}_filter, except the filter is applied after data aggregation.
{field}_post_formula | string | The formula to apply to the given field after data aggregation has occurred. This parameter works exactly the same as {field}_formula, except the formula is applied after data aggregation.
{field}_precision | positive integer, e.g. {field}_precision=5 | The number of decimal places to return for the specified timeseries metric. The default precision value varies between fields; supply this parameter when you want to enforce a specific level of precision. Note: a global precision key (precision=) may also be used to apply the same value to all timeseries metrics specified in fields.
{field}_timefilter | string | The timefilter to use for the given field, required for timeseries data fields. Note: a global timefilter key (timefilter=) may also be used to apply the same value to all timeseries metrics specified in fields.
{field}_tz | string | An alternate timezone to use for the {field}_timefilter. All timefilters use the Statseeker server’s timezone unless an override is specified by supplying {field}_tz. Note: a global timezone key (tz=) may also be used to apply the same value to all timeseries metrics specified in fields.
{field}_sort | Comma-separated list | List specifying the sort hierarchy and direction, in the format {field}_sort={rank},{direction} and, for timeseries data, {field}_sort={rank},{direction},{format}. E.g. name_sort=1,asc or RxUtil_sort=1,desc,avg
{field}_stats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The stats to use for the given field
{field}_aggregation_format | One of the aggregation formats listed for the resource under global_field_options in the /describe response (e.g. first, last, avg, count, min, max, sum, total, median, 95th, stddev) | The aggregation format to use for the specified field when the group_by parameter is supplied
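As a minimal sketch of these field-specific parameters (assuming a ping-polled device with ID <DEVICE ID>), the following request returns the device name together with the average and maximum ping_rtt over the last hour; spaces in the timefilter are percent-encoded:
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/<DEVICE ID>/?fields=name,ping_rtt&ping_rtt_formats=avg,max&ping_rtt_timefilter=range%20=%20now%20-%201h%20to%20now&indent=3"
Because only one timeseries field is requested here, the global formats= and timefilter= keys could be used instead of the ping_rtt_-prefixed versions.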
Example: Retrieving Details on a Device
Return selected details for a device with a specified ID. The details I am going to retrieve are:
- name
- id
- community
- ipaddress
- snmp_version
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/<DEVICE ID>/?fields=name,id,community,ipaddress,snmp_version&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device/<DEVICE ID>'
# specify fields to be returned
query += '/?fields=name,id,community,ipaddress,snmp_version'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1623636846,
"objects": [
{
"type": "cdt_device",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 1,
"data": [
{
"name": "Chicago-srv1",
"id": <DEVICE ID>,
"community": "public",
"ipaddress": "10.100.46.11",
"snmp_version": 2
}
]
}
]
}
}
PUT
The PUT request can be used to update the fields associated with the specified resource. Not all fields contained within a specific resource may be updated; use the /describe endpoint to view the requirements for a given resource.
Example: Updating a Group Name
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/group/<GROUP ID>/?indent=3" \
-d '{"data":[{"name":"<NEW NAME>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group/<GROUP ID>'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"name":"<NEW NAME>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623637436
}
}
DELETE
The DELETE request can be used to delete the specified resource.
Example: Deleting a Group
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X DELETE \
"https://your.statseeker.server/api/v2.1/group/<GROUP ID>/?indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.delete(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.delete(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group/<GROUP ID>'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623637436
}
}
Field Endpoint (/api/v2.1/{resource}/{id}/{field})
The Field endpoint is used to run queries on a specific field within a specified resource; use the /describe endpoint to view the field restrictions for a given resource.
GET
The parameters that may be passed when sending a GET request behave exactly like the {field}_* parameters for the Resource and ID endpoints. When targeting timeseries metrics, the following parameters may be specified.
Parameters | Type/Valid Values | Description
formats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The formats to request from the API for the given field, required for timeseries data fields
formula | string | Specify an SQL-style formula to be used to populate the field, see Field Formulas for details.
filter | string | The filter to apply to the given field. These filters can be either SQL-style filters or extended RegEx filters.
filter_format | string | The format to use for the filter, required for timeseries data fields
interval | integer | The polling interval (in seconds) to use for the specified field. When used, a timefilter for the specified field must also be used. Note: when not specified, the default interval of 60 seconds is used.
post_filter | string | The filter to apply to the given field after data aggregation has occurred. This parameter works exactly the same as filter, except the filter is applied after data aggregation.
post_formula | string | The formula to apply to the given field after data aggregation has occurred. This parameter works exactly the same as formula, except the formula is applied after data aggregation.
precision | positive integer, e.g. precision=5 | The number of decimal places to return for the specified timeseries metric. The default precision value varies between fields; supply this parameter when you want to enforce a specific level of precision.
timefilter | string | The timefilter to use for the given field, required for timeseries data fields.
tz | string | An alternate timezone to use for the timefilter. All timefilters use the Statseeker server’s timezone unless an override is specified by supplying tz.
stats | Comma-separated list, see Timeseries Data: Stats, Formats & Options | The stats to use for the given field
aggregation_format | One of the aggregation formats listed for the resource under global_field_options in the /describe response (e.g. first, last, avg, count, min, max, sum, total, median, 95th, stddev) | The aggregation format to use for the specified field when the group_by parameter is supplied
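As a minimal sketch (again assuming a ping-polled device with ID <DEVICE ID>), the following request targets the ping_rtt field directly; because the field is part of the URL, the parameters are supplied without a {field}_ prefix:
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/<DEVICE ID>/ping_rtt/?formats=avg,max&timefilter=range%20=%20now%20-%201h%20to%20now&indent=3"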
Example: Returning a User’s Time Zone
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/user/<USER ID>/tz/?indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/user/<USER ID>/tz'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623638836,
"objects":[
{
"type":"user",
"sequence":0,
"status":{
"success":true,
"errcode":0
},
"data_total":1,
"data":[
{
"tz":"Australia/Brisbane",
"id":<USER ID>
}
]
}
]
}
}
PUT
The PUT request can be used to update the specified field. Not all fields contained within a specific resource may be updated; use the /describe endpoint to view the restrictions for a given resource. The data object must contain a value key containing the new value(s) to be set for the field.
Example: Updating a User’s Time Zone
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/user/<USER ID>/tz/?indent=3" \
-d '{"value":"Europe/Paris"}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/user/<USER ID>/tz'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"value":"Europe/Paris"})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"errmsg": "ok",
"success": true,
"time": 1496193864
},
"api_version": "2.1"
}
Linked Resources
Many endpoint definitions include links between the endpoint and other related endpoints. These links allow requests against a resource to also return data (or apply filtering based on data) from a linked resource. There are two types of resource links available:
- Standard Links – default links defined by Statseeker, including referencing standard links when resources have Multiple Links to the same resource
- Custom Links – user-defined links
Standard Links
Most resources are linked to at least one other resource in a parent-child relationship. You can use this linking to access any of the parent-resource fields from the child resource (e.g. a query to the interface resource {cdt_port} can access any of the fields contained in the device resource {cdt_device}, for that interface’s parent device). This linking is one-way, from child to parent/ancestor. The format for specifying a field from a linked resource is:
{linked_resource_name}.{field}, e.g. cdt_device.name
To see which parent resources can be linked to from a given resource, run a GET request against the describe endpoint (api/v2.1/{resource}/describe) for that resource and review the content of objects.links.
Example:
A GET request to api/v2.1/cdt_cpu/describe will return something like:
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1596514086,
"objects": [
{
"type": "cdt_cpu",
"title": "CPU",
"description": "The custom data entities for the cpu table",
"fields": {...},
"commands": {...},
"info": {...},
"allow_formula_fields": true,
"links": {
"deviceLink": {
"title": "Link to Device",
"object": "cdt_device",
"default": 1
}
},
"global_field_options": {...}
}
]
},
"links": [...]
}
We can see, in the response object above, that objects.links specifies a link (deviceLink) to the device resource cdt_device.
Example:
You can request data about interfaces showing a high rate of discarded packets from the cdt_port resource, and in the same call, request ping-related data from the cdt_device resource of the parent device for each port.
api/v2.1/cdt_port/?fields=name,InOutDiscards,RxDiscardsPercent,TxDiscardsPercent,InOutOctets,cdt_device.name,cdt_device.ping_rtt&formats=avg&InOutDiscards_formats=total,avg&InOutOctets_formats=total&timefilter=range=now -45m to now&InOutDiscards_sort=1,desc,avg&limit=5&links=none
- fields=name,InOutDiscards,RxDiscardsPercent,TxDiscardsPercent,InOutOctets,cdt_device.name,cdt_device.ping_rtt – the fields we want to retrieve, including the name and ping_rtt fields from the parent device via the linked resource (cdt_device.name, cdt_device.ping_rtt)
- formats=avg – a global formats parameter for all timeseries data
- InOutDiscards_formats=total,avg&InOutOctets_formats=total – metric-specific overrides that retrieve additional formats for some metrics
- timefilter=range=now -45m to now – a global timefilter parameter for all timeseries data
- InOutDiscards_sort=1,desc,avg – sort the interfaces presenting the highest discard rate to the top
- limit=5&links=none – limit the initial response to 5 ports and don’t show the links reference object
Once a field from a linked resource has been retrieved, it can be treated like any other field. You can specify formats and timefilters for the field, and you can sort and filter the response object by that field. In the example above, we requested cdt_device.name and we can reference that to apply a filter to the request, returning only data from a single device (or selection of devices).
Example:
api/v2.1/cdt_port/?fields=name,InOutDiscards,RxDiscardsPercent,TxDiscardsPercent,InOutOctets,cdt_device.name,cdt_device.ping_rtt&formats=avg&InOutDiscards_formats=total,avg&InOutOctets_formats=total&timefilter=range=now -45m to now&InOutDiscards_sort=1,desc,avg&limit=5&links=none&cdt_device.name_filter=LIKE(“NewYork%25”)
- cdt_device.name_filter=LIKE(“NewYork%25”) – filter the response to only show ports on devices with a name starting with NewYork
Response:
{
"info": "The Statseeker RESTful API",
"version": "2.1",
"data": {
"objects": [
{
"status": {
"errcode": 0,
"success": true
},
"data": [
{
"name": "Gi6/20",
"cdt_device.ping_rtt": {
"avg": 35.5126
},
"InOutDiscards": {
"total": 5900.34,
"avg": 655.593
},
"InOutOctets": {
"total": 13768100000
},
"cdt_device.name": "NewYork-swt4",
"TxDiscardsPercent": {
"avg": 0.023552
},
"RxDiscardsPercent": {
"avg": 0
},
"id": 811
},
{
"name": "Gi2/28",
"cdt_device.ping_rtt": {
"avg": 47.5407
},
"InOutDiscards": {
"total": 5774.36,
"avg": 641.596
},
"InOutOctets": {
"total": 13159300000
},
"cdt_device.name": "NewYork-swt3",
"TxDiscardsPercent": {
"avg": 0.0231991
},
"RxDiscardsPercent": {
"avg": 0
},
"id": 1032
},
{
"name": "Gi6/18",
"cdt_device.ping_rtt": {
"avg": 35.5126
},
"InOutDiscards": {
"total": 5673.6,
"avg": 630.4
},
"InOutOctets": {
"total": 6980780000
},
"cdt_device.name": "NewYork-swt4",
"TxDiscardsPercent": {
"avg": 0.00432219
},
"RxDiscardsPercent": {
"avg": 0
},
"id": 808
},
{
"name": "Gi2/4",
"cdt_device.ping_rtt": {
"avg": 35.5126
},
"InOutDiscards": {
"total": 5418.93,
"avg": 602.103
},
"InOutOctets": {
"total": 1743290000
},
"cdt_device.name": "NewYork-swt4",
"TxDiscardsPercent": {
"avg": 0.0754798
},
"RxDiscardsPercent": {
"avg": 0
},
"id": 709
},
{
"name": "Gi7/22",
"cdt_device.ping_rtt": {
"avg": 25.2689
},
"InOutDiscards": {
"total": 5256.75,
"avg": 584.083
},
"InOutOctets": {
"total": 8214830000
},
"cdt_device.name": "NewYork-swt1",
"TxDiscardsPercent": {
"avg": 0
},
"RxDiscardsPercent": {
"avg": 0.013608
},
"id": 1491
}
],
"type": "cdt_port",
"data_total": 787,
"sequence": 0
}
],
"errmsg": "ok",
"success": true,
"time": 1534217902
},
"revision": "13"
}
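For reference, the same filtered request can be issued from Python; a minimal sketch (assuming basic authentication is enabled), passing a params dict so that the requests library handles the URL encoding of spaces, quotes and the % wildcard (shown as %25 in the raw URL above):
import requests

server = 'your.statseeker.server'
auth = ('api_user', 'user_password')

params = {
    'fields': 'name,InOutDiscards,RxDiscardsPercent,TxDiscardsPercent,InOutOctets,cdt_device.name,cdt_device.ping_rtt',
    'formats': 'avg',
    'InOutDiscards_formats': 'total,avg',
    'InOutOctets_formats': 'total',
    'timefilter': 'range=now -45m to now',
    'InOutDiscards_sort': '1,desc,avg',
    'limit': 5,
    'links': 'none',
    # requests will percent-encode the quotes and the % wildcard
    'cdt_device.name_filter': 'LIKE("NewYork%")',
}
resp = requests.get(f'https://{server}/api/v2.1/cdt_port/', params=params, auth=auth, verify=False)
print(resp.json())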
Resources with Multiple Links
Some resources have multiple linked resources, and some have multiple links to instances of the same resource. Take the mis_record (MAC-IP Records) resource for example. If we send a describe request to the mis_record resource, the links section of the response contains:
"links": {
"arp_dev": {
"title": "Link to ARP Device",
"object": "cdt_device",
"default": 0
},
"connecteddevice": {
"title": "Link to Connected Device",
"object": "cdt_device",
"default": 0
},
"connectedport": {
"title": "Link to Connected Interface",
"object": "cdt_port",
"default": 1
},
"deviceLink": {
"title": "Link to Device",
"object": "cdt_device",
"default": 1
},
"port": {
"title": "Link to Interface",
"object": "cdt_port",
"default": 0
}
},
This endpoint has multiple links to instances of cdt_device (the parent device, the connected device, and the ARP device), as well as two links to instances of cdt_port (the interface and the connected interface).
When referencing cdt_device.{field} in queries to endpoints with multiple links to cdt_device (such as mis_record), the API will always use the default link, which references the parent device. To reference another linked instance of cdt_device, such as the mis_record ARP device, the link must be specified explicitly. This requires the following parameters:
- {foo} – a custom named field
- {foo}_link={name_of_link} – specify which link to use
- {foo}_object={resource_type_linked_object} – specify the resource type being linked to
- {foo}_field={field_name} – specify the field to be retrieved from the linked resource
Example: Requesting some fields for a specified MAC
/api/v2.1/mis_record/?fields=name,ip,vlan_name,cdt_device.name&mac_filter=is(“00:00:74:db:3b:f6”)
To also return the ipaddress of the ARP router, we need to add a custom named field (see Named Fields for details) and define a link to the ARP device so that field can be retrieved. We will call the named field arpDeviceIP, so the new request becomes:
/api/v2.1/mis_record/?fields=name,ip,vlan_name,cdt_device.name,arpDeviceIP&mac_filter=is(“00:00:74:db:3b:f6”)&arpDeviceIP_link=arp_dev&arpDeviceIP_object=cdt_device&arpDeviceIP_field=ipaddress
- arpDeviceIP – the named field, included in the fields parameter
- arpDeviceIP_link=arp_dev – the {foo}_link parameter, identifying which link on the mis_record resource to retrieve data from
- arpDeviceIP_object=cdt_device – the {foo}_object parameter, identifying the resource type being linked to
- arpDeviceIP_field=ipaddress – the {foo}_field parameter, identifying which field to retrieve
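Putting this breakdown together, a minimal Python sketch of the same request (assuming basic authentication is enabled; the server name and credentials are the usual placeholders):
import requests

server = 'your.statseeker.server'
auth = ('api_user', 'user_password')

params = {
    'fields': 'name,ip,vlan_name,cdt_device.name,arpDeviceIP',
    'mac_filter': 'is("00:00:74:db:3b:f6")',
    # resolve the named field through the non-default arp_dev link
    'arpDeviceIP_link': 'arp_dev',
    'arpDeviceIP_object': 'cdt_device',
    'arpDeviceIP_field': 'ipaddress',
}
resp = requests.get(f'https://{server}/api/v2.1/mis_record/', params=params, auth=auth, verify=False)
print(resp.json())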
Custom Links
Custom links, like their standard counterparts, allow you to query a resource and, via the defined link, request data from another resource in the same query. To achieve this, the custom link is first created by linking the field from one resource to the corresponding field in another. Once the link has been created, when querying the resource defined as the ‘link source’, you can also query the resource defined as the ‘link destination’. The syntax for specifying a field from a linked resource is the same as when using standard links:
{linked_resource_name}.{field}, e.g. cdt_device.name
Example:
We have a threshold set for an average 5-minute memory usage of over 90%. With the API, we can query threshold event records for details on breaches of this threshold, but we cannot request recent memory usage data leading up to the breach because there is no link between the threshold event resource and the memory resource.
We first create a link between the threshold event and the memory resources by specifying a field that exists in both:
- The threshold_record resource contains an entityid field which contains the ID of the entity that triggered the breach, in this instance the memory entity
- The cdt_memory resource contains an id field which contains the ID of the memory entity
The link is created by:
- Sending a POST command to the resource endpoint
- Specifying the source resource and field
- Specifying the destination resource and corresponding field
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X POST \
"https://my.statseeker.server/api/v2.1/link/?indent=3" \
-d '{
"data": [{
"name": "memoryLink",
"title": "Link to Memory",
"default": 1,
"src": "threshold_record",
"src_fields": {"entityid": {}},
"src_query": "{entityid}",
"dst": "cdt_memory",
"dst_fields": {"id": {}},
"dst_query": "{id}"
}]
}'
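A minimal Python equivalent of the cURL request above (a sketch, assuming basic authentication is enabled; the payload is identical to the one in the cURL sample):
import requests
import json

server = 'my.statseeker.server'
auth = ('api_user', 'user_password')
headers = {'Accept': 'application/json', 'Content-Type': 'application/json'}

# The custom link definition, as per the cURL example above
link = {
    "name": "memoryLink",
    "title": "Link to Memory",
    "default": 1,
    "src": "threshold_record",
    "src_fields": {"entityid": {}},
    "src_query": "{entityid}",
    "dst": "cdt_memory",
    "dst_fields": {"id": {}},
    "dst_query": "{id}"
}
resp = requests.post(f'https://{server}/api/v2.1/link/?indent=3',
                     headers=headers, data=json.dumps({"data": [link]}),
                     auth=auth, verify=False)
print(resp.status_code, resp.json())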
After the link has been created, any query to the source resource can also request data from the linked destination resource; in this example, that means we can request memory data while querying threshold records.
With the link in place, we can now send a query using the following URL:
https://my.statseeker.server/api/v2.1/threshold_record/?fields=time,device,entityTypeName,event,cdt_memory.memoryUsedPercent,Usage5m&Usage5m_field=memoryUsedPercent&Usage5m_object=memory&Usage5m_formats=vals&Usage5m_timefilter=range%20=%20now%20-5m%20to%20now&formats=avg,min,max&timefilter=range=start_of_today%20to%20now&entityTypeName_filter==’memory’&entityTypeName_hide=1&lastx=5&timefmt=%c
Breakdown:
- https://my.statseeker.server/api/v2.1/threshold_record/ – the resource endpoint, specifying the threshold_record resource
- ?fields=time,device,entityTypeName,event,cdt_memory.memoryUsedPercent,Usage5m – the fields to return, including the memoryUsedPercent field from our linked cdt_memory resource. We want to return two different sets of memoryUsedPercent data, so we include a custom named field, Usage5m, for the second set.
- &Usage5m_field=memoryUsedPercent&Usage5m_object=memory – define the custom named field as another instance of cdt_memory.memoryUsedPercent
- &Usage5m_formats=vals&Usage5m_timefilter=range = now -5m to now – specify the format and timefilter for the custom field
- &formats=avg,min,max&timefilter=range=start_of_today%20to%20now – specify the format and timefilter for all other timeseries metrics, in this case it is only cdt_memory.memoryUsedPercent
- &entityTypeName_filter==’memory’ – filter the returned threshold event records to just those concerning breaches of memory thresholds
- &entityTypeName_hide=1 – we need to retrieve entityTypeName in order to filter on it, but we don’t need to see the value in the returned data, so we hide it
- &lastx=5 – return the 5 most recent records
- &timefmt=%c – specify the format used to display the time at which the threshold breach occurred
Here is the response we receive from the API request:
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1596598309,
"objects": [
{
"type": "threshold_record",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 5,
"data": [
{
"time": "Wed Aug 5 13:24:00 2020",
"device": "NewYork-srv4",
"event": "Mem905m (memory.memoryUsedPercent avg, threshold: 90%)",
"cdt_memory.memoryUsedPercent": {
"min": 5,
"max": 94.8153,
"avg": 49.3997
},
"Usage5m": {
"vals": [
89.5952,
92.3465,
93.1448,
94.4017,
94.7985
]
},
"id": "5F2A2650-1B-2"
},
{
"time": "Wed Aug 5 13:19:00 2020",
"device": "Budapest-srv2",
"event": "Mem905m (memory.memoryUsedPercent avg, threshold: 90%)",
"cdt_memory.memoryUsedPercent": {
"min": 4.99998,
"max": 96.9951,
"avg": 50.3273
},
"Usage5m": {
"vals": [
91.668,
94.1118,
95.4288,
96.9171,
95.5872
]
},
"id": "5F2A2524-16-2"
},
{
"time": "Wed Aug 5 13:19:00 2020",
"device": "NewYork-srv4",
"event": "Mem905m (memory.memoryUsedPercent avg, threshold: 90%)",
"cdt_memory.memoryUsedPercent": {
"min": 5,
"max": 94.8153,
"avg": 49.3997
},
"Usage5m": {
"vals": [
91.5952,
92.3465,
93.1448,
94.4017,
94.7985
]
},
"id": "5F2A2524-15-2"
},
{
"time": "Wed Aug 5 13:14:00 2020",
"device": "Budapest-srv2",
"event": "Mem905m (memory.memoryUsedPercent avg, threshold: 90%)",
"cdt_memory.memoryUsedPercent": {
"min": 4.99998,
"max": 97.9951,
"avg": 50.3273
},
"Usage5m": {
"vals": [
91.668,
94.1118,
95.4288,
96.9171,
96.5872
]
},
"id": "5F2A23F8-E-2"
},
{
"time": "Wed Aug 5 12:44:00 2020",
"device": "Pretoria-srv",
"event": "Mem905m (memory.memoryUsedPercent avg, threshold: 90%)",
"cdt_memory.memoryUsedPercent": {
"min": 20.1107,
"max": 96.9795,
"avg": 51.634
},
"Usage5m": {
"vals": [
91.2561,
91.4009,
91.587,
91.8134,
92.0167
]
},
"id": "5F2A1CF0-6C-2"
}
]
}
]
}
}
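For reference, the same threshold_record query can be issued from Python; a minimal sketch (assuming basic authentication is enabled), with the parameters mirroring the breakdown above and quotes shown as straight quotes:
import requests

server = 'my.statseeker.server'
auth = ('api_user', 'user_password')

params = {
    'fields': 'time,device,entityTypeName,event,cdt_memory.memoryUsedPercent,Usage5m',
    'Usage5m_field': 'memoryUsedPercent',
    'Usage5m_object': 'memory',
    'Usage5m_formats': 'vals',
    'Usage5m_timefilter': 'range = now -5m to now',
    'formats': 'avg,min,max',
    'timefilter': 'range=start_of_today to now',
    'entityTypeName_filter': "='memory'",
    'entityTypeName_hide': 1,
    'lastx': 5,
    'timefmt': '%c',
}
resp = requests.get(f'https://{server}/api/v2.1/threshold_record/', params=params, auth=auth, verify=False)
print(resp.json())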
Named Fields
The API doesn’t allow you to explicitly request the same field twice in the same call. When you want to request multiple sets of data for the same field, but with different options, you can use named fields; simply add a custom field to the fields parameter.
Example:
We want to request TxUtil data for all ports on a device, with the data broken into today and yesterday. To do this we will:
- Specify two named fields, yesterdayTx and todayTx: fields=cdt_device.name,name,yesterdayTx,todayTx
- Define what those fields refer to: yesterdayTx_field=TxUtil&todayTx_field=TxUtil
- Set specific timefilters for each named field: yesterdayTx_timefilter=range=start_of_today -1d to start_of_today&todayTx_timefilter=range=start_of_today to now
- Use a global formats parameter to apply to both: formats=min,max,avg,95th
- Filter the response to just the device in question: cdt_device.name_filter=IS(“<DEVICE NAME>”)
The full request string will be:
api/v2.1/cdt_port/?fields=cdt_device.name,name,yesterdayTx,todayTx&todayTx_field=TxUtil&formats=max,min,avg,95th&todayTx_timefilter=range=start_of_today to now&yesterdayTx_field=TxUtil&yesterdayTx_timefilter=range=start_of_today -1d to start_of_today&cdt_device.name_filter=IS(“<DEVICE NAME>”)&links=none
The response is:
{
"info": "The Statseeker RESTful API",
"version": "2.1",
"data": {
"objects": [
{
"status": {
"errcode": 0,
"success": true
},
"data": [
{
"cdt_device.name": "<DEVICE NAME>",
"yesterdayTx": {
"max": 9.85221,
"avg": 9.78883,
"95th": 9.84576,
"min": 8.00007
},
"name": "Gi0/1",
"todayTx": {
"max": 9.8516,
"avg": 9.80882,
"95th": 9.84672,
"min": 9.34127
},
"id": 1975
},
{
"cdt_device.name": "<DEVICE NAME>",
"yesterdayTx": {
"max": 786.667,
"avg": 294.361,
"95th": 766.18,
"min": 83.6935
},
"name": "Gi0/2",
"todayTx": {
"max": 543.072,
"avg": 182.109,
"95th": 497.423,
"min": 95.8462
},
"id": 1976
},
{
"cdt_device.name": "<DEVICE NAME>",
"yesterdayTx": {
"max": 783.816,
"avg": 256.446,
"95th": 714.649,
"min": 94.7673
},
"name": "Gi0/3",
"todayTx": {
"max": 786.667,
"avg": 495.852,
"95th": 781.882,
"min": 96.1355
},
"id": 1977
},
{
"cdt_device.name": "<DEVICE NAME>",
"yesterdayTx": {
"max": 7.48404,
"avg": 3.68874,
"95th": 4.88369,
"min": 0.995533
},
"name": "Gi0/4",
"todayTx": {
"max": 7.70181,
"avg": 4.54324,
"95th": 5.55599,
"min": 2.71828
},
"id": 1978
},
{
"cdt_device.name": "<DEVICE NAME>",
"yesterdayTx": {
"max": 8.37714,
"avg": 2.99202,
"95th": 6.29977,
"min": 0.96669
},
"name": "Gi0/5",
"todayTx": {
"max": 8.24358,
"avg": 6.07991,
"95th": 7.4297,
"min": 3.94083
},
"id": 1979
}
],
"type": "cdt_port",
"data_total": 5,
"sequence": 0
}
],
"errmsg": "ok",
"success": true,
"time": 1533779107
},
"revision": "13"
}
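The same named-field request can also be composed from Python; a minimal sketch (assuming basic authentication is enabled), using a params dict so the timefilter values are URL-encoded automatically:
import requests

server = 'your.statseeker.server'
auth = ('api_user', 'user_password')

params = {
    'fields': 'cdt_device.name,name,yesterdayTx,todayTx',
    # both named fields refer to TxUtil, each with its own timefilter
    'todayTx_field': 'TxUtil',
    'todayTx_timefilter': 'range=start_of_today to now',
    'yesterdayTx_field': 'TxUtil',
    'yesterdayTx_timefilter': 'range=start_of_today -1d to start_of_today',
    'formats': 'max,min,avg,95th',
    'cdt_device.name_filter': 'IS("<DEVICE NAME>")',
    'links': 'none',
}
resp = requests.get(f'https://{server}/api/v2.1/cdt_port/', params=params, auth=auth, verify=False)
print(resp.json())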
Named fields are also required when:
- A resource has multiple links to the same resource type – one of these links is the default and the others are not
- You want to reference one of the non-default linked resources
A named field is defined for each field you want returned from a non-default linked resource; see Resources with Multiple Links for details.
Field Formulas
Field formulas allow you to request a user-defined/named field (for more information see named fields) and populate that field via a formula based on other fields returned in the request. The formula accepts SQL operators and conventions, and can reference the values of other fields returned by the query.
To define the formula:
- The named field must be specified in the fields parameter
- Use the following parameters to define the formula used to populate the field:
- <MY NAMED FIELD>_formula – use when the query does not aggregate data or when the query aggregates data and the formula should be applied prior to aggregation
- <MY NAMED FIELD>_post_formula – use when the query aggregates data and the formula should be applied after aggregation has occurred
Both pre- and post-aggregation formulas can be applied in a single query, but a single named field cannot use both. In that case, specify two named fields: one for the pre-aggregation formula and one for the post-aggregation formula.
When referencing other fields in your formula use the following syntax:
- Configuration field: {field}, e.g. {ping_poll}
- Timeseries/Event fields must also define a data format: {field:format}, e.g. {ping_rtt:current}
- Every field specified in the formula must also be specified in the fields parameter
- Every data format applied to a field referenced in a formula must also be specified for that field elsewhere in the query, either in the global formats parameter, or the field specific <FIELD>_formats parameter
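Because formulas contain spaces, braces and quotes, they generally need URL (percent) encoding if you build the query string by hand, as the raw request URLs in this guide do. A minimal sketch (assuming basic authentication is enabled) using Python's standard-library quote() and the InMB example that follows:
import requests
from urllib.parse import quote

server = 'your.statseeker.server'
auth = ('api_user', 'user_password')

formula = '{InOctets:vals}/1024/1024'
timefilter = 'range = now -30m to now'
name_filter = 'IS("<DEVICE NAME>")'

# Percent-encode the values that contain spaces, braces and quotes
url = (f'https://{server}/api/v2.1/cdt_port/'
       '?fields=cdt_device.name,name,InOctets,InMB'
       '&formats=vals'
       f'&timefilter={quote(timefilter)}'
       f'&InMB_formula={quote(formula)}'
       f'&cdt_device.name_filter={quote(name_filter)}')
print(url)  # inspect the encoded query string
resp = requests.get(url, auth=auth, verify=False)
print(resp.json())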
Formula Example: Convert InOctets to Mebibytes
Request: /api/v2.1/cdt_port/?fields=cdt_device.name,name,InOctets,InMB&timefilter=range = now -30m to now&formats=vals&InMB_formula={InOctets:vals}/1024/1024&cdt_device.name_filter=IS(“<DEVICE NAME>”)
The request:
- Defines a named field, InMB
- Specifies the formula to use to populate that field: InMB_formula={InOctets:vals} / 1024 / 1024
- Includes every field referenced within the formula, along with any required data format – in this case {InOctets:vals}
Response:
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1597795111,
"objects": [
{
"type": "cdt_port",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 2,
"data": [
{
"cdt_device.name": "<DEVICE NAME>",
"name": "int.2",
"InOctets": {
"vals": [
72506880,
72525312,
72607744,
72499968,
...
73761536,
72515328
]
},
"InMB": [
69.1479,
69.1655,
69.2441,
69.1414,
...
70.3445,
69.156
],
"id": 591
},
{
"cdt_device.name": "<DEVICE NAME>",
"name": "int.3",
"InOctets": {
"vals": [
1204222976,
1162312704,
1141559296,
1186676992,
...
1385965824,
1365224192
]
},
"InMB": [
1148.44,
1108.47,
1088.68,
1131.7,
...
1321.76,
1301.98
],
"id": 592
}
]
}
]
},
"links": []
}
Formula Example: Ping Latency Gauge using a Case Statement
Request: /api/v2.1/cdt_device/?fields=name,ping_rtt,status&timefilter=range = now -30m to now&formats=avg&status_formula=CASE WHEN {ping_rtt:avg} < 10 THEN ‘Green’ WHEN {ping_rtt:avg} < 50 THEN ‘Amber’ WHEN {ping_rtt:avg} IS NULL THEN ‘Unknown’ ELSE ‘Red’ END
The request:
- Defines a named field, status
- Provides a case statement to populate that field: status_formula=CASE WHEN {ping_rtt:avg} < 10 THEN ‘Green’ WHEN {ping_rtt:avg} < 50 THEN ‘Amber’ WHEN {ping_rtt:avg} IS NULL THEN ‘Unknown’ ELSE ‘Red’ END
- Includes every field referenced within the formula, along with any required data format – in this case {ping_rtt:avg}
Response:
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1597793987,
"objects": [
{
"type": "cdt_device",
"sequence": 360,
"status": {
"success": true,
"errcode": 0
},
"data_total": 226,
"data": [
{
"name": "NewYork-ups1",
"ping_rtt": {
"avg": 35.4
},
"status": "Amber",
"id": 338
},
{
"name": "NewYork-ups2",
"ping_rtt": {
"avg": 41.7333
},
"status": "Amber",
"id": 339
},
{
"name": "NewYork-swt2",
"ping_rtt": {
"avg": 52.6
},
"status": "Red",
"id": 347
},
...
...
{
"name": "Phoenix-swt1",
"ping_rtt": {
"avg": 41.6667
},
"status": "Amber",
"id": 387
}
]
}
]
},
"links": []
}
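For comparison only (this is not part of the API), the same Green/Amber/Red classification expressed as a small Python function, which can be useful for sanity-checking the status values returned above:
def ping_status(avg_rtt):
    # Mirrors the CASE statement: <10 Green, <50 Amber, NULL Unknown, otherwise Red
    if avg_rtt is None:
        return 'Unknown'
    if avg_rtt < 10:
        return 'Green'
    if avg_rtt < 50:
        return 'Amber'
    return 'Red'

print(ping_status(35.4))   # Amber, matching NewYork-ups1 above
print(ping_status(52.6))   # Red, matching NewYork-swt2 above
print(ping_status(None))   # Unknown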
Formula Example: Nested Formulas
Request: /api/v2.1/cdt_device/?fields=name,sysLocation,ping_rtt,status,prettyPing,prettyStatus&timefilter=range = now -30m to now&formats=avg&status_formula=CASE WHEN {ping_rtt:avg} < 10 THEN ‘Green’ WHEN {ping_rtt:avg} < 50 THEN ‘Amber’ WHEN {ping_rtt:avg} IS NULL THEN ‘Unknown’ ELSE ‘Red’ END&prettyPing_formula={ping_rtt:avg} || ‘ ms’&prettyStatus_formula={status} || ‘ (‘ || {prettyPing} || ‘)’&sysLocation_filter=IS(“LosAngeles”)
The request:
- Defines multiple named fields: status, prettyPing, prettyStatus
- Defines a formula to populate each:
- status_formula=CASE WHEN {ping_rtt:avg} < 10 THEN ‘Green’ WHEN {ping_rtt:avg} < 50 THEN ‘Amber’ WHEN {ping_rtt:avg} IS NULL THEN ‘Unknown’ ELSE ‘Red’ END
- prettyPing_formula={ping_rtt:avg} || ‘ ms’
- prettyStatus_formula={status} || ‘ (‘ || {prettyPing} || ‘)’ – requires output from other formula fields
- Includes every field referenced within the formulas, along with any required data format – in this case {ping_rtt:avg}, {status} and {prettyPing}
Response:
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1597796468,
"objects": [
{
"type": "cdt_device",
"sequence": 360,
"status": {
"success": true,
"errcode": 0
},
"data_total": 12,
"data": [
{
"name": "LosAngeles-ups1",
"sysLocation": "LosAngeles",
"ping_rtt": {
"avg": 38.1333
},
"status": "Amber",
"prettyPing": "38.1333 ms",
"prettyStatus": "Amber (38.1333 ms)",
"id": 350
},
{
"name": "LosAngeles-ups2",
"sysLocation": "LosAngeles",
"ping_rtt": {
"avg": 32.4
},
"status": "Amber",
"prettyPing": "32.4 ms",
"prettyStatus": "Amber (32.4 ms)",
"id": 351
},
...
...
{
"name": "LosAngeles-rtr",
"sysLocation": "LosAngeles",
"ping_rtt": {
"avg": 43.8
},
"status": "Amber",
"prettyPing": "43.8 ms",
"prettyStatus": "Amber (43.8 ms)",
"id": 361
}
]
}
]
},
"links": []
}
Common Resources: Examples and Sample Code
Most of the processes and objects that you are used to interacting with in Statseeker (network discovery, users, groups, devices, interfaces, alerts, etc.) are exposed as resources by the API, and by querying the API these resources can be returned as data objects which you can review and modify.
Statseeker exposes many resources via the API, but most API users will spend the majority of their time working with just a few. The most commonly used resources are presented below, along with examples of interacting with them.
- A full reference to all resources is available from the Resource Reference
- For additional detail and examples on applying filters to API queries, see Working with API Filters
Users
The User resource allows you to interact with (create, edit, delete, and report on) Statseeker user records.
The user object
Field ID | Field Title | Type | Get, Add, Update | Description |
api | API Access | string | G, A, U | User API access permission |
auth | Authentication method | string | G, A, U | User authentication method |
auth_refresh | Authentication Refresh | integer | G, A, U | The time allowed after a token has expired that it will be refreshed (in seconds) |
auth_ttl | Authentication TTL | integer | G, A, U | The TTL for authentication tokens (in seconds) |
email | Email | string | G, A, U | User email address |
exportDateFormat | Date Format | string | G, A, U | User specified Date Format |
id | ID | integer | G | User Identifier |
is_admin | Is Admin | integer | G, A, U | Whether the user has admin access |
name | Name | string | G, A (required) | User name |
password | Password | string | G, A, U | User password. This is a private field; GET requests will instead return the state, one of:
|
reportRowSpacing | Report Row Spacing | string | G, A, U | The report row spacing preference for the user |
top_n | Top N | integer | G, A, U | The default Top N number for the user |
tz | Time Zone | string | G, A, U | User time zone |
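Before the create and update examples below, a minimal sketch (assuming basic authentication is enabled) that lists existing users, returning a few of the fields described in the table above:
import requests

server = 'your.statseeker.server'
auth = ('api_user', 'user_password')

params = {'fields': 'name,email,api,tz', 'links': 'none', 'indent': 3}
resp = requests.get(f'https://{server}/api/v2.1/user/', params=params, auth=auth, verify=False)
print(resp.json())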
Example: Creating a New User
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-X POST \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
"https://your.statseeker.server/api/v2.1/user/?indent=3" \
-d '{"data":[{"name":"<USER NAME>","password":"<PASSWORD>","auth":"<AUTHENTICATION METHOD>","email":"<EMAIL ADDRESS>","api":"<API PERMISSION>","tz":"<TIMEZONE>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.post(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.post(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/user'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"name":"<USER NAME>","password":"<PASSWORD>","auth":"<AUTHENTICATION METHOD>","email":"<EMAIL ADDRESS>","api":"<API PERMISSION>","tz":"<TIMEZONE>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623643182,
"objects":[
{
"type":"user",
"sequence":1,
"status":{
"success":true,
"errcode":0
},
"data_total":1,
"data":[
"api":"<API PERMISSION>",
"auth":"<AUTHENTICATION METHOD>",
"email":"<EMAIL ADDRESS>",
"name":"<USER NAME>",
"password":"<PASSWORD>",
"tz":"<TIMEZONE>",
"id":94551,
"is_admin":0
}
]
}
]
}
}
Example: Updating an Existing User
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/user/<USER ID>?indent=3" \
-d '{"data":[{"api":"<API PERMISSION>","tz":"<NEW TIMEZONE>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/user/<USER ID>'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"api":"<API PERMISSION>","tz":"<NEW TIMEZONE>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623644832
}
}
Devices (cdt_device)
The cdt_device resource allows you to interact with your currently discovered network devices.
The cdt_device object
Field ID | Field Title | Type | Get, Add, Update | Description |
auth_method | SNMPv3 Authentication Method | string | G | Authentication method for SNMPv3 devices |
auth_pass | SNMPv3 Authentication Password | string | G | Authentication password for SNMPv3 devices. This is a private field; GET requests will instead return the state, one of:
|
auth_user | SNMPv3 Authentication Username | string | G | Authentication user for SNMPv3 devices. This is a private field; GET requests will instead return the state, one of:
|
chassisId | Chassis ID | string | G | The string value used to identify the chassis component associated with the local system. |
community | Community | string | G | The community string status of the device. This is a private field; GET requests will instead return the state, one of:
|
context | SNMPv3 Context | string | G | Context for SNMPv3 devices |
custom_data_details | Custom Data Details | string | G, U | A JSON string object indicating which custom entities have been found during a discovery |
default_poller | Default Poller | string | G, A, U | The poller where ping metrics for this device are polled from |
deviceid | Device ID | integer | G | The ID of the parent device |
discover_getNext | Use GetNext | string | G, U | Walk this device using SNMP getNext instead of getBulk |
discover_minimal | Use Minimal Walk | string | G, U | Walk this device using a minimal set of oids |
discover_snmpv1 | Use SNMPv1 | string | G, U | Walk this device using SNMPv1 |
hostname | Hostname | string | G, A, U | The hostname of the device |
id | ID | integer | G | The entity identifier |
idx | Index | string | G | The base SNMP index for this entity |
ipaddress | IP Address | string | G, A (required), U | The IP address of the device |
latitude | Latitude | float | G, A, U | The user defined latitude of the device’s location |
longitude | Longitude | float | G, A, U | The user defined longitude of the device’s location |
manual_name | User Defined Name | string | G, A, U | The user defined name of the device |
memorySize | Memory Size | integer | G | The amount of physical read-write memory contained by the entity |
mis | MAC/IP/Switch Collection State | string | G, U | Include this device in the MIS report calculations |
name | Name | string | G | The entity name |
ping_dup | Ping Duplicate | integer | G | Number of duplicate ping responses received from the default poller. Timeseries Data: Stats, Formats & Options |
ping_jitter | Ping Jitter | float | G | The ping rtt jitter to this device in ms to 3 decimal places from the default poller. Timeseries Data: Stats, Formats & Options |
ping_lost | Ping Lost | integer | G | Number of times that a ping request was lost from the default poller. Timeseries Data: Stats, Formats & Options |
ping_lost_percent | Ping Lost Percent | integer | G | The percentage of lost ping packets from the default poller. Timeseries Data: Stats, Formats & Options |
ping_outage | Ping Outage | integer | G | Number of seconds to wait before a device is considered to be down |
ping_poll | Ping Poll | string | G, A, U | The ping polling status of the device from the default poller |
ping_received | Ping Received | integer | G | Number of ping requests received from the default poller. Timeseries Data: Stats, Formats & Options |
ping_rtt | Ping RTT | float | G | The current 1 minute average ping rtt to this device in ms to 3 decimal places from the default poller. Timeseries Data: Stats, Formats & Options |
ping_rtt_max | Maximum Ping RTT | float | G | The maximum ping rtt to this device in ms to 3 decimal places from the default poller. Timeseries Data: Stats, Formats & Options |
ping_rtt_min | Minimum Ping RTT | float | G | The minimum ping rtt to this device in ms to 3 decimal places from the default poller. Timeseries Data: Stats, Formats & Options |
ping_sent | Ping Sent | integer | G | Number of ping requests sent from the default poller. Timeseries Data: Stats, Formats & Options |
ping_state | Ping State | string | G | The current ping state of the device from the default poller. Can be combined with an event format for event-based analytics, see Event Formats. One of:
|
poll | Poll State | string | G, U | The poll state of the entity
|
priv_method | SNMPv3 Privacy Method | string | G | Privacy method for SNMPv3 devices |
priv_pass | SNMPv3 Privacy Password | string | G | Privacy password for SNMPv3 devices. This is a private field; GET requests will instead return the state, one of:
|
region | Region | string | G, A, U | The region of the device |
retired | Retired | string | G | The device has been Retired |
site | Site | string | G, A, U | The site location of the device |
snmpEngineID | SNMP Engine ID | string | G, U | An SNMP engine’s administratively-unique identifier |
snmp_credential | SNMP Credential | integer | G, A, U | The SNMP credentials currently used to poll the device |
snmp_maxoid | SNMP Max OID | integer | G, U | Maximum number of oids to poll in a single request |
snmp_poll | SNMP Poll | string | G, A, U | The SNMP polling status of the device |
snmp_state | SNMP State | string | G, U | The current SNMP state of the device from the default poller. Can be combined with an event format for event-based analytics, see Event Formats. One of:
|
snmp_version | SNMP Version | integer | G | The SNMP version of the device |
sysContact | Contact | string | G, U | The textual identification of the contact person for the entity |
sysDescr | System Description | string | G, U | A textual description of the entity |
sysLocation | Location | string | G, U | The physical location of the entity |
sysName | System Name | string | G, U | An administratively-assigned name for the entity |
sysObjectID | Object ID | string | G, U | The vendor’s authoritative identification of the network management subsystem contained in the entity |
sysServices | Services | integer | G, U | A value which indicates the set of services that the entity may potentially offer |
table | Table | string | G | The table to which the entity belongs |
vendor | Vendor | string | G, A, U | The vendor name for the device |
Links
Link | Description | Link Target |
macLink | Link to MAC | mac |
merakiDeviceLink | Link to Meraki Device | cdt_meraki_device |
snmpCredentialLink | Link to SNMP Credential | snmp_credential |
Example: Configuring a Device as Ping-Only
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/cdt_device/<DEVICE ID>/?indent=3" \
-d '{"data":[{"ping_poll":"on","snmp_poll":"off"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device/<DEVICE ID>'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"ping_poll":"on","snmp_poll":"off"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1623646976,
"objects": [
{
"type": "cdt_device",
"sequence": 1,
"status": {
"success": true,
"errcode": 0
},
"data_total": 1,
"data": [
{
"name": "Houston-swt1",
"id": <DEVICE ID>,
"community": "public",
"ipaddress": "10.100.47.253",
"snmp_version": 2,
"poll": "off",
"ping_poll": "on"
}
]
}
]
}
}
Example: Returning Details on all Ping-Only Devices
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/?fields=name,ipaddress,sysLocation,snmp_poll,ping_poll&snmp_poll_filter=IS(%27off%27)&ping_poll_filter=IS(%27on%27)"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# specify fields to be returned
query += '/?fields=name,ipaddress,sysLocation,snmp_poll,ping_poll'
# Filters
query += '&snmp_poll_filter=IS("off")&ping_poll_filter=IS("on")'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623647384,
"objects":[
{
"type":"cdt_device",
"sequence":1,
"status":{
"success":true,
"errcode":0
},
"data_total":2,
"data":[
{
"name":"Houston-swt1",
"ipaddress":"10.100.47.253",
"sysLocation":"Houston",
"snmp_poll":"off",
"ping_poll":"on",
"id":425
},
{
"name":"Adelaide-swt1",
"ipaddress":"10.100.89.253",
"sysLocation":"Adelaide",
"snmp_poll":"off",
"ping_poll":"on",
"id":433
}
]
}
]
}
}
Example: Requesting Ping Metrics for a Device
Return ping round-trip times for the last 15 minutes, as well as the average round-trip time for that period.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device/cdt_device/<DEVICE ID>?fields=name,ping_rtt&ping_rtt_formats=avg,vals&ping_rtt_timefilter=range%3Dnow%20-%2015m%20TO%20now"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device/<DEVICE ID>'
# specify fields to be returned
query += '/?fields=name,ping_rtt'
# specify data formats
query += '&ping_rtt_formats=avg,vals'
# Filters
query += '&ping_rtt_timefilter=range%3Dnow%20-%2015m%20TO%20now'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623648677,
"objects":[
{
"type":"cdt_device",
"sequence":1,
"status":{
"success":true,
"errcode":0
},
"data_total":1,
"data":[
{
"name":"Houston-rtr",
"ping_rtt":{
"vals":[
23,
28,
30,
26,
32,
21,
28,
27,
26,
25,
30,
27,
30,
27
],
"avg":27.1429
},
"id":<DEVICE ID>
}
]
}
]
}
}
Example: Return Ping Availability Percentage for a group of devices, during business hours last month
Show the ping availability for each device in a specified group during business hours (6am – 6pm) over the previous month.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device?fields=name,ipaddress,ping_state&ping_state_formats=avl_inPercent&ping_state_filter_format=state&ping_state_timefilter=range=start_of_last_month%20to%20end_of_last_month;%20time%20=%2006:00%20to%2018:00;&ping_state_states=up&groups=<GROUP NAME>&links=none&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# specify fields to be returned
query += '/?fields=name,ipaddress,ping_state'
# specify data formats
query += '&ping_state_formats=avl_inPercent'
# Filters
query += '&ping_state_filter_format=state&ping_state_states=up&groups=<GROUP NAME>&ping_state_timefilter=range=start_of_last_month to end_of_last_month;time = 06:00 to 18:00;'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1615862113,
"objects": [
{
"type": "cdt_device",
"sequence": 307,
"status": {
"success": true,
"errcode": 0
},
"data_total": 59,
"data": [
{
"name": "NewYork-rtr",
"ipaddress": "10.100.44.254",
"ping_state": {
"avl_inPercent": 93.4969
},
"id": 175
},
{
"name": "Chicago-rtr",
"ipaddress": "10.100.46.254",
"ping_state": {
"avl_inPercent": 98.1472
},
"id": 195
},
{
"name": "Rome-rtr",
"ipaddress": "10.100.52.254",
"ping_state": {
"avl_inPercent": 96.6486
},
"id": 247
},
{
"name": "Paris-rtr",
"ipaddress": "10.100.53.254",
"ping_state": {
"avl_inPercent": 96.6329
},
"id": 260
},
{
"name": "Barcelona-rtr",
"ipaddress": "10.100.54.254",
"ping_state": {
"avl_inPercent": 97.481
},
"id": 269
},
{
"name": "Hamburg-rtr",
"ipaddress": "10.100.55.254",
"ping_state": {
"avl_inPercent": 97.6203
},
"id": 276
},
{
"name": "Budapest-rtr",
"ipaddress": "10.100.56.254",
"ping_state": {
"avl_inPercent": 91.905
},
"id": 284
},
{
"name": "Warsaw-rtr",
"ipaddress": "10.100.57.254",
"ping_state": {
"avl_inPercent": 97.7061
},
"id": 292
},
{
"name": "Istanbul-rtr",
"ipaddress": "10.100.61.254",
"ping_state": {
"avl_inPercent": 96.1347
},
"id": 313
},
{
"name": "Helsinki-rtr",
"ipaddress": "10.100.62.254",
"ping_state": {
"avl_inPercent": 96.4435
},
"id": 320
},
{
"name": "Lisbon-rtr",
"ipaddress": "10.100.64.254",
"ping_state": {
"avl_inPercent": 98.4008
},
"id": 331
},
{
"name": "Copenhagen-rtr",
"ipaddress": "10.100.66.254",
"ping_state": {
"avl_inPercent": 96.6578
},
"id": 341
},
{
"name": "Zurich-rtr",
"ipaddress": "10.100.67.254",
"ping_state": {
"avl_inPercent": 97.0897
},
"id": 346
},
{
"name": "Auckland-rtr",
"ipaddress": "10.100.68.254",
"ping_state": {
"avl_inPercent": 97.4721
},
"id": 350
},
{
"name": "Wellington-rtr",
"ipaddress": "10.100.69.254",
"ping_state": {
"avl_inPercent": 96.656
},
"id": 353
},
{
"name": "Shanghai-rtr",
"ipaddress": "10.100.71.254",
"ping_state": {
"avl_inPercent": 97.1329
},
"id": 358
},
{
"name": "Taipei-rtr",
"ipaddress": "10.100.72.254",
"ping_state": {
"avl_inPercent": 95.4515
},
"id": 361
},
{
"name": "Tokyo-rtr",
"ipaddress": "10.100.73.254",
"ping_state": {
"avl_inPercent": 96.988
},
"id": 364
},
{
"name": "Singapore-rtr",
"ipaddress": "10.100.75.254",
"ping_state": {
"avl_inPercent": 96.3195
},
"id": 370
},
{
"name": "Jakarta-rtr",
"ipaddress": "10.100.76.254",
"ping_state": {
"avl_inPercent": 96.5654
},
"id": 373
},
{
"name": "Manila-rtr",
"ipaddress": "10.100.77.254",
"ping_state": {
"avl_inPercent": 95.6723
},
"id": 378
},
{
"name": "KualaLumpur-rtr",
"ipaddress": "10.100.78.254",
"ping_state": {
"avl_inPercent": 96.1833
},
"id": 381
},
{
"name": "Delhi-rtr",
"ipaddress": "10.100.80.254",
"ping_state": {
"avl_inPercent": 97.6462
},
"id": 387
},
{
"name": "Mumbai-rtr",
"ipaddress": "10.100.81.254",
"ping_state": {
"avl_inPercent": 96.5621
},
"id": 390
},
{
"name": "Bangalore-rtr",
"ipaddress": "10.100.82.254",
"ping_state": {
"avl_inPercent": 93.9592
},
"id": 393
},
{
"name": "Chennai-rtr",
"ipaddress": "10.100.84.254",
"ping_state": {
"avl_inPercent": 94.9474
},
"id": 399
},
{
"name": "Brisbane-rtr",
"ipaddress": "10.100.86.254",
"ping_state": {
"avl_inPercent": 95.2075
},
"id": 405
},
{
"name": "Melbourne-rtr",
"ipaddress": "10.100.87.254",
"ping_state": {
"avl_inPercent": 96.4931
},
"id": 408
},
{
"name": "Perth-rtr",
"ipaddress": "10.100.88.254",
"ping_state": {
"avl_inPercent": 95.4568
},
"id": 411
},
{
"name": "Cairo-rtr",
"ipaddress": "10.100.90.254",
"ping_state": {
"avl_inPercent": 95.8718
},
"id": 417
},
{
"name": "Johannesburg-rtr",
"ipaddress": "10.100.91.254",
"ping_state": {
"avl_inPercent": 93.7854
},
"id": 420
},
{
"name": "PortElizabeth-rtr",
"ipaddress": "10.100.93.254",
"ping_state": {
"avl_inPercent": 94.4832
},
"id": 426
},
{
"name": "Pretoria-rtr",
"ipaddress": "10.100.94.254",
"ping_state": {
"avl_inPercent": 95.6376
},
"id": 432
},
{
"name": "LosAngeles-rtr",
"ipaddress": "10.100.45.254",
"ping_state": {
"avl_inPercent": 97.3842
},
"id": 21115
},
{
"name": "Athens-rtr",
"ipaddress": "10.100.59.254",
"ping_state": {
"avl_inPercent": 94.7204
},
"id": 21118
},
{
"name": "Dublin-rtr",
"ipaddress": "10.100.65.254",
"ping_state": {
"avl_inPercent": 96.4443
},
"id": 21121
},
{
"name": "Beijing-rtr",
"ipaddress": "10.100.70.254",
"ping_state": {
"avl_inPercent": 96.2173
},
"id": 21122
},
{
"name": "Houston-rtr",
"ipaddress": "10.100.47.254",
"ping_state": {
"avl_inPercent": 97.7115
},
"id": 26477
},
{
"name": "London-rtr",
"ipaddress": "10.100.49.254",
"ping_state": {
"avl_inPercent": 97.5155
},
"id": 26478
},
{
"name": "CapeTown-rtr",
"ipaddress": "10.100.92.254",
"ping_state": {
"avl_inPercent": 95.6713
},
"id": 26480
},
{
"name": "Kolkata-rtr",
"ipaddress": "10.100.83.254",
"ping_state": {
"avl_inPercent": 95.9105
},
"id": 54838
},
{
"name": "Adelaide-rtr",
"ipaddress": "10.100.89.254",
"ping_state": {
"avl_inPercent": 96.9231
},
"id": 55175
},
{
"name": "Berlin-rtr",
"ipaddress": "10.119.50.254",
"ping_state": {
"avl_inPercent": 96.1095
},
"id": 90353
},
{
"name": "Madrid-rtr",
"ipaddress": "10.119.51.254",
"ping_state": {
"avl_inPercent": 94.8712
},
"id": 90357
},
{
"name": "Vienna-rtr",
"ipaddress": "10.119.58.254",
"ping_state": {
"avl_inPercent": 96.621
},
"id": 90419
},
{
"name": "Stockholm-rtr",
"ipaddress": "10.119.63.254",
"ping_state": {
"avl_inPercent": 94.0744
},
"id": 90448
},
{
"name": "Bangkok-rtr",
"ipaddress": "10.119.79.254",
"ping_state": {
"avl_inPercent": 94.4871
},
"id": 90506
},
{
"name": "Phoenix-rtr",
"ipaddress": "10.100.48.254",
"ping_state": {
"avl_inPercent": 93.6085
},
"id": 158201
},
{
"name": "Seoul-rtr",
"ipaddress": "10.100.74.254",
"ping_state": {
"avl_inPercent": 93.8363
},
"id": 209909
},
{
"name": "Sydney-rtr",
"ipaddress": "10.100.85.254",
"ping_state": {
"avl_inPercent": 96.3044
},
"id": 209912
}
]
}
]
}
}
Example: Deleting Multiple Devices via IP Address
Delete multiple devices in a single call by filtering the device list by ipaddress.
We will delete the devices matching two IP addresses, <IP ADDRESS 1> and <IP ADDRESS 2>.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X DELETE \
"https://your.statseeker.server/api/v2.1/cdt_device/cdt_device/"
-d '{"fields":{"ipaddress":{"field":"ipaddress","filter":{"query":"IN(\"<IP ADDRESS 1>",\"<IP ADDRESS 2>\")"}}}}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.delete(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.delete(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"fields":{"ipaddress":{"field":"ipaddress","filter":{"query":"IN(\"<IP ADDRESS 1>\",\"<IP ADDRESS 2>\")"}}}})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623649404
}
}
Interfaces (cdt_port)
The cdt_port resource allows you to interact with the network interfaces being monitored by Statseeker.
The cdt_port object
Field ID | Field Title | Type | Get, Add, Update | Description |
id | ID | integer | G | The entity identifier |
name | Name | string | G | The entity name |
deviceid | Device ID | integer | G | The ID of the parent device |
idx | Index | string | G | The base SNMP index for this entity |
table | Table | string | G | The table to which the entity belongs |
poll | Poll State | string | G, U | The poll state of the entity
|
InBroadcastPkts | Rx Bcast Pkts | integer | G | Number of received broadcast packets Timeseries Data: Stats, Formats & Options |
InBroadcastPps | Rx Bcast Pps | float | G | Number of received broadcast packets per second Timeseries Data: Stats, Formats & Options |
InDiscards | Rx Discards | integer | G | Number of received discards Timeseries Data: Stats, Formats & Options |
InErrors | Rx Errors | integer | G | Number of received errors Timeseries Data: Stats, Formats & Options |
InMulticastPkts | Rx Mcast Pkts | integer | G | Number of received multicast packets Timeseries Data: Stats, Formats & Options |
InMulticastPps | Rx Mcast Pps | float | G | Number of received multicast packets per second Timeseries Data: Stats, Formats & Options |
InOctets | Rx Bytes | integer | G | Number of received bytes Timeseries Data: Stats, Formats & Options |
InOutBroadcastPkts | Total Bcast Pkts | integer | G | Combined Rx and Tx broadcast packets Timeseries Data: Stats, Formats & Options |
InOutDiscards | Total Discards | integer | G | Combined Rx and Tx Discards Timeseries Data: Stats, Formats & Options |
InOutErrors | Total Errors | integer | G | Combined Rx and Tx Errors Timeseries Data: Stats, Formats & Options |
InOutMulticastPkts | Total Mcast Pkts | integer | G | Combined Rx and Tx multicast packets Timeseries Data: Stats, Formats & Options |
InOutOctets | Total Bytes | integer | G | Combined Rx and Tx Bytes Timeseries Data: Stats, Formats & Options |
InOutSpeed | Total Speed | integer | G | Combined Rx and Tx Speed |
InOutUcastPkts | Total Ucast Pkts | integer | G | Combined Rx and Tx unicast packets Timeseries Data: Stats, Formats & Options |
InUcastPkts | Rx Ucast Pkts | integer | G | Number of received unicast packets Timeseries Data: Stats, Formats & Options |
InUcastPps | Rx Ucast Pps | float | G | Number of received unicast packets per second Timeseries Data: Stats, Formats & Options |
OutBroadcastPkts | Tx Bcast Pkts | integer | G | Number of transmitted broadcast packets Timeseries Data: Stats, Formats & Options |
OutBroadcastPps | Tx Bcast Pps | float | G | Number of transmitted broadcast packets per second Timeseries Data: Stats, Formats & Options |
OutDiscards | Tx Discards | integer | G | Number of transmitted discards Timeseries Data: Stats, Formats & Options |
OutErrors | Tx Errors | integer | G | Number of transmitted errors Timeseries Data: Stats, Formats & Options |
OutMulticastPkts | Tx Mcast Pkts | integer | G | Number of transmitted multicast packets Timeseries Data: Stats, Formats & Options |
OutMulticastPps | Tx Mcast Pps | float | G | Number of transmitted multicast packets per second Timeseries Data: Stats, Formats & Options |
OutOctets | Tx Bytes | integer | G | Number of transmitted bytes Timeseries Data: Stats, Formats & Options |
OutUcastPkts | Tx Ucast Pkts | integer | G | Number of transmitted unicast packets Timeseries Data: Stats, Formats & Options |
OutUcastPps | Tx Ucast Pps | float | G | Number of transmitted unicast packets per second Timeseries Data: Stats, Formats & Options |
RxBps | Rx Bps | float | G | Received bits per second Timeseries Data: Stats, Formats & Options |
RxDiscardsPercent | Rx Discards Percent | float | G | Rx discards percentage Timeseries Data: Stats, Formats & Options |
RxErrorPercent | Rx Errors Percent | float | G | Rx errors percentage Timeseries Data: Stats, Formats & Options |
RxTxDiscardsPercent | Total Discards Percent | float | G | Total discards percentage Timeseries Data: Stats, Formats & Options |
RxTxErrorPercent | Total Errors Percent | float | G | Total errors percentage Timeseries Data: Stats, Formats & Options |
RxUtil | Rx Util | float | G | Rx Utilization Timeseries Data: Stats, Formats & Options |
TxBps | Tx Bps | float | G | Transmitted bits per second Timeseries Data: Stats, Formats & Options |
TxDiscardsPercent | Tx Discards Percent | float | G | Tx discards percentage Timeseries Data: Stats, Formats & Options |
TxErrorPercent | Tx Errors Percent | float | G | Tx errors percentage Timeseries Data: Stats, Formats & Options |
TxUtil | Tx Util | float | G | Tx Utilization Timeseries Data: Stats, Formats & Options |
if90day | if90day | integer | G, U | Status of port usage over 90 days, one of:
|
ifAdminStatus | ifAdminStatus | string | G, U | The desired state of the interface, one of:
Can be combined with an event format for event-based analytics, see Event Formats. |
ifAlias | Alias | string | G, U | Interface Alias (ifAlias) |
ifDescr | Description | string | G, U | Interface Description (ifDescr) |
ifDuplex | ifDuplex | string | G, U | Interface Duplex, one of:
|
ifInSpeed | Rx Speed | integer | G, U | Interface Input Speed (Statseeker custom attribute) |
ifIndex | ifIndex | string | G, U | Interface Index (IF-MIB.ifIndex) |
ifName | ifName | string | G, U | Interface Name (IF-MIB.ifName) |
ifNonUnicast | NUcast Polling | string | G, U | NonUnicast Polling status of the port, one of:
|
ifOperStatus | ifOperStatus | string | G, U | Current operational status of port, one of:
Can be combined with an event format for event-based analytics, see Event Formats. |
ifOutSpeed | Tx Speed | integer | G, U | Interface Output Speed (Statseeker custom attribute) |
ifPhysAddress | PhysAddress | string | G, U | Interface MAC Address (ifPhysAddress) |
ifPoll | ifPoll | string | G, U | Polling status of the port, one of:
|
ifSpeed | Speed | integer | G, U | Interface Speed (based on ifSpeed or ifHighSpeed) |
ifTitle | Title | string | G, U | Interface Title (Statseeker custom attribute – ifTitle) |
ifType | Type | string | G, U | Interface Type, one of:
|
Links
This resource has links to the following:
- deviceLink – Link to Device (cdt_device)
- macLink – Link to MAC (mac)
- merakiPortLink – Link to Meraki Interface (cdt_meraki_port)
Example: Turning Off an Interface
In this example we will use the device and interface name to identify the interface and then update the polling status on the selected interface.
- Start with a device name and an interface name; use these to get the interface ID
- Use the interface ID to update the polling status of the interface
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_port/?fields=name,cdt_device.name&name_filter=IS(%27<INTERFACE NAME>%27)&cdt_device.name_filter=IS(%27<DEVICE NAME>%27)&indent=3&links=none"
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/cdt_port/<INTERFACE ID>/?indent=3"
-d '{"data":[{%27poll%27:%27off%27}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqType, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
if reqType == "get":
resp = requests.get(url, headers=headers, verify=False)
else:
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
if reqType == "get":
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
else:
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '/?fields=name,cdt_device.name'
# Filters
query += '&name_filter=IS("<INTERFACE NAME>")&cdt_device.name_filter=IS("<DEVICE NAME>")'
# optional response formatting
query += '&indent=3&links=none'
# set request type
reqType = 'get'
data=''
portId=''
# Run the request
resp = do_request(server, query, user, pword, reqType, data)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
portId=str(resp.json()['data']['objects'][0]['data'][0]['id'])
print("Interface ID: " + portId)
else:
print(f'Error: {resp.status_code} {resp.reason}')
# build next query
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port/' + portId
# optional response formatting
query += '/?indent=3&links=none'
# update poll status to turn off polling
data = json.dumps({"data":[{"poll":"off"}]})
# set request type
reqType = 'put'
# Run the request
resp = do_request(server, query, user, pword, reqType, data)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
#This response includes some logging (preceding the response data object) which is only relevant to the Python and Ruby implementations
200
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623655171,
"objects":[
{
"type":"cdt_port",
"sequence":5,
"status":{
"success":true,
"errcode":0
},
"data_total":1,
"data":[
{
"name":"<INTERFACE NAME>",
"cdt_device.name":"<DEVICE NAME>",
"id":<INTERFACE ID>"
}
]
}
]
}
}
"Interface ID":<INTERFACE ID>"
200
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623655172
}
}
Example: Requesting Timeseries Data from all Interfaces on a Device
Request:
- Identifying data: the name and ID of the interface
- Timeseries data: total inbound/outbound discard percentage for the reporting period; minimum, average, and 95th percentile values for inbound/outbound utilization
- Time filter of ‘start of the day until now’
- Sort results on the average inbound utilization, descending
- Filter the results to show only those interfaces on a specific device
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_port/?fields=deviceid,id,name,RxTxDiscardsPercent,TxUtil,RxUtil&RxUtil_sort=1,desc,avg&deviceid_filter=IS(%22<DEVICE ID>%22)&RxUtil_formats=95th,avg,min&TxUtil_formats=95th,avg,min&RxTxDiscardsPercent_formats=total&timefilter=range=start_of_today%20to%20now"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '/?fields=deviceid,id,name,RxTxDiscardsPercent,TxUtil,RxUtil'
# specify data formats
query += '&formats=95th,avg,min&RxTxDiscardsPercent_formats=total'
# specify filters
query += '&deviceid_filter=IS("<DEVICE ID>")&RxUtil_sort=1,desc,avg&timefilter=range=start_of_today to now'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
'version':'2.1',
'revision':'11',
'info':'The Statseeker RESTful API',
'data':{
'success':True,
'errmsg':'ok',
'time':1623714662,
'objects':[
{
'type':'cdt_port',
'sequence':11,
'status':{
'success':True,
'errcode':0
},
'data_total':5,
'data':[
{
'deviceid':<DEVICE ID>,
'id':5267,
'name':'Gi0/1',
'RxTxDiscardsPercent':{
'total':0
},
'TxUtil':{
'min':1.40207,
'avg':1.88639,
'95th':2.19703
},
'RxUtil':{
'min':1.27934,
'avg':1.87145,
'95th':2.11322
}
},
{
'deviceid':<DEVICE ID>,
'id':5268,
'name':'Gi0/2',
'RxTxDiscardsPercent':{
'total':0
},
'TxUtil':{
'min':1.20236,
'avg':1.9108,
'95th':2.26653
},
'RxUtil':{
'min':1.52055,
'avg':1.8971,
'95th':2.14601
}
},
{
'deviceid':<DEVICE ID>,
'id':5269,
'name':'Gi0/3',
'RxTxDiscardsPercent':{
'total':0.00919672
},
'TxUtil':{
'min':9.02538,
'avg':32.583,
'95th':71.7206
},
'RxUtil':{
'min':9.19671,
'avg':45.9247,
'95th':71.0976
}
},
{
'deviceid':<DEVICE ID>,
'id':5270,
'name':'Gi0/4',
'RxTxDiscardsPercent':{
'total':0
},
'TxUtil':{
'min':1.19819,
'avg':25.2168,
'95th':37.0137
},
'RxUtil':{
'min':0.948108,
'avg':23.3934,
'95th':37.0401
}
},
{
'deviceid':<DEVICE ID>,
'id':5271,
'name':'Gi0/5',
'RxTxDiscardsPercent':{
'total':0
},
'TxUtil':{
'min':0.833396,
'avg':0.933941,
'95th':0.950772
},
'RxUtil':{
'min':0.833468,
'avg':0.933937,
'95th':0.950873
}
}
]
}
]
}
}
Example: Return the 10 Most Congested Interfaces (inbound) over the Last 30mins
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
'https://your.statseeker.server/api/v2.1/cdt_port/?fields=deviceid,id,name,RxUtil&RxUtil_formats=vals,avg&RxUtil_timefilter=range=now%20-%2030m%20to%20now%20-%201m&RxUtil_sort=1,desc,avg&limit=10&indent=3'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '/?fields=deviceid,id,name,RxUtil'
# specify data formats
query += '&RxUtil_formats=vals,avg'
# Filters and sorting
query += '&RxUtil_timefilter=range=now - 30m to now - 1m&limit=10&RxUtil_sort=1,desc,avg'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623715763,
"objects":[
{
"type":"cdt_port",
"sequence":11,
"status":{
"success":true,
"errcode":0
},
"data_total":21178,
"data":[
{
"deviceid":666,
"id":25349,
"name":"int.673",
"RxUtil":{
"vals":[
58.412,
59.2569,
62.7188,
62.8199,
60.9338,
61.2049,
62.5076,
64.7656,
68.891,
74.0913,
76.311,
79.5807,
82.8285,
89.1304,
93.4463,
94.0603,
97.8606,
102.481,
108.23,
108.733,
109.634,
111.784,
110.537,
118.258,
120.432,
121.998,
124.984,
131.161,
129.887
],
"avg":91.2737
}
},
{
"deviceid":576,
"id":14508,
"name":"Gi2/6",
"RxUtil":{
"vals":[
14.7061,
84.3344,
91.9828,
93.1667,
94.9071,
95.082,
94.8615,
91.9828,
94.6667,
95,
95.1452,
94.8721,
93.4828,
93.3333,
96.3333,
95.2404,
91.9404,
93.1525,
93.3333,
91.8333,
94.8118,
96.3663,
93.6552,
91.8333,
91.6667,
94.8118,
94.8721,
93.4828,
93.3333
],
"avg":90.8341
}
},
{
"deviceid":607,
"id":19079,
"name":"Gi5/1",
"RxUtil":{
"vals":[
32.9511,
82.6984,
94.9561,
93.7022,
92.1038,
93.1148,
95.8307,
92.4708,
94.5127,
90.6501,
91.9098,
95.467,
95.2381,
95,
95.0683,
92.6169,
95.4101,
95.1675,
93.6635,
93.4426,
94.8462,
94.8999,
94.8992,
92.2081,
94.4444,
93.8757,
95.8758,
18.9708,
2.16111
],
"avg":85.7985
}
},
{
"deviceid":560,
"id":12370,
"name":"Gi4/24",
"RxUtil":{
"vals":[
93.6138,
96.0178,
93.7018,
94.878,
95.082,
95.0123,
94.928,
94.9887,
95,
91.9887,
94.7371,
94.8776,
96.4521,
95.2669,
94.9375,
82.538,
13.7182,
1.89558,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
],
"avg":84.4241
}
},
{
"deviceid":572,
"id":14061,
"name":"Gi1/16",
"RxUtil":{
"vals":[
94.9785,
94.8721,
91.9828,
93.1667,
94.9071,
95.082,
94.8615,
91.9828,
94.6667,
95,
95.1452,
94.8721,
94.9828,
93.5,
93.2316,
93.521,
91.7614,
93.1525,
94.8333,
92,
94.8118,
96.3663,
93.6552,
60.9933,
7.39,
1.83817,
1.91872,
1.8731,
1.86667
],
"avg":77.2143
}
},
{
"deviceid":591,
"id":17044,
"name":"Gi6/20",
"RxUtil":{
"vals":[
1.87849,
36.1878,
87.6018,
93.6762,
94.8087,
95.082,
96.2094,
93.8999,
94.7404,
93.517,
93.6706,
95.8482,
95.2143,
95.0697,
95.0137,
92.5529,
95.5319,
95.1779,
93.5706,
93.3333,
94.9524,
94.9097,
96.3131,
95.2415,
93.5833,
22.1281,
3.58259,
1.87595,
1.895
],
"avg":77.1402
}
},
{
"deviceid":92491,
"id":93714,
"name":"Gi5/21",
"RxUtil":{
"vals":[
75.9317,
76.1367,
75.4496,
74.7143,
74.1968,
75.9783,
76.6457,
76.8767,
76.9577,
76.8083,
76.3996,
76.9991,
77.3082,
76.5974,
75.1579,
74.208,
75.7776,
74.1712,
75.5847,
76.9485,
73.8196,
73.5756,
74.4403,
73.1962,
73.1154,
70.7376,
71.3308,
74.8943,
76.6727
],
"avg":75.1942
}
},
{
"deviceid":468,
"id":7245,
"name":"Gi0/4",
"RxUtil":{
"vals":[
74.404,
74.1828,
76.5702,
77.0775,
77.3333,
75.5785,
73.5372,
74.079,
74.8937,
73.9347,
72.175,
73.2957,
72.9874,
76.0928,
76.1607,
74.4128,
74.838,
75.9916,
77.3333,
77.2873,
74.482,
72.616,
74.6077,
73.5125,
71.4153,
70.4813,
76.5989,
77.155,
74.8029
],
"avg":74.753
}
},
{
"deviceid":542,
"id":11260,
"name":"Gi2/20",
"RxUtil":{
"vals":[
95.1452,
94.9481,
93.4181,
94.8958,
93.6339,
94.7923,
93.3563,
91.9333,
94.6271,
93.428,
94.9672,
94.8721,
94.9828,
93.5984,
94.7434,
95.1367,
93.3778,
93.3103,
96.3333,
93.6667,
94.9785,
94.9481,
62.6062,
7.5439,
1.87197,
1.89782,
1.86734,
1.83621,
1.89333
],
"avg":74.2969
}
},
{
"deviceid":603,
"id":18345,
"name":"Gi3/11",
"RxUtil":{
"vals":[
75.2432,
76.9031,
74.3828,
73.5258,
73.8093,
74.4585,
75.7062,
73.6734,
74.6045,
74.9261,
73.7239,
74.2233,
75.8226,
75.2778,
74.8531,
76.0055,
74.3657,
75.7361,
73.3104,
70.7072,
72.137,
71.5355,
75.3817,
72.7648,
72.9251,
73.4308,
72.6468,
73.2978,
72.2705
],
"avg":74.0568
}
}
]
}
]
}
}
Example: Return the 10 Busiest Interfaces, According to their 90th Percentile Values, over the Last Hour
We are returning the device name (cdt_device.name) and interface name to identify each interface.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_port/?fields=cdt_device.name,name,RxUtil,InOctets&timefilter=range=now%20-%201h%20to%20now&RxUtil_formats=avg,max&InOctets_formats=percentile&InOctets_stats=%7B%22percentile%22:90%7D&InOctets_sort=1,desc,percentile&limit=10&indent=3&links=none"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '?fields=cdt_device.name,name,RxUtil,InOctets'
# specify data formats
query += '&RxUtil_formats=avg,max&InOctets_formats=percentile&InOctets_stats={"percentile":90}'
# Filters and sorting
query += '&timefilter=range=now -1h to now&InOctets_sort=1,desc,percentile&limit=10'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623720470,
"objects":[
{
"type":"cdt_port",
"sequence":11,
"status":{
"success":true,
"errcode":0
},
"data_total":21178,
"data":[
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Po15",
"RxUtil":{
"max":15.4582,
"avg":13.5477
},
"InOctets":{
"percentile":112727000000.0
},
"id":28487
},
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Tu2924",
"RxUtil":{
"max":7.72051,
"avg":6.92405
},
"InOctets":{
"percentile":57139800000.0
},
"id":28512
},
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Tu3937",
"RxUtil":{
"max":7.71764,
"avg":6.82324
},
"InOctets":{
"percentile":54429300000.0
},
"id":28563
},
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Tu2942",
"RxUtil":{
"max":7.61665,
"avg":5.65903
},
"InOctets":{
"percentile":53692000000.0
},
"id":28528
},
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Tu2912",
"RxUtil":{
"max":4.5065,
"avg":3.86295
},
"InOctets":{
"percentile":32249500000.0
},
"id":28504
},
{
"cdt_device.name":"CDT-IP-Interface-Stats",
"name":"Ethernet3/30",
"RxUtil":{
"max":38.799,
"avg":35.7174
},
"InOctets":{
"percentile":28524500000.0
},
"id":25042
},
{
"cdt_device.name":"CDT-IP-Interface-Stats",
"name":"Ethernet3/27",
"RxUtil":{
"max":38.9375,
"avg":34.1723
},
"InOctets":{
"percentile":28171200000.0
},
"id":25038
},
{
"cdt_device.name":"R-GMM-AGG-07",
"name":"Tu3936",
"RxUtil":{
"max":3.8409,
"avg":3.41211
},
"InOctets":{
"percentile":27906200000.0
},
"id":28562
},
{
"cdt_device.name":"CDT-IP-Interface-Stats",
"name":"Ethernet3/18",
"RxUtil":{
"max":38.4306,
"avg":35.0934
},
"InOctets":{
"percentile":27854000000.0
},
"id":25028
},
{
"cdt_device.name":"CDT-IP-Interface-Stats",
"name":"Ethernet3/11",
"RxUtil":{
"max":35.2323,
"avg":28.6036
},
"InOctets":{
"percentile":25326800000.0
},
"id":25021
}
]
}
]
}
}
Example: Return the Inbound Traffic 90th Percentile for the Last Hour on a specific Interface
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_port/<INTERFACE ID>/RxUtil?timefilter=range=%20now%20-%201h%20to%20now&formats=percentile&stats=%7B%22percentile%22:90%7D&links=none&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port/<INTERFACE ID>/RxUtil'
# specify data formats
query += '/?formats=percentile&stats={"percentile":90}'
# Filters
query += '&timefilter=range=now -1h to now'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
'version':'2.1',
'revision':'11',
'info':'The Statseeker RESTful API',
'data':{
'success':True,
'errmsg':'ok',
'time':1623730579,
'objects':[
{
'type':'cdt_port',
'sequence':11,
'status':{
'success':True,
'errcode':0
},
'data_total':1,
'data':[
{
'RxUtil':{
'percentile':96.375
},
'id':<INTERFACE ID>
}
]
}
]
}
}
Example: Return traffic rate metrics for an interface, during business hours on a specific date
We are:
- Filtering to a specified interface on a specified device
- Restricting the response to between 8am and 6pm (08:00 to 18:00) on March 5th, 2020
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
'https://your.statseeker.server/api/v2.1/cdt_port?fields=name,deviceid,cdt_device.name,ifAlias,ifDescr,RxBps&name_filter=IS(%22<INTERFACE NAME>%22)&cdt_device.name_filter=IS(%22<DEVICE NAME>%22)&formats=95th,median,avg&timefilter=range=2020-03-05%20to%202020-03-06;time=08:00%20to%2018:00&indent=3&links=none'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '/?fields=name,deviceid,cdt_device.name,ifAlias,ifDescr,RxBps'
# specify data formats
query += '&formats=95th,median,avg'
# Filters
query += '&name_filter=IS("<INTERFACE NAME>")&cdt_device.name_filter=IS("<DEVICE NAME>")&timefilter=range=2020-03-05 to 2020-03-06;time=08:00 to 18:00'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1615501619,
"objects": [
{
"type": "cdt_port",
"sequence": 1,
"status": {
"success": true,
"errcode": 0
},
"data_total": 1,
"data": [
{
"name": "<INTERFACE NAME>",
"deviceid": 26478,
"cdt_device.name": "<DEVICE NAME>",
"ifAlias": "Link to SanJose-core",
"ifDescr": "GigabitEthernet0/1",
"RxBps": {
"avg": 19653900.0,
"median": 19410600.0,
"95th": 23579900.0
},
"id": 26486
}
]
}
]
}
}
Groups
The group resource allows you to create and populate the groups that Statseeker uses for reporting, and for authorization when restricting a user's visibility and access to functionality within Statseeker.
The group Object
Field ID | Field Title | Type | Get, Add, Update | Description |
entities | Entities | object | G, U | The entities that are assigned to this group |
id | ID | integer | G | ID of the group |
name | Name | string | G, A (required), U | Name of the group |
Group Mode
The group object requires that a mode be specified when performing PUT requests that change the group's members (entities). The mode specifies what the API should do with the supplied entities.
Mode | Description |
clear | Remove the specified entities from the group |
set | Remove all existing entities and then add the specified entities to the group |
add | Add the specified entities to the group |
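As a sketch of how each mode shapes the request body (the {"value": [...], "mode": "..."} structure matches the Populate example below; the entity IDs are placeholders):
# Sketch only: request bodies for each group mode, using the
# {"value": [...], "mode": "..."} structure shown in the Populate example below.
# The entity IDs are placeholders.
import json

add_body   = json.dumps({"value": [286, 287, 288], "mode": "add"})    # add these entities to the group
set_body   = json.dumps({"value": [286, 287, 288], "mode": "set"})    # replace all members with these entities
clear_body = json.dumps({"value": [286, 287, 288], "mode": "clear"})  # remove these entities from the group

# Each body is sent as a PUT to https://your.statseeker.server/api/v2.1/group/<GROUP ID>/entities/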
Example: Create a Group
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X POST \
"https://your.statseeker.server/api/v2.1/group/?indent=3" \
-d '{"data":[{"name":"<GROUP NAME>"}]}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.post(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.post(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"data":[{"name":"<GROUP NAME>"}]})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version":"2.1",
"revision":"11",
"info":"The Statseeker RESTful API",
"data":{
"success":true,
"errmsg":"ok",
"time":1623731624,
"objects":[
{
"type":"group",
"sequence":8,
"status":{
"success":true,
"errcode":0
},
"data_total":1,
"data":[
{
"name":"<GROUP NAME>",
"id":94551
}
]
}
]
}
}
Example: Populate a Group
When populating a group, you use the /entities field endpoint and must specify a mode instructing the API how to apply the supplied entities to the group.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X PUT \
"https://your.statseeker.server/api/v2.1/group/<GROUP ID>/entities/?indent=3" \
-d '{"value":[286,287,288,289,290,291,292,293],"mode":"add"}'
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword, reqData):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.put(url, headers=headers, data=reqData, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.put(url, headers=headers, auth=(user, pword), data=reqData, verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group/<GROUP ID>/entities'
# optional response formatting
query += '/?indent=3&links=none'
# data
reqData = json.dumps({"value":[286,287,288,289,290,291,292,293],"mode":"add"})
# Run the request
resp = do_request(server, query, user, pword, reqData)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info":"The Statseeker RESTful API",
"data":{
"errmsg":"ok",
"success":true,
"time":1496200162
},
"links":[
{
"link":"/api/v2.1/group/<GROUP ID>/entities?&indent=3",
"rel":"self"
},
{
"link":"/api/v2.1",
"rel":"base"
},
{
"link":"/api/v2.1/group/<GROUP ID>",
"rel":"collection"
}
],
"api_version":"2.1"
}
Example: Deleting a Group
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X DELETE \
"https://your.statseeker.server/api/v2.1/group/<GROUP ID>/?indent=3&links=none"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.delete(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.delete(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group/<GROUP ID>'
# optional response formatting
query += '/?indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"errmsg": "ok",
"success": true,
"time": 1496200297
},
"api_version": "2.1"
}
Example: Which Groups contain a specific entity (device, interface, user, etc.)
Return details on all groups containing a specified device.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/group/?fields=id,name,entities&entities_formats=count&entities_filter==<DEVICE ID>&entities_filter_format=list
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/group'
# specify fields to be returned
query += '/?fields=id,name,entities'
# specify data formats
query += '&entities_formats=count'
# Filters
query += '&entities_filter==<DEVICE ID>&entities_filter_format=list'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "13",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1615841935,
"objects": [
{
"type": "group",
"sequence": 9,
"status": {
"success": true,
"errcode": 0
},
"data_total": 5,
"data": [
{
"id": 156472,
"name": "10.100.*.*",
"entities": {
"count": 51173
}
},
{
"id": 156893,
"name": "US Hardware",
"entities": {
"count": 12044
}
},
{
"id": 233023,
"name": "Switches",
"entities": {
"count": 58583
}
},
{
"id": 233024,
"name": "Chicago",
"entities": {
"count": 2347
}
},
{
"id": 233025,
"name": "Chicago Switches",
"entities": {
"count": 1985
}
}
]
}
]
}
}
Using ‘group_by’ for Data Aggregation
The group_by parameter can be used to aggregate the data returned from a query. For more information on the syntax and requirements of group_by, see the parameters available when making a GET request to a resource-level endpoint.
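As a rough sketch of the pattern, a group_by query combines the fields to return, a group_by expression naming the field(s) to aggregate over, and an aggregation format for each metric being combined; the parameters below mirror those used in the first example that follows.
# Sketch of the group_by query-string pattern (mirrors the 'total traffic' example below).
query = 'api/v2.1/cdt_port'
query += '/?fields=cdt_device.name,InOutOctets'       # fields to return
query += '&formats=total'                             # per-interface timeseries format
query += '&group_by={cdt_device.name}'                # aggregate the rows by parent device
query += '&InOutOctets_aggregation_format=total'      # how the grouped values are combined
query += '&limit=0'                                   # return all aggregated rows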
Example: Total traffic across all interfaces on a device
Request the total bytes for all interfaces on every device in the Routers group for the previous 4hrs, then aggregate these values by the parent device, returning the total traffic for the device.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_port/?fields=cdt_device.name,InOutOctets&formats=total&groups=Routers&timefilter=range%20=%20now%20-4h%20to%20now&group_by={cdt_device.name}&InOutOctets_aggregation_format=total&limit=0&links=none&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_port'
# specify fields to be returned
query += '/?fields=cdt_device.name,InOutOctets'
# specify filters
query += '&groups=Routers&timefilter=range%20=%20now%20-4h%20to%20now&limit=0'
# specify data formats
query += '&formats=total&InOutOctets_aggregation_format=total'
# group_by
query += '&group_by={cdt_device.name}'
# optional response formatting
query += '&links=none&indent=3'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "12",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1669678227,
"objects": [
{
"type": "cdt_port",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 56,
"data": [
{
"cdt_device.name": "Adelaide-rtr",
"InOutOctets": {
"total": 1175110000000.0
}
},
{
"cdt_device.name": "Athens-rtr",
"InOutOctets": {
"total": 1645590000000.0
}
},
{
"cdt_device.name": "Atlanta-Router1",
"InOutOctets": {
"total": 97008800000000.0
}
},
{
"cdt_device.name": "Auckland-rtr",
"InOutOctets": {
"total": 1539760000000.0
}
},
{
"cdt_device.name": "Baltimore-Router1",
"InOutOctets": {
"total": 59582500000.0
}
},
{
"cdt_device.name": "Bangalore-rtr",
"InOutOctets": {
"total": 198421000000.0
}
},
{
"cdt_device.name": "Bangkok-rtr",
"InOutOctets": {
"total": 560997000000.0
}
},
{
"cdt_device.name": "Barcelona-rtr",
"InOutOctets": {
"total": 1704780000000.0
}
},
{
"cdt_device.name": "Beijing-rtr",
"InOutOctets": {
"total": 129280000000.0
}
},
{
"cdt_device.name": "Brisbane-rtr",
"InOutOctets": {
"total": 2276180000000.0
}
},
{
"cdt_device.name": "Budapest-rtr",
"InOutOctets": {
"total": 2740350000000.0
}
},
{
"cdt_device.name": "Cairo-rtr",
"InOutOctets": {
"total": 2842820000000.0
}
},
{
"cdt_device.name": "CapeTown-rtr",
"InOutOctets": {
"total": 1930000000000.0
}
},
{
"cdt_device.name": "Chennai-rtr",
"InOutOctets": {
"total": 121450000000.0
}
},
{
"cdt_device.name": "Chicago-rtr",
"InOutOctets": {
"total": 2563840000000.0
}
},
{
"cdt_device.name": "Copenhagen-rtr",
"InOutOctets": {
"total": 1343370000000.0
}
},
{
"cdt_device.name": "Dallas-Router1",
"InOutOctets": {
"total": 17923100000000.0
}
},
{
"cdt_device.name": "Dallas-Router11",
"InOutOctets": {
"total": 14284700000000.0
}
},
{
"cdt_device.name": "Delhi-rtr",
"InOutOctets": {
"total": 3560040000000.0
}
},
{
"cdt_device.name": "Denver-Router1",
"InOutOctets": {
"total": 45548100000000.0
}
},
{
"cdt_device.name": "Denver-Router3",
"InOutOctets": {
"total": 1077640000000.0
}
},
{
"cdt_device.name": "Development-router2",
"InOutOctets": {
"total": 8002710000.0
}
},
{
"cdt_device.name": "Dublin-rtr",
"InOutOctets": {
"total": 2349700000000.0
}
},
{
"cdt_device.name": "Helsinki-rtr",
"InOutOctets": {
"total": 1523310000000.0
}
},
{
"cdt_device.name": "Houston-rtr",
"InOutOctets": {
"total": 2465680000000.0
}
},
{
"cdt_device.name": "Jakarta-rtr",
"InOutOctets": {
"total": 1730530000000.0
}
},
{
"cdt_device.name": "Kolkata-rtr",
"InOutOctets": {
"total": 377560000000.0
}
},
{
"cdt_device.name": "KualaLumpur-rtr",
"InOutOctets": {
"total": 1229840000000.0
}
},
{
"cdt_device.name": "LasVegas-Router1",
"InOutOctets": {
"total": 514681000000.0
}
},
{
"cdt_device.name": "Lisbon-rtr",
"InOutOctets": {
"total": 3719270000000.0
}
},
{
"cdt_device.name": "London-rtr",
"InOutOctets": {
"total": 1638650000000.0
}
},
{
"cdt_device.name": "LosAngeles-Router1",
"InOutOctets": {
"total": 831289000000.0
}
},
{
"cdt_device.name": "LosAngeles-rtr",
"InOutOctets": {
"total": 2485100000000.0
}
},
{
"cdt_device.name": "Manila-rtr",
"InOutOctets": {
"total": 1811560000000.0
}
},
{
"cdt_device.name": "Melbourne-rtr",
"InOutOctets": {
"total": 1788680000000.0
}
},
{
"cdt_device.name": "Memphis-Router1",
"InOutOctets": {
"total": 67885500000000.0
}
},
{
"cdt_device.name": "Milwaukee-Router1",
"InOutOctets": {
"total": 12570700000000.0
}
},
{
"cdt_device.name": "Mumbai-rtr",
"InOutOctets": {
"total": 150878000000.0
}
},
{
"cdt_device.name": "Nashville-Router1",
"InOutOctets": {
"total": 1759260000000.0
}
},
{
"cdt_device.name": "NewYork-Router1",
"InOutOctets": {
"total": 3185650000000.0
}
},
{
"cdt_device.name": "NewYork-Router2",
"InOutOctets": {
"total": 36924300000000.0
}
},
{
"cdt_device.name": "NewYork-rtr",
"InOutOctets": {
"total": 1760470000000.0
}
},
{
"cdt_device.name": "Perth-rtr",
"InOutOctets": {
"total": 188290000000.0
}
},
{
"cdt_device.name": "Phoenix-rtr",
"InOutOctets": {
"total": 1984780000000.0
}
},
{
"cdt_device.name": "PortElizabeth-rtr",
"InOutOctets": {
"total": 1614360000000.0
}
},
{
"cdt_device.name": "SanDiego-Router1",
"InOutOctets": {
"total": 21530400000000.0
}
},
{
"cdt_device.name": "SanFrancisco-Router1",
"InOutOctets": {
"total": 7373830000000.0
}
},
{
"cdt_device.name": "Seattle-Router1",
"InOutOctets": {
"total": 1362710000000.0
}
},
{
"cdt_device.name": "Shanghai-rtr",
"InOutOctets": {
"total": 824177000000.0
}
},
{
"cdt_device.name": "Singapore-rtr",
"InOutOctets": {
"total": 171115000000.0
}
},
{
"cdt_device.name": "Taipei-rtr",
"InOutOctets": {
"total": 2047740000000.0
}
},
{
"cdt_device.name": "Tokyo-rtr",
"InOutOctets": {
"total": 1655830000000.0
}
},
{
"cdt_device.name": "UPS-voip-routers",
"InOutOctets": {
"total": 189781000.0
}
},
{
"cdt_device.name": "Warsaw-rtr",
"InOutOctets": {
"total": 3137920000000.0
}
},
{
"cdt_device.name": "Wellington-rtr",
"InOutOctets": {
"total": 1554150000000.0
}
},
{
"cdt_device.name": "Zurich-rtr",
"InOutOctets": {
"total": 2984610000000.0
}
}
]
}
]
}
}
Example: Count of devices in each group
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device?fields=count,~group~&count_field=id&count_aggregation_format=count&group_by={~group~}&limit=0&indent=3&links=none"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# specify fields to be returned
query += '/?fields=count,~group~&count_field=id'
# specify data formats
query += '&count_aggregation_format=count'
# specify filters
query += '&limit=0'
# group_by
query += '&group_by={~group~}'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
'version':'2.1',
'revision':'14',
'info':'The Statseeker RESTful API',
'data':{
'success':True,
'errmsg':'ok',
'time':1669615015,
'objects':[
{
'type':'cdt_device',
'sequence':1,
'status':{
'success':True,
'errcode':0
},
'data_total':7,
'data':[
{
'count':22,
'~group~':'London'
},
{
'count':9,
'~group~':'Barcelona'
},
{
'count':66,
'~group~':'Routers'
},
{
'count':84,
'~group~':'Servers'
},
{
'count':332,
'~group~':'Switches'
},
{
'count':2,
'~group~':'one'
},
{
'count':86,
'~group~':None
}
]
}
]
}
}
Example: Average current ping of all devices in each group
Request the current ping for all devices in the specified groups, then aggregate those values returning the average ping and device count for each group.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/cdt_device?fields=~group~,ping_rtt,deviceCount&deviceCount_field=id&ping_rtt_formats=current&ping_rtt_aggregation_format=avg&deviceCount_aggregation_format=count&groups=Barcelona,Paris,London,Berlin,Dublin,Hamburg&group_by={~group~}&~group~_sort=1,asc&limit=0&links=none&indent=3"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/cdt_device'
# specify fields to be returned
query += '/?fields=~group~,ping_rtt,deviceCount&deviceCount_field=id'
# specify data formats
query += '&ping_rtt_formats=current&ping_rtt_aggregation_format=avg&deviceCount_aggregation_format=count'
# specify filters
query += '&groups=Barcelona,Paris,London,Berlin,Dublin,Hamburg&limit=0'
# group_by
query += '&group_by={~group~}'
# optional response formatting
query += '&~group~_sort=1,asc&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "12",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1669685626,
"objects": [
{
"type": "cdt_device",
"sequence": 1,
"status": {
"success": true,
"errcode": 0
},
"data_total": 6,
"data": [
{
"~group~": "Barcelona",
"ping_rtt": {
"current": 40.8889
},
"deviceCount": 18
},
{
"~group~": "Berlin",
"ping_rtt": {
"current": 43
},
"deviceCount": 6
},
{
"~group~": "Dublin",
"ping_rtt": {
"current": 42.5556
},
"deviceCount": 18
},
{
"~group~": "Hamburg",
"ping_rtt": {
"current": 30.5
},
"deviceCount": 4
},
{
"~group~": "London",
"ping_rtt": {
"current": 50.0476
},
"deviceCount": 44
},
{
"~group~": "Paris",
"ping_rtt": {
"current": 51.55
},
"deviceCount": 42
}
]
}
]
}
}
Syslog
The syslog resource provides access to the syslog messages collected by Statseeker. The contents of these records can be reported on and used to trigger alerts.
The syslog Object
Field ID | Field Title | Type | Get, Add, Update | Description |
deviceid | Device ID | integer | G | The ID of the parent device of the entity that owns this message |
entityid | Entity ID | integer | G | The ID of the entity that owns this message |
id | ID | integer | G | Message Identifier |
text | Message Text | string | G | The message text |
time | Time | integer | G, A | Message Time |
type | Type | string | G, A | Message Type |
device | Device | string | G, A | The name of the parent device of the entity that owns this message |
entity | Entity | string | G | The name of the entity that owns this message |
Options
- lastx – Display the last x records, rather than using a timefilter
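For instance, the most recent records can be requested with lastx in place of a timefilter. A minimal sketch, assuming the option is passed as a lastx query parameter (the record count of 20 and the credentials are placeholders):
# Sketch: request the last 20 syslog records rather than using a time filter.
# Assumes the lastx option is passed as a query parameter; 20 is an arbitrary placeholder.
import requests

server = 'your.statseeker.server'
query = 'api/v2.1/syslog/?fields=entity,text,type,time&lastx=20&indent=3&links=none'
resp = requests.get(f'https://{server}/{query}', auth=('api_user', 'user_password'), verify=False)
print(resp.status_code, resp.json())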
Links
This resource has links to the following:
Example: Retrieving details, from Syslog Records, on devices that ‘went down’ in the previous 24hrs
In this example we will retrieve details on all Syslog messages that contain a specified string and were received in the previous 24 hours.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/syslog/?indent=3&links=none&fields=entity,entityid,text,type,time&text_filter=LIKE(%22<FILTER STRING>%22)&time_timefilter=range%20=%20now%20-%201d%20to%20now"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/syslog'
# specify fields to be returned
query += '/?fields=entity,entityid,text,type,time'
# Filters
query += '&text_filter=LIKE("<FILTER STRING>")&time_timefilter=range = now - 1d to now'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"objects": [{
"status": {
"errcode": 0,
"success": true
},
"data": [{
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/10, changed state to down",
"entity": "LosAngeles-swt1",
"time": 1490240152,
"entityid": 174,
"type": "syslog",
"id": 126619
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/7, changed state to down",
"entity": "Helsinki-swt1",
"time": 1490245557,
"entityid": 313,
"type": "syslog",
"id": 126626
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1/0/12, changed state to down",
"entity": "CapeTown-swt2",
"time": 1490246759,
"entityid": 412,
"type": "syslog",
"id": 126628
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/18, changed state to down",
"entity": "Mumbai-swt2",
"time": 1490251556,
"entityid": 379,
"type": "syslog",
"id": 126631
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface FastEthernet0/19, changed state to down",
"entity": "Unknown (10.100.52.25)",
"time": 1490270147,
"entityid": 0,
"type": "syslog",
"id": 126641
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/24, changed state to down",
"entity": "Budapest-swt4",
"time": 1490271059,
"entityid": 271,
"type": "syslog",
"id": 126642
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface FastEthernet0/18, changed state to down",
"entity": "Phoenix-swt4",
"time": 1490271643,
"entityid": 201,
"type": "syslog",
"id": 126643
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface FastEthernet0/7, changed state to down",
"entity": "Bangkok-swt1",
"time": 1490279033,
"entityid": 374,
"type": "syslog",
"id": 126647
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet1/0/20, changed state to down",
"entity": "Unknown (10.100.52.25)",
"time": 1490286824,
"entityid": 0,
"type": "syslog",
"id": 126652
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet2/0/16, changed state to down",
"entity": "Chicago-rtr",
"time": 1490294444,
"entityid": 185,
"type": "syslog",
"id": 126657
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/8, changed state to down",
"entity": "Tokyo-swt1",
"time": 1490301348,
"entityid": 355,
"type": "syslog",
"id": 126661
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet1/0/8, changed state to down",
"entity": "LosAngeles-rtr",
"time": 1490305548,
"entityid": 175,
"type": "syslog",
"id": 126664
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet1/0/18, changed state to down",
"entity": "Helsinki-swt3",
"time": 1490307644,
"entityid": 311,
"type": "syslog",
"id": 126667
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet1/0/14, changed state to down",
"entity": "Athens-rtr",
"time": 1490316041,
"entityid": 299,
"type": "syslog",
"id": 126671
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface FastEthernet0/28, changed state to down",
"entity": "Melbourne-swt2",
"time": 1490316942,
"entityid": 397,
"type": "syslog",
"id": 126672
}, {
"text": "local0.info %LINEPROTO-5-UPDOWN: Line protocol on Interface FastEthernet0/11, changed state to down",
"entity": "Chicago-swt2",
"time": 1490317855,
"entityid": 183,
"type": "syslog",
"id": 126673
}, {
"text": "local0.info %LINK-3-UPDOWN: Interface GigabitEthernet3/0/5, changed state to down",
"entity": "Copenhagen-swt3",
"time": 1490318756,
"entityid": 331,
"type": "syslog",
"id": 126674
}],
"type": "syslog",
"data_total": 17
}],
"errmsg": "ok",
"success": true,
"time": 1490325593
},
"api_version": "2.1"
}
Device and Interface Event Records (event_record)
The event_record resource provides access to the device and interface events recorded by Statseeker. The contents of these records can be reported on and used to trigger alerts.
The event_record Object
Field ID | Field Title | Type | Get, Add, Update | Description |
delta | Delta | integer | G | The number of seconds since the last record of the same event |
device | Device | string | G | The name of the device that owns the entity |
deviceid | Device ID | integer | G | The ID of the device that owns the entity |
entity | Entity | string | G | The name of the entity that owns the event |
entityid | Entity ID | integer | G, A | The ID of the entity that owns the event |
entityTypeName | Entity Type Name | string | G | The name of the type of entity that owns the event |
entityTypeTitle | Entity Type Title | string | G | The title of the type of entity that owns the event |
event | Event | string | G, A | The event text associated with the record |
eventid | Event ID | integer | G, A (required) | The event id associated with the record |
id | ID | string | G | Event Record Identifier |
note | Note | string | G, U | The note associated with the record |
state | State | string | G, A (required) | The state text associated with the record |
stateid | State ID | integer | G, A | The state id associated with the record |
time | Time | integer | G, A | Epoch time that the record was created |
Options
- lastx – Display the last x records, rather than using a timefilter
Links
This resource has links to the following:
Example: Retrieving details, from Event Records, of interfaces that recovered from a ‘down’ state in the previous 3hrs
The delta value, found in each record of the response, details the duration, in seconds, that the interface was in the down state before recovering.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/event_record/?indent=3&links=none&fields=event,eventid,device,deviceid,state,delta,time&state_filter=IS(%22up%22)&event_filter=IS(%22IF-MIB.ifOperStatus%22)&time_timefilter=range%20=%20now%20-%203h%20to%20now"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/event_record'
# specify fields to be returned
query += '/?fields=event,eventid,device,deviceid,state,delta,time'
# Filters
query += '&state_filter=IS("up")&event_filter=IS("IF-MIB.ifOperStatus")&time_timefilter=range = now - 3h to now'
# optional response formatting
query += '&indent=3&links=none'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"info": "The Statseeker RESTful API",
"data": {
"objects": [{
"status": {
"errcode": 0,
"success": true
},
"data": [{
"eventid": 17,
"time": 1490328180,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 185,
"delta": 23520,
"device": "Chicago-rtr",
"id": "58D49A74-1-1"
}, {
"eventid": 81,
"time": 1490324821,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 284,
"delta": 3960,
"device": "Warsaw-rtr",
"id": "58D48D55-1-1"
}, {
"eventid": 81,
"time": 1490327161,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 284,
"delta": 480,
"device": "Warsaw-rtr",
"id": "58D49679-1-1"
}, {
"eventid": 97,
"time": 1490329921,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 293,
"delta": 3540,
"device": "Athens-srv1",
"id": "58D4A141-1-1"
}, {
"eventid": 97,
"time": 1490332501,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 293,
"delta": 1080,
"device": "Athens-srv1",
"id": "58D4AB55-1-1"
}, {
"eventid": 129,
"time": 1490326021,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 334,
"delta": 4260,
"device": "Copenhagen-rtr",
"id": "58D49205-1-1"
}, {
"eventid": 145,
"time": 1490324821,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 350,
"delta": 6900,
"device": "Shanghai-rtr",
"id": "58D48D55-2-1"
}, {
"eventid": 145,
"time": 1490328421,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 350,
"delta": 2040,
"device": "Shanghai-rtr",
"id": "58D49B65-1-1"
}, {
"eventid": 177,
"time": 1490329802,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 378,
"delta": 19741,
"device": "Delhi-rtr",
"id": "58D4A0CA-1-1"
}, {
"eventid": 209,
"time": 1490332022,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 390,
"delta": 840,
"device": "Chennai-rtr",
"id": "58D4A976-1-1"
}, {
"eventid": 209,
"time": 1490332142,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 390,
"delta": 60,
"device": "Chennai-rtr",
"id": "58D4A9EE-1-1"
}, {
"eventid": 209,
"time": 1490332502,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 390,
"delta": 120,
"device": "Chennai-rtr",
"id": "58D4AB56-1-1"
}, {
"eventid": 225,
"time": 1490325842,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 414,
"delta": 840,
"device": "CapeTown-rtr",
"id": "58D49152-1-1"
}, {
"eventid": 225,
"time": 1490328002,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 414,
"delta": 1080,
"device": "CapeTown-rtr",
"id": "58D499C2-1-1"
}, {
"eventid": 241,
"time": 1490323442,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 417,
"delta": 300,
"device": "PortElizabeth-rtr",
"id": "58D487F2-1-1"
}, {
"eventid": 241,
"time": 1490327942,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 417,
"delta": 3900,
"device": "PortElizabeth-rtr",
"id": "58D49986-1-1"
}, {
"eventid": 241,
"time": 1490329982,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 417,
"delta": 1440,
"device": "PortElizabeth-rtr",
"id": "58D4A17E-1-1"
}, {
"eventid": 257,
"time": 1490326322,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 423,
"delta": 6060,
"device": "Pretoria-rtr",
"id": "58D49332-1-1"
}, {
"eventid": 257,
"time": 1490330462,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 423,
"delta": 2280,
"device": "Pretoria-rtr",
"id": "58D4A35E-2-1"
}, {
"eventid": 257,
"time": 1490331122,
"event": "IF-MIB.ifOperStatus",
"state": "up",
"deviceid": 423,
"delta": 360,
"device": "Pretoria-rtr",
"id": "58D4A5F2-1-1"
}],
"type": "event_record",
"data_total": 20
}],
"errmsg": "ok",
"success": true,
"time": 1490333357
},
"api_version": "2.0"
}
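The records are nested under data → objects → data in the response shown above. The following sketch shows one way to consume that payload in Python, assuming resp is the successful response object from the example above; it converts the epoch time and the delta (seconds of downtime) into readable values:
# Hedged sketch: walk the response envelope shown above and print each recovery.
# Assumes 'resp' is the successful requests.Response from the Python example above.
from datetime import datetime, timedelta

body = resp.json()
for obj in body['data']['objects']:
    for rec in obj['data']:
        recovered_at = datetime.fromtimestamp(rec['time'])   # epoch -> local time
        downtime = timedelta(seconds=rec['delta'])           # time spent in the down state
        print(f"{rec['device']} (device id {rec['deviceid']}): "
              f"recovered at {recovered_at:%Y-%m-%d %H:%M:%S}, down for {downtime}")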
Example: Retrieving details, from Event Records, of device down events in the previous 3hrs
The request filters for ping_state events that transitioned to the ‘down’ state within the last 3 hours, and uses the timefmt option to return human-readable timestamps in place of epoch values.
curl \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer <YOUR_AUTH_TOKEN>" \
-X GET \
"https://your.statseeker.server/api/v2.1/event_record/?fields=event,eventid,device,deviceid,state,time&state_filter=IS('down')&event_filter=IS('ping_state')&time_timefilter=range%20=%20now%20-%203h%20to%20now&timefmt=%25A%20%25H:%25M:%25S%20(%25y-%25m-%25d)&links=none&limit=0"
# import requests for handling connection and encoding
import requests
import json
def do_request(server, query, user, pword):
# Run auth request
headers = { 'Accept': 'application/json', 'Content-Type': 'application/x-www-form-urlencoded' }
url = f'https://{server}/ss-auth'
authData = {'user': user, 'password': pword}
resp = requests.post(url, headers=headers, data=authData, verify=False)
headers['Content-Type'] = 'application/json'
url = f'https://{server}/{query}'
print(f'Get Auth Token: {resp.status_code}')
if resp.status_code == 200:
# Authentication successful
myToken = resp.json()
headers['Authorization'] = f'Bearer {myToken["access_token"]}'
resp = requests.get(url, headers=headers, verify=False)
if resp.status_code == 401:
print(f'Token auth failed: {resp.status_code}, trying basic auth')
# Either Authentication was unsuccessful, or API set to use basic auth
# Try basic auth
resp = requests.get(url, headers=headers, auth=(user, pword), verify=False)
return resp
# Statseeker Server IP
server = 'your.statseeker.server'
# credentials
user = 'api_user'
pword = 'user_password'
# API root endpoint
query = 'api/v2.1'
# specify target endpoint
query += '/event_record'
# specify fields to be returned
query += '/?fields=event,eventid,device,deviceid,state,time'
# Filters
query += '&state_filter=IS("down")&event_filter=IS("ping_state")&time_timefilter=range = now - 3h to now&timefmt=%A %H:%M:%S (%y-%m-%d)'
# optional response formatting
query += '&indent=3&links=none&limit=0'
# Run the request
resp = do_request(server, query, user, pword)
if resp.status_code == 200:
print(resp.status_code)
print(resp.json())
else:
print(f'Error: {resp.status_code} {resp.reason}')
{
"version": "2.1",
"revision": "14",
"info": "The Statseeker RESTful API",
"data": {
"success": true,
"errmsg": "ok",
"time": 1691532291,
"objects": [
{
"type": "event_record",
"sequence": 0,
"status": {
"success": true,
"errcode": 0
},
"data_total": 6,
"data": [
{
"event": ".ping_state",
"eventid": 432,
"device": "LosAngeles-srv1",
"deviceid": 348,
"state": "down",
"time": "Wednesday 06:54:54 (23-08-09)",
"id": "64D2AB9E-1-0"
},
{
"event": ".ping_state",
"eventid": 544,
"device": "LosAngeles-swt2",
"deviceid": 354,
"state": "down",
"time": "Wednesday 06:54:47 (23-08-09)",
"id": "64D2AB97-1-0"
},
{
"event": ".ping_state",
"eventid": 720,
"device": "Chicago-ups2",
"deviceid": 358,
"state": "down",
"time": "Wednesday 07:31:37 (23-08-09)",
"id": "64D2B439-1-0"
},
{
"event": ".ping_state",
"eventid": 1296,
"device": "Phoenix-ups2",
"deviceid": 374,
"state": "down",
"time": "Wednesday 06:54:51 (23-08-09)",
"id": "64D2AB9B-2-0"
},
{
"event": ".ping_state",
"eventid": 1568,
"device": "London-ups2",
"deviceid": 383,
"state": "down",
"time": "Wednesday 06:54:38 (23-08-09)",
"id": "64D2AB8E-1-0"
},
{
"event": ".ping_state",
"eventid": 1616,
"device": "London-srv1",
"deviceid": 384,
"state": "down",
"time": "Wednesday 05:55:50 (23-08-09)",
"id": "64D29DC6-1-0"
}
]
}
]
}
}