API Reference

TimeDetect API (1.0)

Download OpenAPI specification

The API has three main processes:

  • Uploading approved time registrations: Validates and processes registration data for the machine learning models to train on.
  • Model training: Prepares a hierarchy of machine learning models for each dataset.
  • Prediction: Generates predictions with the machine learning models and highlights anomalies.

Upload data

Get Presigned URL

Obtain a presigned URL to upload data to the platform. This URL allows you to make a PUT request with the data in the request body. Depending on the optional 'type' query parameter, the uploaded data can either trigger a prediction job or be stored for general use.

The URL is valid for 60 minutes. The tenant ID must be supplied in the tenantId request header.

  • Use 'prediction' as the type to trigger processing that initiates a prediction job with the uploaded data.
  • Use 'raw_data' as the type for standard data uploads. This is the default behavior if the type parameter is not specified.
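
For example, a minimal sketch of requesting a presigned URL with Python's requests library. The base URL, token value, and bearer-token authorization below are placeholders and assumptions (the visma_connect flow itself is not shown); the type parameter, tenantId header, and the url/jobId response fields come from this reference.

import requests

BASE_URL = "https://api.example.com/timedetect"   # placeholder; use the actual TimeDetect base URL
TOKEN = "<visma_connect_access_token>"            # placeholder token obtained via Visma Connect

resp = requests.get(
    f"{BASE_URL}/presigned_url",
    params={"type": "raw_data"},   # or "prediction" to trigger a prediction job with the uploaded data
    headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
)
resp.raise_for_status()
body = resp.json()
presigned_url = body["url"]   # valid for 60 minutes
job_id = body["jobId"]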

Authorizations:
visma_connect
query Parameters
type
string
Default: "raw_data"
Enum: "prediction" "raw_data"

Determines the type of data the presigned URL will be used for. "prediction" indicates prediction data; "raw_data" indicates that it will be used as training data. Defaults to "raw_data" if not provided.

header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
{
  "url": "string",
  "jobId": "string",
  "message": "string"
}

Get Job Status

Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.
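
A minimal polling sketch, assuming the same placeholder base URL and bearer-token authorization as in the presigned-URL example above; the tenantId and jobId headers and the status, datasetsStatus, and message fields come from this reference.

import time
import requests

def wait_for_job(base_url: str, token: str, tenant_id: str, job_id: str, poll_seconds: int = 15) -> dict:
    """Poll GET /status until the job leaves the "inProgress" state."""
    headers = {"Authorization": f"Bearer {token}", "tenantId": tenant_id, "jobId": job_id}
    while True:
        resp = requests.get(f"{base_url}/status", headers=headers)
        resp.raise_for_status()
        body = resp.json()
        if body["status"] != "inProgress":
            return body   # includes datasetsStatus and message
        time.sleep(poll_seconds)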

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

jobId
required
string

The ID of the job to check the status for.

Responses

Response samples

Content type
application/json
{
  "status": "inProgress",
  "datasetsStatus": [],
  "message": "string"
}

Upload Raw Data

This endpoint is for uploading data to a presigned URL. Unlike the other endpoints, it does not use the base API URL. Obtain the presigned URL by calling GET /presigned_url and use it as is for this PUT request.

Sending a PUT request to the presigned URL (generated by calling GET /presigned_url) triggers the validation process.

For a dataset with 4,000 registrations, the job usually takes less than 2 minutes to complete, while for a dataset with 55,000 registrations, it typically takes less than 10 minutes.

Do not send the authentication token; authentication is handled through the presigned URL. This process validates the data and prepares it for model training. The status of the validation process can be fetched by calling GET /status.

Important: Ensure that the headers of your PUT request are empty. Do not set a "Content-Type" or any other headers. This is crucial for the request to be processed correctly.
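
A minimal upload sketch, assuming the presigned URL was obtained as described above. The registration fields inside "datasets" are left out because they are defined by the collapsed request schema below; sending the body as a pre-serialized string keeps the requests library from adding a Content-Type header.

import json
import requests

presigned_url = "<url returned by GET /presigned_url>"   # placeholder

payload = {
    "datasets": [
        # Raw Data Upload Request Dataset objects go here (see the schema below)
    ],
    # "webhook": {...}   # optional: endpoint to notify when the job finishes
}

# No Authorization header and no explicit Content-Type: the presigned URL itself carries the authentication.
resp = requests.put(presigned_url, data=json.dumps(payload))
resp.raise_for_status()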

Note: The request body and response schemas can be found by expanding the arrows in the sections below.

Request Body schema: application/json
required

The body needs to contain the necessary data about registrations.

required
Array of objects (Raw Data Upload Request Dataset) non-empty

Datasets containing raw data for training.

object

Details for the webhook endpoint to call when a job finishes.

Responses

Callbacks

Request samples

Content type
application/json
{
  "datasets": [],
  "webhook": {}
}

Callback payload samples

Callback
POST: Sends a notification that a job has ended
Content type
application/json
{
  "status": "success",
  "jobId": "7720a8c02c664d80a69ed2141b731ee3"
}
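
If a webhook is supplied in the request body, the service calls it with the payload shown above when the job ends. A minimal receiver sketch follows; Flask and the /timedetect-callback route are implementation choices for illustration, not anything prescribed by this reference.

from flask import Flask, request

app = Flask(__name__)

@app.route("/timedetect-callback", methods=["POST"])
def timedetect_callback():
    # Payload mirrors the callback sample above: {"status": "success", "jobId": "..."}
    payload = request.get_json(force=True)
    if payload.get("status") == "success":
        print(f"Job {payload['jobId']} finished successfully")
    else:
        print(f"Job {payload.get('jobId')} ended with status {payload.get('status')}")
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)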

Training

Get Job Status

Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

jobId
required
string

The ID of the job to check the status for.

Responses

Response samples

Content type
application/json
{
  "status": "inProgress",
  "datasetsStatus": [],
  "message": "string"
}

Start Trainer

This endpoint does one of two things, depending on the boolean rebuildModels field:

If rebuildModels = True:

  • Starts the machine learning pipeline, which trains a model from scratch and stores it for each datasetId included in the request body. This overwrites existing models. The status of the rebuild procedure can be fetched by calling the GET /status endpoint with the provided jobId.

If rebuildModels = False:

  • Starts the machine learning pipeline, which updates the model for each datasetId included in the request body. Updating a model means continuing the training procedure on recent data so that the models can use all the latest information available in the predictions. The status of the update procedure can be fetched by calling the GET /status endpoint with the provided jobId.

A full retrain for a dataset with 50 employees takes less than 2 minutes to complete, while for 500 employees it takes less than 10 minutes.

Note: The response schemas can be found by expanding the arrows in the sections below.
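
As a rough sketch of a full retrain request: the base URL, token, and bearer-token authorization are placeholders and assumptions, and whether datasetId and rebuildModels sit directly on each parameters entry is an assumption as well; the exact shape is defined by the collapsed "Start Trainer Request Dataset Ids" schema below.

import requests

BASE_URL = "https://api.example.com/timedetect"   # placeholder base URL
TOKEN = "<visma_connect_access_token>"            # placeholder token

resp = requests.post(
    f"{BASE_URL}/start_trainer",
    headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
    json={
        "parameters": [
            {"datasetId": "dataset-001", "rebuildModels": True},   # assumed field layout; check the schema below
        ],
        # "webhook": {...}   # optional callback when training finishes
    },
)
resp.raise_for_status()
job_id = resp.json()["jobId"]   # use with GET /status to track the rebuild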

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    Request Body schema: application/json
    required

    The body contains the configuration used to initiate the training procedure for each specified dataset ID.

    required
    Array of objects (Start Trainer Request Dataset Ids)

    List with objects containing the parameters for each dataset.

    object

    Details for the webhook endpoint to call when a job finishes.

    Responses

    Callbacks

    Request samples

    Content type
    application/json
    {
      "parameters": [],
      "webhook": {}
    }

    Response samples

    Content type
    application/json
    {
      "jobId": "string",
      "message": "string"
    }

    Callback payload samples

    Callback
    POST: Sends a notification that a job has ended
    Content type
    application/json
    {
      "status": "success",
      "jobId": "7720a8c02c664d80a69ed2141b731ee3"
    }

    Predictions

    Get Job Status

    Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    jobId
    required
    string

    The ID of the job to check the status for.

    Responses

    Response samples

    Content type
    application/json
    {
      "status": "inProgress",
      "datasetsStatus": [],
      "message": "string"
    }

    Create Prediction

    Starts the prediction procedure, which computes and stores predictions for each datasetId included in the request body. For 300 registrations, the prediction job usually takes less than 2 minutes to complete, while for 5,000 registrations, it typically takes less than 12 minutes.

    The status of the prediction procedure can be fetched by calling GET /status with the provided jobId. The predictions can be fetched by calling GET /results, also with the provided jobId.

    A prediction will be made for each registration sent in, but the registrations will not be used to update the models. To update the models, upload data through the PUT /[presigned_url] endpoint and explicitly update the models by calling the POST /start_trainer endpoint.

    Request Size Limitation: To ensure successful processing, it is recommended to limit the number of registrations in your prediction request. This is due to a hard limit of 6MB on the prediction request body. As a guideline, limit your prediction requests to approximately 12,000 registrations. This helps avoid response size issues and ensures smoother retrieval of results.

    Note: The request body and response schemas can be found by expanding the arrows in the sections below.
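
    As a rough request sketch: the path "/predictions" below is a placeholder (the actual route is defined in the OpenAPI specification), the bearer-token authorization is an assumption, and the contents of each parameters entry follow the collapsed "Create Prediction Request Dataset Parameters" schema below.

    import requests

    BASE_URL = "https://api.example.com/timedetect"   # placeholder base URL
    TOKEN = "<visma_connect_access_token>"            # placeholder token

    resp = requests.post(
        f"{BASE_URL}/predictions",   # placeholder path; confirm against the OpenAPI specification
        headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
        json={
            "parameters": [
                # Create Prediction Request Dataset Parameters objects go here (see the schema below);
                # keep the total request body below 6MB (roughly 12,000 registrations).
            ],
        },
    )
    resp.raise_for_status()
    job_id = resp.json()["jobId"]   # poll GET /status, then fetch GET /results with this jobId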

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    Request Body schema: application/json
    required

    The data should be sent as a list of registrations per dataset.

    required
    Array of objects (Create Prediction Request Dataset Parameters)

    Prediction parameters for each dataset.

    object

    Details for the webhook endpoint to call when a job finishes.

    Responses

    Callbacks

    Request samples

    Content type
    application/json
    {
      "parameters": [],
      "webhook": {}
    }

    Response samples

    Content type
    application/json
    {
      "jobId": "string",
      "message": "string"
    }

    Callback payload samples

    Callback
    POST: Sends a notification that a job has ended
    Content type
    application/json
    {
      "status": "success",
      "jobId": "7720a8c02c664d80a69ed2141b731ee3"
    }

    Real-time Prediction

    A prediction will be made for each registration sent in. This is a fast process and will only return predictions on a registration level and not on an aggregated level, which is why this endpoint can be executed in real-time. The registrations will not be used to update the models.

    The real-time prediction endpoint usually takes about 1-1.5 seconds to complete for one registration.

    Note: The request body and response schemas can be found by expanding the arrows in the sections below.
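
    A request sketch under stated assumptions: the path "/real_time_prediction" is a placeholder, the bearer-token authorization is assumed, and the "registrations" field name is an assumption as well, since this reference only shows it as a collapsed "Raw Data Upload Request Registration" array.

    import requests

    BASE_URL = "https://api.example.com/timedetect"   # placeholder base URL
    TOKEN = "<visma_connect_access_token>"            # placeholder token

    resp = requests.post(
        f"{BASE_URL}/real_time_prediction",   # placeholder path; confirm against the OpenAPI specification
        headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
        json={
            "parameters": [
                {
                    "datasetId": "dataset-001",
                    "customerId": "customer-001",   # optional
                    "registrations": [
                        # Raw Data Upload Request Registration objects go here (see the schema below)
                    ],
                }
            ]
        },
    )
    resp.raise_for_status()
    results = resp.json()["results"]   # per-registration predictions only, no aggregated level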

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    Request Body schema: application/json
    required

    The body should contain the dataset IDs that predictions should be computed for and, for each dataset, the registrations to make predictions on.

    required
    Array of objects (Real Time Prediction Request Dataset Parameters)

    Prediction parameters for each dataset.

    Array
    datasetId
    required
    string

    The unique ID of the dataset used to train the model to make predictions on.

    customerId
    string

    A unique ID for the customer.

    required
    Array of objects (Raw Data Upload Request Registration) non-empty

    A list of registrations to make predictions for.

    Responses

    Request samples

    Content type
    application/json
    {
      "parameters": []
    }

    Response samples

    Content type
    application/json
    {
      "results": []
    }

    To be overwritten by summary in the trainer's path file

    To be overwritten by description in the trainer's path file

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    Responses

    Response samples

    Content type
    application/json
    {
      "error": "string",
      "code": "CLIENT_ID_MISSING",
      "message": "string",
      "details": "string",
      "timestamp": "string",
      "error_uuid": "string"
    }

    Get Prediction Results

    Anomaly scoring is a critical aspect of this schema, as indicated by the "anomalyScore" field, which quantifies how unusual a registration is. The "significantFields" field explains why a registration was deemed an anomaly by highlighting which specific features influenced the anomaly score. The weekday is derived from the date and is returned as a significant field in certain cases, such as for weekends.

    Note: The response schemas can be found by expanding the arrows in the sections below.

    Important: If the initial prediction request contained a large number of registrations, especially those spanning wide date ranges, the results might exceed the 6MB response size. In such cases, a 500 error will be returned. To avoid this, it is recommended to adhere to the guideline of limiting prediction requests to approximately 4,000 registrations. This precaution helps in ensuring that the response size remains within manageable limits, thereby preventing potential errors during result retrieval.
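
    A minimal retrieval sketch that walks the pages reported in the response; the base URL, token, and bearer-token authorization are placeholders and assumptions, while the jobId header and the page, pages, and results fields come from this reference.

    import requests

    def fetch_all_results(base_url: str, token: str, tenant_id: str, job_id: str) -> list:
        """Collect prediction results across all pages of GET /results."""
        headers = {"Authorization": f"Bearer {token}", "tenantId": tenant_id, "jobId": job_id}
        results, page, pages = [], 1, 1
        while page <= pages:
            resp = requests.get(f"{base_url}/results", headers=headers, params={"page": page})
            resp.raise_for_status()
            body = resp.json()
            results.extend(body["results"])
            pages = body["pages"]   # total page count reported by the API
            page += 1
        return results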

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    jobId
    required
    string

    The unique ID of the job

    page
    integer >= 1
    Default: 1

    The page number

    Responses

    Response samples

    Content type
    application/json
    {
      "message": "string",
      "page": 1,
      "pages": 1,
      "results": []
    }

    Data management

    Get Data

    This endpoint returns a list of dataset IDs that have been uploaded for the given tenant.

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    Responses

    Response samples

    Content type
    application/json
    {
      "countOfDatasets": 1,
      "datasetIds": []
    }

    Get Data for Dataset

    This endpoint allows you to get information about a specific dataset.

    Authorizations:
    visma_connect
    path Parameters
    datasetId
    required
    string

    The dataset ID to get data for.

    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    Responses

    Response samples

    Content type
    application/json
    Example
    {
      "startDate": "2022-04-01",
      "endDate": "2022-04-08",
      "intervalGranularity": "D",
      "numberOfIntervalsWithRecords": 5,
      "numberOfIntervalsWithoutRecords": 3,
      "numberOfIntervalsTotal": 8
    }

    Delete Data

    Delete uploaded data for a specific dataset ID.

    Authorizations:
    visma_connect
    path Parameters
    datasetId
    required
    string

    The dataset ID to delete data for.

    query Parameters
    fromDate
    string

    Earliest data point to be deleted. If not specified, all data until the "toDate" will be deleted.

    toDate
    string

    The latest data point to be deleted. If not specified, all data from the "fromDate" will be deleted.

    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    Responses

    Response samples

    Content type
    application/json
    {
      "message": "string"
    }
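
    A deletion sketch under stated assumptions: the route "/data/{datasetId}" is a placeholder for wherever the Delete Data operation is exposed, and the bearer-token authorization is assumed; the fromDate/toDate query parameters and the message response field come from this reference.

    import requests

    BASE_URL = "https://api.example.com/timedetect"   # placeholder base URL
    TOKEN = "<visma_connect_access_token>"            # placeholder token
    dataset_id = "dataset-001"

    resp = requests.delete(
        f"{BASE_URL}/data/{dataset_id}",   # placeholder path; confirm against the OpenAPI specification
        headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
        # Omit either bound to delete everything before/after the remaining bound.
        params={"fromDate": "2022-04-01", "toDate": "2022-04-08"},
    )
    resp.raise_for_status()
    print(resp.json()["message"])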

    Feedback

    Upload Feedback

    This endpoint allows users to upload feedback for predictions made by the AI model. The feedback helps improve the model by providing insights into the accuracy of its predictions.

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    Tenant ID

    Request Body schema: application/json
    required

    The body should contain the datasets for which feedback is being provided, including the dataset ID, job ID, and feedback for each prediction.

    required
    Array of objects
    Array
    datasetId
    required
    string

    Unique identifier for the dataset

    jobId
    required
    string

    Unique identifier for the job that generated the predictions

    required
    Array of objects

    Responses

    Request samples

    Content type
    application/json
    {
      "datasets": []
    }

    Response samples

    Content type
    application/json
    {
      "message": "Feedback successfully uploaded"
    }
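
    A request sketch with a placeholder path ("/feedback") and assumed bearer-token authorization; datasetId and jobId are taken from this reference, while the per-prediction feedback entries are left to the collapsed schema documented above.

    import requests

    BASE_URL = "https://api.example.com/timedetect"   # placeholder base URL
    TOKEN = "<visma_connect_access_token>"            # placeholder token

    resp = requests.post(
        f"{BASE_URL}/feedback",   # placeholder path; confirm against the OpenAPI specification
        headers={"Authorization": f"Bearer {TOKEN}", "tenantId": "<your-tenant-id>"},
        json={
            "datasets": [
                {
                    "datasetId": "dataset-001",
                    "jobId": "7720a8c02c664d80a69ed2141b731ee3",
                    # The per-prediction feedback objects go in the collapsed array documented above.
                }
            ]
        },
    )
    resp.raise_for_status()
    print(resp.json()["message"])   # e.g. "Feedback successfully uploaded"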

    Subscription management

    Get Subscriptions

    This endpoint provides an overview of the subscriptions for a tenant, with optional filtering by dataset ID and subscription IDs.

    Authorizations:
    visma_connect
    query Parameters
    datasetId
    string

    Optional dataset ID to filter subscriptions.

    subscriptionIds
    string

    Optional comma-separated list of subscription IDs to filter within the dataset.

    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    Responses

    Response samples

    Content type
    application/json
    {
      "countOfDatasets": 0,
      "countOfSubscriptionIds": 0,
      "subscriptionObjects": []
    }

    Unsubscribe

    Unsubscribe a list of subscription objects.

    Authorizations:
    visma_connect
    header Parameters
    tenantId
    required
    string

    The ID of the tenant.

    Request Body schema: application/json
    required

    The dataset IDs and optionally subscription IDs to unsubscribe.

    required
    Array of objects (Subscription Object)

    List of subscription objects.

    Array
    datasetId
    required
    string

    The dataset ID.

    subscriptionIds
    Array of strings

    Responses

    Request samples

    Content type
    application/json
    {
      "subscriptionObjects": []
    }

    Response samples

    Content type
    application/json
    {
      "countOfUnsubscribedDatasets": 0,
      "countOfUnsubscribedSubscriptionIds": 0,
      "unsubscribedSubscriptionObjects": []
    }

    Health check

    Health Check

    Check the health of the service.

    Responses