
API Reference

Inventory Optimization API (1.0)

Download OpenAPI specification

The Inventory Optimization API helps stock-holding companies manage their inventory efficiently by generating sales forecasts, purchase order suggestions, and inventory classifications based on historical sales orders, current inventory levels, and user preferences.

Base URL and API Scope

The base URL for this API depends on the environment: https://api.machine-learning-factory.visma.com/io in production, https://api.machine-learning-factory.stage.visma.com/io in the stage environment, and https://api.machine-learning-factory.test.visma.com/io in the test environment.

Upload data

Get Presigned URL

Get a presigned URL for uploading raw data to the platform. Make a PUT request to the URL with the raw data in the request body. The URL is valid for 60 minutes.

You determine the tenant ID (tenantId), which must be included in the request header.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
{
  • "url": "string",
  • "jobId": "string",
  • "message": "string"
}
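For reference, here is a minimal sketch of calling this endpoint with Python's requests library. The production base URL, the bearer-token authorization header, and the tenant ID value are assumptions; adapt them to your environment and your visma_connect setup.

```python
import requests

# Assumed values: production base URL and a visma_connect bearer token.
BASE_URL = "https://api.machine-learning-factory.visma.com/io"
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",  # the tenant ID you have chosen
}

# Request a presigned upload URL. The response also contains the jobId
# used later to check the status of the upload's validation job.
resp = requests.get(f"{BASE_URL}/presigned_url", headers=HEADERS)
resp.raise_for_status()
body = resp.json()
presigned_url, job_id = body["url"], body["jobId"]
```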

Get Job Status

Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

jobId
required
string

The ID of the job to check the status for.

Responses

Response samples

Content type
application/json
{
  • "status": "inProgress",
  • "datasetsStatus": [
    ],
  • "message": "string"
}
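A hedged polling sketch for this endpoint: call GET /status with the tenantId and jobId headers until the job leaves the inProgress state. The base URL and token are assumptions, as above.

```python
import time

import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL


def wait_for_job(job_id: str, tenant_id: str, token: str, poll_seconds: int = 30) -> dict:
    """Poll GET /status until the job is no longer in progress, then return the status body."""
    headers = {
        "Authorization": f"Bearer {token}",
        "tenantId": tenant_id,
        "jobId": job_id,
    }
    while True:
        resp = requests.get(f"{BASE_URL}/status", headers=headers)
        resp.raise_for_status()
        status = resp.json()
        if status["status"] != "inProgress":
            return status
        time.sleep(poll_seconds)
```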

Upload Raw Data

Once you have retrieved a presigned URL from the GET /presigned_url endpoint, the next step is to make a PUT request to that URL. The body of the PUT request should contain the datasets that you want to upload, following the exact schema defined below. Make sure that all keys in the body are in camelCase (as shown in the schema) and that the values have the correct types and meet the defined requirements.

Note that you will most likely receive a 200 response code even if the data you provide is invalid. This is because the data is not validated until the job runs. To make sure that everything went well, you therefore need to check the status of the job, as described below.

Step-by-step instructions

  1. Create a JSON file containing the datasets for the tenant.
    1. Make sure the JSON adheres to the request body of PUT /[presigned_url].
  2. Call GET /presigned_url with the clientId and tenantId in the header.
    1. The endpoint returns a presigned URL and a jobId.
  3. Call PUT /[presigned_url] by inserting the URL from the previous step (see the sketch after this list).
  4. Do one of the following...
    1. Call GET /status with the tenantId and jobId in the header until the status is “success”.
    2. Provide a webhook in the body of the PUT request in step 3 to receive a request when the job is finished running.
  5. If any datasets are invalid, the job status will be “invalid” and each invalid dataset's status will be set to “invalid”. In this case, no data is stored. The error message in the status should explain what went wrong; if it doesn't, get in touch with us and we'll help you.
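A sketch of steps 1–4, assuming you already hold a presigned URL and jobId from GET /presigned_url. The dataset objects themselves must follow the Raw Data Upload Request Datasets schema below; the placeholders here are illustrative only.

```python
import requests

presigned_url = "<url returned by GET /presigned_url>"

payload = {
    # "webhook": {...},  # optional: details for the endpoint to call when the job finishes
    "datasets": [
        # one object per dataset, following the Raw Data Upload Request Datasets schema
    ],
}

# Upload the raw data. A 200 response only means the upload was received,
# not that the data is valid -- validation happens in a background job,
# so check GET /status (or wait for the webhook) afterwards.
resp = requests.put(presigned_url, json=payload)
resp.raise_for_status()
```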
Request Body schema: application/json
required

The body needs to contain the necessary data used for sales forecasting.

webhook
object

Details for the webhook endpoint to call when a job finishes

datasets
required
Array of objects (Raw Data Upload Request Datasets) non-empty

The datasets containing raw data for the sales trainer

Responses

Callbacks

Request samples

Content type
application/json
{
  • "webhook": {
    },
  • "datasets": [
    ]
}

Callback payload samples

Callback
POST: Sends a notification that a job has ended
Content type
application/json
{
  • "status": "success",
  • "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  • "message": "The data upload job finished successfully"
}
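If you register a webhook, your endpoint receives a POST with a payload like the sample above. Below is a minimal receiver sketch; the use of Flask and the /io-callback route are assumptions, not requirements of the API.

```python
from flask import Flask, request

app = Flask(__name__)


@app.route("/io-callback", methods=["POST"])  # assumed route; register it as your webhook endpoint
def io_callback():
    payload = request.get_json()
    # Payload shape follows the callback sample: status, jobId, message.
    print(payload["jobId"], payload["status"], payload["message"])
    return "", 204
```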

Training

Get Job Status

Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

jobId
required
string

The ID of the job to check the status for.

Responses

Response samples

Content type
application/json
{
  • "status": "inProgress",
  • "datasetsStatus": [
    ],
  • "message": "string"
}

Start Trainer

Starts a new trainer job. In the job, a number of models are trained and evaluated for each dataset. The best-performing model on the most recent time period is selected and stored in our backend, ready for prediction. To train models for multiple datasets, provide one `parametersArray` object for each dataset.

Step-by-step instructions

  1. Determine the tenant and dataset(s) to run the trainer job for.
  2. Create the parametersArray with parameters for each dataset.
  3. Send a `POST` request to this endpoint, following the schema below.
    1. The endpoint will return a jobId.
  4. Do one of the following...
    1. Call GET /status with the tenantId and jobId in the header until the status is “success”.
    2. Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running. See the Callbacks below for details about this request.
Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

Request Body schema: application/json
required
freezeDate
string^\d{4}-\d{2}-\d{2}$

If the endpoint is called for testing or development purposes and you want to train up to a date other than the current date, set that date with this field.

The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today.

webhook
object

Details for the webhook endpoint to call when a job finishes

parametersArray
required
Array of objects (Start Trainer Request Dataset Parameters)

Array with objects containing dataset ID and corresponding trainer job parameters.

Responses

Callbacks

Request samples

Content type
application/json
{
  • "freezeDate": "string",
  • "webhook": {
    },
  • "parametersArray": [
    ]
}

Response samples

Content type
application/json
{
  • "jobId": "string",
  • "message": "string"
}

Callback payload samples

Callback
POST: Sends a notification that a job has ended
Content type
application/json
{
  • "status": "success",
  • "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  • "message": "The training job finished successfully"
}
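A hedged request sketch for starting a trainer job. The /train path is an assumption (this page does not spell out the path; check the OpenAPI specification), and the body shape follows the request sample above.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

trainer_request = {
    "parametersArray": [
        # one Start Trainer Request Dataset Parameters object per dataset
    ],
    # "freezeDate": "2024-01-31",  # optional, testing/development only
    # "webhook": {...},            # optional, to be notified when the job ends
}

# "/train" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.post(f"{BASE_URL}/train", json=trainer_request, headers=HEADERS)
resp.raise_for_status()
job_id = resp.json()["jobId"]  # poll GET /status with this jobId, or wait for the webhook
```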

Predictions

Get Job Status

Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

jobId
required
string

The ID of the job to check the status for.

Responses

Response samples

Content type
application/json
{
  • "status": "inProgress",
  • "datasetsStatus": [
    ],
  • "message": "string"
}

Create Prediction

Starts a prediction job which computes and stores forecasts and purchase order suggestions for each datasetId included in the request body. Fetch the results from the results endpoint.

Step-by-step instructions

  1. Determine the tenant and dataset(s) to create predictions for.
  2. Create the parametersArray with parameters for each dataset.
  3. Send a `POST` request to this endpoint following the schema below.
    1. The endpoint will return a jobId.
  4. Do one of the following...
    1. Call GET /status with the tenantId and jobId in the header until the status is “success”.
    2. Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running.
  5. If the job status is "success", send a `GET` request to the /result endpoint with the jobId in the header to fetch the predictions.
Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

Request Body schema: application/json
required

The body should contain the datasetIds that predictions should be computed for, in addition to required inventory-related data.

freezeDate
string^\d{4}-\d{2}-\d{2}$

If the endpoint is called for testing or development purposes and you want to predict from a date other than the current date, set that date with this field.

The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today.

parametersArray
required
Array of objects (Create Prediction Request Dataset Parameters)

Array with one parameter object for each dataset.

webhook
object

Details for the webhook endpoint to call when a job finishes.

Responses

Callbacks

Request samples

Content type
application/json
{
  • "freezeDate": "string",
  • "parametersArray": [
    ],
  • "webhook": {
    }
}

Response samples

Content type
application/json
{
  • "jobId": "string",
  • "message": "string"
}

Callback payload samples

Callback
POST: Sends a notification that a job has ended
Content type
application/json
{
  • "status": "success",
  • "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  • "message": "The prediction job finished successfully"
}
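A hedged sketch of starting a prediction job and fetching its results once the job status is "success". The /predict path is an assumption; GET /status and GET /result are the endpoints named in this reference.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

prediction_request = {
    "parametersArray": [
        # one Create Prediction Request Dataset Parameters object per dataset
    ],
}

# "/predict" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.post(f"{BASE_URL}/predict", json=prediction_request, headers=HEADERS)
resp.raise_for_status()
job_id = resp.json()["jobId"]

# ... wait until GET /status reports "success" for this jobId, then fetch the results:
result = requests.get(f"{BASE_URL}/result", headers={**HEADERS, "jobId": job_id})
result.raise_for_status()
print(result.json()["results"])
```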

Real-Time Prediction

Computes and returns inventory suggestions in real time for each datasetId provided in the parametersArray of the request body.

As the forecast is provided in the request, the forecasting functionality is bypassed and the inventory suggestion calculation starts directly. This is a fast process, which is why this endpoint can be executed in real time.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

Request Body schema: application/json
required

The body should contain the datasetIds that suggestions should be computed for, a sales forecast for each dataset, and required inventory-related data.

freezeDate
string^\d{4}-\d{2}-\d{2}$

If the endpoint is called for testing or development purposes and you want to predict from a date other than the current date, set that date with this field.

The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today.

parametersArray
required
Array of objects (Real Time Prediction Request Dataset Parameters)

Array with one parameter object for each dataset.

Responses

Request samples

Content type
application/json
{
  • "freezeDate": "string",
  • "parametersArray": [
    ]
}

Response samples

Content type
application/json
{
  • "message": "string",
  • "results": [
    ]
}
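A hedged request sketch. The /real_time_prediction path is an assumption; each parameter object, including the sales forecast you supply, must follow the Real Time Prediction Request Dataset Parameters schema.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

request_body = {
    "parametersArray": [
        # one object per dataset, including the sales forecast to base the suggestions on
    ],
}

# "/real_time_prediction" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.post(f"{BASE_URL}/real_time_prediction", json=request_body, headers=HEADERS)
resp.raise_for_status()
print(resp.json()["results"])  # suggestions are returned directly; no job polling needed
```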

Get Prediction Results

Get the results of a prediction job. These include forecasts, purchase order suggestions, and the historical sales data that were used in the prediction process. The forecasts include a forecast interval, with upper and lower bounds, and a decomposition of the forecast value into trend, seasonality, and noise.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

jobId
required
string

The unique ID of the job

page
integer >= 1
Default: 1

The page number

Responses

Response samples

Content type
application/json
{
  • "message": "string",
  • "page": 1,
  • "pages": 1,
  • "results": [
    ]
}
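Results are paginated through the page query parameter, and the response reports the current page and the total number of pages. Here is a sketch of collecting all pages (base URL and token assumed as above):

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
headers = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
    "jobId": "<jobId of a finished prediction job>",
}

# Walk through all result pages and collect every result object.
all_results, page, pages = [], 1, 1
while page <= pages:
    resp = requests.get(f"{BASE_URL}/result", headers=headers, params={"page": page})
    resp.raise_for_status()
    body = resp.json()
    all_results.extend(body["results"])
    pages = body["pages"]
    page += 1
```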

Data management

Get Data

This endpoint returns a list of dataset IDs that have been uploaded for the given tenant.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
{
  • "countOfDatasets": 1,
  • "datasetIds": [
    ]
}

Get Data for Dataset

This endpoint allows you to get information about a specific dataset.

Authorizations:
visma_connect
path Parameters
datasetId
required
string

The dataset ID to get information for.

header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
Example
{
  • "startDate": "2022-04-01",
  • "endDate": "2022-04-08",
  • "intervalGranularity": "D",
  • "numberOfIntervalsWithRecords": 5,
  • "numberOfIntervalsWithoutRecords": 3,
  • "numberOfIntervalsTotal": 8
}

Delete Data

Delete uploaded data for a specific dataset ID.

Authorizations:
visma_connect
path Parameters
datasetId
required
string

The dataset ID to delete data for.

query Parameters
fromDate
string

The earliest data point to be deleted. If not specified, all data up to the "toDate" will be deleted.

toDate
string

The latest data point to be deleted. If not specified, all data from the "fromDate" will be deleted.

header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
{
  • "message": "string"
}
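A hedged sketch of deleting part of a dataset's history. The /data/{datasetId} path and the date format for fromDate and toDate are assumptions; the query parameter names come from this reference.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

dataset_id = "my-dataset-id"

# "/data/{datasetId}" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.delete(
    f"{BASE_URL}/data/{dataset_id}",
    headers=HEADERS,
    # Both bounds are optional; omit them to delete the dataset's whole history.
    params={"fromDate": "2023-01-01", "toDate": "2023-12-31"},
)
resp.raise_for_status()
print(resp.json()["message"])
```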

Feedback

Upload Feedback

This endpoint allows users to upload feedback for predictions made by the AI model. The feedback helps improve the model by providing insights into the accuracy of its predictions.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

Responses

Response samples

Content type
application/json
{
  • "message": "Feedback successfully uploaded",
  • "status": "success"
}

Subscription management

Get Subscriptions

This endpoint provides an overview of the subscriptions for a tenant, with optional filtering by dataset ID and subscription IDs.

Authorizations:
visma_connect
query Parameters
datasetId
string

Optional dataset ID to filter subscriptions.

subscriptionIds
string

Optional comma-separated list of subscription IDs to filter within the dataset.

header Parameters
tenantId
required
string

The ID of the tenant.

Responses

Response samples

Content type
application/json
{
  • "countOfDatasets": 0,
  • "countOfSubscriptionIds": 0,
  • "subscriptionObjects": [
    ]
}
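A sketch of listing subscriptions filtered by dataset and subscription IDs. The /subscriptions path is an assumption; the query parameter names come from this reference.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

# "/subscriptions" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.get(
    f"{BASE_URL}/subscriptions",
    headers=HEADERS,
    params={
        "datasetId": "my-dataset-id",      # optional filter
        "subscriptionIds": "sub-1,sub-2",  # optional comma-separated filter
    },
)
resp.raise_for_status()
print(resp.json()["subscriptionObjects"])
```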

Unsubscribe

Unsubscribe from a list of subscription objects.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

The ID of the tenant.

Request Body schema: application/json
required

The dataset IDs and optionally subscription IDs to unsubscribe.

subscriptionObjects
required
Array of objects (Subscription Object)

List of subscription objects.

Array
datasetId
required
string

The dataset ID.

subscriptionIds
Array of strings

Responses

Request samples

Content type
application/json
{
  • "subscriptionObjects": [
    ]
}

Response samples

Content type
application/json
{
  • "countOfUnsubscribedDatasets": 0,
  • "countOfUnsubscribedSubscriptionIds": 0,
  • "unsubscribedSubscriptionObjects": [
    ]
}

Inventory classification

Start Inventory Classification

Starts an inventory classification job which conducts ABC classification, trend and seasonality identification, and demand type classification for each datasetId included in the request.

Step-by-step instructions

  1. Determine the tenant and dataset(s) to run the classification job for.
  2. Set the date range for the data to base the classification on.
  3. Determine the ABC driver.
  4. Send a `POST` request to this endpoint, following the schema below.
    1. The endpoint will return a jobId.
  5. Do one of the following...
    1. Call GET /status with the tenantId and jobId in the header until the status is “success”.
    2. Provide a webhook in the body of the `POST` request to receive a request when the job is finished running. See the Callbacks below for details about this request.
Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

Request Body schema: application/json
required
webhook
object

Details for the webhook endpoint to call when a job finishes.

datasetIds
required
Array of strings (Start Inventory Classification Request Dataset Ids)

Array with the IDs of the datasets to run classification for.

abcDriver
required
string
Enum: "profit" "revenue" "quantity"

The value driver to be used in the ABC classification.

dateRange
object (Start Inventory Classification Request Date Range)

The date range to be used for the inventory classification.

If endDate is provided, it will be used as the current date. This means you can provide a date in the past to see how your inventory would have been classified on that date; for example, a dataset with two years of sales might be classified as New if endDate is set to almost two years back.

Responses

Callbacks

Request samples

Content type
application/json
{
  • "webhook": { },
  • "datasetIds": [
    ],
  • "abcDriver": "profit",
  • "dateRange": {
    }
}

Response samples

Content type
application/json
{
  • "jobId": "string",
  • "message": "string"
}

Callback payload samples

Callback
POST: Sends a notification that a job has ended
Content type
application/json
{
  • "status": "success",
  • "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  • "message": "The classification job finished successfully"
}
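A hedged request sketch for starting a classification job. The /classification path is an assumption; the body shape follows the request sample above.

```python
import requests

BASE_URL = "https://api.machine-learning-factory.visma.com/io"  # assumed production base URL
HEADERS = {
    "Authorization": "Bearer <visma_connect_access_token>",
    "tenantId": "my-tenant-id",
}

classification_request = {
    "datasetIds": ["my-dataset-id"],
    "abcDriver": "revenue",    # one of "profit", "revenue", "quantity"
    # "dateRange": {...},      # optional; endDate can simulate a past "current date"
    # "webhook": {...},        # optional; called when the job ends
}

# "/classification" is an assumed path -- take the real one from the OpenAPI specification.
resp = requests.post(f"{BASE_URL}/classification", json=classification_request, headers=HEADERS)
resp.raise_for_status()
job_id = resp.json()["jobId"]  # poll GET /status, then fetch the classification results
```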

Get Inventory Classification Results

Get the results of an inventory classification job. These include a product's ABC category, its seasonality and trend, and its demand type.

Authorizations:
visma_connect
header Parameters
tenantId
required
string

Tenant ID

jobId
required
string

The unique ID of the job

Responses

Response samples

Content type
application/json
{
  • "message": "string",
  • "results": [
    ]
}

Health check

Health Check

Check the health of the service.

Responses