API Reference
Inventory Optimization API (1.0)
The Inventory Optimization API helps stock-holding companies manage their inventory efficiently by generating sales forecasts, purchase order suggestions, and inventory classifications based on historical sales orders, current inventory levels, and user preferences.
Base URL and API Scope
The base URL for this API depends on the environment:
- Production: https://api.machine-learning-factory.visma.com/io
- Stage: https://api.machine-learning-factory.stage.visma.com/io
- Test: https://api.machine-learning-factory.test.visma.com/io
Get Presigned URL
Obtain a presigned URL to upload data to the platform. This URL allows you to make a PUT request with the data in the request body. Depending on the optional 'type' query parameter, the uploaded data can either trigger a prediction job or be stored for general use.
The URL is valid for 60 minutes. The tenant ID must be supplied in the tenantId request header.
- Use 'prediction' as the type to trigger processing that initiates a prediction job with the uploaded data.
- Use 'raw_data' as the type for standard data uploads. This is the default behavior if the type parameter is not specified.
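As a minimal sketch of assembling this request in Python (the base URL shown is the test environment, `my-tenant` is a hypothetical tenant ID, and authorization headers are omitted since they depend on your credential setup):

```python
from urllib.parse import urlencode

# Test-environment base URL; see "Base URL and API Scope" above.
BASE_URL = "https://api.machine-learning-factory.test.visma.com/io"

def build_presigned_url_request(tenant_id, upload_type="raw_data"):
    """Return (url, headers) for GET /presigned_url.

    upload_type is the optional 'type' query parameter:
    "prediction" or "raw_data" (the default).
    """
    if upload_type not in ("prediction", "raw_data"):
        raise ValueError("type must be 'prediction' or 'raw_data'")
    url = f"{BASE_URL}/presigned_url?{urlencode({'type': upload_type})}"
    # Auth headers are intentionally left out here.
    headers = {"tenantId": tenant_id}
    return url, headers

url, headers = build_presigned_url_request("my-tenant", "prediction")
# Send with any HTTP client, e.g. requests.get(url, headers=headers);
# the response body carries "url" (the presigned URL) and "jobId".
```

The helper only builds the request pieces; the actual call is left to your HTTP client of choice.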
Authorizations:
query Parameters
type | string Default: "raw_data" Enum: "prediction" "raw_data" Determines the type of data the presigned URL will be used for. "prediction" indicates prediction data; "raw_data" indicates that it will be used as training data. Defaults to "raw_data" if not provided. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 429
- 500
{- "url": "string",
- "jobId": "string",
- "message": "string"
}
Get Job Status
Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
jobId required | string The ID of the job to check the status for. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 429
- 500
{- "status": "inProgress",
- "datasetsStatus": [
- {
- "datasetId": "string",
- "status": "success",
- "message": {
- "code": "SUCCESS",
- "message": "string"
}
}
], - "message": "string"
}
Upload Raw Data
After retrieving a presigned URL from the GET /presigned_url endpoint, the next step is to make a PUT request to that URL. The body of the PUT request should contain the datasets you want to upload, following the exact schema defined below. Make sure that all keys in the body are camelCase (as shown in the schema) and that the values have the correct types and meet the defined requirements.
Note that you will most likely receive a 200 response code even if the data you provide is invalid, because the data is not validated until the job runs. To confirm that everything went well, check the status of the job as described below.
Step-by-step instructions
- Create a JSON file containing the datasets for the tenant.
- Make sure the JSON adheres to the request body of PUT /[presigned_url].
- Call GET /presigned_url with the clientId and tenantId in the header.
- The endpoint returns a presigned URL and a jobId.
- Call PUT /[presigned_url] by inserting the URL from the previous step.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the PUT request in step 3 to receive a request when the job is finished running.
- If any datasets are invalid, the job status will be “invalid” and the invalid datasets will have their statuses set to “invalid”. In this case, no data will be stored. The error message in the status should explain what went wrong; if it does not, get in touch with us and we'll help you.
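The steps above can be sketched as a payload builder. This is a hedged example, not the official client: `build_upload_body` is a hypothetical helper, and the dataset ID, transaction values, and webhook URL are made up for illustration.

```python
import json

def build_upload_body(datasets, webhook_url=None, webhook_api_key=None):
    """Assemble the JSON body for the PUT to the presigned URL.

    `datasets` is a list of {"datasetId": ..., "transactions": [...]}
    objects; all keys must be camelCase, exactly as in the schema.
    """
    body = {"datasets": datasets}
    if webhook_url is not None:
        # Optional webhook: the platform calls it when the job finishes.
        body["webhook"] = {"webhookUrl": webhook_url,
                          "webhookApiKey": webhook_api_key}
    return json.dumps(body)

body = build_upload_body(
    datasets=[{
        "datasetId": "product-42",  # hypothetical dataset ID
        "transactions": [{
            "transactionId": "t-1",
            "departureDate": "2024-03-01",
            "quantity": 3,
            "unitPrice": 9.5,
            "unitCost": 4.0,
        }],
    }],
    webhook_url="https://example.com/hook",
    webhook_api_key="secret",
)
# PUT `body` to the presigned URL, then poll GET /status with the jobId.
```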
Request Body schema: application/json (required)
The body needs to contain the necessary data used for sales forecasting.
object Details for the webhook endpoint to call when a job finishes | |
required | Array of objects (Raw Data Upload Request Datasets) non-empty The datasets containing raw data for the sales trainer |
Responses
Callbacks
Request samples
- Payload
{- "webhook": {
- "webhookUrl": "string",
- "webhookApiKey": "string"
}, - "datasets": [
- {
- "datasetId": "string",
- "transactions": [
- {
- "transactionId": "string",
- "departureDate": "string",
- "quantity": 0,
- "unitPrice": 0,
- "unitCost": 0
}
]
}
]
}
Callback payload samples
{- "status": "success",
- "jobId": "7720a8c02c664d80a69ed2141b731ee3",
- "message": "The data upload job finished successfully"
}
Start Trainer
Starts a new trainer job. In the job, a number of models are trained and evaluated for each dataset. The best-performing model on the most recent time period is selected and stored in our backend, ready for prediction. To train models for multiple datasets, provide one `parametersArray` object for each dataset.
Step-by-step instructions
- Determine the tenant and dataset(s) to run the trainer job for.
- Create the parametersArray with parameters for each dataset.
- Send a `POST` request to this endpoint, following the schema below.
- The endpoint will return a jobId.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running. See the Callbacks below for details about this request.
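A minimal sketch of building the trainer request body, assuming the schema below (`build_trainer_body` is a hypothetical helper, and the dataset ID and dates are illustrative):

```python
import json
import re

def build_trainer_body(parameters_array, freeze_date=None, webhook=None):
    """Build the POST body for the trainer endpoint.

    Each entry in parameters_array pairs a datasetId with its trainer
    parameters (e.g. frequency and horizon, as in the schema below).
    """
    body = {"parametersArray": parameters_array}
    if freeze_date is not None:
        # The API requires 'YYYY-MM-DD'; validate before sending.
        if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", freeze_date):
            raise ValueError("freezeDate must be 'YYYY-MM-DD'")
        body["freezeDate"] = freeze_date
    if webhook is not None:
        body["webhook"] = webhook
    return json.dumps(body)

body = build_trainer_body(
    [{"datasetId": "product-42", "frequency": "D", "horizon": 30}],
    freeze_date="2024-06-01",
)
# POST `body` to this endpoint with the tenantId header; the response
# carries the jobId to poll via GET /status.
```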
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
freezeDate | string (pattern: ^\d{4}-\d{2}-\d{2}$) If the endpoint is called for testing or development purposes and you want to train up to a date other than the current date, set that date with this field. The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today. |
object Details for the webhook endpoint to call when a job finishes | |
required | Array of objects (Start Trainer Request Dataset Parameters) Array with objects containing dataset ID and corresponding trainer job parameters. |
Responses
Callbacks
Request samples
- Payload
{- "freezeDate": "string",
- "webhook": {
- "webhookUrl": "string",
- "webhookApiKey": "string"
}, - "parametersArray": [
- {
- "datasetId": "string",
- "frequency": "D",
- "horizon": 1
}
]
}
Response samples
- 202
- 400
- 401
- 403
- 413
- 429
- 500
{- "jobId": "string",
- "message": "string"
}
Callback payload samples
{- "status": "success",
- "jobId": "7720a8c02c664d80a69ed2141b731ee3",
- "message": "The training job finished successfully"
}
Create Prediction
Starts a prediction job which computes and stores forecasts and purchase order suggestions for each datasetId included in the request body. Fetch the results from the results endpoint.
Step-by-step instructions
- Determine the tenant and dataset(s) to create predictions for.
- Create the parametersArray with parameters for each dataset.
- Send a `POST` request to this endpoint following the schema below.
- The endpoint will return a jobId.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running.
- If the job status is "success", send a `GET` request to the /result endpoint with the jobId in the header to fetch the predictions.
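The polling step above can be sketched with a small helper. This is an illustration, not part of the API: `wait_for_job` and the stubbed status sequence are hypothetical, and in practice `fetch_status` would perform the GET /status call with the tenantId and jobId headers.

```python
import time

def wait_for_job(fetch_status, poll_seconds=10, max_attempts=90):
    """Poll a job until its status leaves 'inProgress'.

    fetch_status is a callable that performs GET /status and returns
    the top-level "status" field from the response.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status != "inProgress":
            return status  # e.g. "success" or "invalid"
        time.sleep(poll_seconds)
    raise TimeoutError("job did not finish within the polling budget")

# Stubbed status sequence instead of real HTTP calls:
statuses = iter(["inProgress", "inProgress", "success"])
result = wait_for_job(lambda: next(statuses), poll_seconds=0)
# result == "success"
```

Using a webhook instead of polling avoids holding a loop open for long-running jobs.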
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
The body should contain the datasetIds that predictions should be computed for, in addition to required inventory-related data.
freezeDate | string (pattern: ^\d{4}-\d{2}-\d{2}$) If the endpoint is called for testing or development purposes and you want to predict from a date other than the current date, set that date with this field. The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today. |
required | Array of objects (Create Prediction Request Dataset Parameters) Array with one parameter object for each dataset. |
object Details for the webhook endpoint to call when a job finishes. |
Responses
Callbacks
Request samples
- Payload
{- "freezeDate": "string",
- "parametersArray": [
- {
- "datasetId": "string",
- "currentInventoryLevel": 0,
- "wantedServiceLevel": 0.5,
- "supplier": {
- "supplierId": "string",
- "batchSize": 0,
- "unitPrice": 0,
- "leadTime": {
- "value": 0,
- "granularity": "D"
}
}, - "planningPeriod": {
- "value": 1,
- "granularity": "D"
}, - "inventoryOrders": [
- {
- "estimatedDeliveryDate": "string",
- "quantity": 0
}
], - "futureSalesOrders": [
- {
- "departureDate": "string",
- "quantity": 0
}
], - "minimumInventory": 0,
- "maximumInventory": 0,
- "replenishmentInterval": {
- "value": 1,
- "granularity": "D"
}
}
], - "webhook": {
- "webhookUrl": "string",
- "webhookApiKey": "string"
}
}
Response samples
- 202
- 400
- 401
- 403
- 413
- 429
- 500
{- "jobId": "string",
- "message": "string"
}
Callback payload samples
{- "status": "success",
- "jobId": "7720a8c02c664d80a69ed2141b731ee3",
- "message": "The prediction job finished successfully"
}
Real-Time Prediction
Computes and returns inventory suggestions in real time for each datasetId provided in the parametersArray of the request body.
As the forecast is provided in the request, the forecasting functionality is bypassed to start the inventory suggestion calculation directly. This is a fast process, which is why this endpoint can be executed in real-time.
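Since each dataset entry must carry its own forecast, a client-side sanity check before sending can catch incomplete forecast points early. This is a hedged sketch: `build_realtime_body` and the example values are hypothetical, and the required forecast keys are taken from the request schema below.

```python
import json

# Keys of a forecast point, per the request schema below.
FORECAST_KEYS = {"date", "predictedQuantity", "predictedSeason",
                 "predictedTrend", "predictedNoise",
                 "lowerQuantity", "upperQuantity"}

def build_realtime_body(parameters_array):
    """Build the request body for the real-time prediction endpoint,
    checking that every forecast point is complete."""
    for params in parameters_array:
        for point in params.get("forecast", []):
            missing = FORECAST_KEYS - point.keys()
            if missing:
                raise ValueError(f"forecast point missing keys: {sorted(missing)}")
    return json.dumps({"parametersArray": parameters_array})

body = build_realtime_body([{
    "datasetId": "product-42",       # hypothetical dataset ID
    "currentInventoryLevel": 10,
    "wantedServiceLevel": 0.95,
    "frequency": "D",
    "forecast": [{
        "date": "2024-07-01",
        "predictedQuantity": 5,
        "predictedSeason": 0,
        "predictedTrend": 0,
        "predictedNoise": 0,
        "lowerQuantity": 2,
        "upperQuantity": 8,
    }],
}])
# POST `body` to this endpoint; the suggestions come back in the response.
```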
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
The body should contain the datasetIds that suggestions should be computed for, a sales forecast for each dataset, and required inventory-related data.
freezeDate | string (pattern: ^\d{4}-\d{2}-\d{2}$) If the endpoint is called for testing or development purposes and you want to predict from a date other than the current date, set that date with this field. The date must be in the format 'YYYY-MM-DD' and between '2000-01-01' and today. |
required | Array of objects (Real Time Prediction Request Dataset Parameters) Array with one parameter object for each dataset. |
Responses
Request samples
- Payload
{- "freezeDate": "string",
- "parametersArray": [
- {
- "datasetId": "string",
- "currentInventoryLevel": 0,
- "wantedServiceLevel": 0.5,
- "supplier": {
- "supplierId": "string",
- "batchSize": 0,
- "unitPrice": 0,
- "leadTime": {
- "value": 0,
- "granularity": "D"
}
}, - "planningPeriod": {
- "value": 1,
- "granularity": "D"
}, - "forecast": [
- {
- "date": "string",
- "predictedQuantity": 0,
- "predictedSeason": 0,
- "predictedTrend": 0,
- "predictedNoise": 0,
- "lowerQuantity": 0,
- "upperQuantity": 0
}
], - "inventoryOrders": [
- {
- "estimatedDeliveryDate": "string",
- "quantity": 0
}
], - "futureSalesOrders": [
- {
- "departureDate": "string",
- "quantity": 0
}
], - "frequency": "D",
- "minimumInventory": 0,
- "maximumInventory": 0,
- "replenishmentInterval": {
- "value": 1,
- "granularity": "D"
}
}
]
}
Response samples
- 200
- 400
- 401
- 403
- 404
- 413
- 429
- 500
{- "message": "string",
- "results": [
- {
- "datasetId": "string",
- "supplierId": "string",
- "safetyStockSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}, - "reorderPointSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}, - "replenishmentSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}
}
]
}
Get Prediction Results
Get the results of a prediction job. These include forecasts, purchase order suggestions, and the historical sales data that were used in the prediction process. The forecasts include a forecast interval, with upper and lower bounds, and a decomposition of the forecast value into trend, seasonality, and noise.
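Since the results are paginated (see the page parameter and the page/pages fields below), a small iterator can walk every page. This is an illustrative helper, not part of the API; in practice `fetch_page` would perform the GET with the tenantId and jobId headers, and the stub below stands in for real HTTP calls.

```python
def iter_result_pages(fetch_page):
    """Yield the "results" array of every page of a prediction job.

    fetch_page(page) performs GET on this endpoint with the given
    page number and returns the decoded JSON body.
    """
    page = 1
    while True:
        data = fetch_page(page)
        yield data["results"]
        if page >= data["pages"]:
            return
        page += 1

# Stubbed two-page response instead of real HTTP calls:
pages = {
    1: {"page": 1, "pages": 2, "results": [{"datasetId": "a"}]},
    2: {"page": 2, "pages": 2, "results": [{"datasetId": "b"}]},
}
all_results = [r for batch in iter_result_pages(pages.get) for r in batch]
# all_results holds the result objects from both pages, in order.
```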
Authorizations:
header Parameters
tenantId required | string Tenant ID |
jobId required | string The unique ID of the job |
page | integer >= 1 Default: 1 The page number |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 429
- 500
{- "message": "string",
- "page": 1,
- "pages": 1,
- "results": [
- {
- "datasetId": "string",
- "supplierId": "string",
- "safetyStockSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}, - "reorderPointSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}, - "replenishmentSuggestion": {
- "quantity": 0,
- "validDateInterval": {
- "startDate": "string",
- "endDate": "string"
}
}, - "forecast": [
- {
- "date": "string",
- "predictedQuantity": 0,
- "predictedSeason": 0,
- "predictedTrend": 0,
- "predictedNoise": 0,
- "lowerQuantity": 0,
- "upperQuantity": 0
}
], - "historicalData": [
- {
- "date": "string",
- "quantity": 0
}
]
}
]
}
Get Data
This endpoint returns a list of the dataset IDs that have been uploaded for the given tenant.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{- "countOfDatasets": 1,
- "datasetIds": [
- "dataset-1"
]
}
Get Data for Dataset
This endpoint allows you to get information about a specific dataset.
Authorizations:
path Parameters
datasetId required | string The dataset ID to get information for. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{- "startDate": "2022-04-01",
- "endDate": "2022-04-08",
- "intervalGranularity": "D",
- "numberOfIntervalsWithRecords": 5,
- "numberOfIntervalsWithoutRecords": 3,
- "numberOfIntervalsTotal": 8
}
Delete Data
Delete uploaded data for a specific dataset ID.
Authorizations:
path Parameters
datasetId required | string The dataset ID to delete data for. |
query Parameters
fromDate | string The earliest data point to be deleted. If not specified, all data up to the "toDate" will be deleted. |
toDate | string The latest data point to be deleted. If not specified, all data from the "fromDate" will be deleted. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{- "message": "string"
}
Upload Feedback
This endpoint allows users to upload feedback for predictions made by the AI model. The feedback helps improve the model by providing insights into the accuracy of its predictions.
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 413
- 429
- 500
{- "message": "Feedback successfully uploaded"
}
Get Subscriptions
This endpoint provides an overview of the subscriptions for a tenant, with optional filtering by dataset ID and subscription IDs.
Authorizations:
query Parameters
datasetId | string Optional dataset ID to filter subscriptions. |
subscriptionIds | string Optional comma-separated list of subscription IDs to filter within the dataset. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{- "countOfDatasets": 0,
- "countOfSubscriptionIds": 0,
- "subscriptionObjects": [
- {
- "datasetId": "string",
- "countOfSubscriptionIds": 0,
- "subscriptionIds": [
- "string"
]
}
]
}
Unsubscribe
Unsubscribe from a list of subscription objects.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
Request Body schema: application/json (required)
The dataset IDs and optionally subscription IDs to unsubscribe.
required | Array of objects (Subscription Object) List of subscription objects. |
Responses
Request samples
- Payload
{- "subscriptionObjects": [
- {
- "datasetId": "string",
- "subscriptionIds": [
- "string"
]
}
]
}
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{- "countOfUnsubscribedDatasets": 0,
- "countOfUnsubscribedSubscriptionIds": 0,
- "unsubscribedSubscriptionObjects": [
- {
- "datasetId": "string",
- "countOfSubscriptionIds": 0,
- "subscriptionIds": [
- "string"
]
}
]
}
Start Inventory Classification
Starts an inventory classification job which conducts ABC classification, trend and seasonality identification, and demand type classification for each datasetId included in the request.
Step-by-step instructions
- Determine the tenant and dataset(s) to run the classification job for.
- Set the date range for the data to base the classification on.
- Determine the ABC driver.
- Send a `POST` request to this endpoint, following the schema below.
- The endpoint will return a jobId.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the `POST` request to receive a request when the job is finished running. See the Callbacks below for details about this request.
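The steps above can be sketched as a request builder, assuming the schema below (`build_classification_body` is a hypothetical helper; the abcDriver enum values are from the schema):

```python
import json

# Allowed value drivers for ABC classification, per the schema below.
ABC_DRIVERS = ("profit", "revenue", "quantity")

def build_classification_body(dataset_ids, abc_driver,
                              date_range=None, webhook=None):
    """Build the POST body for an inventory classification job."""
    if abc_driver not in ABC_DRIVERS:
        raise ValueError(f"abcDriver must be one of {ABC_DRIVERS}")
    body = {"datasetIds": dataset_ids, "abcDriver": abc_driver}
    if date_range is not None:
        # Optional {"startDate": ..., "endDate": ...} range; endDate is
        # treated as the current date for the classification.
        body["dateRange"] = date_range
    if webhook is not None:
        body["webhook"] = webhook
    return json.dumps(body)

body = build_classification_body(
    ["product-42"],  # hypothetical dataset ID
    "revenue",
    date_range={"startDate": "2023-01-01", "endDate": "2024-01-01"},
)
# POST `body` with the tenantId header; poll the returned jobId via GET /status.
```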
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
object Details for the webhook endpoint to call when a job finishes. | |
datasetIds required | Array of strings (Start Inventory Classification Request Dataset Ids) Array with the ID of the datasets to run classification for. |
abcDriver required | string Enum: "profit" "revenue" "quantity" The value driver to be used in the ABC classification. |
object (Start Inventory Classification Request Date Range) The date range to be used for the inventory classification. If endDate is provided, it is treated as the current date; that is, you can provide a past date to see how your inventory would have been classified at that date. For example, a dataset with two years of sales might be classified as New if endDate is set to almost two years back. |
Responses
Callbacks
Request samples
- Payload
{- "webhook": { },
- "datasetIds": [
- "string"
], - "abcDriver": "profit",
- "dateRange": {
- "startDate": "string",
- "endDate": "string"
}
}
Response samples
- 202
- 400
- 401
- 403
- 413
- 429
- 500
{- "jobId": "string",
- "message": "string"
}
Callback payload samples
{- "status": "success",
- "jobId": "7720a8c02c664d80a69ed2141b731ee3",
- "message": "The classification job finished successfully"
}
Get Inventory Classification Results
Get the results of an inventory classification job. These include a product's ABC category, its seasonality and trend, and its demand type.
Authorizations:
header Parameters
tenantId required | string Tenant ID |
jobId required | string The unique ID of the job |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 429
- 500
{- "message": "string",
- "results": [
- {
- "datasetId": "string",
- "abcCategory": "A",
- "isSeasonal": true,
- "seasonalities": [
- "January"
], - "demandType": "New",
- "trend": "Positive"
}
]
}