API Reference
PayrollDetect API (1.0)
Welcome to the Resolve PayrollDetect API.
Get Presigned URL
Obtain a presigned URL to upload data to the platform. This URL allows you to make a PUT request with the data in the request body. Depending on the optional 'type' query parameter, the uploaded data can either trigger a prediction job or be stored for general use.
The URL is valid for 60 minutes. The tenantId header is required on the request.
- Use 'prediction' as the type to trigger processing that initiates a prediction job with the uploaded data.
- Use 'raw_data' as the type for standard data uploads. This is the default behavior if the type parameter is not specified.
Authorizations:
query Parameters
type | string Default: "raw_data" Enum: "prediction" "raw_data" Determines the type of data the presigned URL will be used for. "prediction" indicates prediction data; "raw_data" indicates that it will be used as training data. Defaults to "raw_data" if not provided. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 429
- 500
{
  "url": "string",
  "jobId": "string",
  "message": "string"
}
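As a sketch, the request can be assembled like this in Python. The base URL is a placeholder assumption; only the query parameter and header names come from the reference above:

```python
BASE_URL = "https://api.example.com"  # hypothetical host, substitute your own

def build_presigned_url_request(tenant_id: str, type_: str = "raw_data") -> dict:
    """Return the pieces of the GET /presigned_url request:
    the type query parameter and the required tenantId header."""
    if type_ not in ("prediction", "raw_data"):
        raise ValueError(f"unsupported type: {type_!r}")
    return {
        "method": "GET",
        "url": f"{BASE_URL}/presigned_url",
        "params": {"type": type_},
        "headers": {"tenantId": tenant_id},
    }
```

Pass the returned pieces to the HTTP client of your choice; remember that the resulting URL expires after 60 minutes.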
Get Job Status
Get the status for a validation, training, or prediction job. Use the job ID that was returned when the job was created. If the job runs with multiple datasets, the status for each dataset's process is returned.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
jobId required | string The ID of the job to check the status for. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 429
- 500
{
  "status": "inProgress",
  "datasetsStatus": [
    {
      "datasetId": "string",
      "status": "string",
      "message": "string"
    }
  ],
  "message": "string"
}
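A status endpoint like this is typically polled until the job leaves the inProgress state. A minimal sketch, with the HTTP call injected as a callable so the polling logic stays independent of any client library:

```python
import time

def poll_job_status(fetch_status, interval_seconds: float = 5.0, max_attempts: int = 60):
    """Poll until the job leaves "inProgress".

    fetch_status is any callable that performs the GET /status request
    (with the tenantId and jobId headers) and returns the parsed JSON body.
    """
    for _ in range(max_attempts):
        body = fetch_status()
        if body["status"] != "inProgress":
            return body  # terminal state: e.g. success or invalid
        time.sleep(interval_seconds)
    raise TimeoutError("job did not finish within the polling window")
```

The interval and attempt limit are illustrative defaults, not values prescribed by the API.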
Upload Data or Trigger Prediction Jobs via Presigned URL
This endpoint allows you to upload datasets for training, or trigger prediction jobs, by using a presigned URL. The first step is to retrieve a presigned URL from the GET /presigned_url endpoint. If you want to upload data for training, set `type=raw_data`. If you want to trigger a prediction job, set `type=prediction`.
When you have retrieved a presigned URL from the GET /presigned_url endpoint, the next step is to make a PUT request to the URL.
The body of the PUT request should contain the datasets that you want to upload, following the exact schema defined below. Make sure that all keys in the JSON body are in camelCase (as shown in the schema), and that the values have the correct types and meet the defined requirements.
A 200 response code is typically returned after the PUT request, even if the data is invalid, as validation happens only during job processing. To confirm success, monitor the job's status using the jobId provided in the initial response.
Usage Overview
- Prepare a JSON file containing the data to be uploaded or prediction input:
- Follow the required JSON schema for data uploads or prediction requests, ensuring all keys are in camelCase and values conform to data type requirements.
- Use the GET /presigned_url endpoint to retrieve a presigned URL and a unique jobId:
- Set the type parameter to specify either "raw_data" or "prediction".
- Perform a PUT request to the retrieved presigned URL with the JSON data in the request body.
- Monitor the job's status to verify successful execution:
- Periodically check the job status by calling GET /status with the jobId to confirm the outcome as "success".
- Alternatively, include a webhook URL in the PUT request to receive automatic status updates when the job completes.
- In cases of data validation failure, the job status will be "invalid". This means the provided data did not meet schema or format requirements, and affected datasets will not be processed. Review the error message for troubleshooting guidance, or reach out for support if additional assistance is needed.
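Because validation only happens during job processing, it can save a round trip to check the payload's key casing before the PUT. A sketch of such a pre-flight check (the camelCase pattern is an assumption about what the API accepts):

```python
import re

CAMEL_CASE = re.compile(r"^[a-z][a-zA-Z0-9]*$")

def non_camel_case_keys(body, path=""):
    """Collect the JSON paths of keys that are not camelCase,
    recursing into nested objects and arrays."""
    bad = []
    if isinstance(body, dict):
        for key, value in body.items():
            key_path = f"{path}.{key}" if path else key
            if not CAMEL_CASE.match(key):
                bad.append(key_path)
            bad.extend(non_camel_case_keys(value, key_path))
    elif isinstance(body, list):
        for i, item in enumerate(body):
            bad.extend(non_camel_case_keys(item, f"{path}[{i}]"))
    return bad
```

Run it over the payload before uploading; an empty result means every key passes the casing check (it does not validate types or other schema requirements).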
Authorizations:
Request Body schema: application/json (required)
JSON payload containing the required data for model training or prediction. Must adhere to one of the specified schemas.
required | Array of objects (Raw Data Upload Request Dataset) non-empty Datasets containing raw data for training. |
object Details for the webhook endpoint to call when a job finishes. |
Responses
Callbacks
Request samples
- Payload
{
  "datasets": [
    {
      "id": "6180934d-044e-48e0-9b70-e1498fcf417e",
      "payslips": [
        {
          "id": "a10ba26d-1d90-409e-ae19-624fb16bf551",
          "date": "2024-05-21",
          "employeeId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
          "employeeGroups": [
            "Engineering",
            "Intern"
          ],
          "transactions": [
            {
              "id": "53c1ec4e-c2b9-4621-a7a9-00d93761f3d3",
              "contractId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
              "type": "overtime",
              "subType": "incomeTax",
              "amount": 1000,
              "unit": "EUR",
              "fromDate": "2024-05-01",
              "toDate": "2024-05-31",
              "rate": 200,
              "quantity": 5,
              "costUnits": [
                {
                  "department": "Engineering",
                  "project": "Project X",
                  "location": "Stockholm"
                },
                {
                  "company": "Company ABC"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
Callback payload samples
{
  "status": "success",
  "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  "message": "The job has been completed successfully"
}
Start Trainer
Starts a new trainer job.
Step-by-step instructions
- Determine the tenant and dataset(s) to run the trainer job for.
- Create the list of dataset IDs (datasetIds) to run the trainer job on.
- Send a `POST` request to this endpoint, following the schema below.
- The endpoint will return a jobId.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running. See the Callbacks below for details about this request.
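The steps above can be sketched as a small request-body builder. The datasetIds field comes from the schema below; the webhook object's shape (a "webhook" key with a "url" field) is an assumption, so check the actual schema before relying on it:

```python
def build_trainer_request_body(dataset_ids, webhook_url=None):
    """Assemble the POST body for Start Trainer.

    dataset_ids: non-empty list of dataset ID strings (per the schema).
    webhook_url: optional callback URL; the "webhook"/"url" field names
    here are assumptions, not confirmed by the reference.
    """
    if not dataset_ids:
        raise ValueError("datasetIds must be a non-empty list")
    body = {"datasetIds": list(dataset_ids)}
    if webhook_url is not None:
        body["webhook"] = {"url": webhook_url}
    return body
```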
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
datasetIds required | Array of strings non-empty List of unique IDs of the datasets on which the training should be run. The models will be trained from scratch on all uploaded information in the dataset. |
object Details for the webhook endpoint to call when a job finishes. |
Responses
Callbacks
Request samples
- Payload
{
  "datasetIds": [
    "1bbfc224-5974-49a7-a03f-6a1692e2e7b3",
    "cf7c94c6-6057-48e6-b91c-121ad6cf392e",
    "d232399c-b6a9-475c-ad86-b38718df18ba"
  ]
}
Response samples
- 202
- 400
- 401
- 403
- 413
- 429
- 500
{
  "jobId": "string",
  "message": "string"
}
Callback payload samples
{
  "status": "success",
  "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  "message": "The training job finished successfully"
}
Create Prediction
Starts a prediction job which computes and stores anomaly scores for each datasetId included in the request body. Fetch the results from the results endpoint.
Step-by-step instructions
- Determine the tenant and dataset(s) to create predictions for.
- Assemble the datasets array with the payslip data for each dataset.
- Send a `POST` request to this endpoint following the schema below.
- The endpoint will return a jobId.
- Do one of the following...
- Call GET /status with the tenantId and jobId in the header until the status is “success”.
- Provide a webhook in the body of the `POST` request in step 3 to receive a request when the job is finished running.
- If the job status is "success", send a `GET` request to the /result endpoint with the jobId in the header to fetch the predictions.
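The whole flow can be sketched with the three HTTP calls injected as callables, which keeps the orchestration logic testable without a live API (the sleep between polls is omitted for brevity):

```python
def run_prediction_job(post_prediction, get_status, get_results):
    """Create Prediction flow: start the job, poll until it leaves
    "inProgress", then fetch the results.

    post_prediction() -> parsed body of the POST response (with jobId)
    get_status(job_id) -> parsed body of GET /status
    get_results(job_id) -> parsed body of GET /result
    """
    job_id = post_prediction()["jobId"]
    while True:
        status = get_status(job_id)["status"]
        if status == "success":
            return get_results(job_id)
        if status != "inProgress":
            raise RuntimeError(f"job {job_id} ended with status {status!r}")
```

In production you would add a delay between status checks, or register a webhook instead of polling.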
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
The body should contain the datasets that predictions should be computed for, including the payslip and transaction data for each dataset.
required | Array of objects (Prediction Dataset) non-empty Prediction datasets for which predictions should be made. |
object Details for the webhook endpoint to call when a job finishes. |
Responses
Callbacks
Request samples
- Payload
{
  "datasets": [
    {
      "id": "12aad5ab-b516-4c3d-bdcb-30e5825f0dac",
      "payslips": [
        {
          "id": "a10ba26d-1d90-409e-ae19-624fb16bf551",
          "date": "2024-05-21",
          "employeeId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
          "employeeGroups": [
            "Engineering",
            "Intern"
          ],
          "transactions": [
            {
              "id": "53c1ec4e-c2b9-4621-a7a9-00d93761f3d3",
              "contractId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
              "type": "overtime",
              "subType": "incomeTax",
              "amount": 1000,
              "unit": "EUR",
              "fromDate": "2024-05-01",
              "toDate": "2024-05-31",
              "rate": 200,
              "quantity": 5,
              "costUnits": [
                {
                  "department": "Engineering",
                  "project": "Project X",
                  "location": "Stockholm"
                },
                {
                  "company": "Company ABC"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
Response samples
- 202
- 400
- 401
- 403
- 413
- 429
- 500
{
  "jobId": "string",
  "message": "string"
}
Callback payload samples
{
  "status": "success",
  "jobId": "7720a8c02c664d80a69ed2141b731ee3",
  "message": "The prediction job finished successfully"
}
Real-Time Prediction
Computes and returns anomaly scores for each dataset provided in the datasets array of the request body, in real time.
Because the data is supplied directly in the request, no upload or background job is needed: the scores are computed synchronously and returned in the response.
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
The body should contain the datasets for which anomaly scores should be computed, including the payslip and transaction data for each dataset.
required | Array of objects (Prediction Dataset) non-empty Prediction parameters for each dataset. |
Responses
Request samples
- Payload
{
  "datasets": [
    {
      "id": "12aad5ab-b516-4c3d-bdcb-30e5825f0dac",
      "payslips": [
        {
          "id": "a10ba26d-1d90-409e-ae19-624fb16bf551",
          "date": "2024-05-21",
          "employeeId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
          "employeeGroups": [
            "Engineering",
            "Intern"
          ],
          "transactions": [
            {
              "id": "53c1ec4e-c2b9-4621-a7a9-00d93761f3d3",
              "contractId": "9053ad3a-b930-4637-bf14-0c23d7198aed",
              "type": "overtime",
              "subType": "incomeTax",
              "amount": 1000,
              "unit": "EUR",
              "fromDate": "2024-05-01",
              "toDate": "2024-05-31",
              "rate": 200,
              "quantity": 5,
              "costUnits": [
                {
                  "department": "Engineering",
                  "project": "Project X",
                  "location": "Stockholm"
                },
                {
                  "company": "Company ABC"
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
Response samples
- 200
- 400
- 401
- 403
- 404
- 413
- 429
- 500
{
  "datasetResults": [
    {
      "id": "392fb924-0744-4fff-bff7-9ae89ce8fe9d",
      "severityThresholds": {
        "lowMid": 10,
        "midHigh": 25
      },
      "findings": [
        {
          "type": "INCREASE",
          "fields": [
            {
              "key": "subType",
              "value": "Net"
            }
          ],
          "affectedEmployeeIds": [
            "7b3a3735-b23b-4b42-a46d-ad93e947c93e",
            "619a835a-ab4c-4022-8c33-3e056a35de65"
          ]
        }
      ],
      "payslipResults": [
        {
          "id": "aa3e4707-80b5-4a73-b551-54fc6629d4e6",
          "score": 49,
          "findings": {
            "observed": [
              {
                "predictionId": "a48e916a-8f6c-4ffd-9494-39425735c30a",
                "transactionId": "ca9185a0-1928-4b1d-8257-8f07e0d83485",
                "score": 42,
                "context": [
                  {
                    "employeeId": "181bfaaf-cc0a-409e-b811-1bc2739f6879",
                    "employeeGroups": "Engineering"
                  }
                ],
                "field": "amount",
                "anomalyType": "HIGH_VALUE"
              }
            ],
            "missing": [
              {
                "predictionId": "a48e916a-8f6c-4ffd-9494-39425735c30a",
                "score": 42,
                "context": [
                  {
                    "employeeId": "181bfaaf-cc0a-409e-b811-1bc2739f6879",
                    "employeeGroups": "Engineering"
                  }
                ],
                "expectedField": {
                  "key": "subType",
                  "value": "Net"
                }
              }
            ]
          }
        }
      ]
    }
  ]
}
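Each dataset result carries its own severityThresholds, which can be used to bucket scores into severity bands. A minimal sketch; the band names and the inclusive boundary handling (>=) are assumptions, as the reference does not specify them:

```python
def classify_severity(score: int, thresholds: dict) -> str:
    """Map an anomaly score to a severity band using the dataset's
    severityThresholds (lowMid, midHigh). Boundary semantics are an
    assumption: scores equal to a threshold fall in the higher band."""
    if score >= thresholds["midHigh"]:
        return "high"
    if score >= thresholds["lowMid"]:
        return "mid"
    return "low"
```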
Get Prediction Results
Get the results of a prediction job.
Authorizations:
header Parameters
tenantId required | string Tenant ID |
jobId required | string The unique ID of the job |
page | integer >= 1 Default: 1 The page number |
Responses
Response samples
- 200
- 400
- 401
- 403
- 404
- 429
- 500
{
  "message": "string",
  "page": 1,
  "pages": 1,
  "datasetResults": [
    {
      "id": "392fb924-0744-4fff-bff7-9ae89ce8fe9d",
      "severityThresholds": {
        "lowMid": 10,
        "midHigh": 25
      },
      "findings": [
        {
          "type": "INCREASE",
          "fields": [
            {
              "key": "subType",
              "value": "Net"
            }
          ],
          "affectedEmployeeIds": [
            "7b3a3735-b23b-4b42-a46d-ad93e947c93e",
            "619a835a-ab4c-4022-8c33-3e056a35de65"
          ]
        }
      ],
      "payslipResults": [
        {
          "id": "aa3e4707-80b5-4a73-b551-54fc6629d4e6",
          "score": 49,
          "findings": {
            "observed": [
              {
                "predictionId": "a48e916a-8f6c-4ffd-9494-39425735c30a",
                "transactionId": "ca9185a0-1928-4b1d-8257-8f07e0d83485",
                "score": 42,
                "context": [
                  {
                    "employeeId": "181bfaaf-cc0a-409e-b811-1bc2739f6879",
                    "employeeGroups": "Engineering"
                  }
                ],
                "field": "amount",
                "anomalyType": "HIGH_VALUE"
              }
            ],
            "missing": [
              {
                "predictionId": "a48e916a-8f6c-4ffd-9494-39425735c30a",
                "score": 42,
                "context": [
                  {
                    "employeeId": "181bfaaf-cc0a-409e-b811-1bc2739f6879",
                    "employeeGroups": "Engineering"
                  }
                ],
                "expectedField": {
                  "key": "subType",
                  "value": "Net"
                }
              }
            ]
          }
        }
      ]
    }
  ]
}
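Since results are paginated via the page and pages fields, collecting a full result set means iterating until the last page. A sketch with the HTTP call injected as a callable:

```python
def fetch_all_results(fetch_page):
    """Collect datasetResults across all pages of the results endpoint.

    fetch_page(page) performs the GET request with the given page
    query parameter and returns the parsed JSON body.
    """
    results, page = [], 1
    while True:
        body = fetch_page(page)
        results.extend(body.get("datasetResults", []))
        if page >= body["pages"]:
            return results
        page += 1
```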
Get Data
This endpoint returns a list of the dataset IDs that have been uploaded for the given tenant.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{
  "countOfDatasets": 1,
  "datasetIds": [
    "dataset-1"
  ]
}
Get Data for Dataset
This endpoint allows you to get information about a specific dataset.
Authorizations:
path Parameters
datasetId required | string The dataset ID to get data for. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{
  "startDate": "2022-04-01",
  "endDate": "2022-04-08",
  "intervalGranularity": "D",
  "numberOfIntervalsWithRecords": 5,
  "numberOfIntervalsWithoutRecords": 3,
  "numberOfIntervalsTotal": 8
}
Delete Data
Delete uploaded data for a specific dataset ID.
Authorizations:
path Parameters
datasetId required | string The dataset ID to delete data for. |
query Parameters
fromDate | string Earliest data point to be deleted. If not specified, all data until the "toDate" will be deleted. |
toDate | string The latest data point to be deleted. If not specified, all data from the "fromDate" will be deleted. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{
  "message": "string"
}
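Because fromDate and toDate are both optional and default to open-ended deletion, it helps to build the query parameters so that unset bounds are simply omitted. A small sketch:

```python
def build_delete_query(from_date=None, to_date=None):
    """Build the optional query parameters for Delete Data, omitting
    unset bounds so the API's open-ended defaults apply (no fromDate
    deletes everything up to toDate, and vice versa)."""
    params = {}
    if from_date is not None:
        params["fromDate"] = from_date
    if to_date is not None:
        params["toDate"] = to_date
    return params
```

Note that calling the endpoint with neither bound deletes all data for the dataset, so double-check the parameters before sending the request.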
Upload Feedback
This endpoint allows users to upload feedback for predictions made by the AI model. The feedback helps improve the model by providing insights into the accuracy of its predictions.
Authorizations:
header Parameters
tenantId required | string Tenant ID |
Request Body schema: application/json (required)
The body should contain the datasets for which feedback is being provided, including the dataset ID, job ID, and feedback for each prediction.
required | Array of objects |
Responses
Request samples
- Payload
{
  "datasets": [
    {
      "datasetId": "006b5f65-6a6a-4aa8-8c7d-5350147471c7",
      "jobId": "e982937d-95d3-4258-94d2-c91e91790ff0",
      "predictions": [
        {
          "predictionId": "ceb14722-b75d-45bc-b40d-9aaa7202158a",
          "feedback": "FALSE_POSITIVE"
        }
      ]
    }
  ]
}
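The payload above can be assembled from a simple mapping of prediction IDs to feedback labels. A sketch; the set of valid feedback values beyond the "FALSE_POSITIVE" shown in the sample is not documented here, so treat other labels as assumptions:

```python
def build_feedback_body(dataset_id, job_id, feedback_by_prediction):
    """Assemble the Upload Feedback request body from a mapping of
    predictionId -> feedback label (e.g. "FALSE_POSITIVE")."""
    return {
        "datasets": [
            {
                "datasetId": dataset_id,
                "jobId": job_id,
                "predictions": [
                    {"predictionId": pid, "feedback": label}
                    for pid, label in feedback_by_prediction.items()
                ],
            }
        ]
    }
```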
Response samples
- 200
- 400
- 401
- 403
- 404
- 413
- 429
- 500
{
  "message": "Feedback successfully uploaded"
}
Get Subscriptions
This endpoint provides an overview of the subscriptions for a tenant, with optional filtering by dataset ID and subscription IDs.
Authorizations:
query Parameters
datasetId | string Optional dataset ID to filter subscriptions. |
subscriptionIds | string Optional comma-separated list of subscription IDs to filter within the dataset. |
header Parameters
tenantId required | string The ID of the tenant. |
Responses
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{
  "countOfDatasets": 0,
  "countOfSubscriptionIds": 0,
  "subscriptionObjects": [
    {
      "datasetId": "string",
      "countOfSubscriptionIds": 0,
      "subscriptionIds": [
        "string"
      ]
    }
  ]
}
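Since subscriptionIds is passed as a comma-separated string rather than a repeated parameter, a small helper keeps the query construction tidy. A sketch:

```python
def build_subscription_query(dataset_id=None, subscription_ids=None):
    """Build the Get Subscriptions query parameters; subscriptionIds
    is serialized as a comma-separated list, per the parameter
    description, and unset filters are omitted."""
    params = {}
    if dataset_id is not None:
        params["datasetId"] = dataset_id
    if subscription_ids:
        params["subscriptionIds"] = ",".join(subscription_ids)
    return params
```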
Unsubscribe
Unsubscribe from a list of subscription objects.
Authorizations:
header Parameters
tenantId required | string The ID of the tenant. |
Request Body schema: application/json (required)
The dataset IDs and optionally subscription IDs to unsubscribe.
required | Array of objects (Subscription Object) List of subscription objects. |
Responses
Request samples
- Payload
{
  "subscriptionObjects": [
    {
      "datasetId": "string",
      "subscriptionIds": [
        "string"
      ]
    }
  ]
}
Response samples
- 200
- 400
- 401
- 403
- 413
- 429
- 500
{
  "countOfUnsubscribedDatasets": 0,
  "countOfUnsubscribedSubscriptionIds": 0,
  "unsubscribedSubscriptionObjects": [
    {
      "datasetId": "string",
      "countOfSubscriptionIds": 0,
      "subscriptionIds": [
        "string"
      ]
    }
  ]
}