Upload Data
The first step in using TimeDetect is to upload a dataset of approved time registrations.
Our machine learning model learns which patterns are normal from historical data. This is why only approved registrations should be uploaded: they represent an employee's actual work pattern.
The procedure for uploading data is as follows:
- Get Presigned URL: Call GET /presigned_url with the client ID and tenant ID in the header. The tenant_id is set by you and should be the same as your client_id for the API integration. You will receive a presigned URL and a Job ID in the response.
curl -X GET "https://api.machine-learning-factory.stage.visma.com/td/presigned_url" \
  -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  -H "tenantId: YOUR_TENANT_ID"
Remove .stage from the base URL for the production environment.
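The curl call above can also be issued from code. The sketch below uses only the Python standard library and mirrors the documented headers; the `build_presigned_url_request` helper name is our own, and obtaining the access token is assumed to happen elsewhere.

```python
import json
import urllib.request

BASE_URL = "https://api.machine-learning-factory.stage.visma.com/td"

def build_presigned_url_request(access_token: str, tenant_id: str) -> urllib.request.Request:
    """Build the GET /presigned_url request with the required headers."""
    return urllib.request.Request(
        f"{BASE_URL}/presigned_url",
        headers={
            "Authorization": f"Bearer {access_token}",
            "tenantId": tenant_id,
        },
        method="GET",
    )

# Sending it requires a valid token and network access:
# with urllib.request.urlopen(build_presigned_url_request(token, tenant)) as resp:
#     body = json.loads(resp.read())  # contains the presigned URL and Job ID
```

Keeping request construction separate from sending makes the header handling easy to verify without touching the network.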
- Create JSON File: Formulate a JSON file containing the Datasets with payroll data, adhering to the request body of PUT /[presigned url].
Request payload format

| Field | Type | Description |
| --- | --- | --- |
| datasets | Array of objects (Raw Data Upload Request Dataset), required, non-empty | Datasets containing raw data for training. |
| webhook | object | Details for the webhook endpoint to call when a job finishes. |
{
  "datasets": [
    {
      "datasetId": "5de7cd24-9dd3-4526-a782-55c670a47eba",
      "customerId": "acadd548-c4e5-4017-b89c-c0ef75143752",
      "registrations": [
        {
          "registrationId": "1e24162a-3bad-4a58-865c-a1994c75942d",
          "date": "2024-08-26",
          "employeeId": "cf5ee029-0747-45c1-94e3-ea7a2ae3129c",
          "projectId": "high_rise_project",
          "departmentId": "construction",
          "workCategory": "roofing",
          "startTime": 6.5,
          "endTime": 16,
          "workDuration": 9,
          "breakDuration": 0.5,
          "publicHoliday": false,
          "numericals": [
            {
              "name": "overtime",
              "value": 1
            }
          ]
        }
      ]
    }
  ]
}
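A payload like the one above can be assembled programmatically. The sketch below uses the field names from the example; the helper names are our own, and the split between required fields and pass-through optional fields is an assumption, not the API's schema definition.

```python
import json

def build_registration(registration_id, date, employee_id, **optional):
    """Assemble one time registration; extra fields (projectId, startTime,
    workDuration, etc.) are passed through unchanged."""
    record = {
        "registrationId": registration_id,
        "date": date,
        "employeeId": employee_id,
    }
    record.update(optional)
    return record

def build_upload_payload(dataset_id, customer_id, registrations):
    """Wrap registrations in the datasets envelope shown in the example."""
    if not registrations:
        raise ValueError("datasets must contain at least one registration")
    return {
        "datasets": [
            {
                "datasetId": dataset_id,
                "customerId": customer_id,
                "registrations": list(registrations),
            }
        ]
    }

payload = build_upload_payload(
    "5de7cd24-9dd3-4526-a782-55c670a47eba",
    "acadd548-c4e5-4017-b89c-c0ef75143752",
    [build_registration(
        "1e24162a-3bad-4a58-865c-a1994c75942d",
        "2024-08-26",
        "cf5ee029-0747-45c1-94e3-ea7a2ae3129c",
        workDuration=9,
        breakDuration=0.5,
    )],
)
body = json.dumps(payload)  # serialized JSON for the PUT body
```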
- Send PUT Request: Issue a PUT request to the presigned URL with the JSON file in the body.
After initiating the PUT request, the service begins validating and refining the uploaded data. This validation ensures compliance with the required schema. If data for any Dataset is invalid, the job status is updated, and no data is stored. The refinement process assesses existing data for each Dataset ID, replaces duplicates, and stores new records.
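A minimal sketch of issuing that PUT request with the standard library, assuming the presigned URL accepts a JSON body; the `build_put_request` helper and the Content-Type header are our own choices, not confirmed by the API reference.

```python
import json
import urllib.request

def build_put_request(presigned_url: str, payload: dict) -> urllib.request.Request:
    """Build the PUT request that uploads the JSON payload to the presigned URL."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        presigned_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# with urllib.request.urlopen(build_put_request(presigned_url, payload)) as resp:
#     resp.status  # validation and refinement then run asynchronously
```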
- Check Status: Use GET /status with the Job ID in the header until it returns 200 with status="success", or use the webhook functionality described in the Webhook section.
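The polling loop can be sketched as below. The `fetch_status` callable (returning the HTTP status code and the job status string) stands in for the actual GET /status call, and the "failed" terminal state is an assumption to illustrate error handling, not a documented status value.

```python
import time

def wait_for_success(fetch_status, poll_interval=5.0, max_attempts=60):
    """Poll until GET /status returns 200 with status == "success".

    fetch_status: callable returning (http_status, job_status); it wraps
    the real GET /status call with the Job ID in the header.
    """
    for _ in range(max_attempts):
        http_status, job_status = fetch_status()
        if http_status == 200 and job_status == "success":
            return True
        if job_status == "failed":  # assumed terminal state; check the API docs
            raise RuntimeError("upload job failed validation")
        time.sleep(poll_interval)
    raise TimeoutError("job did not finish within the polling window")
```

Injecting `fetch_status` keeps the retry logic testable without network access; in production it would wrap the authenticated GET /status request.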