Bulk operations let you move large volumes of clinical data in and out of ClinikAPI in a single job. Exports produce NDJSON files you can download from a pre-signed S3 URL; imports accept NDJSON batches of up to 10,000 records. Both operations run asynchronously — you start a job, then poll for its status.
Common use cases include migrating data from a legacy EHR, generating reporting snapshots, and creating point-in-time backups.
Bulk export
Start an export job
Send the resource types you want to export and an optional since timestamp to export only records modified after that time:
curl -X POST https://api.clinikehr.com/v1/bulk/export \
  -H "x-api-key: clk_live_abc123" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceTypes": ["Patient", "Encounter", "Observation"],
    "since": "2025-01-01T00:00:00Z"
  }'
The response returns a job ID you use to check progress:
{
  "data": {
    "jobId": "job_abc123",
    "status": "pending",
    "message": "Export job started"
  }
}
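The same request can be made from code. Here is a minimal fetch sketch (Node 18+ with global fetch); buildExportBody and startExport are illustrative helpers, not part of the official SDK:

```typescript
const BASE_URL = 'https://api.clinikehr.com/v1';

// Pure helper: assemble the POST /v1/bulk/export body, omitting
// "since" when no timestamp is given.
function buildExportBody(resourceTypes: string[], since?: string) {
  return since ? { resourceTypes, since } : { resourceTypes };
}

// Start an export job and return its job ID for polling.
async function startExport(
  apiKey: string,
  resourceTypes: string[],
  since?: string
): Promise<string> {
  const res = await fetch(`${BASE_URL}/bulk/export`, {
    method: 'POST',
    headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify(buildExportBody(resourceTypes, since)),
  });
  if (!res.ok) throw new Error(`Export request failed: ${res.status}`);
  const { data } = await res.json();
  return data.jobId;
}
```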
Poll for completion
Check the job status by ID until status is completed or failed:
curl https://api.clinikehr.com/v1/bulk/jobs/job_abc123 \
  -H "x-api-key: clk_live_abc123"
{
  "data": {
    "id": "job_abc123",
    "type": "export",
    "status": "completed",
    "totalRecords": 1250,
    "processedRecords": 1250,
    "downloadUrl": "https://s3.amazonaws.com/clinikapi-exports/...",
    "errors": []
  }
}
Download the results
The downloadUrl is a pre-signed S3 URL valid for 1 hour after the job completes. Download it promptly, or re-poll the job to get a fresh URL.
curl -o export.ndjson "https://s3.amazonaws.com/clinikapi-exports/..."
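In code, you can download the file and count its records in one step. A sketch for Node 18+ (downloadExport and countRecords are illustrative helpers):

```typescript
import { writeFile } from 'node:fs/promises';

// Pure helper: count records in an NDJSON payload (one JSON object
// per non-empty line).
function countRecords(ndjson: string): number {
  return ndjson.split('\n').filter((line) => line.trim() !== '').length;
}

// Fetch the pre-signed downloadUrl, save the NDJSON file locally,
// and return how many records it contains.
async function downloadExport(downloadUrl: string, outPath: string): Promise<number> {
  const res = await fetch(downloadUrl);
  if (!res.ok) throw new Error(`Download failed: ${res.status}`);
  const text = await res.text();
  await writeFile(outPath, text);
  return countRecords(text);
}
```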
Bulk import
Upload an NDJSON batch
Send one JSON object per line in the data field, with resourceType specifying the target resource:
curl -X POST https://api.clinikehr.com/v1/bulk/import \
  -H "x-api-key: clk_live_abc123" \
  -H "Content-Type: application/json" \
  -d '{
    "resourceType": "Patient",
    "data": "{\"firstName\":\"Jane\",\"lastName\":\"Doe\",\"gender\":\"female\"}\n{\"firstName\":\"John\",\"lastName\":\"Smith\",\"gender\":\"male\"}"
  }'
The response includes a job ID. Poll GET /v1/bulk/jobs/:jobId the same way as for exports to monitor progress.
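Escaping the data string by hand is error-prone; it is easier to serialize records with JSON.stringify and join them with newlines. A sketch (toNdjson and importBatch are illustrative helpers, not part of the official SDK):

```typescript
// Pure helper: serialize an array of records into an NDJSON string,
// one JSON object per line.
function toNdjson(records: object[]): string {
  return records.map((record) => JSON.stringify(record)).join('\n');
}

// Submit one import batch and return the job ID to poll.
async function importBatch(
  apiKey: string,
  resourceType: string,
  records: object[]
): Promise<string> {
  const res = await fetch('https://api.clinikehr.com/v1/bulk/import', {
    method: 'POST',
    headers: { 'x-api-key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({ resourceType, data: toNdjson(records) }),
  });
  if (!res.ok) throw new Error(`Import request failed: ${res.status}`);
  const { data } = await res.json();
  return data.jobId;
}
```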
Import rules
- Maximum 10,000 records per batch.
- Tenant tags are injected automatically — you cannot override them.
- Invalid records are skipped; check the errors array in the completed job for details.
- Rate limited to 5 bulk operations per hour per tenant.
Bulk operations count against your plan’s request limit. Each record in an import counts as one request.
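To stay under the 10,000-record batch cap, split large datasets into batches before importing. A small illustrative helper:

```typescript
// Split an array into batches of at most `size` records, so each
// import request stays within the 10,000-record limit.
function chunk<T>(records: T[], size = 10_000): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < records.length; i += size) {
    batches.push(records.slice(i, i + size));
  }
  return batches;
}
```

Note that the rate limit still applies: at 5 bulk operations per hour, a tenant can import at most 50,000 records per hour.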
Job statuses
| Status | Description |
|---|---|
| pending | Job is queued and has not started yet |
| processing | Job is actively running |
| completed | Job finished; downloadUrl is available for exports |
| failed | Job encountered an error; inspect the errors array |
NDJSON format
NDJSON (newline-delimited JSON) is a text format where each line is a complete, valid JSON object. This makes large files easy to stream and process line-by-line without loading the entire dataset into memory.
{"firstName":"Jane","lastName":"Doe","gender":"female","birthDate":"1990-01-15"}
{"firstName":"John","lastName":"Smith","gender":"male","birthDate":"1985-06-20"}
{"firstName":"Maria","lastName":"Garcia","gender":"female","birthDate":"1978-11-03"}
Blank lines are not allowed in NDJSON. Each non-empty line must be a complete JSON object.
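Parsing mirrors the format: split on newlines, skip blank lines defensively, and JSON.parse each line. An illustrative helper:

```typescript
// Parse an NDJSON string into an array of objects. Blank lines are
// skipped defensively, even though the format disallows them.
function parseNdjson<T = unknown>(ndjson: string): T[] {
  return ndjson
    .split('\n')
    .filter((line) => line.trim() !== '')
    .map((line) => JSON.parse(line) as T);
}
```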
Polling example
Here is a simple polling loop in TypeScript that waits for a job to finish:
import { Clinik } from '@clinikapi/sdk';

const clinik = new Clinik(process.env.CLINIKAPI_SECRET_KEY!);

// Poll the job endpoint until the job reaches a terminal status.
async function waitForJob(jobId: string, intervalMs = 5000): Promise<void> {
  while (true) {
    const { data: job } = await clinik.bulk.getJob(jobId);
    if (job.status === 'completed') {
      console.log(`Job done. Download: ${job.downloadUrl}`);
      break;
    }
    if (job.status === 'failed') {
      console.error('Job failed:', job.errors);
      break;
    }
    // Still pending or processing: report progress and wait.
    console.log(`Status: ${job.status} (${job.processedRecords}/${job.totalRecords})`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}