ZiB Network
Overview
ZiB is decentralised encrypted storage. Files are encrypted client-side before any bytes leave the browser, distributed redundantly across the storage network, and reassembled on read by the CDN. Storage nodes never see plaintext — not the file content, not the filename, not the encryption key.
Your tenant credentials (access_key:secret_key) authenticate API calls. ZiB only knows about tenants — you manage your own users and decide who can access what. ZiB never stores user identities.
How It Works
Every file uploaded to ZiB goes through a four-stage pipeline before it is available on the network.
ZiB generates a fresh per-file encryption key on the backend and returns it to the browser SDK in the upload registration object. The SDK encrypts the file in the browser before any bytes leave the device. The key is stored on the ZiB backend and never sent to storage nodes.
The encrypted payload is broken into redundant fragments and spread across the storage network. The redundancy is high enough that the file remains fully recoverable through multiple simultaneous node failures.
Storage nodes receive ciphertext fragments only — they never receive the encryption key or the plaintext. All node-facing identifiers are opaque UUIDs, so node operators cannot correlate stored fragments with real filenames or object keys.
When a client requests a file via cdn.zibnetwork.com/objects/<uuid>, the CDN reassembles the fragments, decrypts with the stored key, and streams the result. The CDN URL is public and auth-free — the UUID itself is the access token.
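Since the CDN URL is purely a function of the file's UUID, it can be reconstructed from a stored file_id without any API call. A minimal sketch (the helper name is ours, not part of the SDK):

```javascript
// Sketch: rebuild the public CDN URL from a stored file_id.
// Mirrors the cdn_url value the SDK returns after upload.
function zibCdnUrl(fileId) {
  return `https://cdn.zibnetwork.com/objects/${fileId}`;
}
```

Because the UUID is the only access token, treat stored file_ids with the same care as the URLs themselves.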
Quick Start
Integrate ZiB storage into your app in five steps. The key architectural rule: credentials stay on your server, the SDK runs in the browser.
Get credentials
Sign in at app.zibnetwork.com → Dashboard → copy your Access Key and Secret Key. Store these as server-side environment variables only — never in browser code or client-side config.
Put them in a .env file on your server and access them via process.env.ZIB_ACCESS_KEY etc.

Load the SDK
Add the SDK script to your HTML. No npm package or build step required.
<script src="https://app.zibnetwork.com/sdk/zib-sdk.js"></script>

Server route — initiate upload
Create a server route that authenticates your user, then calls ZiB to generate the encryption key and upload URL. The registration object returned has no credentials — it is safe to pass to the browser.
// Node.js / Next.js App Router
export async function POST(req) {
  // 1. Authenticate with your own system first
  const user = await getAuthenticatedUser(req);
  if (!user) return Response.json({ error: 'Unauthorized' }, { status: 401 });

  const { fileName, fileSize, contentType } = await req.json();

  // 2. Build a scoped object key using your user's ID
  const key = `${user.id}/${Date.now()}-${fileName}`;

  // 3. Call ZiB — credentials stay on the server
  const res = await fetch('https://api.zibnetwork.com/v1/api/upload/initiate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
    },
    body: JSON.stringify({
      bucket: 'my-bucket',
      key,
      file_size: fileSize,
      // Forward the browser's File.type so the CDN can return the
      // correct Content-Type header. Without this, iOS Safari refuses
      // to render <img> tags and OG/social link previews fall back to
      // text-only because the CDN defaults to application/octet-stream.
      content_type: contentType,
    }),
  });
  if (!res.ok) return Response.json({ error: 'Failed to initiate upload' }, { status: 502 });

  const registration = await res.json();

  // 4. Return the registration object — no credentials in here
  return Response.json(registration);
}

Browser — upload the file
In the browser, call your server route to get the registration, then pass it to ZiBStorage.uploadDirect. The SDK handles encryption and upload automatically.
// Browser — no credentials here
const registration = await fetch('/api/upload-initiate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    fileName: file.name,
    fileSize: file.size,
    // Pass the browser's MIME type up so the CDN can serve the
    // correct Content-Type — required for <img> tags on iOS Safari
    // and for Open-Graph image unfurling on Slack / Twitter / iMessage.
    contentType: file.type || 'application/octet-stream',
  }),
}).then((r) => r.json());

const result = await ZiBStorage.uploadDirect(file, registration, {
  onProgress: ({ stage, progress }) => {
    console.log(`${stage}: ${progress}%`);
    // stage: 'encrypting' | 'uploading' | 'completing'
  },
});

console.log(result.cdn_url);
// → https://cdn.zibnetwork.com/objects/<uuid>

Store the cdn_url
The upload result contains cdn_url and file_id. Save both in your database. cdn_url is the permanent public URL for the file — share it directly with clients or embed in your app.
// Send back to your server to persist
await fetch('/api/save-file', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    file_id: result.file_id, // ZiB's internal UUID
    cdn_url: result.cdn_url, // permanent public URL
    object_key: key,         // the key built at initiate — your server
                             // must echo it back to the browser, since
                             // the browser never constructed it itself
  }),
}).then((r) => r.json());

The Upload Flow
Here is what happens under the hood when you call uploadDirect. The key principle: credentials and keys never cross the server/browser boundary.
SDK Reference
The ZiB SDK is a single browser-side JavaScript file with no dependencies. Load it once via script tag.
<script src="https://app.zibnetwork.com/sdk/zib-sdk.js"></script>

Primary upload method
ZiBStorage.uploadDirect(file, registration, options)
The primary upload method. No class instantiation needed — call it as a static method. Encrypts the file client-side using the key generated by the API during initiate, then uploads to the assigned storage node. Automatically uses multipart for large files.
- file (File | Blob | ArrayBuffer) — The file to upload.
- registration (object) — The object returned by POST /v1/api/upload/initiate on your server.
- options.onProgress (({ stage, progress }) => void) — Progress callback. stage is one of encrypting, uploading, completing. progress is 0–100.
Returns Promise<{ file_id: string, cdn_url: string }>.

Account-level operations
For listing, deleting, and encoding, instantiate the class with credentials. These calls should be made from your server, not the browser — credentials are required.
new ZiBStorage({ accessKey, secretKey })
Create an instance for account-level operations. Requires your tenant access key and secret key.
- accessKey (string) — Your ZiB tenant access key.
- secretKey (string) — Your ZiB tenant secret key.

storage.listBuckets()
List all S3 buckets in your account.
Returns Promise<string> — S3-compatible XML.

storage.listObjects(bucket)
List all objects in a bucket.
- bucket (string) — Bucket name.
Returns Promise<string> — S3-compatible XML.

storage.deleteObject(bucket, key)
Delete an object and remove all of its fragments from the storage network.
- bucket (string) — Bucket name.
- key (string) — Object key.
Returns Promise<void>.

Video encoding

storage.startEncoding(fileId)
Trigger HLS and DASH encoding for an uploaded video. Pass the file_id returned by uploadDirect. Encoding runs asynchronously — poll getEncodingStatus for progress.
- fileId (string) — The file_id UUID returned by uploadDirect.
Returns Promise<{ encoding_id, status, hls_manifest_url, dash_manifest_url }>.

storage.getEncodingStatus(jobId)
Poll the status of an encoding job.
- jobId (string) — The encoding_id returned by startEncoding.
Returns Promise<{ encoding_id, status, progress, hls_manifest_url, dash_manifest_url }>.

storage.getCdnUrl(fileId)
Construct the CDN URL for a file. Equivalent to the cdn_url field returned by uploadDirect.
- fileId (string) — The file_id UUID.
Returns string — https://cdn.zibnetwork.com/objects/<uuid>.

Server-side Integration
The ZiB SDK runs in the browser. For server-side code (Node.js, Python, etc.) call the ZiB API directly with fetch or any HTTP client. All endpoints accept the same Bearer auth format.
Authentication
All authenticated endpoints accept a single header:
Authorization: Bearer <access_key>:<secret_key>

Your access key and secret key are displayed once when you create a storage account. Store them as environment variables (ZIB_ACCESS_KEY, ZIB_SECRET_KEY). No SigV4 signing is required.
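As a small sketch (the helper name is ours), the header can be assembled once and reused across every server-side call:

```javascript
// Sketch: build the ZiB Authorization header from tenant credentials.
// Keep this server-side only — never ship credentials to the browser.
function zibAuthHeader(accessKey, secretKey) {
  return { Authorization: `Bearer ${accessKey}:${secretKey}` };
}
```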
Node.js example — initiate + upload
// Server: initiate the upload (credentials stay on your server)
const initRes = await fetch('https://api.zibnetwork.com/v1/api/upload/initiate', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    bucket: 'my-bucket',
    key: 'videos/intro.mp4',
    file_size: fileBuffer.length,
    content_type: 'video/mp4', // forwarded to the CDN's Content-Type header
  }),
});
const registration = await initRes.json();
// registration = { type, file_id, upload_url?, upload_id?, ... }
// (Additional opaque fields are returned — pass the entire object straight to the SDK.)
// Return 'registration' to the browser for ZiBStorage.uploadDirect()
// or use the SDK on the server side if you need a non-browser upload path.

Python example — list objects
import requests
headers = {"Authorization": f"Bearer {ZIB_ACCESS_KEY}:{ZIB_SECRET_KEY}"}
# List buckets
buckets = requests.get("https://s3.zibnetwork.com/s3/", headers=headers)
# List objects
objects = requests.get("https://s3.zibnetwork.com/s3/my-bucket", headers=headers)
# Start encoding (file_id comes from an earlier upload's result)
res = requests.post(
    "https://api.zibnetwork.com/v1/api/encoding/jobs",
    headers={**headers, "Content-Type": "application/json"},
    json={"object_key": file_id, "transcription": "recommended"},
)
job = res.json()  # { encoding_id, status, progress, ... }

Server-side uploads
If you need to upload from a server (not a browser), use the same POST /v1/api/upload/initiate endpoint to get a registration object, then pass it to the ZiB SDK on the server. The SDK encapsulates the encryption and upload format — your code never has to construct the wire format directly.
Upload Pipeline
A pipeline is a tenant-declared chain of stages that should run on a file after upload — encoding (HLS/DASH), transcription (SRT/VTT subtitles), and vision analysis (sidecar JSON). Declare the whole chain in a single field on POST /v1/api/upload/initiate and ZiB will (1) route the upload to a storage node that can fulfil every stage and (2) auto-enqueue each downstream stage as soon as the file finishes sharding. You poll a single endpoint — GET /v1/api/pipeline/{file_id} — to track every stage from one place.
The pipeline field replaces the older capabilities array. The legacy array is still accepted for backwards compatibility, but new integrations should use pipeline. Pipeline declarations carry tier values (e.g. "recommended" vs "accurate") which the legacy array cannot express.

Pipeline shape
{
  "encode": true,
  "transcription": "fastest" | "recommended" | "accurate",
  "vision": "standard" | "hq",
  "webhook_url": "https://yourapp.com/api/zib-pipeline-event",
  "webhook_secret": "whsec_..."
}

Every field is optional. Omit a stage to skip it. The webhook fields are also optional — if provided, ZiB fires HMAC-signed events as each stage completes (in addition to the per-file file.ready webhook configured at the customer level).
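Because every field is optional, a small builder that drops unused stages keeps initiate bodies clean. A sketch — the function name and its camelCase option names are ours:

```javascript
// Sketch: assemble a pipeline declaration, omitting stages you don't want.
function buildPipeline({ encode, transcription, vision, webhookUrl, webhookSecret } = {}) {
  const pipeline = {};
  if (encode) pipeline.encode = true;
  if (transcription) pipeline.transcription = transcription; // 'fastest' | 'recommended' | 'accurate'
  if (vision) pipeline.vision = vision;                      // 'standard' | 'hq'
  if (webhookUrl) {
    pipeline.webhook_url = webhookUrl;
    if (webhookSecret) pipeline.webhook_secret = webhookSecret;
  }
  return pipeline;
}
```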
Available stages
- encode — HLS/DASH video transcoding (H.264 multi-bitrate). Required for all video uploads that will be streamed.
- transcription — Speech-to-text transcription with three quality tiers. Produces SRT and VTT subtitle files served from the CDN.
- vision — AI vision analysis with two quality tiers. Generates a sidecar JSON with scene understanding, ad targeting signals, IAB categories, and more.
Server-side pattern (recommended)
Declare the pipeline on your server when calling POST /v1/api/upload/initiate. The browser only receives the registration object — it never needs to know what jobs will run.
// server route — e.g. /api/upload-initiate
const res = await fetch('https://api.zibnetwork.com/v1/api/upload/initiate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${ACCESS_KEY}:${SECRET_KEY}`,
  },
  body: JSON.stringify({
    bucket: 'video',
    key: `videos/${contentId}/${videoType}-${Date.now()}-${fileName}`,
    file_size: fileSize,
    pipeline: {
      encode: true,
      transcription: 'recommended',
      vision: 'standard',
      webhook_url: 'https://yourapp.com/api/zib-pipeline-event',
      webhook_secret: process.env.WEBHOOK_SECRET,
    },
  }),
})
const registration = await res.json()
// Return registration to browser — credentials stay on the server
return Response.json(registration)

Polling pipeline status
Once the upload finishes, poll GET /v1/api/pipeline/{file_id} for the rolled-up status of every stage. The top-level status field advances through queued → uploading → encoding → ai_processing → complete (or failed). Stages that were not requested in the original declaration are returned as null.
{
  "file_id": "a1b2c3d4-...",
  "status": "ai_processing",
  "cdn_url": "https://cdn.zibnetwork.com/objects/a1b2c3d4-...",
  "declared": { "encode": true, "transcription": "recommended", "vision": "standard" },
  "stages": {
    "upload": {
      "status": "complete",
      "phase": "distributed",
      "replication_safe_at": null
    },
    "encoding": {
      "status": "completed",
      "encoding_id": "...",
      "progress": 100,
      "hls_manifest_url": "https://cdn.zibnetwork.com/...m3u8",
      "dash_manifest_url": "https://cdn.zibnetwork.com/...mpd"
    },
    "transcription": { "status": "running", "tier": "recommended", "progress": 40, "output_file_id": null },
    "vision": { "status": "queued", "tier": "standard", "progress": 0, "output_file_id": null }
  },
  "error": null
}

The stages.upload block exposes three fields beyond status:
- phase — finer-grained lifecycle: pending → uploading → distributed → safe (≥3 verified non-originator peers per shard) or cancelled (superseded by re-upload, or admin-cancelled). Authoritative for non-video uploads where there's no encoding job to read from.
- replication_safe_at — ISO timestamp when the file became fully replication-safe. null until the safety sweep flips phase to safe.
- cancelled_reason — present only when phase === 'cancelled'. Tenant-readable string (e.g. "superseded by re-upload to same bucket+object_key").
If stages.upload.phase === 'cancelled', the rolled-up status is forced to failed regardless of downstream stage state — superseded uploads are terminal even if encoding had partially run. Listen for the file.failed webhook event for an out-of-band signal.
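A sketch of how a client might interpret the rolled-up response, including the cancelled-forces-failed rule above (helper names and the ordering constant are ours):

```javascript
// Sketch: derive a display status from a GET /v1/api/pipeline/{file_id} body.
// A cancelled upload phase is terminal and overrides everything downstream.
const STATUS_ORDER = ['queued', 'uploading', 'encoding', 'ai_processing', 'complete'];

function displayStatus(pipeline) {
  if (pipeline.stages?.upload?.phase === 'cancelled') return 'failed';
  return pipeline.status;
}

// Map a status to a step index for a progress-stepper UI (-1 for 'failed').
function statusStep(status) {
  return STATUS_ORDER.indexOf(status);
}
```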
Call /v1/api/upload/initiate from your backend, then pass the returned registration object to ZiBStorage.uploadDirect() in the browser.

Node selection
ZiB picks a storage node that supports every stage in your declaration — if you ask for vision, only nodes that have registered vision capability in the live capability table are eligible. If no online node can fulfil the full pipeline, the request fails with HTTP 503 immediately so you can fall back gracefully. Capability checks are always against the canonical capability registration, never a stale snapshot.
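One fallback strategy (ours, not prescribed by the API) is to drop the optional AI stages from the declaration and retry initiate when it returns 503:

```javascript
// Sketch: progressively remove optional stages from a pipeline declaration
// so a 503 ("no node can fulfil the full pipeline") can be retried.
function degradePipeline(pipeline) {
  if ('vision' in pipeline) {
    const { vision, ...rest } = pipeline;
    return rest;
  }
  if ('transcription' in pipeline) {
    const { transcription, ...rest } = pipeline;
    return rest;
  }
  return null; // nothing optional left to drop — give up
}
```

Whether dropping stages is acceptable is a product decision; the API simply fails fast so you can make it.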
Upload via Node (PIN)
An alternative to the browser upload path. Instead of having the user's browser encrypt and PUT every byte through the network, you generate a single-use PIN that the user enters in their ZiB Node desktop app. The node app reads the file from local disk, encrypts it natively, shards it, and registers the file — all without the browser ever touching the bytes. After completion, your existing pipeline (encoding, transcription, vision) runs identically to a browser upload — the same file.ready and encoding.* webhooks fire on the same webhook URL.
Flow
- Your server calls POST /v1/api/upload/pin/create with the same params as /upload/initiate (bucket, key, pipeline). Returns { pin, session_id, expires_at }.
- You display the 3-word PIN to the user (and optionally a QR code).
- The user opens the ZiB Node desktop app, clicks Upload → Enter PIN, types the PIN, and chooses a file.
- The node app encrypts + shards the file locally and registers it with the network.
- Your server polls GET /v1/api/upload/pin/status/{session_id} until it reaches a terminal phase (or use the polling helper below).
- The same file.ready webhook fires, the encoding pipeline runs as normal, and you receive encoding.complete on your existing webhook URL.
Server: create a PIN
// Server-side only — your master Bearer credentials never reach the browser.
const res = await fetch('https://api.zibnetwork.com/v1/api/upload/pin/create', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
  },
  body: JSON.stringify({
    bucket: 'video',
    key: 'videos/abc-123/main.mp4',
    pipeline: {
      encode: true,
      webhook_url: 'https://yourapp.com/api/zib-webhook',
      webhook_secret: process.env.ZIB_WEBHOOK_SECRET,
    },
  }),
});
const { pin, session_id, expires_at } = await res.json();
// pin → "mosaic-trinket-blanket"
// session_id → "5a6b862f-9bb6-4c5a-b5f3-df22bb9da959"
// expires_at → "2026-04-10T15:47:03Z" (24 hours from creation)

Server: poll for completion
// Poll until phase is 'ready' or 'failed'. The status endpoint returns
// a full lifecycle rollup so you can render byte-level upload progress,
// then encoding %, then analysis %, all from the same poll.
async function waitForCompletion(sessionId, onProgress) {
  while (true) {
    const res = await fetch(
      `https://api.zibnetwork.com/v1/api/upload/pin/status/${sessionId}`,
      { headers: { 'Authorization': `Bearer ${ACCESS_KEY}:${SECRET_KEY}` } }
    );
    const s = await res.json();

    // Surface live progress to the UI. `phase` walks through:
    // pending → redeemed → uploading → encoding → analyzing → ready
    // Each phase populates its own sub-block (defensive read with ?? null
    // so you don't break if a future phase is added).
    onProgress?.({
      phase: s.phase,
      uploadPercent: s.upload?.progress_percent ?? null,
      encodingPercent: s.encoding?.progress_percent ?? null,
      analysisStage: s.analysis?.current_stage ?? null,
      transcriptionPct: s.analysis?.transcription_progress ?? null,
      visionPct: s.analysis?.vision_progress ?? null,
    });

    if (s.phase === 'ready') return { file_id: s.file_id };
    if (s.phase === 'failed' || s.phase === 'expired') {
      throw new Error(`Upload ${s.phase}: ${s.error_message ?? 'no detail'}`);
    }

    await new Promise(r => setTimeout(r, 2000)); // poll every 2s
  }
}

Lifecycle phases
The phase field on /pin/status walks through the full upload-and-pipeline lifecycle so a single button can reflect every stage. The legacy status field is still emitted at the top level for backwards compatibility — new integrations should read phase.
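The phase ordering can be encoded as a list so your UI can compare poll responses — a sketch with our own helper names:

```javascript
// Sketch: phase ordering for the PIN upload lifecycle.
const PIN_PHASES = ['pending', 'redeemed', 'uploading', 'encoding', 'analyzing', 'ready'];

function isTerminal(phase) {
  return phase === 'ready' || phase === 'failed' || phase === 'expired';
}

// True if `next` is at or beyond `prev` in the lifecycle — useful to
// ignore out-of-order poll responses.
function phaseAdvanced(prev, next) {
  return PIN_PHASES.indexOf(next) >= PIN_PHASES.indexOf(prev);
}
```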
- pending — PIN created, waiting for a node to redeem it.
- redeemed — A node has entered the PIN. The user is now picking a file.
- uploading — Node is sending bytes to ZiB storage. Read upload.progress_percent for byte-level progress (updated ≥ every 2s).
- encoding — Upload landed. Encoder is building HLS variants. Read encoding.progress_percent.
- analyzing — Encoding done. Whisper / Qwen-VL is running. Read analysis.current_stage and the matching transcription/vision_progress.
- ready — Every requested stage has finished. file_id is final. CDN URL is live.
- failed — Upload or a downstream stage failed. Get a new PIN to retry — the failed PIN is dead.
- expired — PIN was not redeemed within 24 hours. Get a new PIN.

Status response shape
{
  // Top-level rollup
  "phase": "encoding",          // pending | redeemed | uploading | encoding | analyzing | ready | failed | expired
  "status": "uploading",        // legacy field — keep reading 'phase' instead
  "redeemed_by_node_id": "ZB-899233",
  "file_id": "0c8e3765-9b2b-4a29-82cb-2f1911fe750c",
  "file_size_bytes": 1823450000,
  "error_message": null,
  "expires_at": "2026-04-15T15:47:03Z",

  // Populated while phase === "uploading" (omitted if no progress
  // report has landed in the last 30 seconds — node may have stalled)
  "upload": {
    "bytes_uploaded": 412300123,
    "total_bytes": 1823450000,
    "progress_percent": 22.6
  },

  // Populated when the pipeline declares encode: true
  "encoding": {
    "status": "encoding",       // queued | encoding | complete | failed
    "progress_percent": 47.3
  },

  // Populated when the pipeline declares transcription and/or vision.
  // current_stage tracks the slowest path so a single bar reflects the
  // overall analysis progress.
  "analysis": {
    "current_stage": "transcription",   // "transcription" | "vision"
    "transcription_status": "running",  // null if not declared
    "transcription_progress": 42,       // 0–100, null if not declared
    "vision_status": "queued",          // null if not declared
    "vision_progress": 0                // 0–100, null if not declared
  }
}

There is no /pin/extend or /pin/retry endpoint. In your UI, when the status becomes failed or expired, show a "Get a new PIN" button that calls /pin/create again. This keeps the security model simple and prevents replay attacks.

Node operator UX (optional context)
When the user enters your PIN in their ZiB Node app, the app shows a capability checklist comparing what your pipeline needs (encode / transcribe / vision) against what their node can do locally. Your tenant_settings.compute_mode setting controls what happens when their node can't do everything:
- public — Default. Any capable node on the network can run downstream jobs. Your user's node uploads, the network handles the rest.
- prioritize — Prefer your own user's nodes for compute, fall back to others if needed. The redeeming node will see a soft warning.
- strict — Only your own user's nodes can run compute. If a third-party node redeems the PIN, the upload still completes but the encoding job will queue until one of your own nodes is online. The node UI shows an explicit warning.

file.ready, encoding.queued, etc. webhooks fire on your existing webhook URL with identical payloads. Your webhook handler doesn't need to know which path was used. Route by file_id as always.

Video Encoding
After uploading a video file, call startEncoding(fileId) to trigger transcoding. ZiB produces an adaptive HLS stream with multiple quality variants, encrypts every segment, and distributes the segments redundantly across the storage network.
Encoding pipeline
- Quality variants are produced: 360p, 480p, 720p, 1080p, and 2160p where source resolution allows. Each variant is a separate HLS stream.
- Every segment is encrypted before it leaves the encoding node, using a unique key generated per encoding job and held only by the ZiB backend.
- Each encrypted segment is broken into redundant fragments and distributed across the storage network — the same redundancy model used for all ZiB files.
- The CDN serves standard HLS manifests with the encryption key URL embedded. The player fetches the key once, decrypts segments locally, and streams from the storage network directly.
Playback
The output is a standard HLS stream that plays in any HLS-capable player (hls.js, Video.js, native Safari, AVPlayer, etc.). The CDN serves the master manifest and per-quality variant playlists at predictable URLs — the player handles encryption and adaptive bitrate switching automatically.
Triggering encoding
// After uploadDirect() completes, pass file_id to startEncoding
const job = await storage.startEncoding(result.file_id);
console.log(job.encoding_id); // use this to poll status
// Poll until complete
const status = await storage.getEncodingStatus(job.encoding_id);
// status.hls_manifest_url — adaptive stream, ready to play
// status.dash_manifest_url — DASH manifest for alternative players

You can also listen for the encoding.complete webhook event instead of polling — it fires as soon as the HLS manifest is ready and includes the manifest URL in the payload.

Tracking encoding progress
ZiB fires encoding.progress webhook events throughout an encoding job so you can show a real percentage in your UI. Events are throttled to at most once every 10 seconds or every 5% change, whichever comes first.
The payload carries a progress integer (0–100) and a stage field ("encoding" or "sharding") so you can optionally show which phase the job is in. The percentage is continuous across both phases — you don't need to track them separately.
// Webhook handler — store the progress value, render in your UI
app.post('/api/zib-webhook', async (req, res) => {
  const { event, file_id, progress, stage } = req.body;
  if (event === 'encoding.progress') {
    // progress is an integer 0–100
    // stage is "encoding" or "sharding"
    await db.update('videos', { encoding_progress: progress }, { where: { file_id } });
  }
  if (event === 'encoding.complete') {
    // Final state — manifest URLs are now live
    await db.update('videos', {
      encoding_progress: 100,
      encoding_status: 'complete',
      hls_url: req.body.hls_manifest_url,
    }, { where: { file_id } });
  }
  res.sendStatus(200);
});

When you receive file.ready for a key that was previously uploaded, reset your stored progress to 0. ZiB cancels the old encoding job and starts fresh — stale progress from the previous upload should not persist in your UI.

Edge cases
- Out-of-order delivery — under retry pressure, a lower percent may arrive after a higher one. Use Math.max(current, incoming) if you want monotonic progress.
- Fast encodes — small files or hardware-accelerated encodes may jump from 0 to 100 in a single event. The bar fills instantly — this is expected.
- No events arrive — if all quality variants finish near-simultaneously, you may only see the final encoding.complete. Your UI should still work without progress events.
- Sharding stage — if you want to show a separate phase label like "Distributing segments…", branch on stage === 'sharding'. Otherwise just use the percentage as-is.
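The out-of-order guard from the first bullet is a one-liner — shown here as a sketch (helper name ours):

```javascript
// Sketch: fold an incoming encoding.progress percentage into stored state,
// keeping the bar monotonic under out-of-order webhook delivery.
function applyProgress(current, incoming) {
  return Math.max(current ?? 0, incoming);
}
```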
Playing the stream
<!-- hls.js (browser) -->
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<video id="video" controls></video>
<script>
  const hls = new Hls();
  hls.loadSource('https://cdn.zibnetwork.com/stream/<file_id>/master.m3u8');
  hls.attachMedia(document.getElementById('video'));
</script>
<!-- Native HLS (Safari / iOS) -->
<video controls src="https://cdn.zibnetwork.com/stream/<file_id>/master.m3u8"></video>

Webhooks
ZiB sends signed POST requests to your configured webhook URL when files, encoding jobs, and AI jobs change state. There are two ways to configure delivery:
- Customer default — set once via PATCH /v1/api/customers/<customer_id>/webhook and POST /v1/api/customers/<customer_id>/webhook/secret. Used as the fallback for every event.
- Per-pipeline override — pass webhook_url and webhook_secret inside the pipeline field on /v1/api/upload/initiate. Every event for that file is delivered to the override URL with the override secret instead.
Headers
Every webhook request carries:
Content-Type: application/json
X-ZiB-Event: <event name> e.g. encoding.complete
X-ZiB-Job-ID: <uuid> on encoding.* and {transcription,vision}.complete
X-ZiB-Signature: sha256=<hex> HMAC-SHA256 over the raw request body

Events
- file.ready — File is fully sharded across the network. From here on the file_id is permanent.
- file.replicated — File reached replication-safe state — ≥3 verified non-originator peers per shard. Backups can rely on it now.
- file.failed — File became terminal-cancelled (currently fires on re-upload supersede; cancelled_reason in payload distinguishes scenarios). Tenants relying on this file_id should switch to the new one.
- encoding.queued — Encoding job has been accepted and is waiting for a node.
- encoding.started — A node has claimed the job and begun encoding.
- encoding.progress — Throttled progress update (every ~10s or every 5% delta) carrying a 0–100 integer percentage. Use this to drive a progress bar in your UI.
- encoding.sharding — Transcoding finished; encrypted segments are being distributed.
- encoding.complete — HLS and DASH manifests are live on the CDN.
- encoding.failed — Encoding failed — check error_message in the payload.
- transcription.complete — WebVTT subtitle track is ready (srt_file_id in extras).
- vision.complete — ZibSidecar JSON is ready (sidecar_file_id in extras).

Payload shapes
Each event has its own envelope. All payloads include a top-level event string and timestamp (ISO 8601 UTC).
// file.ready — fired the moment shard count reaches the redundancy target
{
  "event": "file.ready",
  "file_id": "a1b2c3d4-...",
  "bucket": "my-bucket",
  "object_key": "videos/2026-04/film.mp4",
  "size_bytes": 524288000,
  "customer_id": "c0ffee...",
  "timestamp": "2026-04-08T17:30:00.000Z"
}

// encoding.queued | encoding.started | encoding.sharding | encoding.complete | encoding.failed
// All five share the same shape — only the "event" and "status" fields change.
{
  "event": "encoding.complete",
  "job_id": "<encoding_jobs.id>",
  "status": "complete",                 // queued | encoding | sharding | complete | failed
  "file_id": "a1b2c3d4-...",
  "customer_id": "c0ffee...",
  "hls_manifest_url": "https://cdn.zibnetwork.com/<file_id>/master.m3u8",   // null until "complete"
  "dash_manifest_url": "https://cdn.zibnetwork.com/<file_id>/manifest.mpd", // null until "complete"
  "error_message": null,                // populated on "failed"
  "transcription_model": "recommended", // null if transcription not requested
  "vision_model": "standard",           // null if vision not requested
  "timestamp": "2026-04-08T17:32:14.000Z"
}

// encoding.progress — throttled mid-encode update
// Fires periodically while encoding is running so your UI can render a real
// progress bar instead of sitting at 0% from encoding.started until
// encoding.complete. Throttle policy: at most every 10s OR every 5% delta,
// whichever comes first. Idempotent — replays of the same percent are safe.
//
// You don't need this event to know the job will eventually finish — the
// existing encoding.complete event still fires. encoding.progress is purely
// for UI feedback.
{
  "event": "encoding.progress",
  "job_id": "<encoding_jobs.id>",
  "file_id": "a1b2c3d4-...",
  "progress": 47,        // integer 0–100
  "stage": "encoding",   // "encoding" | "sharding"
  "timestamp": "2026-04-08T17:31:08.000Z"
}

// transcription.complete — fires when the SRT/WebVTT artefact has been
// produced and stored. Fetch via https://cdn.zibnetwork.com/objects/<srt_file_id>
{
  "event": "transcription.complete",
  "job_id": "<encoding_jobs.id OR compute_jobs.id>",
  "file_id": "a1b2c3d4-...",
  "srt_file_id": "f00d...",
  "timestamp": "2026-04-08T17:34:02.000Z"
}

// vision.complete — fires when the ZibSidecar JSON has been produced and
// stored. Fetch via https://cdn.zibnetwork.com/objects/<sidecar_file_id>
// and parse as JSON for the full sidecar (scenes, IAB categories, etc.).
{
  "event": "vision.complete",
  "job_id": "<encoding_jobs.id OR compute_jobs.id>",
  "file_id": "a1b2c3d4-...",
  "sidecar_file_id": "ba11...",
  "timestamp": "2026-04-08T17:36:48.000Z"
}

The job_id field carries an encoding_jobs.id for encoding.* events. For transcription.complete and vision.complete it carries either a compute_jobs.id (when the AI was submitted as a standalone compute call) OR an encoding_jobs.id (when the AI ran as part of a combined encode+AI request). If you need a single dispatcher across both pipelines, route by file_id instead — it is unique per file regardless of which pipeline produced the event.

When you re-upload to a (bucket, object_key) that already has an object, ZiB replaces the prior file_id with a fresh one and marks the old row cancelled. This applies to every upload type — images, JSON, generic blobs, and encoded video alike — not just files that have an encoding job attached. If the prior upload had an in-flight encoding job, that job is cancelled and no further encoding.* webhooks fire for it. You will only ever receive lifecycle events for the most recent upload to a given key. This means:
- Route lifecycle events by file_id — it is the canonical discriminator and is guaranteed unique per upload.
- When you receive file.ready for a key you've seen before, treat it as a hard reset: clear any state stored against the previous file_id for that key, including any prior job_id, CDN URLs, manifests, and progress.
- Do not maintain a fallback that matches webhooks by job_id alone — a stale job_id from a prior upload should not be allowed to overwrite state for the current one.
- The CDN URL cdn.zibnetwork.com/objects/{file_id} always reflects the latest upload, because the new file_id is what the SDK returns and what S3 GET redirects to.
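The hard-reset rule can be sketched as a tiny keyed store (names are ours; assumes a Map keyed by object_key):

```javascript
// Sketch: reset all per-key state whenever file.ready arrives with a new
// file_id for an object_key we've seen before.
function onFileReady(store, objectKey, fileId) {
  const prev = store.get(objectKey);
  if (!prev || prev.file_id !== fileId) {
    // New upload supersedes everything stored for this key — including
    // any stale job_id and progress from the previous file_id.
    store.set(objectKey, { file_id: fileId, job_id: null, progress: 0 });
  }
  return store.get(objectKey);
}
```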
Verifying signatures
Every webhook request includes an X-ZiB-Signature header containing an HMAC-SHA256 signature of the raw request body. Always verify this before processing the payload.
import crypto from 'crypto';
export async function POST(req) {
// Read raw body before JSON parsing — signature is over raw bytes
const body = await req.text();
const sig = req.headers.get('x-zib-signature'); // "sha256=<hex>"
const expected =
'sha256=' +
crypto
.createHmac('sha256', process.env.ZIB_WEBHOOK_SECRET)
.update(body)
.digest('hex');
  // Constant-time comparison: a plain !== leaks timing information
  const sigBuf = Buffer.from(sig ?? '', 'utf8');
  const expectedBuf = Buffer.from(expected, 'utf8');
  if (sigBuf.length !== expectedBuf.length || !crypto.timingSafeEqual(sigBuf, expectedBuf)) {
    return new Response('Unauthorized', { status: 401 });
  }
const payload = JSON.parse(body);
if (payload.event === 'file.ready') {
// update your DB with cdn_url, mark upload complete, etc.
}
if (payload.event === 'encoding.complete') {
// store hls_manifest_url, notify your users, etc.
}
return new Response('OK');
}
Use a constant-time comparison (crypto.timingSafeEqual) to avoid timing attacks when comparing HMAC signatures.
Security Model
- Every file encrypted with a unique key before leaving the browser
- Keys stored only in ZiB backend — never on storage nodes
- Node operators see only encrypted fragments — no filenames, no keys, no plaintext
- Node-facing identifiers are opaque UUIDs — real object keys never exposed
- High redundancy — files survive multiple simultaneous node failures
- Keep access_key and secret_key on your server only
- Authenticate your own users before calling /v1/api/upload/initiate
- Store file_id and cdn_url in your own database
- Verify webhook signatures with X-ZiB-Signature before processing
- Access control for CDN URLs (UUID-gated, but not secret — treat as public)
Why credentials must stay on the server
The upload/initiate endpoint uses your master credentials, which have full access to your ZiB account — all buckets, all objects, all quota. Exposing these in browser JavaScript would let anyone who reads your source code upload to your buckets and exhaust your storage quota.
The registration object returned by initiate is a one-time upload token. It authorises a single upload to a single pre-determined location. It contains no credentials and cannot be used to list buckets, delete objects, or do anything other than complete the specific upload it was created for. This is why it is safe to pass from your server to the browser.
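The server-side half of that flow can be sketched as a route handler. This is an illustration under assumptions, not the official integration: the initiate request body fields shown (bucket, object_key) and the `authenticateUser` helper are hypothetical — check the API reference for the exact shape. The structural point is what matters: master credentials never leave the server; only the one-time registration object is returned to the browser.

```javascript
// Sketch of a server route that brokers upload/initiate for the browser SDK.
// Assumes a Next.js-style handler; request body fields are illustrative.

// Stand-in for your own user auth. ZiB does not know about your users.
async function authenticateUser(req) {
  return { id: 'user_123' }; // replace with real session/token validation
}

export async function POST(req) {
  // 1. Authenticate YOUR user first.
  const user = await authenticateUser(req);
  if (!user) return new Response('Unauthorized', { status: 401 });

  const { filename } = await req.json();

  // 2. Call initiate with server-side credentials (env vars, never shipped to the client).
  const res = await fetch('https://api.zibnetwork.com/v1/api/upload/initiate', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
      'Content-Type': 'application/json',
    },
    // Hypothetical body shape: scope the object key to the authenticated user.
    body: JSON.stringify({ bucket: 'user-uploads', object_key: `${user.id}/${filename}` }),
  });

  // 3. Hand the one-time registration object to the browser SDK.
  // It authorises exactly one upload and contains no credentials.
  const registration = await res.json();
  return new Response(JSON.stringify(registration), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```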
Self-enforcing encryption
ZiB's encryption model is self-enforcing. If an attacker obtained a registration token and uploaded plaintext bytes directly to the node URL (bypassing the SDK), the CDN would decrypt those bytes using the key generated at initiate time and return garbage — because the bytes stored on the node would not be valid ciphertext for that key.
There is no configuration flag, no "plaintext mode", and no bypass. The CDN always decrypts. Files that were not encrypted correctly before upload cannot be served correctly — ever. This makes the security property durable even if an integration makes a mistake.
cdn.zibnetwork.com/objects/<uuid> is effectively a capability — possessing the URL grants read access to the file. Treat CDN URLs like signed S3 URLs: don't expose them in public listings if the files are meant to be private. Store them in your database and serve them only to authenticated users.
Job Status API
Webhooks are the fastest way to know a file finished encoding, but if you need to render progress to an end user (an upload dashboard, a "your video is processing" screen, a retry button), poll GET /v1/api/my-jobs. Every encoding job in your account is returned with its current lifecycle phase — from queued all the way through safe (replicated to enough peers that the file is durable).
Authorization: Bearer access_key:secret_key. Jobs returned are scoped to your customer automatically.
Lifecycle phases
Each job is in exactly one phase:
- queued — accepted, waiting for a node to claim it
- encoding — FFmpeg is running; phase_progress_pct ticks 0–100
- sharding — encoded segments are being hashed + written to storage
- local_complete — all bytes on the originating node, awaiting backend greenlight
- replicating — segments are being pulled to peer nodes
- safe — ✅ 3+ peers confirmed; the file is durably stored and will survive any single-node failure. replication_safe_at is set.
- failed — transient failure, auto-retrying (retry_after tells when)
- failed_user_action — retries exhausted; surface a Retry button to the user
- cancelled — user-cancelled; derivatives being cleaned up
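For a progress UI, the nine phases usually collapse into a handful of display states. A minimal sketch (an illustrative helper, not part of the SDK; the grouping of failed_user_action as terminal is an assumption based on the descriptions above):

```javascript
// Collapse lifecycle phases into the states a progress UI typically needs.
// Assumption: safe, cancelled, and failed_user_action are treated as terminal;
// failed is not, because the backend auto-retries it.
const TERMINAL = new Set(['safe', 'cancelled', 'failed_user_action']);

function uiStateFor(job) {
  if (job.phase === 'safe') return { kind: 'done' };
  if (job.phase === 'cancelled') return { kind: 'cancelled' };
  if (job.phase === 'failed_user_action') return { kind: 'error', showRetry: job.retriable };
  if (job.phase === 'failed') return { kind: 'retrying' }; // transient, auto-retrying
  // queued / encoding / sharding / local_complete / replicating: in progress
  return { kind: 'in_progress', pct: job.phase_progress_pct ?? 0 };
}

const isTerminal = (job) => TERMINAL.has(job.phase);
```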
List jobs
curl -s https://api.zibnetwork.com/v1/api/my-jobs \
-H "Authorization: Bearer ${ACCESS_KEY}:${SECRET_KEY}"
# Filter:
curl -s "https://api.zibnetwork.com/v1/api/my-jobs?status=all" \
-H "Authorization: Bearer ${ACCESS_KEY}:${SECRET_KEY}"
?status=active (default) returns jobs whose phase is not safe or cancelled. ?status=all returns everything.
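Wrapped as a fetch helper, assuming the Bearer scheme and query parameter described above (error handling shown is illustrative):

```javascript
// Minimal server-side wrapper for GET /v1/api/my-jobs.
// Auth scheme is Bearer access_key:secret_key; credentials stay on the server.
async function listJobs({ status = 'active' } = {}) {
  const res = await fetch(`https://api.zibnetwork.com/v1/api/my-jobs?status=${status}`, {
    headers: {
      Authorization: `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
    },
  });
  if (!res.ok) throw new Error(`my-jobs failed: ${res.status}`);
  const { jobs } = await res.json();
  return jobs;
}
```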
Response
{
"jobs": [
{
"file_id": "0c8e3765-9b2b-4a29-82cb-2f1911fe750c",
"title": "trailer-final.mp4",
"phase": "replicating",
"phase_progress_pct": 66.7,
"phase_detail": "peer 2 of 3 verified",
"phase_updated_at": "2026-04-24T05:12:04Z",
"replication_peer_count": 2,
"replication_target_count": 3,
"replication_safe_at": null,
"retry_count": 0,
"last_error": null,
"cancellable": true,
"retriable": false,
"created_at": "2026-04-24T04:58:12Z",
"node_id": "ZB-899233"
}
]
}
Cancel a job
curl -X DELETE https://api.zibnetwork.com/v1/api/my-jobs/${FILE_ID} \
-H "Authorization: Bearer ${ACCESS_KEY}:${SECRET_KEY}" \
-H "Content-Type: application/json" \
-d '{"reason":"user_abandoned_upload"}'
Sets phase=cancelled and queues cleanup commands to every node that holds any segment of the file. Not reversible. Only valid while cancellable: true.
Retry a failed job
curl -X POST https://api.zibnetwork.com/v1/api/my-jobs/${FILE_ID}/retry \
-H "Authorization: Bearer ${ACCESS_KEY}:${SECRET_KEY}"
Resets a failed_user_action job back to queued. Use this behind a Retry button in your UI. Only valid while retriable: true.
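Both mutating calls can be wrapped as server-side handlers behind your UI's buttons. A sketch (function names are illustrative): gate each call on the cancellable / retriable flag from the job object rather than attempting the request blindly.

```javascript
// Server-side guards for the two mutating endpoints, keyed by file_id.
const API = 'https://api.zibnetwork.com/v1/api/my-jobs';
const auth = {
  Authorization: `Bearer ${process.env.ZIB_ACCESS_KEY}:${process.env.ZIB_SECRET_KEY}`,
};

async function cancelJob(job, reason = 'user_cancelled') {
  if (!job.cancellable) return false; // cancel is not reversible; only valid while cancellable
  const res = await fetch(`${API}/${job.file_id}`, {
    method: 'DELETE',
    headers: { ...auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ reason }),
  });
  return res.ok;
}

async function retryJob(job) {
  if (!job.retriable) return false; // only failed_user_action jobs are retriable
  const res = await fetch(`${API}/${job.file_id}/retry`, { method: 'POST', headers: auth });
  return res.ok;
}
```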
Suggested polling cadence
- While your UI is visible and you have any job in a non-terminal phase: every 5 s
- Background tab or no active jobs: every 30 s
- Webhooks remain the preferred signal for terminal transitions — use /my-jobs for in-flight progress UX
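The cadence above can be implemented with a self-rescheduling timer. A browser-side sketch under assumptions: `listActiveJobs` (e.g. your own server endpoint that proxies GET /v1/api/my-jobs) and the `render` callback are hypothetical, and the set of phases treated as terminal mirrors the lifecycle list above.

```javascript
// Poll every 5 s while the tab is visible and work is in flight,
// otherwise back off to 30 s. Returns a stop function.
function startJobPolling(listActiveJobs, render) {
  let timer;

  async function tick() {
    const jobs = await listActiveJobs();
    render(jobs);
    const visible =
      typeof document !== 'undefined' && document.visibilityState === 'visible';
    const active = jobs.some(
      (j) => !['safe', 'cancelled', 'failed_user_action'].includes(j.phase)
    );
    timer = setTimeout(tick, visible && active ? 5000 : 30000);
  }

  tick();
  return () => clearTimeout(timer);
}
```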
The durability guarantee
Once a job reaches phase: "safe", the file has ≥3 verified non-originator peers holding every segment. From that point:
- Any one peer can go offline permanently and the file is still served
- The originating node can freely reclaim its local copy
- Playback is served through cdn.zibnetwork.com with automatic failover between peers
Before safe is reached, the originator's bytes are unconditionally protected — the file cannot be lost to a transient failure on the originating node while replication is in progress.
Compute / AI
ZiB Compute is an optional AI inference layer built on top of ZiB storage. Compute-capable nodes run transcription and vision analysis on videos stored in the network. Output is an encrypted WebVTT subtitle file (auto-injected into the HLS manifest) and an encrypted ZibSidecar JSON — a full metadata envelope covering transcription, chapters, scene understanding, IAB content taxonomy, GARM brand safety, clip suggestions, and thumbnail candidates.
Trigger AI from an encoding request
Pass transcription and/or vision options to startEncoding() — the encoding node runs AI immediately after encoding while the decrypted source is already in memory.
const { encoding_id } = await zib.startEncoding(fileId, {
transcription: 'recommended', // 'fastest' | 'recommended' | 'accurate'
vision: 'standard' // 'standard' | 'hq'
});
// Poll until complete
const status = await zib.getEncodingStatus(encoding_id);
// status.srt_file_id → WebVTT file (auto-injected into HLS manifest)
// status.sidecar_file_id → ZibSidecar JSON
Standalone compute jobs
Run transcription or vision on any already-stored video — no re-encoding needed.
// Standalone transcription
const { job_id } = await zib.submitTranscription(fileId, 'recommended');
// Standalone vision AI
const { job_id } = await zib.submitVisionAI(fileId, 'standard');
// Poll job status
const result = await zib.getComputeStatus(job_id);
// result.status → 'queued' | 'processing' | 'complete' | 'failed'
// result.srt_file_id → WebVTT file ID (when complete)
// result.sidecar_file_id → ZibSidecar file ID (when complete)
Reading the ZibSidecar
The CDN decrypts the sidecar on the fly — fetch the CDN URL directly and parse as JSON.
// Get the sidecar CDN URL from job status
const status = await zib.getComputeStatus(job_id);
const sidecarUrl = zib.getCdnUrl(status.sidecar_file_id);
// Fetch and parse
const sidecar = await fetch(sidecarUrl).then(r => r.json());
// Key fields:
sidecar.scene_understanding.executive_summary // 150-200 word summary
sidecar.content_classification.garm.brand_safety_score // 0.0–1.0
sidecar.clip_suggestions[0].virality_score // 0.0–1.0
sidecar.chapters_youtube_format // paste into YouTube description
sidecar.transcript.full_text // full transcript string
sidecar.transcript.segments[0].intent // hook / payoff / cta / ...
sidecar.content_classification.iab_categories[0].name // IAB 3.0 taxonomy
Quality tiers
- Transcription: fastest | recommended | accurate
- Vision: standard | hq