Video Processing

Run AI detection and tracking to generate MP4E metadata for your video.

Overview

Processing is the step that analyzes your uploaded video to produce the object registry and tracking timeline used by overlays, rules, and interactions.

Processing is server-side
Studio starts processing via an API call and then listens to a live progress stream while the backend detects and tracks objects.

Start Processing

Open a project, then start processing from the controls in the editor's project panel. When processing starts, Studio opens a modal with real-time progress.
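Under the hood, Studio issues the POST request described under Processing Options below. A minimal sketch of constructing that call, assuming a fetch-capable environment; the helper name and the example video id are hypothetical:

```typescript
// Hypothetical helper that builds the request Studio sends to start
// processing. The endpoint path and option names come from this page.
interface ProcessOptions {
  filter_terms?: string[];
  tracking_quality?: string; // e.g. "standard" (default) or "high"
  save_polygons?: boolean;
}

function buildProcessRequest(videoId: string, options?: ProcessOptions) {
  return {
    url: `/api/video/${videoId}/process`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // The body is optional; omit it to process with default settings.
      body: options ? JSON.stringify(options) : undefined,
    },
  };
}

// Usage sketch:
// const { url, init } = buildProcessRequest("vid_123", { save_polygons: true });
// await fetch(url, init);
```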

Live Progress

The processing modal shows:

  • Frame progress and percent complete
  • Objects found so far
  • Processing FPS, elapsed time, and estimated remaining time
  • Live detection preview (bounding boxes on a canvas)

Processing Options

Studio supports optional processing parameters to focus detection and improve results:

Processing request body (optional)
// Studio calls POST /api/video/:id/process with an optional JSON body:
{
  "filter_terms": ["shoe", "bag"],  // optional: focus detection on specific concepts
  "tracking_quality": "high",       // optional: "standard" (default) or a higher-quality preset
  "save_polygons": true             // optional: store per-frame polygons (useful for replacement zones)
}
When to enable save_polygons
If you plan to use replacement zones or need high-fidelity shapes (not just bounding boxes), enable polygon saving so the engine/player has richer per-frame geometry.
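One way to see why polygons are the richer representation: a bounding box can always be derived from a per-frame polygon, but not the reverse. A sketch, assuming polygons are stored as arrays of [x, y] points (the point format is an assumption, not the documented storage shape):

```typescript
type Point = [number, number];

// Collapse a polygon to its axis-aligned bounding box. Information about
// the shape's contour is lost, which is why bounding boxes alone are not
// enough for high-fidelity replacement zones.
function bboxFromPolygon(poly: Point[]) {
  const xs = poly.map(([x]) => x);
  const ys = poly.map(([, y]) => y);
  const x = Math.min(...xs);
  const y = Math.min(...ys);
  return { x, y, w: Math.max(...xs) - x, h: Math.max(...ys) - y };
}
```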

Reprocessing

Reprocessing regenerates detection/tracking and may overwrite parts of the metadata. Studio prompts for confirmation when reprocessing an already-processed project.

Pro workflow
Export Metadata JSON before a big reprocess so you can restore your edits, or selectively merge changes, afterwards.
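The "export, reprocess, merge" workflow can be sketched as reapplying manual edits (for example, renamed objects) from the exported snapshot onto the freshly generated metadata. The object shape and the id/name fields here are illustrative assumptions, not the actual metadata schema:

```typescript
interface TrackedObject {
  id: string;   // stable object identifier
  name: string; // display name, possibly edited by hand
}

// Reapply names from an exported snapshot onto freshly reprocessed objects.
// Objects that only exist in the new metadata pass through unchanged.
function mergeEdits(fresh: TrackedObject[], snapshot: TrackedObject[]) {
  const edited = new Map(snapshot.map((o) => [o.id, o.name]));
  return fresh.map((o) =>
    edited.has(o.id) ? { ...o, name: edited.get(o.id)! } : o
  );
}
```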