GUIDE · The complete guide to AI Canvas - every model, every Flow, every workflow.
PRODUCT · AI CANVAS

The infinite canvas for generative art & video.

Kaiber's AI Canvas is the heart of Superstudio - a node-based, infinite workspace where every leading AI model lives side by side. Generate images, videos, and audio. Train your own models. Stitch them into stories. All without ever leaving the canvas.

CANVAS · LIVE GENERATION
15+ · AI models integrated
∞ · Canvas size, pan-and-zoom
100 · Free credits on signup
$15 · Starting price per month
2024 · Year Superstudio launched

What's in this guide

  1. The problem Canvas solves
  2. An infinite canvas, explained
  3. Inside the model library
  4. Flows · the modular AI toolkit
  5. Elements, Assets & Collections
  6. Custom Models & LoRA training
  7. The Canvas interface, toured
  8. Audioreactivity & beat-sync
  9. Who Canvas is for
  10. The artists who built it with us
  11. Credits & pricing
  12. Frequently asked questions
01 · The Problem

Five subscriptions, ten tabs, one finished asset.

Canvas exists because the modern AI creative workflow is broken — and Kaiber's founders lived through the breakage before they fixed it.

If you've tried to make a single piece of generative video in 2026, you already know the routine. You start in one tool to write a prompt and generate a still. You paste that still into a second tool to animate it. You bounce out to a third for upscaling, a fourth for sound design, a fifth for the actual edit. Each platform has its own login, its own credit system, its own pricing model, its own export quirks. By the time you finish a single 30-second clip, you've used five subscriptions, ten tabs, and three different file formats — and you've still got nothing approaching a unified creative workspace.

Kaiber CEO Victor Wang calls this tool fatigue, and he was direct about it in the Superstudio launch announcement in October 2024: "Creatives are stuck in a loop of slow, ugly AI slop and disjointed workflows, paying 5–10 subscriptions to make one asset." The Kaiber team had been producing AI music videos for artists like Kid Cudi (the Entergalactic lyric videos), Yaeji, Grimes and Chief Keef since 2022 — and they kept running headfirst into the same wall. The models were getting better every week. The workflow around them was getting worse.

We've created a home base for the new forms of creativity emerging as humans collaborate with machines. Our focus has always been putting human creativity first, and Superstudio empowers artists to seamlessly integrate AI into their process — amplifying their taste without sacrificing originality. — Victor Wang, CEO, Kaiber

AI Canvas is the answer to that mess. It's the central workspace inside Kaiber's Superstudio platform — a single, infinite, node-based environment where every leading generative model lives under one roof, billed from one credit pool, accessible without ever opening a new tab. Where the rest of the AI video industry has built single-purpose tools that demand stitching together, Canvas was designed from day one as a complete creative ecosystem.

It's also genuinely different in shape. Where most AI tools force a linear workflow — prompt, generate, repeat — Canvas spreads your entire creative process across a flexible, navigable workspace you can pan, zoom, branch and rearrange at will. Every generation becomes a node. Every node remembers where it came from. Every output can become the input for the next idea, in any direction, with any model. That's not just a feature — it's a fundamentally different relationship with AI tooling.

02 · The Concept

An infinite canvas, explained.

If you've used Figma, Miro, or a node-based tool like Flora, the mental model will feel familiar. If you haven't — here's everything you need to know.

The Canvas is, quite literally, infinite. There is no fixed page size, no slide dimension, no canvas boundary you can hit. You pan in any direction. You zoom out to see the entire shape of your project, then zoom in to refine a single frame. You can place a generation in the upper-right corner and another in the lower-left and another forty thousand pixels down and to the right — all on the same canvas, all part of the same project, all available to remix into each other.

This is different from the way nearly every other AI video tool works. Tools like Pika, Runway and Sora present a more linear interface: you write a prompt, you get a video back, you save it, you start over. The history is a vertical list. The canvas is a single output frame. Branching means duplicating and starting fresh. Kaiber's Canvas inverts that entirely.

Nodes, connections, and creative provenance

Inside the Canvas, every generation is a node — a discrete object you can move, copy, label, group, restyle, branch, or feed forward into another model. Nodes can connect to other nodes, forming visual chains. When you take a Flux image and animate it with Kling, those two outputs are linked — Canvas remembers that the video came from this specific image, generated with this specific prompt, on this specific date.

That means you never lose creative provenance. Three weeks later, when a client says "go back to that one variation we liked from the second round," you don't have to re-create from memory. You scroll back through your Canvas, find the node, branch it, and keep going. The whole history of how an idea evolved lives on the workspace permanently.
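The provenance idea is easy to model. Here is a minimal sketch of a node that carries its full lineage — the names and fields are illustrative, not Kaiber's actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """One generation on the canvas, with a link back to its source."""
    output: str                      # e.g. a file reference
    model: str                       # which model produced it
    prompt: str                      # the prompt that was used
    created: str                     # ISO date of the generation
    parent: Optional["Node"] = None  # the node this was branched from

    def lineage(self):
        """Walk back to the original input, newest first."""
        node, chain = self, []
        while node is not None:
            chain.append(f"{node.model}: {node.prompt}")
            node = node.parent
        return chain

still = Node("still.png", "Flux", "neon city at dusk", "2025-03-01")
clip = Node("clip.mp4", "Kling", "slow dolly forward", "2025-03-02", parent=still)

print(clip.lineage())
# ['Kling: slow dolly forward', 'Flux: neon city at dusk']
```

Because every node keeps a `parent` pointer, "go back to that variation from the second round" is just a walk up the chain rather than an act of memory.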

Why nodes matter

A node-based canvas turns AI generation from a slot machine ("hit the button, hope for the best") into a non-linear thinking tool. You can explore ten directions in parallel, compare them visually, then commit. It's the difference between sketching across the pages of a notebook and opening a fresh Word document for every new idea.

Multiple Canvases per project

Superstudio doesn't limit you to one Canvas per project. The free tier ships with up to 2 Canvases out of the box; paid tiers expand that allowance significantly. Most professional users keep one Canvas per major creative thread — one for "Music Video v3", one for "Brand Campaign Stills", one for "Storyboard for the short film". Renaming, duplicating and deleting Canvases happens through the Canvas Menu in the top-left of the workspace. Canvas History is also saved automatically, so you can scrub backwards through earlier states even after you've kept iterating.

Zoom level as creative tool

One quietly powerful detail: zoom level itself becomes a creative state. Zoom way out, and you can read the entire shape of your project — clusters of stylistic experiments here, narrative scenes over there, a row of color tests along the bottom. Zoom in, and Canvas reveals the granular detail of an individual generation. Many users report that zooming out is when the unexpected creative connections happen — when you notice that a still you generated for one purpose would be perfect as the input for another.

03 · The Models

Every leading model. One credit pool.

Canvas integrates the most powerful image, video and audio models on the market — switchable with a single click, all billed from the same wallet.

This is where Canvas's "one home, many models" philosophy goes from abstract pitch to concrete capability. Rather than building yet another in-house generative model and trying to compete with the entire industry, Kaiber's strategy is to integrate the best of every category — and let creators choose the right tool for the job, scene by scene, prompt by prompt.

That curation is updated regularly. As new model versions ship from frontier labs, Kaiber rolls them into Canvas with their native parameters intact. As of the latest Canvas release (v2.4), the lineup looks like this:

Video models

6 models · cinematic, animated, photoreal
Kling 3.0
Long takes

Up to 2-minute clips with strong audio-visual sync. The pick for narrative scenes and longer-form storytelling pieces.

Luma Ray 2 / 3
Atmospheric

Hi-Fi 4K HDR with superior physics simulation. The image-to-video specialist for moody, cinematic frames.

Google Veo 3.1
4K + audio

Native synchronized audio in the same render pass. 36 credits per second; a 5-second clip is 180 credits.

Runway Gen-4.5
Cinematic

Motion brushes, scene consistency, and the strongest creative-control toolset. Filmmaker favorite.

Mochi 1
Stylized

Open-weight model tuned for artistic, painterly output. Great for art loops and abstract sequences.

MiniMax Hailuo
Fast iteration

Lightning-fast generation. The right pick when you want to test 10 ideas quickly without burning premium credits.

Image models

4 models · photoreal, design, illustration
Flux (Black Forest Labs)
Photoreal

State-of-the-art photorealism. Best-in-class for portraits, products, and brand-grade hero imagery.

Recraft v3
Design

Tuned for typography, logos, vector graphics and brand systems. Posters and banners with crisp readable text.

Stable Diffusion 3.5
Versatile

The reliable workhorse. Stable, fine-tunable, and the foundation for most custom-style training inside Canvas.

Magnific
Upscale

The premium upscaler integration. Pushes any image to crisp, hallucination-rich high resolution.

Audio models

2 models · generation & stem separation
Stability Audio
Generation

Generate music, ambient beds and SFX from text prompts. Up to 3-minute compositions in any genre.

Audioshake
Stem split

Isolate vocals, drums, bass or instrumentals from any track. Powers Canvas's audioreactive features.

Enhancement

1 model · video upscale
Topaz
Video upscale

The video resolution heavyweight. Takes any Canvas-generated clip up to crisp 4K with detail preservation.

The integration philosophy here is worth pausing on. Each of these models, used standalone, requires its own subscription, its own UI, its own learning curve, its own export workflow. Inside Canvas, they're all available behind a single interface, with consistent input/output handling and a unified credit pool. That's the part nobody else in the industry has matched yet — and it's the reason Canvas users describe the experience as "having a research lab on a single laptop."

Credit costs vary by model

Premium models like Veo 3.1 cost 36 credits per second of generated video — a 5-second clip is 180 credits before any upscaling. Faster models like Hailuo are cheaper. Always preview at low resolution first; upscale only the takes you actually want to ship.
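The per-clip math is simple enough to sanity-check before you render. A quick sketch using the per-second rate quoted above (the Veo 3.1 figure comes from this guide; the helper function is illustrative, not a Kaiber API):

```python
# Credits = rate per second x clip length.
# Veo 3.1's 36-credits/sec rate is the figure quoted in this guide.
VEO_RATE = 36

def clip_cost(rate_per_sec: int, seconds: int) -> int:
    """Credits consumed by a single generation, before any upscaling."""
    return rate_per_sec * seconds

assert clip_cost(VEO_RATE, 5) == 180  # matches the 5-second example above
```

The same arithmetic explains why previewing on a cheaper model first pays off: ten throwaway drafts at a premium rate cost more than the final take.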

04 · Flows

Flows. The modular toolkit.

If models are the engines, Flows are the toolbox. They're the building blocks that turn raw model access into actual creative workflows.

Where models are the underlying generative engines, Flows are the modular AI tools you actually click on inside Canvas. Each Flow wraps a specific creative action — generating an image, animating a still, restyling a clip, training a custom model, transferring a style across frames, or syncing visuals to audio — into a single tile you can drop onto the workspace.

Flows are designed to compose. The output of one Flow can become the input for the next. You can chain them visually on the canvas: an image-generation Flow feeds a style-transfer Flow, which feeds a video-animation Flow, which feeds an upscaling Flow. The chain is visible, editable and re-runnable. Adjust a parameter five steps back and the downstream nodes can be regenerated in sequence.
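The chaining behavior described above can be sketched as a simple pipeline: each Flow is a function, and changing a parameter upstream means re-running everything downstream. The function names here are stand-ins for illustration, not Kaiber's actual API:

```python
def generate_image(prompt):      # stand-in for an Image Flow
    return f"image({prompt})"

def style_transfer(img, style):  # stand-in for a Style Transfer Flow
    return f"styled({img}, {style})"

def animate(img):                # stand-in for a Video Flow
    return f"video({img})"

def upscale(clip):               # stand-in for an Upscaler Flow
    return f"4k({clip})"

def run_chain(prompt, style):
    """Re-running the chain from the changed step regenerates every downstream node."""
    return upscale(animate(style_transfer(generate_image(prompt), style)))

v1 = run_chain("desert at dawn", "oil painting")
v2 = run_chain("desert at dawn", "anime")  # one parameter changed upstream
```

On the canvas the same idea is visual rather than textual: the chain of nodes is the function composition, and it stays editable and re-runnable.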

Here's the current Flow library, organized by category. New ones land regularly as Kaiber Labs ships them out of beta:

Flow · Category · What it does
Image Flow · Image · Generate a still from a text prompt or reference image. Pick the model (Flux, Recraft, Stable Diffusion).
Video Flow · Video · Generate video from text, image, or video input. Choose your model — Kling, Luma, Veo, Runway, Mochi or MiniMax.
Audio Flow · Audio · Generate music, ambient sound, or SFX from text. Powered by Stability Audio.
Image Upscaler · Enhance · Push any image up to high-res via Magnific. Great for printing, hero images, or paid ad creative.
Video Upscaler · Enhance · Push any clip up to 1080p or 4K via Topaz. Costs additional credits per export.
Lip Sync · Animation · Match mouth movement on a generated character to an audio track. Particularly strong for narrative scenes.
Video Restyler · Transform · Re-skin existing video footage in a new aesthetic — anime, oil painting, cyberpunk, claymation.
Add Audio to Video · Composition · Combine a generated visual with a generated or uploaded soundtrack. The bridge between Canvas and Cuts.
Style Transfer · Transform · Apply the look of one image to another. Powers brand consistency across batch generations.
Face Reference · Identity · Lock a specific face across multiple generations — critical for character consistency in storyboards.
Model Maker · Training · Train a custom LoRA model on your own visual style or character set. Reusable across every future project.
Audioreactivity · Sync · Generate visuals that pulse, cut and animate in time with an uploaded music track.

Notice how the table reads almost like the list of tools you'd expect in a traditional post-production pipeline — generate, restyle, upscale, add audio, sync, finish. That's not an accident. Kaiber explicitly designed the Flow taxonomy around real production workflows, not around the underlying AI architecture. From the user's perspective, you're picking the creative action you want to take. From the system's perspective, Canvas routes that action to the right model with the right parameters.

Quick Action Menu vs full Flow Menu

Flows are accessed two ways. The Quick Action Menu at the bottom of the Canvas (centered around the yellow Kiko mascot — Kaiber's official Canvas character) gives you fast access to the most-used Flows: create video, create image, create audio. Hover Kiko for a condensed view; click for the full Quick Menu. For deeper tools — Model Maker, Image Upscaler, Video Upscaler, Lip Sync, Video Restyler, Add Audio to Video — open the full Explore Tools menu, which surfaces every Flow Superstudio currently supports.

05 · Asset Management

Elements, Assets & Collections.

The vocabulary that keeps a complex Canvas project organized — and the system that prevents it from becoming a digital landfill.

Long generative-AI sessions can quickly turn into chaos. After a couple of hours of branching ideas, you might be looking at hundreds of nodes — variations, restyles, dead ends, finished pieces, raw inputs. Without organization, you lose more time finding the thing you want than you spend creating new things. Canvas tackles this with a three-layer organizational system: Elements, Assets, and Collections.

Elements

Elements are the creative components — images, videos, and assets — that you manipulate inside Flows. An Element is anything you've imported into Canvas (an uploaded photo, a reference video, a song) or generated within Canvas (a Flux still, a Kling clip, a Stability Audio track). Drag an Element into a Flow, and it becomes an input. The Flow processes it, and the output becomes a new Element. Elements move freely between Flows, between Canvases, and between the three Kaiber products (Canvas, Cuts, Editor).

Assets

Assets is the broader term for anything stored in your Canvas workspace — every Element, every output, every variation. The Assets section is accessed through the Canvas Toolbar on the left side of the workspace. You can filter by model type, media type, aspect ratio, date created, or any combination. Drag individual Assets onto the Canvas, or drag entire Collections in at once.

Collections

Collections are the organizational layer that turns scattered Assets into structured projects. Group all the stills from a brand campaign into one Collection. Group all the variations of a music video into another. Add a Collection to a Flow and every item in it becomes available as an input — perfect for batch operations where you want, say, the same style transfer applied to twenty different stills.

Collections also integrate with Kaiber's Creative Templates — pre-built project structures loaded with relevant Assets and Flows that you can remix. New users often start a fresh Canvas by opening a Creative Template, swapping in their own inputs, and using the existing structure as scaffolding for their first project. It's the fastest way to get a usable result on day one.

06 · Custom Models

Train your style. Once. Forever.

The Model Maker Flow lets you fine-tune a LoRA model on your own visual identity — and then use it like any other model on the canvas.

One of the quietest superpowers inside Canvas is Custom Models. Most AI generation tools force you to recreate your style every session — re-pasting the same lengthy prompts, the same reference images, the same negative prompts, hoping the model lands the same place twice. Canvas inverts this: instead of describing your style every time, you teach a model what it is, once.

The Model Maker Flow walks you through the training process. Upload 10 to 30 reference images that represent your aesthetic — a specific character, a brand world, a recurring color palette, a signature illustration style. Kaiber's training pipeline fine-tunes a LoRA (Low-Rank Adaptation) on top of one of the base models — typically Flux or Stable Diffusion 3.5 — and produces a custom model that's now permanently scoped to your account.

From that point forward, that custom model appears in your Flow menu like any of the built-in options. Every future generation through it carries your DNA — the same color sensibility, the same character likeness, the same line quality. You can use it for a single generation or chain it into longer Flows. You can train multiple custom models for different projects (one per character, one per brand, one per series) and switch between them like switching pens.
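For intuition about what a training run involves, here's a conceptual sketch of a LoRA-style fine-tune setup. This is a generic illustration of the technique, not Kaiber's pipeline: only the 10–30 image range and the base-model names come from this guide; every other value is invented for illustration.

```python
def validate_training_set(image_paths, min_images=10, max_images=30):
    """The guide specifies 10-30 reference images; enforce that range."""
    n = len(image_paths)
    if not min_images <= n <= max_images:
        raise ValueError(f"need {min_images}-{max_images} images, got {n}")
    return n

# Illustrative LoRA training config. Base-model names echo this guide;
# rank, steps, and learning rate are generic placeholder values.
config = {
    "base_model": "stable-diffusion-3.5",  # or "flux"
    "lora_rank": 16,       # low-rank dimension: small adapter, fast to train
    "steps": 1500,
    "learning_rate": 1e-4,
}

validate_training_set([f"ref_{i}.png" for i in range(12)])  # passes: 12 images
```

The key point the config illustrates: a LoRA is a small adapter layered on top of a frozen base model, which is why training is cheap (one ~500-credit run) while generation through it costs the same as the base model.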

What it costs and how it scales

A single training run costs roughly 500 credits. That's a one-time cost — once trained, the model is yours, and using it for subsequent generations is no more expensive than using any other base model. For agencies or freelancers managing multiple branded properties, this is genuinely transformative. A music label can train a model on each artist's visual world. A brand can train a model on its style guide. An indie game studio can train one on its protagonist.

Privacy & data handling

Worth noting: your training data and your trained models are private. The reference images you upload are not used to retrain Kaiber's base models, and your custom models are not visible to other users. This matters especially for agencies handling client IP, for musicians training on their own likeness, and for brands training on confidential visual systems. Canvas's commercial-use rights apply to outputs generated through your custom models, on Pro and Artist subscription tiers.

07 · The Interface

The Canvas, toured.

Kaiber publishes the Canvas's UI specification openly in the Help Center. Here's the actual layout, in plain language, with every menu and tool accounted for.

When you enter Superstudio for the first time, you land on a blank Canvas — Kaiber's official onboarding language calls this "the starting ground for unlocking your own boundless creativity." It looks empty because it is empty. But every tool, every Flow, every Asset is one click or one keyboard shortcut away. Here's how the interface is actually organized.

The Action Menu (bottom)

The Action Menu is the Canvas's main navigation hub, located at the bottom-center of the workspace. It's anchored by Kaiber's mascot, Kiko — a yellow character that's become the platform's signature visual touchpoint. Click Kiko to open the expanded menu; hover for a condensed view; click anywhere on the canvas to close. From the Action Menu, you access:

  • Try a Creative Template — Browse pre-loaded project templates with Assets and Flows ready to remix.
  • Create — Quick access to Image, Video, and Audio Flows.
  • Support from a Human — Direct path to Kaiber's support team for actual human help.
  • Quick Action Menu — Add media Flows to the Canvas with a single tap.
  • Explore all Tools — Full Flow menu including Model Maker, Upscalers, Lip Sync, Video Restyler, and Add Audio to Video.

The Canvas Toolbar (left)

Running down the left edge of the workspace, the Canvas Toolbar houses the persistent tools you'll touch most often:

  • Home — Opens the Explore Menu to learn about Superstudio tools, templates, and features.
  • Upload — Bring Assets in from your local machine or external sources.
  • Create Video / Create Image — Add the corresponding Flow directly to the Canvas.
  • Assets — Access your generated and uploaded Assets and Collections.
  • Templates — Browse Superstudio's project-based Creative Templates.
  • Move Tool — Pan the Canvas by clicking and dragging.
  • Select Tool — Click to select items on the Canvas for batch actions.
  • Keyboard Shortcuts — Reference the full hotkey list.
  • Account — Check credits, manage account, switch between light and dark themes.

The Canvas Menu (top-left)

The Canvas Menu in the top-left handles the meta-level operations on the workspace itself: creating new Canvases, renaming them, duplicating them, deleting them, accessing saved Canvas History, and switching between Canvases when you have multiple in a project. This is where multi-Canvas workflows live — keep one Canvas for stills, another for video, another for storyboarding.

Light, dark, and keyboard-driven

Canvas supports both light and dark themes — switchable from the Account menu. For power users, keyboard shortcuts are extensive: pan with spacebar-drag, zoom with scroll, select with click-drag, branch with right-click, and dozens of model-specific shortcuts inside individual Flows. The Keyboard Shortcuts panel in the Canvas Toolbar surfaces the full cheat sheet.

08 · Audioreactivity

The feature musicians won't shut up about.

If you make music, lead a DJ set, or run a live show — this is the part of Canvas that justifies the entire subscription on its own.

Audioreactivity is the Flow that takes an audio input and generates visuals that pulse, cut, transform, and animate in time with the music. Drop a track in. Pick a visual style — high energy, cinematic, time skip, or one of the dozens of presets curated for different genres. Canvas analyzes the rhythm, frequencies, and dynamic shifts of the track in real time, then generates a synchronized visual sequence where every visual change lands on a meaningful audio event.

The result, when it works, is uncanny. Cuts coincide with kicks. Color shifts hit on the snare. The visual energy ramps with the build, drops with the drop. For musicians who've previously commissioned music videos at $4,000–$50,000 per piece, audioreactivity is genuinely industry-rewriting. For DJs and live performers, it's the difference between buying a custom VJ rig and prompting one in five minutes.

Behind the scenes, audioreactivity is powered by the Audioshake stem-separation model — which isolates vocals, drums, bass and instrumentals — combined with the visual generation models you already have access to in Canvas. You can specify which stem drives which visual layer (drums drive the cuts, vocals drive the color), or let Canvas's defaults handle it.
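Conceptually, that routing is a mapping from audio events in one stem to changes in one visual layer. A toy sketch of the drums-drive-the-cuts case (the envelope values and threshold are invented for illustration; this is not Kaiber's implementation):

```python
# Toy beat-sync: a stem's per-frame loudness envelope drives one visual layer.
# The stem/layer pairing echoes the guide; all numbers are made up.
drum_envelope = [0.1, 0.9, 0.2, 0.8, 0.1, 0.95]  # loudness per frame, 0-1

def cuts_from_drums(envelope, threshold=0.7):
    """Place a hard cut on every frame where the drum stem peaks."""
    return [i for i, level in enumerate(envelope) if level >= threshold]

print(cuts_from_drums(drum_envelope))  # cuts land on frames 1, 3, 5
```

Swap the drum envelope for the vocal stem and the output list for a hue curve, and you have the "vocals drive the color" variant of the same idea.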

The artists who pioneered it

Audioreactivity wasn't built in a vacuum. Kaiber's team developed and refined it through real-world artist projects with Yaeji, Boiler Room, Praying, Jon Rafman, Grimes (her acclaimed Coachella set), Chief Keef, and Andrew T (Live From Earth). The feature shipping today is the production-grade version of what those artists used live and on stage.

Audioreactivity sits at the intersection of Canvas (where the visuals are generated) and Cuts (Kaiber's beat-synced auto-editor). Many users start in Canvas to design the visual look, then send everything to Cuts to ship 10 ready-to-post variations of the finished music video.

Audioreactive output: Visuals generated from a music track, with cuts and color shifts locked to the beat.
09 · Audiences

Who Canvas is for.

Canvas isn't a one-size-fits-all tool. It's specifically built for four audiences — and worth knowing if you're one of them before you sign up.

/ 01 — MUSICIANS & DJS

Musicians, DJs & live performers

Canvas's audioreactivity is the closest thing the AI video industry has to a competitive moat. Drop in a track, get back synchronized visuals you can post as a Spotify Canvas, a TikTok teaser, a YouTube music video, or project on stage at a live set. Indie musicians ship music videos in under an hour that would have cost $4,000–$50,000 a year ago.

/ 02 — VISUAL ARTISTS

Visual artists & illustrators

Animate static works. Train a Custom Model on your style and produce variations in your own DNA. Build dreamlike loops for galleries, projection mapping, NFT-style drops, or experimental art installations. Canvas's stylized output (Mochi, Flipbook) is built for art-first users who don't need photorealism.

/ 03 — SOCIAL CREATORS

Content creators & social-first marketers

Ship scroll-stopping ad creative in hours, not weeks. Test ten hooks before lunch. Pair Canvas with Cuts to auto-edit beat-synced verticals, then export 9:16 for TikTok, 1:1 for Instagram, and 16:9 for YouTube — all in a single render pass. ROAS-positive, brand-safe, and faster than any agency turnaround.

/ 04 — FILMMAKERS & STUDIOS

Filmmakers & indie studios

Storyboard scenes, pre-vis sequences, prototype VFX without rendering. Use Canvas as a creative whiteboard before any real production budget gets spent. Custom Models keep characters consistent across scenes. Send finished sequences to Editor for the timeline polish step.

It's worth noting who Canvas is not for. If you need an AI presenter who lip-syncs your script — a corporate explainer, a training video, a virtual avatar — you're better served by HeyGen or Synthesia. If you need ultra-cinematic photoreal output for a feature film, Sora 2 (where available) and Runway Gen-4.5 lead the pack. Canvas is unapologetically built for art, music, and bold stylized visuals — not corporate explainers.

10 · Artist Collaborations

The artists who built it with us.

Canvas isn't a tool dreamed up in a vacuum. It evolved through dozens of real production projects with some of the most adventurous artists working today.

Through Kaiber Studios — the applied research and creative production arm of the company — the team has co-produced AI-driven visuals for headlining live shows, festival sets, music videos, art installations, and major label projects. Each collaboration directly informed the features that ship in Canvas today. Kid Cudi's Entergalactic lyric video work was the seed that became the whole company. Grimes's Coachella visuals tested the limits of audioreactivity. Yaeji's headline summer show in New York pushed the live performance pipeline. Boiler Room's festival visuals stress-tested the Custom Models system at scale.

Kid Cudi · Entergalactic Grimes · Coachella Yaeji · NY headline show Boiler Room · NYC + LA Chief Keef Jon Rafman · LA Praying Andrew T · Live From Earth

When Kaiber says Canvas was "made by artists, for artists," that lineage is real and verifiable. Kaiber Studios continues to operate as a creative production house — meaning new tools and features are stress-tested in actual high-stakes production environments before they ship to the public Canvas. It's a feedback loop most AI companies don't have.

11 · Pricing

Credits & pricing.

Canvas runs on a credit-based system that scales with what you create. Here's the practical breakdown.

Canvas access is included with every Kaiber subscription tier, plus a no-subscription Flex option for users who want to pay-as-they-go without a recurring commitment. Every plan ships with credits — a token currency that's spent on each generation, variation, training run, and upscale. Different models cost different amounts; premium models like Veo 3.1 cost meaningfully more than fast-iteration models like Hailuo.

Free

$0/forever
100 credits · 2 Canvases · 1 GB storage

Test the full interface, every Flow, and most models. Personal use only; outputs carry a watermark.

Creator

$15/month
1,000+ credits/month · unlimited canvases

The recommended starting point. Commercial-use rights, all models unlocked, custom-model training included.

Pro / Artist

$25/month
3,000+ credits/month · priority queue

For high-volume production. 4K upscaling, beta features, 20% off credit packs, direct support line.

Top-up packs let you buy credits on top of any plan, ranging from $5 (300 credits) to $250 (20,000 credits). Top-up credits never expire — even if you cancel your subscription, they remain in your account permanently. Plan credits refresh monthly.

Practical credit math

A 5-second Veo 3.1 clip costs about 180 credits before any upscaling. A 4K upscale adds ~40 more. Custom-model training is ~500 credits per run. The Creator tier's 1,000+ monthly credits are enough to ship 4–5 polished shorts per month, or 20+ rougher experimental pieces. Heavy iterators should plan for the Artist tier or regular top-ups.
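To make that budget concrete, here's a quick planner using the numbers above. The credit figures come from this guide; the "one premium clip plus one upscale per short" recipe is an assumption for illustration:

```python
MONTHLY_CREDITS = 1000            # Creator tier allowance, per this guide

VEO_PER_SEC = 36                  # Veo 3.1 rate quoted above
UPSCALE_4K = 40                   # ~40 credits per 4K upscale, quoted above

def polished_short(seconds=5):
    """Assumed recipe: one premium clip plus one 4K upscale per short."""
    return VEO_PER_SEC * seconds + UPSCALE_4K

print(MONTHLY_CREDITS // polished_short())  # 1000 // 220 -> 4 shorts/month
```

That floor-division result lines up with the "4–5 polished shorts per month" estimate above; stretching to 5 means skipping an upscale or trimming a clip.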

12 · FAQ

AI Canvas, explained.

What's the difference between Canvas and Superstudio?
Superstudio is the broader Kaiber platform — it's the product name for the unified workspace that holds Canvas, Cuts and Editor together. Canvas is the specific product within Superstudio focused on generation and arrangement. When people say "the infinite canvas," they usually mean the workspace inside the Canvas product. Superstudio launched in October 2024, funded by EQT Ventures and Crush Ventures.
Do I need a subscription to use Canvas?
No. Canvas works without a subscription via the Flex plan — you just buy credit packs as you go. A subscription unlocks all 15+ models, custom-model training, video and image upscaling, additional canvases, and discounted per-generation rates. Free signup ships with 100 credits, 2 Canvases and 1 GB of storage to test things first.
How is Canvas different from Cuts and Editor?
Canvas is where you generate raw material with AI — every model, every Flow, the infinite workspace. Cuts auto-edits that material into beat-synced social videos. Editor gives you a timeline for manual polish, transitions, and multi-aspect-ratio export. They share the same Media Library, so you can jump between them without re-importing anything.
Can I use my own footage as a starting point?
Yes — image, video, and audio uploads all work as input nodes. Drop a still and animate it. Drop a clip and re-skin it through the Video Restyler Flow. Drop a song and generate audioreactive visuals. Bring-your-own-media is core to the workflow and most Canvas projects start with at least one uploaded reference.
Are my custom-trained models private?
Yes. Custom Models trained via the Model Maker Flow stay scoped to your account. Your training data is never used to retrain Kaiber's base models, never shared with other users, and outputs through your custom models carry full commercial-use rights on Pro and Artist plans.
How long does a generation take?
It depends on the model and the duration. A Flux still renders in seconds. A 4-second Motion clip takes 1–2 minutes. A Kling 2-minute clip can take 8–10 minutes. Custom model training takes 30–90 minutes depending on dataset size. Pro and Artist plans get priority queue access for faster generations.
Is Canvas available on mobile?
The full Canvas experience is desktop-first — you really do want screen real estate for an infinite workspace. A lighter mobile version is available for fast generations on the go via the iOS and Android apps, but heavy editing and multi-Flow chaining work best on desktop.
Can I collaborate with my team on a Canvas?
Sharing & collaboration is in beta on Pro and Artist plans. Send a view link to anyone, or invite teammates with edit access. Real-time multi-user editing is on the public roadmap for 2026 and is being tested with select agency partners through Kaiber Studios.
What's the catch on the 5-day trial?
The 5-day trial is genuinely $5 for 300 credits and full access to every feature. The catch — and Kaiber is upfront about this — is that the trial converts to a Creator monthly subscription unless you cancel before day 5. Set a calendar reminder. If you want to keep going, do nothing and the subscription starts; if you don't, cancel before the timer runs out.

Open the canvas.
Start with 100 free credits.

No credit card. No model lock-in. Every Flow, every model, every capability — yours to test.

Open Canvas · 100 free credits