Midjourney has finally jumped from still images to moving pictures, unveiling its Video V1 model on 18 June 2025. The new engine turns a single image into four five-second clips, each of which can be extended in-app to a maximum of 21 seconds.
What makes V1 different?
- Image-to-video focus – start with any picture (including one made in Midjourney) instead of writing a text prompt.
- Two motion modes – Auto for random movement or Manual if you type a short animation description.
- Motion slider – Low or High controls how much the camera and subject drift.
- Creative, not hyper-real – early clips keep Midjourney’s dream-like style, which the company positions as a stepping-stone toward real-time open-world simulations.
Heads-up on credits: each Video V1 render currently counts as about 8× a normal image generation, so Basic-plan users will chew through their monthly quota faster.
V1 versus other video models
- Audio – V1 exports silent footage; Google’s Veo 3 generates sound natively, while Runway Gen-4, like V1, is silent.
- Clip length – 21 s max is shorter than OpenAI Sora (60 s) yet longer than Adobe Firefly Video (10 s).
- Access route – Discord-only for now; no web panel or API.
Need web-based automation or faster turnaround? Use the AI Video Generator hub on BasedLabs and swap among engines like the rapid-fire Seedance 1.0 Pro or the photorealistic Hailuo 0.2.
Working with V1-style clips on BasedLabs
- Start with a still – drag any PNG or JPG into the image-to-video converter.
- Pick your backend – choose Seedance for speed drafts or Hailuo for cinematic detail; both mimic V1’s silent output.
- Add sound or loop – layer audio, or spin the result into a shareable loop with the AI GIF Generator.
- Remix for fun – upscale or stylise in the Hailuo AI Video Generator, or parody it inside the AI Stormtrooper Vlog Generator.
Quick prompt ideas for V1
- “Low-motion pan across a crystal-clear alpine lake, subtle ripples, light mist.”
- “Manual: rotate camera 180° around a neon-lit samurai statue, sparks drifting.”
- “Low-motion zoom into a steampunk pocket-watch forest, gears ticking in silence.”
FAQ
How do I access Midjourney Video V1?
Join Midjourney’s Discord, generate or upload an image, then run /animate (Beta Features must be on).
Can I generate similar video directly on BasedLabs? Midjourney hasn’t released an API, but you can achieve comparable image-to-video results with our image-to-video tool powered by Seedance and Hailuo.
Does V1 include audio? No. All clips are silent. For sound-on footage, pick an audio-capable model inside the AI Video Generator hub.
What’s the maximum length? A base render is 5 seconds. You can extend it four times (adding 4 seconds each) for a total of 21 seconds.
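The length math above is simple enough to sketch directly (the variable names here are illustrative, not part of any Midjourney API):

```python
base_seconds = 5       # length of the initial V1 render
extension_seconds = 4  # seconds added per extension
max_extensions = 4     # V1 allows up to four extensions

max_length = base_seconds + extension_seconds * max_extensions
print(max_length)  # 21
```

So four extensions on top of the 5-second base is where the 21-second ceiling comes from.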
How much does it cost? Every V1 render consumes roughly eight image credits; Basic-plan users will exhaust their quota more quickly.
Are there copyright risks? Yes. Disney and Universal sued Midjourney in June 2025, alleging its models reproduce copyrighted characters.
Midjourney’s Video V1 is a bold first step into animation. If you need browser-based tools, APIs, or longer clips today, experiment with Seedance, Hailuo, and more inside the BasedLabs AI Video Generator hub.