Wan 2.7

Wan 2.7 is Alibaba's latest open-source AI video generation model. It generates 1080P videos up to 15 seconds with first/last-frame control, 9-grid image-to-video, subject + voice cloning, and precise instruction-based editing - all in one place. You can try it on Dzine.


Wan 2.7 Takes AI Video Generation to a New Level

Wan 2.7 is a major step forward for AI video generation. Every part of the output - visual quality, audio fidelity, motion coherence, stylization, and timing consistency - has been significantly improved over previous versions. The result is a model that produces videos that feel genuinely crafted, not randomly generated.

Key Features of Wan 2.7

  1. Superior visual fidelity - The Wan 2.7 AI video generator renders fine details with greater accuracy. Skin textures, fabric movement, lighting gradients, and background depth all look sharper and more realistic in 1080P output, making it suitable for commercial-quality content.
  2. Precise temporal consistency - Wan 2.7 keeps characters, objects, and scene elements stable across the full video duration. There is no drift, no sudden morphing, and no flickering - issues that were common in earlier AI video models.
  3. Advanced audio synthesis - The model generates background music, ambient sound, and character vocals that feel matched to the scene. Audio is no longer an afterthought. With Wan 2.7, sound and visuals are generated as a unified output from the start.
  4. Stylization with control - Whether you need cinematic realism, anime aesthetics, or stylized illustration output, Wan 2.7 follows style instructions with accuracy. It works well for product marketing, short films, social content, and creative storytelling alike.
  5. Instruction-following accuracy - The Wan 2.7 AI video generator understands detailed text prompts much better than earlier versions. Camera angle, subject action, scene mood, and composition are all more reliably interpreted and executed.

How to Use Wan 2.7 on Dzine

Step 1: Upload Your Image and Enter a Prompt

Go to Dzine's image to video AI tool. Upload your reference image and write a prompt describing the video you want to create.

Step 2: Select Wan 2.7 as Your Video Model

In the model selector, choose Wan 2.7. You can also explore other models like Kling 3.0 or Hailuo 2.3 based on your project needs.

Step 3: Generate and Download Your Video

Click Generate and Dzine will process your inputs using Wan 2.7. Your 1080P video will be ready in seconds. Download it directly.

First-Frame & Last-Frame Video Generation

Wan 2.7 lets you set both the opening frame and the closing frame of a video. The model fills in all motion, transition, and scene progression in between - with consistent subject identity and natural movement throughout. This gives you precise control over the final output.

| Start Frame | End Frame | Prompt | Output Video |
| --- | --- | --- | --- |
| start frame | end frame | A 6-second whimsical animation: a glossy acrylic paint-splash cat (red, blue, yellow, orange, purple, green) walks on a white canvas, leaps and splatters paint droplets (0-2s), spins and dissolves into swirling color streams (2-4s), which converge and solidify into a bright paint-splash flower with a green stem (4-6s). Fluid paint physics, vibrant colors, cartoonish style, 8K, soft white background. | output video |
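The example prompt above splits the clip into explicit time ranges, which is how you steer pacing between the first and last frame. If you write many such prompts, a small helper keeps the timing arithmetic consistent. This is only an illustration of the prompt pattern - `build_timed_prompt` is a hypothetical helper, not a Dzine or Wan API.

```python
# Hypothetical helper: build a segmented prompt with explicit time
# ranges, matching the "(0-2s) ... (2-4s) ..." pattern shown above.
def build_timed_prompt(intro: str, segments: list[str], seconds_each: int = 2) -> str:
    parts = []
    for i, action in enumerate(segments):
        start, end = i * seconds_each, (i + 1) * seconds_each
        parts.append(f"{action} ({start}-{end}s)")
    total = len(segments) * seconds_each
    return f"A {total}-second animation: {intro} " + ", ".join(parts) + "."

prompt = build_timed_prompt(
    "a paint-splash cat on a white canvas:",
    ["walks and leaps, splattering droplets",
     "spins and dissolves into color streams",
     "streams converge into a paint-splash flower"],
)
print(prompt)
```

Each segment gets an equal share of the clip; adjust `seconds_each` (or pass per-segment durations) to weight some beats longer than others.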

9-Grid Image-to-Video

Upload a 3×3 arrangement of still images and Wan 2.7 converts them into a single continuous video. Each panel becomes a distinct scene or moment, stitched together with smooth transitions and consistent visual style - no manual editing required.

This is useful for storyboard-based workflows, multi-scene ads, product catalogs, and sequential illustrations.
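If your stills live as separate files, you can assemble the 3×3 input yourself before uploading. Below is a minimal sketch using Pillow (assumed to be installed); panels are placed in row-major order, which matches the left-to-right, top-to-bottom reading order of the scenes.

```python
from PIL import Image

def make_nine_grid(images, tile=512):
    """Paste nine stills into one 3x3 grid image, row-major order."""
    assert len(images) == 9, "exactly nine panels required"
    grid = Image.new("RGB", (tile * 3, tile * 3))
    for i, im in enumerate(images):
        row, col = divmod(i, 3)  # panel 0 is top-left, panel 8 bottom-right
        grid.paste(im.resize((tile, tile)), (col * tile, row * tile))
    return grid

# Usage: nine solid-color placeholders stand in for real stills.
panels = [Image.new("RGB", (640, 360), (25 * i, 80, 200 - 20 * i)) for i in range(9)]
grid = make_nine_grid(panels, tile=128)
grid.save("nine_grid.png")
```

Resizing every panel to the same square tile keeps the grid uniform; swap in your own aspect-ratio handling if your stills should not be stretched.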

| Input Grid | Prompt | Output Video |
| --- | --- | --- |
| 9-grid input | make a video with the image | output video |

Subject + Voice Reference Cloning

Provide a reference image and a short voice sample. Wan 2.7 replicates the subject's visual identity - face, body, clothing - and vocal characteristics in the generated video. The output stays consistent across multiple clips without reshooting. This works for spokesperson videos, brand mascot content, and creator series. An influencer can generate new content with their likeness and voice without being on camera.

| Input Image | Prompt | Output Video |
| --- | --- | --- |
| input image | The camera sways slightly with the waves and zooms in a little on the people in the image. The two people in the picture are having a conversation, which needs to be related to discussing beluga whales. The beluga whales are also happily playing and swimming in the water. | output video |

Instruction-Based Video Editing

Upload an existing video clip and type what you want changed. Wan 2.7 applies the edit - swap the background, change the outfit color, modify lighting, alter the character's action - while keeping the rest of the clip intact. You keep the structure and timing of the original video and adjust specific elements by text command.

| Original Video | Edit Instruction | Edited Output |
| --- | --- | --- |
| original video | Change the canvas color to white. | edited output |

Video Recreation/Replication

Wan 2.7 analyzes a reference video and recreates it with a new character, style, or environment - while preserving the original motion structure, pacing, and camera movement. You describe what should change; the model applies it. This works for adapting trending video concepts with your own brand assets, generating style variants of the same clip, and converting live-action footage into animation or anime.

| Reference Video | Recreation Prompt | Output Video |
| --- | --- | --- |
| reference video | Recreate the character as a cartoon superhero. Keep the movement and camera path identical. | output video |

Why Use Wan 2.7 on Dzine AI?

Advanced AI Models

Advanced AI models such as Wan 2.7, Veo 3.1, and Hailuo 2.3 deliver high-quality, varied video content that makes your work truly distinctive.

One Click to Generate

Produce polished visuals instantly, apply styles automatically, or refine designs without pro skills.

Free Trial

Access core AI tools for free, experiment with creations freely, or test features before committing.

High Quality Results Export

Export high-res visuals in multiple formats, preserve every fine detail, or use them for print and digital.

Watermark Free

Get clean, watermark-free outputs, use visuals for commercial campaigns, or share them seamlessly.

Online Platform, No Downloading

Create on the web directly, skip software installation, or design anytime, anywhere, on any device.

More Dzine Tools to Enhance Your Creations

What Our Users Said

First and Last Frame Changed My Workflow Completely

I used to spend hours in post trying to get a video to end on the exact shot I needed. With Wan 2.7 on Dzine, I just upload my target ending frame and the model builds toward it. The output is clean, the subject stays consistent, and the motion between the two frames feels natural. It is exactly what I needed for product reveal content.

Priya Menon, E-commerce Content Strategist

The 9-Grid Feature Saved My Agency Days of Work

We were building a multi-scene ad campaign for a skincare client. Normally that means generating scenes one by one, then editing them together. With Wan 2.7, I laid out the shots in a 3×3 grid, wrote one prompt, and got a coherent campaign video back. The transitions were smooth and the brand colors stayed consistent across all nine panels. That alone made it worth switching.

Daniel Osei, Creative Director, Digital Agency

Subject Cloning Means I Can Scale Content Without Reshooting

I have a character I use across my YouTube channel. With the subject and voice reference cloning in Wan 2.7, I generated three new videos with my character last week without touching a camera. The voice matched, the face matched, and the overall quality was consistent enough to publish directly. The Wan 2.7 AI video generator is the first tool that made this kind of content scaling actually practical.

Leo Hartmann, YouTube Creator & Video Producer

FAQ

What is Wan 2.7?

Wan 2.7 is Alibaba's latest open-source AI video generation model, available on Dzine. It generates 1080P videos up to 15 seconds with first/last-frame control, 9-grid image-to-video, subject and voice cloning, instruction-based editing, and video recreation.

How to use Wan 2.7 on Dzine for free?

Dzine offers a 7-day free trial with full access to the Wan 2.7 AI video generator. No credit card required. Sign up, upload an image or write a prompt, select Wan 2.7, and generate your first video in seconds.

What are the key features of Wan 2.7?

The key features of Wan 2.7 include first/last-frame video generation, 9-grid image-to-video, subject and voice reference cloning, instruction-based video editing, and video recreation. All outputs are rendered at 1080P, up to 15 seconds long.

How does first and last frame video generation work in Wan 2.7?

Upload two images - one for the opening frame and one for the closing frame. The Wan 2.7 AI video generator builds the motion and scene progression between them with consistent subject identity. Works well for product reveals, narrative shorts, and any content where the final frame must match a specific target.

Can Wan 2.7 clone a subject's voice and appearance?

Yes. Provide a reference image and a short audio sample. Wan 2.7 replicates the subject's face, body, and clothing style, along with their voice timbre, in the generated video. Works for real people, illustrated characters, and branded mascots.

What is the 9-grid image-to-video feature in Wan 2.7?

Upload a 3×3 grid of still images and Wan 2.7 turns them into a single continuous video. Each panel becomes a distinct scene, connected by smooth transitions and consistent visual style. Designed for storyboard workflows, multi-scene ads, and product catalogs.

How is Wan 2.7 different from Wan 2.6?

Wan 2.7 adds first/last-frame generation, 9-grid image-to-video, subject and voice cloning, and improved instruction-based editing - none of which are in Wan 2.6. Visual quality, audio synthesis, and temporal consistency are also higher across the board.