Alibaba releases Qwen3.5-Omni and Wan2.7-Image to speed up coding and visual design workflows

Alibaba has released two AI models that turn a handwritten sketch and a spoken description into a working prototype with matching visuals in hours. Qwen3.5-Omni writes the code; Wan2.7-Image generates brand-accurate assets.

Categorized in: AI News, Creatives
Published on: Apr 03, 2026

Alibaba's AI Duo Collapses the Gap Between Sketch and Working Prototype

Alibaba released two AI models this week that work together to move creators from initial concept to functional prototype in hours rather than days. Qwen3.5-Omni handles code generation from sketches and voice descriptions. Wan2.7-Image populates those interfaces with brand-accurate visuals.

The pairing addresses a real friction point in creative work: stitching together outputs from different tools that don't match in style, color, or intent.

Coding by showing, not typing

Qwen3.5-Omni processes text, audio, images, and video. Its practical advantage for developers and designers is "Audio-Visual Vibe Coding" - the ability to generate working code from a handwritten sketch and spoken description.

A designer sketches a layout on paper. They describe what each element does. The model outputs functional HTML and JavaScript for a website, app, or mini-game interface.

This eliminates the boilerplate phase where developers write basic structure before adding logic. The barrier to moving from idea to clickable prototype drops significantly.
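For a rough sense of what driving this from code might look like, here is a minimal TypeScript sketch that sends a sketch image and a voice note to a multimodal chat endpoint and asks for a single-file prototype back. The endpoint URL, model ID, and payload shape are assumptions modeled on the common OpenAI-compatible convention, not confirmed details of Alibaba's API.

```typescript
// Sketch: send a layout drawing plus a voice note to a multimodal model
// and ask for a working HTML/JS prototype back.
// ASSUMPTIONS: the endpoint URL, model ID ("qwen3.5-omni"), and payload
// fields follow the OpenAI-compatible convention; check Alibaba's docs
// for the real values before relying on any of this.

import { readFileSync } from "node:fs";

const API_URL =
  "https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions"; // assumed
const API_KEY = process.env.DASHSCOPE_API_KEY;

// Base64-encode the local inputs so they can travel inline in the request.
const sketchB64 = readFileSync("layout-sketch.png").toString("base64");
const voiceB64 = readFileSync("description.wav").toString("base64");

export async function sketchToPrototype(): Promise<string> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "qwen3.5-omni", // assumed model ID
      messages: [
        {
          role: "user",
          content: [
            {
              type: "text",
              text: "Generate a single-file HTML/JS prototype of the sketched layout. The audio explains what each element should do.",
            },
            { type: "image_url", image_url: { url: `data:image/png;base64,${sketchB64}` } },
            { type: "input_audio", input_audio: { data: voiceB64, format: "wav" } },
          ],
        },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content; // assumed response shape: the generated HTML/JS
}
```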

Visual assets that match brand standards

A working interface means nothing without visuals. Wan2.7-Image generates images with direct control over color, style, and character details - constraints that older image models treated as suggestions.

Creators input specific hex codes and color proportions into prompts. The model respects those constraints. This solves a chronic problem with AI image generation: outputs that drift away from corporate brand guidelines or require dozens of regenerations to land on the right shade.

The model also lets users specify physical attributes - eye shape, bone structure, facial features - to create consistent characters across a project. It generates up to 12 images at once, making it practical for building complete asset libraries rather than individual graphics.
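A request for such a batch might look like the following TypeScript sketch. The endpoint, model ID, and field names are placeholders for illustration; the hex codes and percentages show the kind of constraint the prompt carries.

```typescript
// Sketch: request a batch of brand-constrained images in one call.
// ASSUMPTIONS: "wan2.7-image" as the model ID, the placeholder endpoint,
// and the request fields (prompt, n, size) are illustrative only.

const IMAGE_API_URL = "https://example.aliyuncs.com/v1/images/generations"; // placeholder

// Hex codes, color proportions, and character attributes go directly
// into the prompt, per the constraint style described above.
const prompt = [
  "Product hero illustrations for a fintech app.",
  "Primary color #0B3D91 (60%), accent #F4A300 (25%), neutral #F5F5F5 (15%).",
  "Recurring mascot: round face, large almond-shaped eyes, short curly hair.",
].join(" ");

export async function generateAssets() {
  const res = await fetch(IMAGE_API_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "wan2.7-image", // assumed model ID
      prompt,
      n: 12, // the per-call maximum the article describes
      size: "1024x1024",
    }),
  });
  return res.json(); // response shape unknown; assume a list of image URLs
}
```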

From sketch to shipped, faster

The two tools create a closed loop. A designer with a handwritten sketch, a verbal description, and brand color codes can produce a working prototype with matching visuals without switching between multiple tools or waiting for design revisions.
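Chained together, that loop could look something like the sketch below, which reuses the two hypothetical helpers above and assumes the generated HTML marks its image slots with numbered placeholders. None of this reflects a documented SDK; it only illustrates the handoff.

```typescript
// Sketch of the closed loop: generate the prototype code first, then fill
// its image slots with brand-constrained assets. Everything here builds on
// the assumed helpers above and is illustrative only.

async function buildPrototype(): Promise<string> {
  const html = await sketchToPrototype(); // Qwen3.5-Omni: sketch + voice -> HTML/JS
  const assets = await generateAssets();  // Wan2.7-Image: brand-accurate visuals

  // Naive wiring: swap assumed placeholders like {{asset-0}} for real URLs,
  // assuming an OpenAI-style response with a data array of { url } objects.
  let page = html;
  assets.data.forEach((img: { url: string }, i: number) => {
    page = page.replaceAll(`{{asset-${i}}}`, img.url);
  });
  return page;
}
```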

For e-commerce teams building campaign visuals, architects generating renderings, or game developers populating interfaces, the time saved on iteration compounds across projects.


