Higgsfield's Soul 2 Tackles AI's Plastic Look Problem
Higgsfield released Soul 2, an updated photo generation model designed to produce images that look intentional rather than synthetically generic. The model was built by a team of engineers, art directors, stylists, and photographers with luxury fashion and brand backgrounds.
The shift addresses a persistent problem in generative imagery: the recognizable artificiality that marks most AI-produced photos. Soul 2 aims to eliminate that quality by encoding aesthetic choices from professional photographers and art directors into the model's training.
How It Works
Soul 2 uses preference optimization based on feedback from art directors, photographers, and concept artists. The system learned from fashion history and contemporary cultural references, allowing it to render diverse features and hair textures with specificity rather than defaults.
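Higgsfield has not published the exact training objective, but preference optimization over expert feedback is commonly implemented as a pairwise loss such as Direct Preference Optimization (DPO): reviewers pick the better of two generated images, and the model is nudged to assign higher likelihood to the preferred one relative to a frozen reference model. A minimal sketch of that loss, with hypothetical function and argument names, looks like this:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style loss for one preference pair (illustrative, not Higgsfield's code).

    Inputs are log-probabilities that the trainable policy and a frozen
    reference model assign to the preferred ("chosen") and dispreferred
    ("rejected") outputs. The loss falls as the policy favors the chosen
    output more strongly than the reference does.
    """
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    # Negative log-sigmoid of the scaled margin: standard pairwise preference loss.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no preference signal (all log-probs equal) the loss sits at log(2).
baseline = dpo_loss(0.0, 0.0, 0.0, 0.0)
# When the policy prefers the chosen image more than the reference does,
# the margin is positive and the loss drops below that baseline.
improved = dpo_loss(-1.0, -3.0, -2.0, -2.0)
print(improved < baseline)  # True
```

In practice this runs over large batches of reviewer-ranked pairs with a deep learning framework; the scalar version above just shows the shape of the objective.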
A feature called Soul ID lets users train the model on specific faces using just 20 photos. The system then generates campaign-ready images of those faces in different settings and conditions.
Users can apply specialized photography presets and camera references to control the exact look of their work. This removes traditional production constraints like location scouting, casting, and travel costs.
Broader Platform Strategy
Higgsfield positions itself as a workflow hub rather than a single-tool solution. The platform integrates OpenAI's Sora, Google's Veo, and other third-party models into one environment, letting creative teams pick the right engine for each task.
Soul 2 builds on the original Soul model, which the company says hundreds of thousands of users have adopted as a daily creative tool.
For creatives looking to expand their skills with AI tools, AI Design Courses and Generative Art Courses can help them understand how these systems work and integrate them into their workflow.