Miro Mitev's AI-Run Fund Delivered 400%, and He Says Humans Matter Most

At SmartWealth, Miro Mitev lets AI make the trades while people build and guard the models. The edge is clean data, strict validation, and zero gut overrides.

Categorized in: AI News, Finance, Management
Published on: Dec 27, 2025

AI-Run Investing, Human-Centered: How Miro Mitev Builds Trustworthy Systems

In 1997, while most students were discovering the internet, Miro Mitev was learning neural networks at the Vienna University of Economics and Business. Decades later, he leads SmartWealth Asset Management, where a network of AI systems makes the calls with no human override.

He's clear about one thing: humans still matter most. Not for gut calls, but for designing, feeding, and maintaining the models that do the work.

From Lecture Hall to Live Markets

Mitev saw early that neural networks could forecast financial outcomes. That sent him into a 25-year career building and testing models for banks and tech firms like Siemens.

SmartWealth is the product of that experience. Its latest fund, IVAC, is targeting $2 billion in assets under management with a 14-15% annualized return objective.

How the System Decides

At SmartWealth, decisions are generated by a connected stack of AI systems. No human steps in to approve or veto daily outputs.

The rule is simple: trust the model you built. If you intervene after deployment, you're introducing your own bias right when the model needs clean execution.

Where Humans Matter

Human expertise sets the stage. Teams choose the training data, select features, set parameters, and keep the pipeline clean. That's the leverage point.

Once live, the work shifts to feeding reliable, current data and checking for errors in inputs or calculations. Override the model's signal because it "feels wrong" and you'll usually regret it a few months later.
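
In practice, that input-checking work is mundane but decisive. The sketch below is a minimal illustration, assuming a daily price feed held in a pandas DataFrame; the column name, thresholds, and checks are hypothetical stand-ins, not SmartWealth's actual pipeline.

```python
import pandas as pd

def check_inputs(prices: pd.DataFrame, max_staleness_days: int = 2,
                 max_abs_return: float = 0.25) -> list[str]:
    """Run basic sanity checks on a daily price feed before it reaches the model.

    Assumes a DataFrame indexed by date with a 'close' column; names and
    thresholds are illustrative only.
    """
    issues = []

    # Missing values anywhere in the feed
    if prices["close"].isna().any():
        issues.append("missing close prices")

    # Gaps: business days with no data (holiday calendars ignored for brevity)
    expected = pd.date_range(prices.index.min(), prices.index.max(), freq="B")
    missing_days = expected.difference(prices.index)
    if len(missing_days) > 0:
        issues.append(f"{len(missing_days)} missing business days")

    # Stale data: the latest observation is too old to trade on
    staleness = (pd.Timestamp.today().normalize() - prices.index.max()).days
    if staleness > max_staleness_days:
        issues.append(f"feed is {staleness} days stale")

    # Outlier moves that usually signal a bad tick rather than a real event
    daily_returns = prices["close"].pct_change().dropna()
    if (daily_returns.abs() > max_abs_return).any():
        issues.append("implausible one-day move detected")

    return issues
```

When a check fails, the fix happens upstream in the data, not by second-guessing the model's output.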

Performance and Time Horizon

SmartWealth reports a 10-year gain of 407.63% through Nov. 1, 2025, versus 145.34% for its industry benchmark over the same period, based on a firm-provided chart. The edge isn't clairvoyance; it's consistency.

Mitev's view: it's not possible to know what the market will do in a year. His systems focus on a shorter, roughly one-month window where signal strength holds up better, then compound decisions over time.
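
To see why a short window can still serve a long-term objective, a back-of-the-envelope compounding sketch helps; the monthly figure below is purely hypothetical, not a reported SmartWealth number.

```python
# A modest average monthly edge, compounded, lands in the low teens annually.
monthly_return = 0.011          # hypothetical 1.1% average monthly gain
annualized = (1 + monthly_return) ** 12 - 1
print(f"{annualized:.1%}")      # ~14.0%, in the range of a 14-15% annual objective
```

Small, repeatable monthly edges, compounded, are what the annual objective rests on.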

The Real Risk: Emotion

Markets are driven by optimism, pessimism, and speculation. That's human. Models aren't immune to error, but they don't feel fear or FOMO.

Take the emotion out of execution and results tend to improve, provided the inputs, design, and validation are strong.

Hallucinations, Overfitting, and Other Model Traps

AI can generate false results. In markets, that usually ties back to overfitting, poor data, or a misspecified model. Overfitting happens when the system learns noise instead of signal: patterns that look useful but don't hold up out of sample.

Antidotes: rigorous design, strict validation, live environment testing, and controlled iteration. No single safeguard is enough; the combination is what keeps the model honest.
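
As one concrete way to picture the validation piece, the sketch below runs a walk-forward test: fit only on past data, score only on the block that follows, then repeat. The Ridge model, feature matrix, and window sizes are generic placeholders built on scikit-learn, not the firm's actual models.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def walk_forward_scores(X: np.ndarray, y: np.ndarray,
                        train_window: int = 252,   # ~1 trading year of history
                        test_window: int = 21      # ~21 trading days, a one-month window
                        ) -> list[float]:
    """Fit on a rolling window of past observations, score on the block that follows.

    Windows, model, and features are illustrative only.
    """
    scores = []
    start = 0
    while start + train_window + test_window <= len(y):
        train = slice(start, start + train_window)
        test = slice(start + train_window, start + train_window + test_window)

        model = Ridge(alpha=1.0).fit(X[train], y[train])
        scores.append(r2_score(y[test], model.predict(X[test])))

        start += test_window  # roll the window forward, never look ahead
    return scores
```

Out-of-sample scores that collapse relative to the in-sample fit are the classic symptom of overfitting; live shadow runs then confirm whatever survives.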

Operator's Checklist for Finance Leaders

  • Define the horizon: Align models to the window where your signals are statistically reliable. Shorter windows often beat vague, long-term guesses.
  • Own the data pipeline: Validate sources, monitor drift, and automate checks for anomalies and gaps.
  • Separate build from run: Build and tune models with care. Once live, avoid overrides unless you find a proven error in inputs or code.
  • Fight overfitting early: Use cross-validation, out-of-sample tests, and live shadow runs before full deployment.
  • Measure in the real world: Track hit rates, drawdowns, turnover, and capacity limits. Optimize for the net, not just the backtest (see the sketch after this list).
  • Close the loop: Add new data, retire stale features, and document every change. The process compounds.
  • Keep humans where they win: Framing the problem, choosing constraints, and ensuring governance-not micromanaging daily signals.
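
For the measurement item above, two of the simplest real-world numbers are hit rate and maximum drawdown. A minimal sketch, assuming a series of realized per-period returns net of costs (the example figures are made up):

```python
import numpy as np

def hit_rate(returns: np.ndarray) -> float:
    """Fraction of periods with a positive net return."""
    return float(np.mean(returns > 0))

def max_drawdown(returns: np.ndarray) -> float:
    """Worst peak-to-trough decline of the compounded equity curve."""
    equity = np.cumprod(1 + returns)
    running_peak = np.maximum.accumulate(equity)
    drawdowns = equity / running_peak - 1
    return float(drawdowns.min())

# Hypothetical monthly net returns
monthly = np.array([0.02, -0.01, 0.015, 0.03, -0.04, 0.01])
print(hit_rate(monthly), max_drawdown(monthly))
```

Tracked on live, net-of-cost results, these numbers keep the loop honest in a way a backtest alone cannot.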

Why In-House Matters

According to Mitev, the edge comes from years of iteration, tight feedback loops, and direct control over data and models. Outsourcing the core won't give you that.

If you want differentiation, build the capability internally and make your process your moat.


Bottom line: let models make the calls, and let humans make the models better. That split creates compounding advantages without adding emotion to the trade.

