Published: 14:35, 13 Feb 2026 GMT
Actor sues after Albania's AI minister uses her face and voice
Anila Bisha, a well-known Albanian actor, says her likeness and voice were used to create "Diella," the government's AI minister, without her consent. She has filed a formal request in an Albanian administrative court to stop all use of her image and voice in the project.
As reported by The Independent, Albania introduced what was described as the first fully AI-generated government minister last year. Prime Minister Edi Rama said Diella's role is to keep public procurement tenders clean of corruption and noted the AI figure "gave birth" to 80 digital children to assist each member of parliament.
Bisha's lawyer, Aranit Roshi, called the filing the first legal step to prevent misuse of her image. Bisha told the Associated Press: "It was surprising when I heard the prime minister declare it. I asked how this could happen without my knowledge, without anyone asking me if I wanted my image to be used or not."
She previously signed a contract allowing her face and voice on the government's e-Albania platform, but says she was never told they'd be combined with AI. She also believes her image is now being used in a political context, with the AI figure often dressed in traditional Albanian clothing.
This isn't an isolated issue. We've seen unexpected AI lookalikes surface online, such as a case where a jailed individual appeared to be "modeling" for a fast-fashion brand. For public bodies, that's a warning: synthetic media can cross legal and ethical lines quickly if consent and purpose aren't nailed down.
Why this matters for government teams
Using AI personas that resemble real people, especially public-facing "ministers", creates legal risk (image rights, voice rights, data protection), opens procurement to challenges, and can erode public trust. The fix is operational discipline: clear consent, transparent pipelines, and accountable human oversight.
Action steps for compliant, trusted AI deployments
- Get explicit, purpose-bound consent for likeness and voice. Separate clauses for AI/synthetic use, distribution, political or governmental communications, and revocation.
- Maintain a consent registry. Track who approved what, where it's used, expiry dates, and a rapid takedown path if consent is withdrawn (see the first sketch after this list).
- Lock rights in contracts. Require vendors to warrant rights clearance for all assets, ban scraping without rights, include indemnities, and allow audits.
- Label synthetic content. Add visible notices and technical markers (watermarks, metadata) so the public knows when they're interacting with AI (see the second sketch after this list).
- Keep a human accountable. Don't present AI as an office holder. Clarify who owns decisions, approvals, and communications.
- Run pre-deployment risk checks. Conduct legal review (image rights, data protection), security tests, and bias testing. Document the outcome and mitigation.
- Publish model provenance. Share which model is used, data sources (at a high level), guardrails, and limits. Offer a non-AI channel for services.
- Set incident response. If a likeness dispute surfaces, define who pauses the system, who reviews, and timelines for removal and public notice.
- Train staff. Ensure communications, legal, procurement, and IT teams know how to source licensed assets and manage AI disclosures.
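To make the consent registry idea concrete, here's a minimal sketch in Python. It's illustrative only: the record fields, purpose names, and the is_use_permitted check are assumptions for this article, not a description of any real government system or of the Albanian case.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ConsentRecord:
    # Who granted consent, and for which asset (e.g. a voice recording or face scan).
    subject: str
    asset: str
    # Purposes explicitly approved, e.g. {"e-albania-platform", "synthetic-ai-persona"}.
    approved_purposes: set[str] = field(default_factory=set)
    expires: date | None = None
    revoked: bool = False

def is_use_permitted(record: ConsentRecord, purpose: str, on: date) -> bool:
    """A proposed use is covered only if consent is live, unexpired,
    and that specific purpose was approved."""
    if record.revoked:
        return False
    if record.expires is not None and on > record.expires:
        return False
    return purpose in record.approved_purposes

# Example: consent for a platform recording does not imply consent for an AI persona.
record = ConsentRecord(
    subject="performer-001",
    asset="voice-recording",
    approved_purposes={"e-albania-platform"},
    expires=date(2026, 12, 31),
)
assert is_use_permitted(record, "e-albania-platform", date(2026, 2, 13))
assert not is_use_permitted(record, "synthetic-ai-persona", date(2026, 2, 13))
```

The point is the separation of purposes: approval for one use never implies approval for another, which is exactly the gap at issue in the Diella dispute.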
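On labeling, a simple machine-readable disclosure alongside the visible notice goes a long way. The sketch below writes a JSON "sidecar" file next to a published asset; the field names and paths are hypothetical, and formal provenance standards (such as C2PA embedded credentials) would be the fuller solution.

```python
import json
from datetime import datetime, timezone

def write_disclosure_sidecar(asset_path: str, model_name: str, consent_ref: str) -> str:
    """Write a JSON sidecar next to a published asset declaring it is AI-generated.
    Field names are illustrative, not a formal standard."""
    sidecar_path = asset_path + ".disclosure.json"
    disclosure = {
        "synthetic_content": True,
        "generator": model_name,
        "consent_reference": consent_ref,  # links back to the consent registry entry
        "published_at": datetime.now(timezone.utc).isoformat(),
        "notice": "This content was generated with AI.",
    }
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(disclosure, f, indent=2)
    return sidecar_path

# Hypothetical usage:
# write_disclosure_sidecar("briefing_video.mp4", "internal-avatar-model", "consent/performer-001")
```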
What to watch next
The court's decision could influence how public bodies in the region approach AI-generated personas and image rights. Expect more scrutiny on consent, political use of synthetic media, and transparency in AI-assisted governance.
Sources referenced: The Independent, Associated Press.
Looking to upskill your team on practical AI use and policy? See role-based options here: Complete AI Training - Courses by Job.