Artists dump X as new AI image-editing tool triggers backlash
Digital artists are walking away from X (formerly Twitter) after the platform rolled out an AI-driven "Edit Image" feature tied to Grok. The core issue: anyone can grab a public image, prompt the chatbot, and spit out altered versions, then repost them or reply to the original creator.
For working creatives, that's a direct threat to consent, credit, and compensation. Reports also point to harassment and watermark removal, pushing trust on the platform to a breaking point.
What changed on X
The new "Edit Image" tab lets users select any public image and issue text commands to Grok to modify it. The generated version can then be reposted, or posted as a reply to the original.
Creators say this allows people to alter their work without permission and misrepresent the artist's intention. There are also reports of users removing watermarks on AI-edited images, further eroding control over provenance and authorship.
High-profile exits and public statements
One of the most visible critics is Mu-jik Park, the South Korean artist known as Boichi (Dr. Stone, Sun-Ken Rock). He announced he will pause publishing comics and illustrations on X. "It is with a heavy and broken heart that I write these words. For the time being, I will pause the publication of my comics and illustrations on X."
He said he believes in AI's future but cannot accept his work being used or exploited without consent or fair compensation. He asked fans to follow him on Instagram for new posts and noted he will still use X for updates, hoping normalcy returns so he can resume sharing work there.
The flashpoint: Iomaya's commissioned piece was hijacked
After artist Iomaya posted a paid commission that was flagged as adult content and downranked, an anonymous user grabbed the image and used Grok to alter it. The modified version had the word "Luddite" written across the character's abdomen.
The user described the piece as low-quality AI content and, per reports, stripped the watermark. The thread went viral with more than five million views and over 15,000 reposts, drawing strong support for the artist. The account behind the manipulations, El3v3nDimesion, appeared to target multiple creatives and has since been deactivated.
Why this matters for creatives
Consent isn't optional. When anyone can edit public posts and send them back into the same feed, the lines between commentary, collaboration, and exploitation blur fast.
This is about control, safety, and the ability to make a living from your work. Without strong provenance and platform-level guardrails, abuse becomes easy and enforcement becomes hard.
Practical steps to protect your work right now
- Post previews, keep finals elsewhere: Share lower-res images on social platforms; reserve high-res files for clients, stores, or your own site.
- State terms clearly: Put license terms and "No edits, no training" disclaimers in your profile and post captions.
- Use visible + metadata-based provenance: Combine visible marks with content credentials where possible. Learn about content authenticity standards like the Content Authenticity Initiative and C2PA.
- Monitor and document: Track mentions, save evidence, and file reports quickly. Keep timestamps, URLs, and copies of offending posts.
- Diversify audience touchpoints: Build direct channels (newsletter, website, private community) so a single platform doesn't own your reach.
- Set an AI policy: If you use AI, explain what you allow and don't allow. Be explicit about training rights, edits, and attribution.
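The "monitor and document" step above can be partly automated. Below is a minimal illustrative sketch in Python (standard library only) of an evidence log: each incident gets a UTC timestamp, the offending URL, a note, and optionally a SHA-256 hash of the archived image bytes, so you can later show that the copy you saved is the one you reported. The function name, file format (JSON Lines), and fields are my own choices for illustration, not any platform's API.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def log_incident(log_path, url, note, image_bytes=None):
    """Append a timestamped evidence record to a JSON Lines file.

    If the raw image bytes are provided, a SHA-256 digest is stored
    alongside the record to tie the archived copy to the report.
    """
    record = {
        "url": url,
        "note": note,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    if image_bytes is not None:
        record["sha256"] = hashlib.sha256(image_bytes).hexdigest()
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record an offending repost (hypothetical URL and bytes)
entry = log_incident(
    "evidence.jsonl",
    "https://example.com/status/123",
    "Watermark removed, reposted without credit",
    image_bytes=b"fake-image-bytes",
)
```

Appending one JSON object per line keeps the log easy to back up, diff, and hand over with a takedown request.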
What platforms should do next
Consent-first controls: let creators disable AI edits on their posts and enforce it. Preserve and surface provenance metadata. Make watermark tampering a policy violation with clear, fast penalties.
Add creator protections by default: opt-out toggles, blocklists, and reporting flows that acknowledge the difference between parody and abuse.
Want ethical, creator-first AI workflows?
If you're exploring AI in your practice and want practical, creator-friendly resources, see this curated roundup for visual creators: AI tools for generative art. Build a process that keeps consent, attribution, and revenue at the center.
Bottom line
AI-assisted editing isn't the problem by itself. Lack of consent and weak safeguards are. Until platforms give artists real control, and enforce it, creatives will keep moving their best work elsewhere.