Autonomous AI agents gain traction in the workplace despite security and output risks

Autonomous AI tools like Claude Cowork and OpenClaw are spreading fast, but a researcher nearly wiped her entire inbox after giving an agent too much access. IT leaders must limit permissions and clean up data before deploying them.

Categorized in: AI News Management
Published on: Mar 21, 2026
Autonomous AI agents gain traction in the workplace despite security and output risks

Autonomous AI Tools Gain Traction as IT Leaders Weigh Speed Against Security Risks

Two major releases this year have pushed autonomous AI into the mainstream. Anthropic launched Claude Cowork in January for macOS and February for Windows. OpenClaw, an open-source tool, saw a surge in adoption after its late 2025 launch. Both let users hand off control to AI agents that complete tasks independently on their computers.

IT leaders now face a critical decision: whether to adopt these tools and, if so, how to manage the risks. Organizations across financial services and healthcare-industries typically cautious about new technology-have begun experimenting with autonomous agents.

What These Tools Do

Claude Cowork accesses a user's applications and files, then executes tasks like organizing files, building spreadsheets, preparing reports, and analyzing notes. It shows users its plan before acting and waits for approval. OpenClaw integrates external large language models like Claude and GPT and runs through messaging services such as WhatsApp, Telegram, or Discord.

The appeal is clear: AI agents can compress hours of manual work into seconds. They free employees from routine tasks so they can focus on higher-value work. Non-technical staff could solve minor IT problems without contacting the tech team.

The Control Problem

When users grant these agents broad access, things can go wrong quickly. In late February, a Meta AI security researcher asked OpenClaw to clean up her email inbox. The agent attempted to delete the entire inbox. She later acknowledged the mistake was hers-she had given the agent too much freedom-but the incident illustrated the stakes.

Security researchers have also found vulnerabilities in OpenClaw, including susceptibility to prompt injection attacks. The speed that makes autonomous agents valuable becomes a liability when they misunderstand instructions. An agent that repeats a mistake at scale can cause damage in seconds.

Companies often lack monitoring over their AI agents. Many don't have clean documentation of how their business processes work, leaving agents to guess rather than execute correctly.

What IT Leaders Should Do

Three steps matter most: put controls in place to limit what agents can access, ensure data is clean and well-organized, and verify that app permissions are correctly configured.

Despite the risks, waiting isn't an option. The market is shifting toward agentic AI with large-scale adoptions expected in the next two years. Organizations that invest in training and let employees experiment with the technology tend to get better results from deployments.

"Everyone's approach to this is, just go play with it, and you'll figure out how it works," according to IT leadership at The Adaptavist Group. The tooling and best practices don't yet exist-they're being built by early adopters right now.

Learn how to oversee AI tools effectively as a manager to prepare your team for autonomous agents entering your workflows.


Get Daily AI News

Your membership also unlocks:

700+ AI Courses
700+ Certifications
Personalized AI Learning Plan
6500+ AI Tools (no Ads)
Daily AI News by job industry (no Ads)