How a Personal AI Agent Becomes Invisible Infrastructure
Radek Sienkiewicz of VelvetShark built a personal AI agent that started as a solution to one problem and evolved into a system managing his notes, calendar, reminders, files, and automations. The progression shows how to expand an AI tool's scope without breaking trust or reliability.
Sienkiewicz did not plan to build an operating system. He started with a specific recurring problem, then added capabilities incrementally. This approach of granting the agent access to more data and functions in small steps let him verify reliability before expanding further.
The Knowledge Base Drives Usefulness
The agent's value depends on what it can access. Sienkiewicz maintains roughly 3,000 notes in Obsidian covering daily journals, project plans, call notes, drafts, and personal context. The agent synthesizes this information to surface relevant details and make connections a human might miss.
A well-maintained knowledge base lets the agent move beyond generic responses. It can flag a missed Netflix payment, remind about an upcoming meeting, or draft email replies by understanding your priorities and history.
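Since Obsidian stores notes as plain Markdown files, retrieval can start as something very simple. The sketch below is a minimal illustration of ranking notes by keyword overlap; the function name and scoring scheme are assumptions for illustration, not details from Sienkiewicz's system:

```python
from pathlib import Path

def find_relevant_notes(vault_dir: str, query: str, top_k: int = 5):
    """Rank Markdown notes by naive keyword overlap with the query.

    A stand-in for whatever retrieval the real agent uses; Obsidian
    keeps notes as plain Markdown, so scanning files is enough here.
    """
    terms = {t.lower() for t in query.split()}
    scored = []
    for note in Path(vault_dir).rglob("*.md"):
        words = note.read_text(encoding="utf-8").lower().split()
        # Score = how many query terms appear anywhere in the note.
        score = sum(1 for t in terms if t in words)
        if score:
            scored.append((score, note.name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:top_k]]
```

A real system would likely use embeddings or Obsidian's own search, but the shape is the same: the agent's answers are only as good as what this layer can surface.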
Five Core Functions
Sienkiewicz identified five core functions that describe how the agent operates:
- Ambient Operations: Keeping systems running, updated, and recoverable.
- Attention Filtering: Surfacing priorities and catching what might slip through.
- Execution Support: Drafting, synthesizing, and preparing work to reduce friction.
- State and Memory: Maintaining context instead of starting fresh each interaction.
- Trust and Control: Determining what runs automatically versus what requires approval.
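The five functions above can be read as a routing layer: each incoming task belongs to one of them, and anything unclassified falls back to a human decision. A minimal sketch (the task names and routing table are hypothetical, not from the original system):

```python
from enum import Enum, auto

class AgentFunction(Enum):
    AMBIENT_OPS = auto()        # keep systems running and recoverable
    ATTENTION_FILTER = auto()   # surface priorities, catch slips
    EXECUTION_SUPPORT = auto()  # draft, synthesize, prepare work
    STATE_MEMORY = auto()       # maintain context across interactions
    TRUST_CONTROL = auto()      # decide auto-run vs. human approval

# Hypothetical routing table: which function handles each task type.
ROUTING = {
    "backup_check": AgentFunction.AMBIENT_OPS,
    "daily_digest": AgentFunction.ATTENTION_FILTER,
    "draft_reply": AgentFunction.EXECUTION_SUPPORT,
}

def route(task: str) -> AgentFunction:
    # Unknown tasks default to trust/control: a human decides.
    return ROUTING.get(task, AgentFunction.TRUST_CONTROL)
```

The default matters: when the agent meets something it has no rule for, it escalates rather than guesses.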
Scaling Brings Real Risks
As the system grows, operational hazards emerge. Bad memory compounds errors. Brittle automations fail unpredictably. Noisy notes create noisy associations. Weak boundaries matter more as the agent's reach increases.
Sienkiewicz stressed the importance of keeping the system inspectable: you need to understand what it's doing and why. Separating judgment calls (what requires a human decision) from predictable execution (what the agent handles alone) becomes critical.
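That split can be sketched as an approval gate with an audit trail. Everything here is an assumption for illustration, an allowlist of predictable actions, a callback for human approval, and a log that keeps every decision inspectable:

```python
from datetime import datetime, timezone

# Hypothetical allowlist: actions predictable enough to run unattended.
AUTO_APPROVED = {"refresh_calendar", "rotate_backups"}

audit_log: list = []

def execute(action, run, ask_human):
    """Run predictable actions directly; route judgment calls to a human.

    Every decision is appended to an audit log, so you can always see
    what ran, when, and whether a human signed off.
    """
    is_auto = action in AUTO_APPROVED
    approved = is_auto or ask_human(action)
    audit_log.append({
        "action": action,
        "auto": is_auto,
        "approved": approved,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return run(action) if approved else None
```

The allowlist grows as trust does, which mirrors the incremental build path described earlier: actions graduate from "ask first" to "just run" only after they have proven boringly reliable.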
The Build Path That Works
His recommended approach for product teams building similar systems:
- Start with a specific recurring pain point.
- Grow trust incrementally, not all at once.
- Build the knowledge base as you go.
- Keep the system inspectable and auditable.
- Separate judgment from predictable execution.
- Optimize for your future self, not the current moment.
The real measure of success is boring reliability. A personal agent becomes most valuable when it stops feeling novel and starts feeling like infrastructure: the kind of thing you notice only when it fails.
For product teams, this means the user of tomorrow is an optimized version of today's user, supported by an AI system that handles routine decisions while flagging exceptions that require human judgment. That's the target: not automation that replaces thinking, but automation that frees attention for what matters.