AI-Driven Development with Olivia McVicker: From Coding Assistants to Full-Lifecycle Agents
Senior Cloud Advocate Olivia McVicker breaks down how AI is moving from autocomplete helpers to teammates that touch every part of the software lifecycle. The headline isn't "AI replaces developers." It's "developers spend more time solving real problems while AI handles repeatable work."
This conversation walks through current tools, where agentic systems are going, and how teams can put guardrails in place without slowing down.
Key Takeaways
- AI coding tools build on decades of developer assistance. Think IntelliSense and static analysis with natural language on top.
- Offload predictable tasks (refactors, boilerplate, docs, tests) so engineers can focus on design, trade-offs, and hard bugs.
- Prompt engineering is now a core skill. Model choice, context, and instructions matter as much as syntax.
- AI is still software. Clear instructions and constraints are required or results will drift.
- Break the SDLC into subagents (planning → dev → test → docs). Keep the right humans in the loop at each handoff.
From Autocomplete to AI Teammates
We've moved past ghost text suggestions. AI now participates across brainstorming, planning issues, scaffolding pipelines, writing tests, and reviewing PRs. Treat it like a teammate that shares cognitive load, not a magic box.
Use assistants for fast iteration, but keep accountability with the developer. One-shot generations are fine as drafts; they should never ship as unreviewed production code.
What Changes for Developers
- Less time on rote work. More time on architecture, edge cases, performance, and integration complexity.
- New fundamentals: prompting, providing context, model selection, and tool orchestration.
- Codify "how we build" in repo-level instructions (project overview, standards, frameworks, gotchas, bug patterns to avoid).
- Develop a review mindset. Ask for citations, request diffs, compare alternatives, and validate assumptions.
Practical Setup: Your First Week Plan
- Pick your assistant and IDE integration. Enable local agent sessions and, if available, background agents for long-running tasks.
- Add an instructions file to each repo. Include stack, patterns, testing norms, security constraints, and known pitfalls.
- Wire in guardrails: linters, formatters, type checks, unit tests, and pre-commit hooks the agent must respect.
- Define a PR policy: AI can propose, humans approve. Require domain expert sign-off for risky areas.
- Start a 2-3 week pilot. Track time saved, defect rates, review load, and developer satisfaction. Adjust instructions weekly.
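The instructions file in the plan above can start small. A minimal sketch (the `.github/copilot-instructions.md` path is GitHub Copilot's convention for repo-level custom instructions; other assistants use their own locations, and every project detail below is a placeholder to replace with your own):

```markdown
# .github/copilot-instructions.md

## Overview
Payments API (example project): REST over HTTP, PostgreSQL storage.

## Stack and patterns
- TypeScript on Node 20; follow the existing layering (routes -> services -> repositories).
- Prefer small pure functions; no new global state.

## Testing norms
- Every new module gets unit tests; run `npm test` before proposing a PR.

## Security constraints
- Never hardcode secrets; read configuration from environment variables.
- Flag any new dependency explicitly in the PR description.

## Known pitfalls
- Keep date handling in UTC; past bugs came from local-time conversions.
```

Revisit this file weekly during the pilot: every recurring review comment is a candidate for a new line here.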
Agents Across the SDLC
- Planning subagent: turns product notes into clear requirements, flags ambiguities, creates issues with acceptance criteria.
- Dev subagent: scaffolds modules, applies patterns, and stays within repo rules and architectural constraints.
- Test subagent: generates unit/integration tests, mutates inputs, and checks edge cases tied to acceptance criteria.
- Docs subagent: updates READMEs, ADRs, and changelogs, and drafts onboarding notes from recent changes.
- Release subagent: prepares PR summaries, release notes, and pipeline configs, then waits for human approval.
Chain subagents with human checkpoints between each stage. No "click once, deploy to prod."
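The chain above can be pictured as a pipeline with a human gate between every handoff. A minimal sketch, not any specific framework's API: the subagents are stubbed lambdas standing in for model calls, and the stage names and approval callback are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]  # subagent: takes the prior artifact, returns a new one

def run_pipeline(
    stages: list[Stage],
    initial: str,
    approve: Callable[[str, str], bool],  # human checkpoint: (stage name, artifact) -> ok?
) -> Optional[str]:
    artifact = initial
    for stage in stages:
        artifact = stage.run(artifact)         # subagent does the repeatable work
        if not approve(stage.name, artifact):  # human gate at every handoff
            return None                        # stop the chain; nothing ships unreviewed
    return artifact

# Stubbed subagents: each would really call a model with stage-specific instructions.
stages = [
    Stage("planning", lambda notes: f"requirements<-{notes}"),
    Stage("dev",      lambda reqs:  f"code<-{reqs}"),
    Stage("test",     lambda code:  f"tests<-{code}"),
    Stage("docs",     lambda code:  f"docs<-{code}"),
]
```

In a real setup the `approve` callback is a PR review or sign-off, not a function call, but the shape is the same: a rejected artifact halts the chain rather than flowing downstream.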
Keep the Right Humans in the Loop
- For unfamiliar code areas, involve a domain expert before merge. "LGTM" from two non-experts is a trap.
- Make reviewers responsible for one thing per pass: correctness, security, performance, or style. Rotate specialties.
- Require agents to link sources or show code locations when summarizing or fixing issues.
Security, Trust, and Governance
- Ask vendors: Where does my code go? Is training opt-in? What telemetry is collected? Can we self-host or restrict regions?
- Set policy for secrets, PII, and regulated data. Enforce via scanners and CI gates the agent must pass.
- Treat AI output like third-party code: run SAST/DAST, dependency checks, and threat modeling on significant changes.
- Review vendor trust docs. For example, see the GitHub Copilot Trust Center and the NIST AI Risk Management Framework.
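A CI gate the agent must pass can be as simple as a script that fails the build on likely secrets. A deliberately naive sketch: the patterns below are illustrative, and real pipelines should use a dedicated scanner rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a dedicated scanner ships hundreds of tuned rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def scan(text: str) -> list[str]:
    """Return matched snippets so CI logs can point reviewers at them."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

# In CI: run scan() over each changed file and fail the build if it returns hits.
```

The point is less the patterns than the placement: the gate runs on every agent-proposed change, the same as for human-authored code.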
Quick Wins You Can Automate Now
- Refactors, conversions, and boilerplate generation.
- Test scaffolding with property-based cases and mutation prompts.
- PR summaries, risk highlights, and checklist validation against repo rules.
- Onboarding Q&A over the codebase, docs, and ADRs.
- Release notes and pipeline YAML stubs tied to your CI/CD stack.
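The test-scaffolding win is easiest to see with property-based cases: assert invariants over many generated inputs instead of a few hand-picked ones. A stdlib-only sketch (real projects would typically reach for a library like Hypothesis; `slugify` here is a toy function invented for illustration):

```python
import random
import string

def slugify(text: str) -> str:
    """Toy function under test: lowercase, spaces to dashes, drop other punctuation."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c in " -")
    return "-".join(cleaned.split())

def random_inputs(n: int, seed: int = 0) -> list[str]:
    rng = random.Random(seed)  # seeded so any failure is reproducible
    alphabet = string.ascii_letters + string.digits + "  -_!?."
    return ["".join(rng.choice(alphabet) for _ in range(rng.randint(0, 40)))
            for _ in range(n)]

def check_properties(inputs: list[str]) -> None:
    for text in inputs:
        slug = slugify(text)
        # Properties must hold for every input, not just hand-picked examples:
        assert slug == slug.lower()   # never introduces uppercase
        assert " " not in slug        # no whitespace survives
        assert slugify(slug) == slug  # idempotent

check_properties(random_inputs(200))
```

Asking an agent to propose properties like these, then mutate inputs to break them, is a cheap way to surface edge cases tied back to acceptance criteria.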
Level Up Your Prompting
- Give role, goal, constraints, and examples. Add code context and links to local files or docs.
- Ask for a plan first, then the change. Request diffs and tests alongside code.
- Pin standards: "Follow these repo instructions" and "Reject solutions that violate X."
- If you're building skills, see curated training on prompt engineering.
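The prompting habits above can be baked into a reusable template instead of retyped each time. A sketch: the field names, example paths, and rendered layout are all illustrative, and the assembled string would be sent to whichever model API you use.

```python
from dataclasses import dataclass, field

@dataclass
class Prompt:
    role: str
    goal: str
    constraints: list[str] = field(default_factory=list)
    context: list[str] = field(default_factory=list)  # code snippets, file paths, doc links

    def render(self) -> str:
        parts = [
            f"Role: {self.role}",
            f"Goal: {self.goal}",
            "Constraints:\n" + "\n".join(f"- {c}" for c in self.constraints),
            "Context:\n" + "\n".join(f"- {c}" for c in self.context),
            # Plan first, then change; ask for diffs and tests alongside code.
            "First output a short plan. Then output the change as a diff plus tests.",
        ]
        return "\n\n".join(parts)

prompt = Prompt(
    role="Senior backend engineer on this repo",
    goal="Add retry logic to the HTTP client wrapper",
    constraints=["Follow the repo instructions file",
                 "Reject solutions that add new dependencies"],
    context=["src/http_client.py", "docs/adr/0007-retries.md"],  # hypothetical paths
)
```

Templates like this make the "pin standards" step a default rather than something each developer remembers to type.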
Looking Ahead
Adopt a simple rule: try, measure, and revisit every 90 days. Models, capacity, and IDE integrations improve constantly, and the "same tool" can behave very differently a quarter later.
Start small, prove value, and move more work to agents as your instructions, tests, and review muscle get stronger.
About Olivia McVicker
Olivia McVicker is a Developer Advocate at Microsoft focused on VS Code, GitHub, and Azure. She draws on years of application development to help teams use AI assistants effectively across real-world workflows.