Ram Bala: What does AI know about your business?
Ram Bala is an associate professor of AI and analytics and the co-founder of an enterprise AI startup. He teaches executive MBA, master's, and undergraduate courses on data analytics, the backbone of useful AI in business. His aim is simple: turn institutional knowledge into leverage for decision-making, growth, and resilience.
He also co-wrote "The AI-Centered Enterprise," a practical guide that helps leaders decide where AI fits in their company. What follows are the core ideas he teaches and applies with companies right now: direct, actionable, and built for managers.
The real management question
The public debate focuses on making AI safe and consistent with human values. That's necessary, but not enough for operators. The real question is: how do you make AI serve your company's goals, standards, and workflows?
Two fronts decide the outcome. First, the tech: out-of-the-box tools like ChatGPT need personalization by industry, function, team, and sometimes by individual. Second, the organization: change management (process, roles, incentives, and measurement) makes or breaks adoption.
Why this matters now
Workforces are aging and talent is getting tighter across many countries. Demographic pressure means fewer people doing more work, which forces productivity gains and smarter discovery. See the United Nations' data on global aging for the macro picture.
Your biggest bottleneck is probably hidden knowledge. It sits in PDFs, email threads, wikis, meeting notes, and in people's heads. Until recently, that knowledge was stuck. Now, large language models can read, organize, and reason over that sprawl, even from scanned notes.
What a large language model can do
If your teams could find the right answer in seconds (and trust it), you'd save hours, reduce rework, and free attention for execution.
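What "grounded answers" means in practice: retrieve the most relevant company document for a question, then have the model answer from that source. Below is a toy sketch of the retrieval step; keyword overlap stands in for real embeddings, and all documents and names are illustrative.

```python
# Toy sketch: ground answers in a small document store.
# Real systems use embedding similarity plus an LLM; keyword overlap
# stands in here so the retrieval flow is visible end to end.

def tokenize(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(".,:;()?").lower() for w in text.split()}

def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by terms shared with the question; return top-k titles."""
    q = tokenize(question)
    ranked = sorted(docs, key=lambda t: len(q & tokenize(docs[t])), reverse=True)
    return ranked[:k]

# Illustrative knowledge base.
docs = {
    "Refund policy": "Refunds are issued within 14 days of purchase with a receipt.",
    "Shipping SOP": "Orders ship within 2 business days from the nearest warehouse.",
    "Onboarding guide": "New hires complete security training in week one.",
}

top = retrieve("When are refunds issued?", docs)
print(top[0])  # the grounding source the assistant would cite
```

The cited source is what lets a reader verify the answer, which is where the trust comes from.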
A practical playbook to put AI to work
Here's a simple, field-tested path managers can run with. Start small. Ship in weeks, not quarters. Measure hard results.
- 1) Inventory your knowledge. List core sources: contracts, SOPs, policies, sales collateral, tickets, project docs, and chat. Define freshness (what needs updates weekly vs. quarterly). Flag authoritative vs. draft content to control what the model can cite.
- 2) Set guardrails. Access controls by role and team. Redaction for sensitive data. Logging of who asked what and what was returned. A simple review queue for answers that need human sign-off before broad use.
- 3) Build the personalization layer. Use retrieval-augmented generation (RAG) to ground answers in your documents. Create role-based prompt templates (support, finance, sales engineering). Start with embeddings and RAG; consider fine-tuning only if you hit clear limits.
- 4) Choose 2-3 high-yield use cases. Great starters: customer support deflection, sales Q&A and proposal assembly, policy and compliance search, ops SOP lookup, and onboarding. Estimate value: time saved × frequency × fully loaded cost. Prioritize what pays back within a quarter.
- 5) Treat adoption like a product launch. Pick champions on each team. Give them a usage target (e.g., 10 queries/day). Run weekly office hours. Reward people who add missing docs and fix bad answers. Share wins company-wide.
- 6) Track the right metrics. Time-to-answer, answer helpfulness rating, deflection rate, content coverage, and freshness. Add a "couldn't find it" button to capture gaps for your content backlog.
- 7) Tune and scale. Promote successful patterns (prompts, sources) to other teams. Retire low-use bots. Review drift monthly: are answers still accurate and fast? Keep your content and connectors current.
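The value estimate in step 4 is back-of-envelope arithmetic. A sketch with made-up numbers (yours will differ):

```python
# Quarterly value of one use case: time saved x frequency x fully loaded cost.
# Every number below is illustrative; plug in your own.
minutes_saved_per_query = 6      # time saved per answered question
queries_per_day = 40             # team-wide frequency
workdays_per_quarter = 65
loaded_cost_per_hour = 90.0      # fully loaded cost of an employee hour

hours_saved = minutes_saved_per_query * queries_per_day * workdays_per_quarter / 60
quarterly_value = hours_saved * loaded_cost_per_hour

print(f"{hours_saved:.0f} hours saved, about ${quarterly_value:,.0f} per quarter")
```

If the number that comes out is smaller than the cost of building and maintaining the assistant for a quarter, pick a different use case.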
From factory floors to boardrooms: Bala's path
His first project: digitize quality checks at a lock manufacturer. Workers recorded pin thickness by hand; he turned that into software and statistical analysis so managers could trace where quality slipped and fix it fast. He layered in "expert systems" ideas from early AI: small, scrappy, and practical.
That bias for real problems stuck. After a Ph.D. in operations research and analytics work in pharma, he returned to academia with one question: how do we move from interesting models to business results? Today, his startup work feeds his teaching, and his classrooms feed real-world experiments.
What students reveal to managers
Executive MBAs bring edge cases from industry structure, incentives, and risk, the stuff that derails clean charts. Undergrads pressure-test assumptions with fresh usage patterns and high expectations for instant answers. Master's students jump into research and deployments, often as interns who turn into hires.
The lesson: your AI plan will meet real people, legacy tools, and messy data. Plan for it. Build with it.
Where managers should start this quarter
- Pick one team with repeat questions and high volume. Support or sales enablement usually wins.
- Connect one trusted content source. Don't boil the ocean.
- Ship a grounded Q&A assistant. Limit scope. Measure time saved and deflection in the first 30 days.
- Close the loop weekly: fix bad answers, add missing docs, and publish the win.
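The 30-day measurement above can start as simple counting over a query log. A minimal sketch; the log fields (`resolved_by_bot`, `seconds_to_answer`) are hypothetical stand-ins for whatever your tooling records.

```python
# Deflection rate and median time-to-answer from a simple query log.
# The log schema is hypothetical; adapt the field names to your tooling.
from statistics import median

log = [
    {"resolved_by_bot": True,  "seconds_to_answer": 12},
    {"resolved_by_bot": True,  "seconds_to_answer": 8},
    {"resolved_by_bot": False, "seconds_to_answer": 240},  # escalated to a human
    {"resolved_by_bot": True,  "seconds_to_answer": 15},
]

deflection_rate = sum(q["resolved_by_bot"] for q in log) / len(log)
median_time = median(q["seconds_to_answer"] for q in log)

print(f"deflection: {deflection_rate:.0%}, median time-to-answer: {median_time}s")
```

Median rather than mean keeps one slow escalation from masking how fast the typical answer is.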
Recommended reading
For the backstory of modern AI and the people behind it: "Genius Makers" by Cade Metz. For the shift sparked by tools like ChatGPT and the story of how it happened: "The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future" by Keach Hagey.
Manager's cheat sheet: questions to ask your team
- What critical answers do we repeat daily, and where do they live right now?
- Which 20 documents decide 80% of our answers?
- Who owns content freshness and access rules?
- What would "fast and accurate" look like in metrics we already track?
Want more practical plays for managers? Explore AI for Management.