Inside ABEMA's Generative AI Push: Smarter Recommendations, Leaner Workflows, Faster Onboarding

ABEMA is using generative AI to boost recommendations, speed content ops, and streamline engineering. Real wins, clear limits, and a playbook for teams to iterate with care.

Categorized in: AI News, Product Development
Published on: Jan 21, 2026

How ABEMA Builds Better Content Experiences With Generative AI

ABEMA's product development has grown with the service, and the org is now using generative AI to push both content quality and operational speed. We spoke with Principal Engineer Yuji Hato, who leads AI adoption, and Engineering Manager Shunya Suga, who owns recommendation features. Below is a practical view of what's working, what still needs human judgment, and how teams and skills are changing.

Profile

Yuji Hato is Principal Product Engineer at AbemaTV, Inc. Since joining in 2011, he has developed mobile apps, built common backend infrastructure, helped launch the music streaming service "AWA," and led ABEMA's client strategy across iOS/Android.

Shunya Suga is Engineering Manager for ABEMA's Product Backend team. Since joining in 2021, he has shipped features for large sporting events, replaced ABEMA's search infrastructure, and now leads recommendation feature development.

How Generative AI Is Enhancing ABEMA's Features

Company-wide momentum matters. Over the past two to three years, ABEMA and its parent group have run contests, prototypes, and production pilots to find useful AI integrations. Some are already in daily use, like generating banner images and drafting news articles.

On recommendations, ABEMA summarizes and structures content metadata (like synopses), converts it to vectors, and uses vector search to surface similar shows. If you watched a program, you'll see "Because you watched…" suggestions that map to your intent. This shipped feature moved key metrics by several percentage points: clear proof that AI can improve the viewing experience.
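The pipeline above (embed metadata, then rank by vector similarity) can be sketched in a few lines. This is a toy illustration, not ABEMA's implementation: the catalog titles and hand-written vectors below are made up, and in practice the vectors would come from an embedding model run over the summarized synopses.

```python
from math import sqrt

# Toy "embeddings": in production these would be produced by an embedding
# model from summarized/structured synopses (that pipeline is assumed here).
catalog = {
    "Drama A":   [0.9, 0.1, 0.0],
    "Drama B":   [0.8, 0.2, 0.1],
    "Variety C": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def because_you_watched(title, k=2):
    """Rank the rest of the catalog by similarity to the watched title."""
    q = catalog[title]
    scored = sorted(
        ((t, cosine(q, v)) for t, v in catalog.items() if t != title),
        key=lambda tv: -tv[1],
    )
    return [t for t, _ in scored[:k]]

print(because_you_watched("Drama A", k=1))  # → ['Drama B']
```

At scale the brute-force loop would be replaced by an approximate nearest-neighbor index in a vector database, but the ranking idea is the same.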

The next step: go from program-level insights to scene-level. Think "confession scene" or "surprise reveal." Segmenting and indexing moments unlocks tighter, more relevant suggestions right when the viewer wants them.
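Scene-level recommendation implies an index of timestamped segments rather than whole programs. A minimal sketch, with entirely made-up titles, timestamps, and labels (in practice the labels might come from multimodal models analyzing the video):

```python
# Hypothetical scene index: each entry is a labeled, timestamped segment.
scenes = [
    {"title": "Drama A", "start": 120, "end": 185, "label": "confession scene"},
    {"title": "Drama A", "start": 900, "end": 955, "label": "surprise reveal"},
    {"title": "Drama B", "start": 300, "end": 340, "label": "confession scene"},
]

def find_scenes(label):
    """Return (title, start_seconds) for every segment with this label."""
    return [(s["title"], s["start"]) for s in scenes if s["label"] == label]

print(find_scenes("confession scene"))  # → [('Drama A', 120), ('Drama B', 300)]
```

A production version would match labels by embedding similarity rather than exact string equality, so that "confession" and "love confession" land in the same bucket.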

From Code Completion to Workflow Design: A Look at Generative AI at ABEMA Today

ABEMA's adoption playbook follows two tracks. First: developer augmentation, with tools embedded in daily work such as real-time code completion, chat over the repo, and emerging agent support. Second: systems-level design, optimizing end-to-end workflows and embedding agents where they create compounding team leverage.

  • Developer augmentation: Tools like GitHub Copilot are widely rolled out and part of the routine. They cut friction for coding, tests, and quick lookups.
  • Systems-level improvements: Custom workflows, agent handoffs, and process automation for test runs and operations. This requires deep knowledge of the business, architecture, and cost/security trade-offs, not just a tool install.

The pace of change is fast. The team's stance: experiment, measure, and iterate quickly. Keep what compounds. Drop what doesn't.

Reducing Cognitive Load on Large Codebases

Code assistants help with queries like "Where is this feature?" or "How is this structured?" That said, code can't tell you everything. Domain intent, product rules, and historical decisions live outside the codebase.

ABEMA centralizes specs, requirements, and PRDs in documentation, then exposes them to a hybrid search (RAG plus full-text). Ask "What are the specs for international support?" and you get sourced, summarized answers. It shortens onboarding loops and cuts context-switching time across teams.
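Hybrid search of this kind typically blends a lexical (full-text) score with a vector-similarity score. A minimal sketch under loud assumptions: the document names, contents, and hand-written vectors below are invented, keyword matching is reduced to word overlap, and real systems would use BM25 plus a proper embedding model.

```python
from math import sqrt

# Hypothetical doc store: names and contents are made up; the embeddings
# would normally come from a model, here they are hand-written toys.
docs = {
    "intl-support-prd": ("specs for international support and locales", [0.9, 0.1]),
    "billing-spec":     ("billing and auth requirements",               [0.1, 0.9]),
}

def keyword_score(query, text):
    """Crude lexical relevance: fraction of query words present in the text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def vector_score(qv, dv):
    """Cosine similarity between query and document embeddings."""
    dot = sum(x * y for x, y in zip(qv, dv))
    return dot / (sqrt(sum(x * x for x in qv)) * sqrt(sum(y * y for y in dv)))

def hybrid_search(query, qvec, alpha=0.5):
    """Rank docs by a weighted blend; alpha weights the lexical side."""
    return sorted(
        docs,
        key=lambda name: -(alpha * keyword_score(query, docs[name][0])
                           + (1 - alpha) * vector_score(qvec, docs[name][1])),
    )

print(hybrid_search("international support specs", [0.8, 0.2]))
```

The blend is what makes the RAG answers "sourced": the top-ranked documents are handed to the model as context, and their names can be cited back to the asker.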

Docs Generation and the Limits of Automation

AI can draft documentation from an existing codebase, but output is capped by what's explicit in code: structure, diagrams, interfaces. It misses implicit knowledge: domain rules, trade-offs, non-functional constraints, and the "why."

ABEMA is testing "Project as Code" ideas and the Model Context Protocol (MCP) to supply richer, structured context to models. The takeaway: invest in structured knowledge today; your AI output quality increases tomorrow.

Testing: From Unit Speed to End-to-End Reality

Unit tests are a sweet spot. Prompts like "Generate tests based on this spec" already produce useful results. That's a direct boost for individual contributors.
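A "generate tests based on this spec" prompt is mostly careful template assembly. The sketch below shows only that assembly; the model call itself is omitted, and the `parse_duration` spec is a made-up example, not an ABEMA function.

```python
# Hedged sketch of spec-driven test generation: build the prompt that
# would be sent to a code model. The spec text here is invented.
TEST_PROMPT = """You are writing unit tests.
Spec:
{spec}

Generate pytest tests that cover each requirement above.
Only output Python code."""

def build_test_prompt(spec: str) -> str:
    """Fill the template with one spec; the model call is out of scope here."""
    return TEST_PROMPT.format(spec=spec)

spec = ("parse_duration('1h30m') returns 5400 seconds; "
        "invalid input raises ValueError")
print(build_test_prompt(spec))
```

The generated tests still get a human review before merging; the win is that the first draft covering each stated requirement arrives in seconds.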

End-to-end testing is different. It spans case creation, data setup, execution, and assertions across multiple systems. There's no single tool that covers all of it well. ABEMA is assembling the right tools per stage and exploring agents for task decomposition and decision-making. The near-term strategy: start with well-bounded tasks, prove value, then expand.
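One way to make "start with well-bounded tasks" concrete is to model the end-to-end run as explicit, named stages, so a single stage (say, case creation) can be delegated to a tool or agent while the rest stays manual. A minimal sketch; the stage names and stand-in lambdas below are illustrative only.

```python
# Hedged sketch: an E2E run as an ordered list of (name, callable) stages.
# Each callable stands in for a tool, agent, or manual step.
def run_e2e(stages):
    """Run stages in order, recording each name; stop on the first falsy result."""
    completed = []
    for name, fn in stages:
        if not fn():
            break
        completed.append(name)
    return completed

stages = [
    ("create_cases",   lambda: ["happy_path", "expired_token"]),
    ("setup_data",     lambda: True),
    ("execute",        lambda: {"happy_path": "pass"}),
    ("assert_results", lambda: True),
]

print(run_e2e(stages))
```

The value of the explicit stage list is that swapping one lambda for an agent-backed implementation changes nothing else in the pipeline, which is exactly the "prove value, then expand" posture.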

Growth Strategies for Engineers and the Reality of Crossing Role Boundaries in the AI-Native Era

Onboarding speed is changing. Interns and junior engineers can ship faster by pairing with AI: code help, API usage, design patterns, and quick feedback loops. What used to take years of exposure now compresses into months.

ABEMA's domain-based teams (home UI, billing/auth, infra, etc.) keep complexity manageable, but there's still a lot to absorb. Looking ahead, long-context models and multi-agent systems could provide personalized, context-aware support so new engineers contribute sooner without drowning in details.

Crossing role boundaries is getting easier. Suga started as backend but now builds recommendation features alongside ML specialists. With cloud-native vector search and practical AI services, engineers can experiment and learn by doing-without hitting a wall of prerequisites.

Quality still needs human judgment. Even with advanced AI tools, turning "Build this feature" into a product-grade result demands clear requirements, sound design, security awareness, and alignment with non-functional needs. As models handle more context, the high-leverage skills shift further toward problem framing, architectural thinking, and the ability to guide AI effectively within team workflows.

A Practical Playbook for Product Leaders

  • Pick a few high-impact use cases: recommendations, test generation, or content ops. Ship small, measure, iterate.
  • Roll out developer augmentation first (e.g., Copilot), then layer in systems-level automation where ROI is clear.
  • Centralize knowledge (specs, PRDs) and expose it via hybrid search so AI can answer domain questions reliably.
  • Instrument end-to-end workflows. Define handoffs between tools and agents. Keep humans in the loop for final checks.
  • Set guardrails for cost, data security, and model choice. Review regularly as tools evolve.
  • Invest in skills: requirements writing, architectural reasoning, and prompt/system design, not just code.

ABEMA's lesson is simple: treat AI as part of the product system, not a magic switch. Start where the value is obvious, build feedback loops, and level up the team's ability to think and ship with AI, end to end.

