AI Access That Actually Works: An Interview with Tomer Aharoni, CEO of Nagish
Tomer Aharoni is the Co-founder and CEO of Nagish, an AI accessibility company focused on making communication possible for people who are Deaf, DeafBlind, hard-of-hearing, or those with speech disabilities. His north star is simple: communication should be private, and it should work anywhere. That lens drives Nagish's product decisions and research roadmap.
Why Nagish Exists
The spark came during Tomer's time at Columbia University. A missed call in class led to a bigger question: how do you make a phone call if you can't hear or speak? In 2019, most people still relied on interpreters or captioning assistants, which meant delays, scheduling issues, and awkward workarounds.
Later, while interning at Bloomberg, Tomer saw how hard it was to coordinate meetings with a Deaf colleague and two interpreters who understood the technical jargon. Interpreter availability was limited, and the result was fewer spontaneous calls, slower collaboration, and lost opportunities for connection.
The Gap: Not Enough Interpreters, Too Many Barriers
Interpreters are essential, but there aren't enough of them, and access is uneven. Scheduling, cost, and geography raise the barrier further, especially in healthcare, emergencies, and rural communities. For context on the scale of hearing loss and access challenges, see the WHO's overview on hearing loss and communication access.
What Nagish Offers Today
Nagish currently provides real-time phone call captioning on any connected device. Users type to speak and read to listen, using their existing phone number. The app converts speech to text and text to speech in real time, so people can place and receive calls without needing to hear or speak.
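For readers who want to picture the mechanics, here is a minimal sketch of that type-to-speak, read-to-listen loop in Python. The `speech_to_text` and `text_to_speech` functions are hypothetical stubs; Nagish's actual engines, telephony stack, and APIs are not public.

```python
import asyncio

# Hypothetical stand-ins for real STT/TTS engines (not Nagish's stack).
async def speech_to_text(audio_frame: bytes) -> str:
    """Stub: transcribe one frame of far-end call audio."""
    return f"<caption for {len(audio_frame)} bytes of audio>"

async def text_to_speech(text: str) -> bytes:
    """Stub: synthesize speech for a line the user typed."""
    return text.encode("utf-8")

async def caption_incoming(audio_frames, display):
    """Far-end audio -> live captions the user reads."""
    async for frame in audio_frames:
        display(await speech_to_text(frame))

async def voice_outgoing(typed_lines, send_audio):
    """User's typed replies -> synthesized speech on the call."""
    async for line in typed_lines:
        send_audio(await text_to_speech(line))

async def demo():
    async def frames():  # pretend call audio
        for chunk in (b"\x00" * 320, b"\x01" * 320):
            yield chunk

    async def lines():  # pretend typed replies
        for line in ("Hi, I'm using captions.", "Could you repeat that?"):
            yield line

    # Both directions run concurrently, which is what keeps the call real time.
    await asyncio.gather(
        caption_incoming(frames(), lambda text: print("CAPTION:", text)),
        voice_outgoing(lines(), lambda audio: print("AUDIO OUT:", len(audio), "bytes")),
    )

asyncio.run(demo())
```

The design point worth noticing is the concurrency: captioning and voicing are independent streams, and neither direction should ever block the other.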
The principle behind the product is consistent: privacy first, no intermediaries, and instant access. That privacy stance has been in place since day one and hasn't changed.
What's Next: Researching AI Sign Language Interpretation
Nagish is in the research phase for sign language interpretation. The goal isn't to replace human interpreters; it's to fill gaps when they aren't available and to support lower-stakes or repetitive interactions. Think of it as more coverage, more consistency, and fewer missed moments.
The vision is straightforward: sign into the camera on your phone, tablet, or laptop. The AI translates your signing into spoken or written language. The other person's reply is converted back into sign language by a visual avatar: fast, private, and usable anywhere.
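As a rough illustration of that loop (a sketch under assumptions, not Nagish's published architecture), the pipeline has three stages: recognize signing from video, hand text to the hearing party, and render their reply as signed video. The `recognize_signing` and `render_avatar` functions below are hypothetical stubs.

```python
from dataclasses import dataclass, field

@dataclass
class VideoClip:
    frames: list = field(default_factory=list)  # raw camera or rendered frames

def recognize_signing(clip: VideoClip) -> str:
    """Stub: a vision model would map signing video to written language."""
    return "Where is the pharmacy?"

def render_avatar(text: str) -> VideoClip:
    """Stub: a generative model would animate an avatar signing the reply."""
    return VideoClip(frames=[f"avatar frame signing: {text}"])

def interpret_turn(user_clip: VideoClip, hearing_reply: str) -> VideoClip:
    # 1. Translate the user's signing into text for the hearing party.
    print("To hearing party:", recognize_signing(user_clip))
    # 2. Convert the hearing party's reply back into signed video.
    return render_avatar(hearing_reply)

reply = interpret_turn(VideoClip(frames=["camera frame"]), "Two blocks north.")
print(reply.frames[0])
```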
Community-Led, Or It Doesn't Ship
Nagish builds with the Deaf community, not just for it. Native signers, interpreters, and linguists are in the research loop-testing early versions, giving feedback, and shaping model choices. Accuracy, cultural nuance, and authenticity matter as much as speed.
Language Coverage
The team is starting with American Sign Language (ASL) and plans to add more sign languages as research progresses. British Sign Language (BSL) and others are on the horizon, guided by demand and community partnerships.
Where Development Stands
Sign language interpretation is in active research. The company is gathering data, refining models with experts, and validating naturalness and accuracy. There are no public release timelines yet.
How the Community Might Receive It
Expect excitement, curiosity, and healthy skepticism. The Deaf community has seen big promises that didn't pan out, so Nagish is committing to steady progress with ongoing feedback. The aim is to offer another option, one that works when other options don't.
A Ten-Year View
Tomer's target: anyone can join any conversation, anywhere (video calls, classrooms, clinics, even a yoga class) and get instant, accurate sign language support. If millions can communicate in their own language, privately and independently, that's success.
What Leaders and Product Teams Should Take Away
- Design for independence: prioritize features that remove intermediaries, reduce scheduling friction, and respect privacy.
- Build with communities: co-develop with native signers and linguists to avoid brittle, tone-deaf solutions.
- Scope wisely: use AI to extend coverage and consistency; reserve human expertise for high-stakes, high-nuance situations.
- Measure what matters: latency, accuracy, user control, and error recovery are the metrics that drive trust (see the sketch after this list).
- Plan for variation: sign languages carry meaning through facial expressions and vary by region and context; data and UX must reflect that.
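To make "measure what matters" concrete, here is a small sketch of two such metrics: word error rate (WER) for caption accuracy and a 95th-percentile latency figure. These are generic, standard formulations, not Nagish's evaluation suite.

```python
import math

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

def p95_latency_ms(samples_ms: list) -> float:
    """95th-percentile caption latency, nearest-rank method."""
    ordered = sorted(samples_ms)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

print(word_error_rate("call me back tomorrow", "call be back tomorrow"))  # 0.25
print(p95_latency_ms([210, 180, 400, 250, 230]))                          # 400
```

Tracking numbers like these per release, rather than eyeballing demos, is what turns "accuracy" and "latency" from talking points into trust.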
If You're Building AI Accessibility Into Your Roadmap
Upskill your team on practical AI workflows, evaluation, and product integration. Start with focused, hands-on training that maps to roles and deliverables.
Final Word
Tech should reduce distance, not add gatekeepers. Nagish's approach, built on private communication, community-led research, and practical use cases, offers a blueprint for building AI that actually helps people live their lives. That's the bar to meet.