ElevenLabs co-founder and CEO Mati Staniszewski says voice is becoming the next major interface for AI - the way people will increasingly interact with machines as models move beyond text and screens. Speaking at Web Summit in Doha, Staniszewski told TechCrunch voice models like those developed by ElevenLabs have recently moved beyond simply mimicking human speech - including emotion and intonation - to working in tandem with the reasoning capabilities of large language models.
STL Partners predicts one AI-related growth area among telcos but warns of slower adoption or pullbacks in three others. First, the AI optimism: analysts believe telcos will increasingly adopt voice-based AI. Already, some of the biggest global telcos are using embedded voice assistants in AI channels for enterprise customers. In 2026, telcos are likely to adopt voice technologies for customer calls as well. Immediate benefits could include live translation and integration of digital assistant services.
Skeptical that the founders could turn interest into revenue, Y Combinator rejected the application from Bolna, a voice orchestration startup built by Maitreya Wagh and Prateek Sachan, five times before finally accepting it into the fall 2025 batch. "When we were applying for Y-Combinator, the feedback we got was, 'great to see that you have a product that can create realistic voice agents, but Indian enterprises are not going to pay, and you are not going to make money out of this,'" Wagh told TechCrunch.
We are seeing a huge move toward voice as a new interface that a lot of folks are adopting. You can do much more with voice in a natural way than with a keyboard. However, we saw that voice is rarely an interface people use when others are around. So, using our noise isolation model, we will give consumers a way to experience a voice interface in the form of our earbuds.
For example, users can now ask something like, "I like dramas but my wife likes comedies. What's a movie we can watch together?" when looking for movie recommendations. Or, users could quickly catch up on a show they're returning to by asking something like, "What happened at the end of Outlander last season?" In another example, Google says users can even ask something like "What's the new hospital drama everyone's talking about?"
The idea for Keplar was conceived in 2023, when Dhruv Guliani (above right), a former Google engineer who worked on speech and voice AI models, and machine learning engineer William Wen participated in the South Park Commons founder fellowship program. The duo spoke with market researchers and brand managers and realized that the tools these professionals rely on - written surveys and interviews conducted by humans - can now be replaced by conversational AI.
Additionally, the model provider said that the updated gpt-realtime model has shown improvements in following complex instructions, calling tools with precision, and producing speech that "sounds more natural and expressive."
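For context, "calling tools with precision" refers to function calling: a voice model is configured with JSON function schemas and decides mid-conversation when to invoke one. A minimal illustrative schema is sketched below; the `lookup_order` tool name and its fields are hypothetical examples, not something named in the article or taken from OpenAI's actual API payloads.

```python
import json

# Hypothetical function-tool schema of the kind a realtime voice model can be
# configured with; the model emits a call to it when the conversation warrants.
lookup_order = {
    "type": "function",
    "name": "lookup_order",
    "description": "Fetch the status of a customer's order by order ID.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "The order identifier.",
            },
        },
        "required": ["order_id"],
    },
}

print(json.dumps(lookup_order, indent=2))
```

The `parameters` block follows JSON Schema conventions, which is how most tool-calling APIs let the model know exactly which arguments are required and what types they take.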
Leena's agentic AI colleagues can handle domain-specific and cross-domain requests, serving as go-betweens for employees and enterprise systems. The human is always in the loop; the agent must request approval before taking action.
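The approval step described above can be sketched as a simple human-in-the-loop gate. This is a generic pattern, not Leena's actual implementation; the `ProposedAction` type and `run_with_approval` function are hypothetical names for illustration.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take on an enterprise system."""
    system: str      # e.g. an HR or IT system name
    operation: str   # e.g. "update_address"
    payload: dict

def run_with_approval(action: ProposedAction, ask_human) -> str:
    """Human-in-the-loop gate: execute only after explicit human approval."""
    approved = ask_human(f"Approve {action.operation} on {action.system}? [y/n] ")
    if not approved:
        return "action rejected; nothing executed"
    # In a real deployment this would call the enterprise system's API.
    return f"executed {action.operation} on {action.system}"

# Usage: an auto-approve callback stands in for a real UI prompt.
action = ProposedAction("HRIS", "update_address", {"employee_id": 42})
print(run_with_approval(action, lambda prompt: True))
```

Passing the approval callback as a parameter keeps the gate testable and lets the same agent logic plug into a chat prompt, a web dialog, or an automated policy check.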