The response was in Indonesian, but it was shaped by values that prioritized individual autonomy over the consensus-building, social harmony, and collective family dynamics that tend to matter more in Indonesian social life.
The competitive landscape among AI apps in China is fierce. Companies have been dumping money into the market to try to win customers and show them how AI is useful in everyday life, particularly for buying stuff.
Cohere's Transcribe model is designed for tasks like note-taking and speech analysis; it supports 14 languages and is optimized for consumer-grade GPUs, making it accessible for self-hosting.
Learning a new language not only makes you look cool but also lets you familiarize yourself with another culture, connect with new people, and enjoy a wider variety of art and media.
Computational linguistics is a two-way street: you're either using a computer to do things with human language (communicate, translate, or teach a foreign language), or you're using computational techniques to learn something about human languages. Her work documenting and preserving endangered languages uses a little bit of both.
Talking to ChatGPT feels more collaborative than typing. It shines for brainstorming, prep, and translation, though usage limits can interrupt productivity mid-session. Voice Mode runs on mobile devices as well as in your browser. On mobile, there are two ChatGPT widgets available for the lock screen: one opens the app, and the other launches ChatGPT Voice.
As explained by Meta: "AI-powered translations for Reels are starting to roll out in more languages, including Bengali, Tamil, Telugu, Marathi, and Kannada, on Instagram. These new additions build on our existing language support for English, Hindi, Portuguese, and Spanish." The addition of more of the languages spoken in India is significant because India is now the biggest single market for both Facebook and Instagram usage, beating out the U.S. by a significant margin.
Led Zeppelin warned us about the perils of misunderstood communication in relationships. Failing to translate what we are trying to say or do so that someone else gets it is at the root of so many problems. But translation is a wonderful thing when it goes right. Here are some things I've learned about translating meaning from a lifetime of speaking numerous languages, practicing a wide array of martial arts, and communicating science.
The majority of AI products remain tethered to a single, monolithic UI pattern: the chat box. While conversational interfaces are effective for exploration and managing ambiguity, they frequently become suboptimal when applied to structured professional workflows. To move beyond "bolted-on" chat, product teams must shift from asking where AI can be added to identifying the specific user intent and the interface best suited to deliver it.
There's a good chance you spend more time talking to your phone's virtual assistant or dictating text with your voice than actually calling people these days. But as convenient as voice input can be, you don't want to be the obnoxious person shouting commands at Siri in a quiet library. And you probably won't have much luck dictating an email in a room with toddlers screaming and Peppa Pig blaring on the TV. (Ask me how I know.)
By comparing how AI models and humans map these words to numerical percentages, we uncovered significant gaps between humans and large language models. While the models do tend to agree with humans on extremes like 'impossible,' they diverge sharply on hedge words like 'maybe.' For example, a model might use the word 'likely' to represent an 80% probability, while a human reader assumes it means closer to 65%.
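A toy sketch of that comparison, assuming simple per-word averages on a 0–100 scale; only the 'likely' pair (65 vs. 80) comes from the example above, and the other numbers are illustrative placeholders, not the study's data:

```python
# Compare how humans vs. a model map hedge words to probabilities.
# Only the "likely" values (65 vs. 80) come from the article; the
# other entries are invented placeholders for illustration.
human_avg = {"impossible": 3, "maybe": 42, "likely": 65, "certain": 97}
model_avg = {"impossible": 2, "maybe": 55, "likely": 80, "certain": 98}

for word, h in human_avg.items():
    m = model_avg[word]
    print(f"{word:>10}: human {h:>2}% | model {m:>2}% | gap {m - h:+d} points")
```

Even with made-up mid-scale numbers, the pattern the study describes is visible: the extremes line up, while the hedges in the middle are exactly where a reader is most likely to misjudge a model's stated confidence.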
On Wednesday, the Paris-based AI lab Mistral released two new speech-to-text models: Voxtral Mini Transcribe V2 and Voxtral Realtime. The former is built to transcribe audio files in large batches, the latter for near-real-time transcription within 200 milliseconds; both can translate between 13 languages. Voxtral Realtime is freely available under an open-source license.
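For the batch use case, here is a hedged sketch of what calling the transcription API can look like, based on Mistral's existing audio transcription endpoint; the model identifier for Transcribe V2 is my assumption, not something confirmed by the announcement:

```python
# Batch transcription sketch against Mistral's audio transcription API.
# The /v1/audio/transcriptions route exists for earlier Voxtral models;
# the model id "voxtral-mini-latest" is an assumption and may differ
# for Voxtral Mini Transcribe V2.
import os
import requests

API_URL = "https://api.mistral.ai/v1/audio/transcriptions"
HEADERS = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

def transcribe(path: str) -> str:
    with open(path, "rb") as audio:
        resp = requests.post(
            API_URL,
            headers=HEADERS,
            files={"file": (os.path.basename(path), audio)},
            data={"model": "voxtral-mini-latest"},
        )
    resp.raise_for_status()
    return resp.json()["text"]

for clip in ["standup.mp3", "interview.mp3"]:  # a small "batch"
    print(clip, "->", transcribe(clip)[:80])
```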
The dataset was created by translating non-English content from the FineWeb2 corpus into English using Gemma3 27B, with the full data generation pipeline designed to be reproducible and publicly documented. The dataset is primarily intended to improve machine translation, particularly in the English→X direction, where performance remains weaker for many lower-resource languages. By starting from text originally written in non-English languages and translating it into English, FineTranslations provides large-scale parallel data suitable for fine-tuning existing translation models.
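A minimal sketch of that generation loop, with translate_to_english() left as a hypothetical wrapper around a Gemma3 27B inference call; the real pipeline's prompts, filtering, and batching are documented by the dataset authors:

```python
# Build English->X parallel pairs from non-English FineWeb2 documents.
import json

def translate_to_english(text: str, src_lang: str) -> str:
    """Hypothetical wrapper around a Gemma3 27B call that returns the
    English translation of `text`; a stand-in for the real pipeline step."""
    raise NotImplementedError("wire this to your Gemma3 27B deployment")

def build_pairs(fineweb2_docs, out_path="finetranslations.jsonl"):
    with open(out_path, "w", encoding="utf-8") as out:
        for doc in fineweb2_docs:  # each doc: {"text": ..., "lang": ...}
            english = translate_to_english(doc["text"], doc["lang"])
            # Direction flip: the text was authored in language X, so the
            # pair supervises English -> X translation, the weaker direction.
            pair = {"src_en": english, "tgt": doc["text"], "lang": doc["lang"]}
            out.write(json.dumps(pair, ensure_ascii=False) + "\n")
```

The design point worth noting is that the target side of each pair is human-written text, so a model fine-tuned on it learns to produce natural output in the lower-resource language rather than translationese.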
If old sci-fi shows are anything to go by, we're all using our computers wrong. We're still typing with our fingers, like cave people, instead of talking out loud the way the future was supposed to be. Have you ever seen Picard touch a keyboard? Of course not. And it's odd because our computers are all capable of turning speech into text by default. The problem? It just doesn't work very well. Or, at least, it didn't.
The new talk of the town is one where humans have no place: a site called Moltbook that describes itself as a "social network for AI agents." The Reddit-styled site, launched in late January by US-based entrepreneur Matt Schlicht, is one where thousands of AI assistants talk to each other and discuss topics ranging from the technical to the philosophical.
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data: think text, social media posts, emails, etc. LTMs, on the other hand, can extract information or insights from structured data, which could be contained in tables, for instance. Since many enterprises rely on structured data, often contained in spreadsheets, to run their operations, LTMs could have an immediate use case for many organizations.
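A toy illustration of that split: the same fact as free text (LLM-style input) versus rows a tabular model could aggregate over directly; the names and numbers are invented for the example:

```python
# Unstructured vs. structured representations of the same information.
unstructured = "Q3 revenue for the Jakarta office came in at $1.2M, up 8% QoQ."

structured = [
    # (office, quarter, revenue_usd, qoq_growth) -- invented sample rows
    ("Jakarta",   "Q3", 1_200_000, 0.08),
    ("Singapore", "Q3",   950_000, 0.03),
]

# Structured data supports direct computation that free text does not:
total = sum(revenue for _, _, revenue, _ in structured)
print(f"Total Q3 revenue: ${total:,}")
```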
Have you ever asked Alexa to remind you to send a WhatsApp message at a certain time, and then wondered, 'Why can't Alexa just send the message herself?' Or felt the frustration of using an app to plan a trip, only to have to jump to your calendar, booking website, tour operator, and bank account instead of your AI assistant doing it all? Exactly this gap between AI automation and human action is what the agent-to-agent (A2A) protocol aims to address. With the introduction of AI agents, the next step in their evolution seemed to be communication. But when communication between machines and humans is already here, what's left? Communication between the machines themselves.
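A minimal sketch of what an A2A-style exchange can look like: one agent POSTs a JSON-RPC message to another agent's endpoint. The endpoint URL is hypothetical, and the method and field names follow my reading of the public A2A spec, which may drift across versions:

```python
# One agent asks another to handle a task, A2A-style (JSON-RPC over HTTP).
import json
import urllib.request

rpc_request = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",  # assumption: method name per my reading of the spec
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text",
                       "text": "Book the 18:00 tour and add it to my calendar."}],
        }
    },
}

req = urllib.request.Request(
    "https://travel-agent.example.com/a2a",  # hypothetical remote agent endpoint
    data=json.dumps(rpc_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually send the request to a live agent:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(json.dumps(rpc_request, indent=2))
```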