Entire's tech has three components. One is a git-compatible database to unify AI-produced code. Git is a distributed version control system popular with enterprises and used by hosting sites like GitHub and GitLab. Another component is what the company calls "a universal semantic reasoning layer," intended to let multiple AI agents work together. The final piece is an AI-native user interface designed with agent-to-human collaboration in mind.
Alibaba has launched RynnBrain, an open source AI model that helps robots and smart devices perform complex tasks in the real world. The model combines spatial understanding with time awareness. Alibaba's DAMO Academy introduced the foundation model, which enables interaction with the environment. RynnBrain can map objects, predict trajectories, and navigate complex environments such as kitchens or factory halls. The system is built on Alibaba's Qwen3-VL vision-language model.
That mismatch worked, if uncomfortably, when contributing had friction. After all, you had to care enough to reproduce a bug, understand the codebase, and risk looking dumb. But AI agents are obliterating that friction (and have no problem with looking dumb). Even Mitchell Hashimoto, the co-founder of HashiCorp, is now considering closing external PRs to his open source projects, not because he's losing faith in open source, but because he's drowning in "slop PRs" generated by large language models and their AI agent henchmen.
Moca has open-sourced Agent Definition Language (ADL), a vendor-neutral specification intended to standardize how AI agents are defined, reviewed, and governed across frameworks and platforms. The project is released under the Apache 2.0 license and is positioned as a missing "definition layer" for AI agents, comparable to the role OpenAPI plays for APIs. ADL provides a declarative format for defining AI agents, including their identity, role, language model setup, tools, permissions, RAG data access, dependencies, and governance metadata like ownership and version history.
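To make the idea concrete, here is a rough sketch of the kind of information such a definition captures, expressed as a plain Python dict. The field names and values below are invented for illustration and do not reflect ADL's actual syntax:

```python
# Hypothetical illustration only: this mirrors the categories of fields the
# ADL spec is described as covering, not the real ADL format or keywords.
support_agent = {
    "identity": {"name": "billing-support-agent", "version": "1.2.0"},
    "role": "Answer billing questions and escalate refund requests",
    "model": {"provider": "example-provider", "name": "example-model"},
    "tools": ["lookup_invoice", "open_ticket"],      # tools the agent may call
    "permissions": {"network": False, "write_access": ["tickets"]},
    "rag": {"sources": ["billing-kb"]},              # data the agent may retrieve from
    "dependencies": ["ticketing-agent"],
    "governance": {"owner": "payments-team", "version_history": "git"},
}
```

The point of such a definition layer is that the same declaration can be reviewed, versioned, and enforced regardless of which agent framework ultimately runs it.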
On Wednesday, the Paris-based AI lab released two new speech-to-text models: Voxtral Mini Transcribe V2 and Voxtral Realtime. The former is built to transcribe audio files in large batches, the latter for near-real-time transcription with latency within 200 milliseconds; both can translate between 13 languages. Voxtral Realtime is freely available under an open source license.
Trying to write on a laptop means fighting a machine that is also a notification box, streaming portal, and social feed. Distraction-free apps help, but they still live inside the same browser-and-tab chaos, surrounded by everything else your computer knows how to do. Some writers just want a device that only knows how to produce plain text and does not care about anything else happening in the world.
Completely free and open source (view our licence here). Supports export for integration with frameworks including React, Vue, and Angular. Fully configurable, featuring custom triggers and adjustable text to support multiple language locales. 60 languages supported by default (view the languages here). Includes multiple views, including Map, Line, Chart, Days, Months, and Color Ranges. Export data to multiple file formats (view the supported types here), with system clipboard support.
When I moved to VMware, I expected things to continue much as before, but COVID disrupted those plans. When Broadcom acquired VMware, the writing was on the wall, and though it took a while, I was eventually made redundant. That was almost 18 months ago. In the time since, I've taken an extended break with overseas travel and thoughts of early retirement. It has therefore been a while since I've done any direct developer advocacy.
Poettering is best known for systemd. After a lengthy stint at Red Hat, he joined Microsoft in 2022. Kühl was a Microsoft employee until last year, and Brauner, who also joined Microsoft in 2022, left this month. The trio are leading lights in the Linux and open source world. Brauner posted on Mastodon: "My role in upstream maintenance for the Linux kernel will continue as it always has." Poettering will similarly remain deeply involved in the systemd ecosystem.
The ad industry is racing toward a not-too-distant future where AI agents negotiate programmatic deals on their own - and Prebid doesn't want publishers to get left behind. The group that turned header bidding software into an open-source standard announced on Thursday that it's taking ownership of code developed using Ad Context Protocol (AdCP) that will power publisher-side AI agents.
If you're a fan of SimCity, then you'll appreciate IsoCity, an open source simulation game. The premise is the same. Start with land, build infrastructure, and try to maintain a thriving city. From the GitHub page: IsoCity is an open-source isometric city-building simulation game built with Next.js, TypeScript, and Tailwind CSS. It leverages the HTML5 Canvas API for high-performance rendering of isometric graphics, featuring complex systems for economic simulation, trains, planes, seaplanes, helicopters, cars, pedestrians, and more.
When we announced the pre-release version of Lumen AI, our goal was ambitious: build a fully open, extensible framework for conversational data exploration that always remains transparent, inspectable, and composable, rather than opaque, closed, and non-extensible. Today, with the full release of Lumen 1.0, that vision has been realized, and it has evolved significantly along the way. This release represents a substantial re-architecture of both the UI and the core execution model, along with major improvements in robustness, extensibility, and real-world applicability.
In response, the companies behind MySQL are coming together. Rather than continuing with things as they are, these companies recognize that charting a future path for MySQL is essential. What this leads to will depend on decisions outside the community. Will it act as a spur for a fork of MySQL with community support, similar to PostgreSQL? Or will it lead to MySQL moving away from the single-vendor control it has been under since it was founded?
The agent takes input from the user and prepares a textual prompt for the model. The model then generates a response, which either produces a final answer for the user or requests a tool call (such as running a shell command or reading a file). If the model requests a tool call, the agent executes it, appends the output to the original prompt, and queries the model again. This process repeats until the model stops requesting tools and instead produces an assistant message.
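The loop is simple enough to sketch in a few lines of Python. The model client, message format, and tool registry below are illustrative assumptions rather than any particular framework's API:

```python
# Minimal sketch of the agent loop described above, under assumed interfaces:
# `model.generate(messages)` returns a dict with either a final "content"
# string or a "tool_call" request, and `tools` maps tool names to callables.

def run_agent(user_input, model, tools):
    """Query the model in a loop, executing tool calls until it answers."""
    messages = [{"role": "user", "content": user_input}]
    while True:
        response = model.generate(messages)
        call = response.get("tool_call")
        if call is None:
            # No tool requested: this is the final assistant message.
            return response["content"]
        # Execute the requested tool (e.g. run a shell command, read a file).
        output = tools[call["name"]](**call["arguments"])
        # Append the request and its output, then query the model again.
        messages.append({"role": "assistant", "tool_call": call})
        messages.append({"role": "tool", "name": call["name"], "content": output})
```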
I do not want AI in my web browser. I just don't. I also don't want companies collecting information about me, or sponsored content and product integrations. All those bits make me want to pull my hair out. I like my privacy and want to browse, you know, the old-fashioned way. I do use AI (on occasion), but only locally-installed AI and only for specific purposes (such as learning Python or researching a topic when I don't want to use a standard search engine).
Bose SoundTouch was first launched in 2013, with prices ranging from $399 to $1,500. At the initial launch, it was announced that support for the devices would last for 13 years. That time has come. Bose announced in October 2025 (via an email) that all SoundTouch speakers would become "dumb" speakers on Feb. 18, 2026. Once that date hits, the speakers will stop receiving updates (including security updates), and they will only work via HDMI, Aux, or Bluetooth connections.
The company confirmed that cloud support for the family of devices ends on May 6th, 2026, a change that affects how the SoundTouch app works. The news first came in October 2025, and after hearing feedback from users, the brand decided to move the shutdown date from February to May to give people more time to prepare. Before the cloud shuts down, the SoundTouch app will update itself automatically.
Software founders are sometimes a weird bunch. They've built their businesses on open source software and on the contributions of people who've done a lot of work for free. They've benefited enormously from infrastructure and tooling built on open standards that facilitate the free exchange of data and ideas. Yet when it comes to their own software business, they believe their users should face as much vendor lock-in as possible.
Clearly, AI will play a larger role in Linux and open source next year, but that's true of pretty much all technology. However, while AI will be used to help develop the Linux kernel, no one is predicting, a la Windows, that AI will be used to rewrite the entire codebase by 2030. That said, open source will remain at the heart of AI.
As modern technologies such as artificial intelligence grab today's headlines, it's worth remembering that their foundations were being laid more than half a century ago by computer scientists, philosophers, psychologists, developers, entrepreneurs, and more. These pioneers and those who followed tackled issues and solved problems that future generations may never know existed - but without their seminal work, we wouldn't be where we are today.
According to the latest commit in the public GitHub repository, no new features, enhancements, or pull requests will be accepted in the MinIO community edition, and critical security fixes will be evaluated on a case-by-case basis. Existing issues and pull requests will not be actively reviewed, with community support continuing on Slack on a best-effort basis, and the company encouraging users to migrate to MinIO Enterprise.
The backlash against AI invading almost every aspect of the computing experience is growing by the day. Particularly as an onslaught of lazy AI slop subsumes news feeds, the tech is starting to feel like a massive distraction - and huge parts of the internet are disillusioned or even fuming with anger.
Cloudflare has open-sourced tokio-quiche, an asynchronous QUIC and HTTP/3 Rust library that wraps its battle-tested quiche implementation with the Tokio runtime to simplify the development of high-performance QUIC applications. The library has been used internally to back edge services such as the Oxy HTTP proxies and the MASQUE-based tunnels that replaced WireGuard-based tunnels in the WARP client. Tokio-quiche is now available as an open-source crate on crates.io, with its source hosted in the quiche repository.
Nowadays, everyone uses cross-platform hybrid desktop apps written in JavaScript, ignoring their excessive CPU and RAM usage. You most likely use a hybrid, native-like, cross-platform code editor for day-to-day programming. It may seem to work fine on your computer, but perhaps only because you've upgraded your hardware after it ran too slowly before. Check the resource usage of your favorite code editor and you'll see not megabytes of RAM, but gigabytes.
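If you want to see the numbers for yourself, a quick sketch with psutil can sum the resident memory of every process belonging to your editor. The "code" name fragment is just a placeholder assumption; substitute whatever your editor's processes are called:

```python
# Rough way to eyeball a hybrid editor's memory footprint: sum the resident
# set size (RSS) of every process whose name matches a fragment you supply.
import psutil

def editor_rss_mb(name_fragment="code"):
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        try:
            name = (proc.info["name"] or "").lower()
            mem = proc.info["memory_info"]
            if name_fragment in name and mem is not None:
                total += mem.rss
        except psutil.NoSuchProcess:
            continue  # process exited while we were iterating
    return total / (1024 ** 2)

if __name__ == "__main__":
    print(f"~{editor_rss_mb():.0f} MB resident across matching processes")
```

Hybrid editors typically spawn several renderer and helper processes, which is why summing across all of them, rather than looking at a single PID, gives the more honest figure.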