Mistral AI surfs vibe coding tailwinds with new coding models | TechCrunch
Briefly

"Mistral AI is betting on the added value of context awareness, which is particularly relevant in business use cases. Similar to its AI assistant, Le Chat, which can remember previous conversations with users and use that context to guide its answers, Vibe CLI features persistent history, and can also scan file structures and Git statuses to build context to inform its behavior."
"This focus on production-grade workflows also explains why Devstral 2 is relatively demanding, requiring at least four H100 GPUs or equivalent for deployment, and weighing 123 billion parameters. However, the model is also available in a smaller size with Devstral Small, which, at 24 billion parameters, makes it deployable locally on consumer hardware. The models differ in their open-source licensing - Devstral 2 ships under a modified MIT license, while Devstral Small uses Apache 2.0."
Mistral launched Devstral 2, a 123 billion-parameter coding model that requires at least four H100 GPUs or equivalent to deploy, alongside the 24 billion-parameter Devstral Small, which can run locally on consumer hardware. Devstral 2 is released under a modified MIT license, while Devstral Small uses Apache 2.0. Mistral also introduced Mistral Vibe, a command-line interface for natural-language code automation that can manipulate files, search code, work with version control, and execute commands. The Vibe CLI maintains persistent history and scans file structures and Git statuses to build context that informs its behavior. Initial API access to Devstral 2 is free, with paid pricing planned.
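As a rough illustration of what the free API access could look like in practice, below is a minimal sketch that sends a coding prompt to Mistral's chat completions endpoint. The model identifier "devstral-2" and the exact payload shape are assumptions for illustration only, not details confirmed by the article; consult Mistral's API documentation for the published model name and pricing.

```python
# Minimal sketch: calling a Devstral model via Mistral's chat completions API.
# Assumptions (not from the article): the model id "devstral-2" is a placeholder,
# and the payload shape follows Mistral's standard chat completions format.
import os
import requests

API_URL = "https://api.mistral.ai/v1/chat/completions"
API_KEY = os.environ["MISTRAL_API_KEY"]  # your Mistral API key

payload = {
    "model": "devstral-2",  # placeholder; substitute the published model name
    "messages": [
        {
            "role": "user",
            "content": "Write a Python function that parses `git status --porcelain` output.",
        }
    ],
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()

# Print the model's reply from the first choice in the response.
print(resp.json()["choices"][0]["message"]["content"])
```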
Read at TechCrunch