
"But tiny 30-person startup Arcee AI disagrees. The company just released a truly and permanently open (Apache license) general-purpose, foundation model called Trinity, and Arcee claims that at 400B parameters, it is among the largest open-source foundation models ever trained and released by a U.S. company. Arcee says Trinity compares to Meta's Llama 4 Maverick 400B, and Z.ai GLM-4.5, a high-performing open-source model from China's Tsinghua University, according to benchmark tests conducted using base models (very little post training)."
"Like other state-of-the-art (SOTA) models, Trinity is geared for coding and multi-step processes like agents. Still, despite its size, it's not a true SOTA competitor yet because it currently supports only text. More modes are in the works - a vision model is currently in development, and a speech-to-text version is on the roadmap, CTO Lucas Atkins told TechCrunch (pictured above, on the left). In comparison, Meta's Llama 4 Maverick is already multi-modal, supporting text and images."
"But before adding more AI modes to its roster, Arcee says, it wanted a base LLM that would impress its main target customers: developers and academics. The team particularly wants to woo U.S. companies of all sizes away from choosing open models from China. "Ultimately, the winners of this game, and the only way to really win over the usage, is to have the best open-weight model," Atkins said. "To win the hearts and minds of developers, you have to give them the best.""
Arcee AI, a 30-person startup, released Trinity, a permanently open (Apache-licensed) general-purpose foundation model with 400 billion parameters. Arcee positions Trinity among the largest open-source foundation models trained and released by a U.S. company and reports benchmark comparisons with Meta's Llama 4 Maverick 400B and Z.ai GLM-4.5 using base models. Trinity focuses on coding and multi-step agent processes but currently supports only text; vision and speech-to-text modes are planned. Arcee targets developers, academics, and U.S. companies seeking non-Chinese open models. Company benchmarks show Trinity largely holding its own, and in some coding, math, and reasoning tests slightly outperforming Llama while post-training continues.