
"In the past three months, several state-of-the-art AI systems have been released with open weights, meaning their core parameters can be downloaded and customized by anyone. Examples include reasoning models such as Kimi-K2-Instruct from technology company Moonshot AI in Beijing, GLM-4.5 by Z.ai, also in Beijing, and gpt-oss by the California firm OpenAI in San Francisco. Early evaluations suggest that these are the most advanced open-weight systems so far, approaching the performance of today's leading closed models."
"Open-weight systems are the lifeblood of research and innovation in AI. They improve transparency, make large-scale testing easier and encourage diversity and competition in the marketplace. But they also pose serious risks. Once released, harmful capabilities can spread quickly and models cannot be withdrawn. For example, synthetic child sexual-abuse material is most commonly generated using open-weight models. Many copies of these models are shared online, often altered by users to strip away safety features, making them easier to misuse."
"In the case of closed AI systems, developers can rely on an established safety toolkit. They can add safeguards such as content filters, control who accesses the tool and enforce acceptable-use policies. Even when users are allowed to adapt a closed model using an application programming interface (API) and custom training data, the developer can still monitor and regulate the process."
Several state-of-the-art AI systems with open weights have been released recently, allowing anyone to download and customize core model parameters. Examples include Kimi-K2-Instruct, GLM-4.5 and gpt-oss, which early evaluations suggest are approaching the performance of leading closed models. Open-weight models enable transparency, large-scale testing, diversity and market competition. They also enable the rapid, uncontrollable spread of harmful capabilities, such as generating synthetic child sexual-abuse material, and copies are frequently shared and modified to strip out safety features. Research at the UK AI Security Institute (AISI) indicates that a healthy open-weight ecosystem requires rigorous scientific methods for monitoring and mitigating these harms. Closed models, by contrast, retain more centralized safeguards.
Read at Nature