A host of leading open-weight AI models contain serious security vulnerabilities, according to researchers at Cisco. In a new report, researchers found that these models, which are publicly available and can be downloaded and modified by users to suit individual needs, displayed "profound susceptibility to adversarial manipulation" techniques. Cisco evaluated models from a range of firms, including Alibaba (Qwen3-32B), DeepSeek (v3.1), Google (Gemma 3-1B-IT), Meta (Llama 3.3-70B-Instruct), Microsoft (Phi-4), OpenAI (GPT-OSS-20b), and Mistral (Large-2).
OpenAI has released two open-weight language models designed to operate efficiently on laptops and personal computers. These models are intended to provide advanced reasoning capabilities while allowing developers greater flexibility through local deployment and fine-tuning. Unlike proprietary models, open-weight models provide public access to trained parameters, enabling developers to adapt the models for specific tasks without access to the original training datasets. This approach improves control over AI applications and supports secure, local usage in environments with sensitive data.
DeepSeek's powerful artificial intelligence (AI) model R1, whose January release sent the US stock market plummeting, did not owe its success to being trained on the output of its rivals, researchers at the Chinese firm have said. The statement came in documents released alongside a peer-reviewed version of the R1 model, published today in Nature.