Endor Labs has introduced AI Model Discovery to enhance its open-source evaluation platform. The tool lets security teams identify the open-source AI models used in their applications, analyze potential risks, and enforce usage policies. It automates the detection of policy violations and can block high-risk models from reaching production. Currently, it supports only models from Hugging Face used in Python applications. Despite these limitations, it addresses a significant visibility gap for security teams by scoring models and summarizing assessments, helping companies balance innovation with risk management.
As companies develop internal AI applications, they look for ways to reduce costs, and using a pre-trained model is one of them.
AI Model Discovery finds those models and evaluates them across 50 dimensions, enabling security teams to create tailored policies.
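To make the visibility gap concrete, the sketch below shows one naive way such usage could be surfaced: scanning a Python codebase for literal Hugging Face `from_pretrained()` calls. This is not Endor Labs' implementation, and the function name and approach are assumptions for illustration only; a real discovery tool would also inspect dependencies, configuration, and model artifacts.

```python
# Illustrative only: a naive scan for Hugging Face model references in a
# Python codebase, flagging string literals passed to from_pretrained().
import ast
import pathlib


def find_pretrained_calls(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line, model_id) for each literal from_pretrained() call."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if (
                isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "from_pretrained"
                and node.args
                and isinstance(node.args[0], ast.Constant)
                and isinstance(node.args[0].value, str)
            ):
                hits.append((str(path), node.lineno, node.args[0].value))
    return hits


if __name__ == "__main__":
    for file, line, model_id in find_pretrained_calls("."):
        print(f"{file}:{line} uses Hugging Face model '{model_id}'")
```

Even this simple inventory step illustrates why automated discovery matters: without it, security teams have no reliable list of which pre-trained models are actually in use, let alone their risk profiles.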