Co-authored by Hiep Dang, VP of Strategic Technical Alliances at HiddenLayer
At Microsoft Build, we announced that Azure AI uses HiddenLayer Model Scanner to scan third-party and open models for emerging threats, such as cybersecurity vulnerabilities, malware, and other signs of tampering, before onboarding them to the model catalog collection curated by Azure AI. The resulting verifications from Model Scanner, provided within each model card, can help streamline AI deployment processes and empower development teams to fine-tune or deploy open models safely and with greater confidence.
Fig 1. View the verification from HiddenLayer Model Scanner within the model card
Open foundation models are popular for their cost and flexibility, particularly among software development companies and other organizations looking to fine-tune foundation models with their own proprietary data. This is an area ripe with innovation, with new open models appearing on the internet every day. The Azure AI model catalog alone curates over a thousand open models, spanning a variety of tasks, sizes, and overall capabilities so that customers can choose the best fit for their use cases. Customers can also import open models if the model catalog does not have what they are looking for.
Unfortunately, even when public model repositories have robust security measures in place, using open models found on the internet can present risks. In namesquatting or typosquatting attacks, for example, an attacker might mimic a well-known brand name to confuse developers and trick them into downloading models containing malicious code. The consequences of loading a hijacked model can be severe: attackers can hide malware such as ransomware, execute supply chain attacks, leak personally identifiable information (PII) or protected health information (PHI), steal intellectual property, or cause other serious harm. While most developers are familiar with the risks of running code taken directly from public repositories, open models represent an emerging and lesser-known attack vector.
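To see why simply loading a model can be dangerous, consider that many common model formats (for example, classic PyTorch `.pt` checkpoints) are built on Python's pickle protocol, which lets a serialized object invoke an arbitrary callable at load time. The sketch below is purely illustrative, not an example of any real attack or of HiddenLayer's tooling; the `HijackedModel` class is hypothetical:

```python
import pickle

# Illustrative only: shows why deserializing an untrusted, pickle-based
# model file is risky. The pickle protocol lets an object run an arbitrary
# callable at load time via __reduce__.
class HijackedModel:
    def __reduce__(self):
        # A real attacker would invoke something far nastier than eval here,
        # e.g. os.system with a malicious command.
        return (eval, ("6 * 7",))

blob = pickle.dumps(HijackedModel())
result = pickle.loads(blob)  # executes eval("6 * 7") instead of restoring data
print(result)  # -> 42: code ran merely by *loading* the "model"
```

This is why mitigations such as safer serialization formats (e.g. safetensors) and restricted loaders (e.g. `torch.load(..., weights_only=True)`) exist, and why scanning third-party models before use matters.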
HiddenLayer Model Scanner uses proprietary approaches to analyze AI models and identify cybersecurity risks and threats. The platform recognizes all major machine learning model formats and frameworks and conducts a deep analysis of their structure, layers, tensors, functions, and modules to identify suspicious or malicious code, vulnerabilities, and integrity issues. As a result, development teams can deploy open models knowing they have been thoroughly checked for security risks.
Before open models are onboarded to the model catalog collection curated by Azure AI, HiddenLayer Model Scanner checks them for malicious code, known cybersecurity vulnerabilities, and integrity issues that may indicate tampering.
No action is required by customers to benefit from these capabilities. For each model scanned, the model card will provide a verification from HiddenLayer. Customers’ fine-tuned models, models imported directly from Hugging Face, and proprietary models offered as-a-Service, such as Azure OpenAI Service models, are not scanned by HiddenLayer today.
Visit the Azure AI model catalog in Azure AI Studio: ai.azure.com/explore/models
Learn more about this announcement and services offered by HiddenLayer:
Learn more best practices to develop and deploy AI safely and responsibly with Azure AI: