HiddenLayer Model Scanner helps developers assess the security of open models in the model catalog
Published May 21, 2024
Microsoft

Co-authored by Hiep Dang, VP of Strategic Technical Alliances at HiddenLayer

 

At Microsoft Build, we announced that Azure AI uses HiddenLayer Model Scanner to scan third-party and open models for emerging threats, such as cybersecurity vulnerabilities, malware, and other signs of tampering, before onboarding them to the model catalog collection curated by Azure AI. The resulting verifications from Model Scanner, provided within each model card, can help streamline AI deployment processes and empower development teams to fine-tune or deploy open models safely and with greater confidence.

 

Fig 1. View the verification from HiddenLayer Model Scanner within the model card

 

Protecting open collaboration and innovation

Open foundation models are popular for their cost and flexibility, particularly among software development companies and other organizations looking to fine-tune foundation models with their own proprietary data. This is an area ripe with innovation, with new open models appearing on the internet every day. The Azure AI model catalog alone curates over a thousand open models, spanning a variety of tasks, sizes, and overall capabilities so that customers can choose the best fit for their use cases. Customers can also import open models if the model catalog does not have what they are looking for.

 

Unfortunately, even when public model repositories have robust security measures in place, using open models found on the internet can present risks. In namesquatting or typosquatting attacks, for instance, an attacker mimics a well-known brand name to confuse developers into downloading models containing malicious code. The consequences of loading a hijacked model can be severe: attackers can hide malware such as ransomware, execute supply chain attacks, leak personally identifiable information (PII) or protected health information (PHI), steal intellectual property, or pose other serious risks. While most developers are familiar with the risks of running code taken directly from public repositories, open models represent an emerging and lesser-known attack vector.
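To see why simply loading a model file can be dangerous, consider that many open model formats (PyTorch checkpoints among them) are serialized with Python's pickle, which can execute arbitrary code during deserialization. The sketch below is a harmless, hypothetical demonstration of that mechanism: the "payload" is just a print call, standing in for anything an attacker might run.

```python
import pickle

# Hypothetical demonstration of why deserializing an untrusted model
# file is dangerous. The payload here is a harmless print call, but an
# attacker could return any callable, such as a shell command.
class MaliciousPayload:
    def __reduce__(self):
        # pickle invokes whatever callable __reduce__ returns when the
        # object is deserialized.
        return (print, ("arbitrary code ran at load time",))

blob = pickle.dumps(MaliciousPayload())

# Merely loading the blob runs the payload -- no inference call needed.
obj = pickle.loads(blob)
```

No method on the model object is ever called; the side effect fires during loading itself, which is exactly why pre-deployment scanning of model artifacts matters.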

 

Introducing a new layer of defense with HiddenLayer Model Scanner

HiddenLayer Model Scanner uses proprietary approaches to analyze AI models and identify cybersecurity risks and threats. The platform recognizes all major machine learning model formats and frameworks and conducts a deep analysis of their structure, layers, tensors, functions, and modules to identify suspicious or malicious code, vulnerabilities, and integrity issues. As a result, development teams can deploy open models knowing they have been thoroughly checked for security risks.

 

Before open models are onboarded to the model catalog collection curated by Azure AI, HiddenLayer Model Scanner performs the following checks:

  • Malware Analysis: Scans AI models for embedded malicious code that could serve as an infection vector and launchpad for malware.
  • Vulnerability Assessment: Scans for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models.
  • Backdoor Detection: Scans model functionality for evidence of supply chain attacks and backdoors such as arbitrary code execution and network calls.
  • Model Integrity: Analyzes an AI model’s layers, components, and tensors to detect tampering or corruption.

 

No action is required by customers to benefit from these capabilities. For each model scanned, the model card will provide a verification from HiddenLayer. Customers’ fine-tuned models, models imported directly from Hugging Face, and proprietary models offered as-a-Service, such as Azure OpenAI Service models, are not scanned by HiddenLayer today.

 

 

Get started with these resources

Visit the Azure AI model catalog in Azure AI Studio: ai.azure.com/explore/models


Learn more about this announcement and services offered by HiddenLayer:

 

Learn more best practices to develop and deploy AI safely and responsibly with Azure AI:
