Control Model Catalog deployments with Azure RBAC and Azure Policy.
Published May 21, 2024

Introduction

Azure Machine Learning (AML) enables organizations to implement powerful AI capabilities by offering access to state-of-the-art hardware and software resources.  The state-of-the-art software tools offered by AML include Azure Container for PyTorch (ACPT) images that simplify AI training and fine-tuning, integration with popular model formats (e.g., MLflow), and a fully curated Model Catalog, which exposes a large collection of pretrained models.

 

While the advanced software tooling offered by AML is impressive, some organizations wish to implement their own Model Catalog curation process to further control the types of pretrained models available to their internal developers.  In this article, we describe some of the approaches available in Microsoft Azure to implement bespoke model curation processes within Azure subscriptions and resource groups.

 

Model Catalog Overview

 

The Azure Model Catalog is a curated collection of pretrained open-source (OSS) and proprietary machine learning and AI models.  It is accessible via the AML workspace and allows one-click, scalable Managed Endpoint deployments of many popular models:

antonslutsky_0-1716297218459.png

 

As mentioned earlier, real-time and batch Managed Endpoint deployments can often be accomplished from the Azure ML Model Catalog with a few clicks:

antonslutsky_0-1716297427316.png

 

While this seamless deployment capability offers unparalleled access to advanced AI models, some organizations might want to limit this access in order to:

  1. Control costs
  2. Reduce the likelihood of misuse
  3. Limit AML use to approved industry use cases only
  4. Enforce internal compliance rules

Model Catalog Security Model

 

Azure RBAC

 

Generally, access to Model Catalog deployments may be managed through Azure role-based access control (Azure RBAC), which allows for role-level access restrictions on various AML resources.  For example, by assigning the Reader role or adding users to custom security groups, it is possible to restrict the ability to create compute resources and to block model deployments for specific users or groups.
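As an illustration, the following Python sketch (using the azure-identity and azure-mgmt-authorization packages) shows how an administrator might script a Reader role assignment at the workspace scope.  The IDs below are placeholders, and the exact parameter shape may vary across SDK versions:

import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "<subscription-id>"        # placeholder
resource_group = "<resource-group>"          # placeholder
workspace_name = "<aml-workspace-name>"      # placeholder
principal_object_id = "<aad-object-id>"      # user or security group being restricted

# Built-in Reader role (well-known definition GUID acdd72a7-3385-48ef-bd42-f606fba81ae7).
reader_role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization/"
    "roleDefinitions/acdd72a7-3385-48ef-bd42-f606fba81ae7"
)

# Scope the assignment to a single AML workspace.
workspace_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}/"
    f"providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
)

auth_client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Role assignment names are GUIDs; the dict below mirrors the SDK's create parameters.
auth_client.role_assignments.create(
    scope=workspace_scope,
    role_assignment_name=str(uuid.uuid4()),
    parameters={
        "role_definition_id": reader_role_definition_id,
        "principal_id": principal_object_id,
    },
)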

 

Azure Policy 

 

While the RBAC model allows for workspace-level control of resource utilization, it offers little help in preventing specific OSS and proprietary pretrained models from being deployed while allowing other models to be instantiated.  To help organizations implement fine-grained control of Model Catalog assets, Azure Policy may be used in conjunction with RBAC.

 

Azure Policy offers graphical dashboards to visualize resource compliance and provides command-line (CLI) and SDK interfaces to create, assign, and manage granular policies:

antonslutsky_0-1716299505515.png
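For example, the following minimal Python sketch (using the azure-identity and azure-mgmt-resource packages) connects a Policy client and lists built-in definitions whose display name mentions Machine Learning, which is one way to locate the registry-restriction policy discussed below.  The filter string is illustrative only:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # placeholder

policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Built-in definitions are enumerated with list_built_in(); filter client-side by display name.
for definition in policy_client.policy_definitions.list_built_in():
    if "Machine Learning" in (definition.display_name or ""):
        print(definition.name, "-", definition.display_name)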

Many built-in policy definitions are available that may be assigned to restrict resource access in Azure Machine Learning.  For example, the "[Preview]: Azure Machine Learning Model Registry Deployments are restricted except for the allowed Registry" policy, if assigned, allows access only to specific Model Catalog registries and offers a way to restrict some models within the allowed registries:

 

antonslutsky_2-1716300938096.png

The specific Model Catalog registry name and the restricted model names may be specified at assignment time through the Azure Policy graphical interface, as well as via the Azure Policy CLI and SDK:

antonslutsky_0-1716301162271.png

In the example below, the assignment allows Model Catalog models to be deployed only from the azureml registry and also blocks the tiiuae-falcon-7b model from being deployed:

antonslutsky_2-1716301469696.png

This policy may be assigned at the subscription level as well as to a particular resource group, which allows fine-grained control for organizations with diverse data science needs:

antonslutsky_3-1716301711293.png
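For illustration, the following hedged Python sketch assigns the built-in registry-restriction policy through the SDK with the same intent as the portal example above.  The definition GUID and the parameter names (allowedRegistryName, restrictedModels) are placeholders and should be confirmed against the actual built-in definition before use:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # placeholder
resource_group = "<resource-group>"    # placeholder

policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

# The assignment scope can be the whole subscription or, as here, a single resource group.
assignment_scope = f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"

policy_client.policy_assignments.create(
    scope=assignment_scope,
    policy_assignment_name="restrict-model-catalog-registries",
    parameters={
        # Placeholder: look up the real built-in definition GUID for the registry policy.
        "policy_definition_id": "/providers/Microsoft.Authorization/policyDefinitions/<built-in-definition-guid>",
        "parameters": {
            # Hypothetical parameter names; confirm them against the built-in definition.
            "allowedRegistryName": {"value": "azureml"},
            "restrictedModels": {"value": ["tiiuae-falcon-7b"]},
        },
    },
)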

 

Custom Azure Policy 

 

While the built-in Azure policies allow for fine-grained control of Model Catalog registries, some organizations may need to implement an "allowed list" policy to further control access to various Model Catalog OSS and proprietary models.  The following steps show an example definition and assignment for a model allowed-list policy using the Azure Policy user interface; SDK sketches of the same rule and assignment appear alongside Steps 5 and 6.

 

Step 1: Create new policy definition

antonslutsky_0-1716303695990.png

 

Step 2: Select subscription and provide the custom policy name

antonslutsky_0-1716303839329.png

Step 3: Specify the custom policy Mode as "Microsoft.MachineLearningServices.v2.Data"

antonslutsky_0-1716303957052.png

 

Step 4: Define the custom policy parameters

antonslutsky_0-1716304058452.png

 

Step 5: Define the custom policy rule using the Not-In operator

antonslutsky_0-1716304169781.png
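To make the rule from Step 5 concrete, the following sketch expresses a possible allowed-list definition as a Python dictionary ready to be sent through the SDK.  The alias used in the field condition and the parameter schema are assumptions for illustration; replace them with the alias shown in your tenant's policy rule editor:

# Data-plane definition for an allowed list of Model Catalog models (sketch only).
allowed_models_definition = {
    "display_name": "Allowed Model Catalog models",
    "policy_type": "Custom",
    # Mode from Step 3, required for Model Catalog deployment evaluation.
    "mode": "Microsoft.MachineLearningServices.v2.Data",
    "parameters": {
        # Assignment-time list of model names that may be deployed.
        "allowedModels": {"type": "Array"},
    },
    "policy_rule": {
        "if": {
            # Hypothetical alias for the model asset being deployed; replace it with the
            # alias exposed for Model Catalog deployments in your tenant.
            "field": "Microsoft.MachineLearningServices.v2.Data/modelAssetName",
            "notIn": "[parameters('allowedModels')]",
        },
        "then": {"effect": "deny"},
    },
}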

 

Step 6: Save and assign the custom policy to the appropriate scope

antonslutsky_0-1716304257211.png
antonslutsky_1-1716304308011.png
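Finally, as an SDK alternative to Step 6, the sketch below publishes the custom definition from the previous sketch and assigns it to a resource group scope.  All names and the allowed-model list are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient

subscription_id = "<subscription-id>"  # placeholder
resource_group = "<resource-group>"    # placeholder

policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

# Publish the custom definition (the allowed_models_definition dict sketched after Step 5).
definition = policy_client.policy_definitions.create_or_update(
    policy_definition_name="allowed-model-catalog-models",
    parameters=allowed_models_definition,
)

# Assign it to a resource group, supplying the allowed list at assignment time.
policy_client.policy_assignments.create(
    scope=f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}",
    policy_assignment_name="allowed-model-catalog-models-assignment",
    parameters={
        "policy_definition_id": definition.id,
        "parameters": {
            "allowedModels": {"value": ["tiiuae-falcon-7b", "<another-approved-model>"]},
        },
    },
)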

 

 

Conclusion 

 

This brief blog post outlined some of the advanced access control capabilities available in Azure Machine Learning.  By combining Azure RBAC and Azure Policy definitions, platform administrators gain fine-grained control over the types of machine learning and AI models available to users.  Custom Azure Policy definitions are a powerful tool to further regulate access to the various resources available in AML.