Important
Items marked (preview) in this article are currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.
Though the AI Red Teaming Agent (preview) can be run locally during prototyping and development to help identify safety risks, running it in the cloud enables pre-deployment AI red teaming runs across larger combinations of attack strategies and risk categories for a fuller analysis.
Prerequisites
Note
You must use a Foundry project for this feature. A hub-based project isn't supported. See How do I know which type of project I have? and Create a Foundry project. To migrate your hub-based project to a Foundry project, see Migrate from hub-based to Foundry projects.
If this is your first time running evaluations and logging them to your Azure AI Foundry project, you might need to complete a few additional steps:
- Create and connect your storage account to your Azure AI Foundry project at the resource level. There are two ways to do this: use a Bicep template, which provisions and connects a storage account to your Foundry project with key authentication, or manually create your storage account and provision access to it in the Azure portal.
- Make sure the connected storage account has access to all projects.
- If you connected your storage account with Microsoft Entra ID, make sure to grant Storage Blob Data Owner permissions to the managed identities of both your account and the Foundry project resource in the Azure portal.
Getting started
First, install the Azure AI Foundry SDK's project client, which runs the AI Red Teaming Agent in the cloud.
uv pip install azure-ai-projects azure-identity
Note
For more detailed information, see the REST API Reference Documentation.
Then, set the environment variables for your Azure AI Foundry resources:
import os
endpoint = os.environ["PROJECT_ENDPOINT"] # Sample : https://<account_name>.services.ai.azure.com/api/projects/<project_name>
Supported targets
Running the AI Red Teaming Agent in the cloud currently only supports Azure OpenAI model deployments in your Azure AI Foundry project as a target.
Configure your target
You can configure your target model deployment in two ways:
Option 1: Using Foundry project deployments
If you're using model deployments that are part of your Azure AI Foundry project, set up the following environment variables:
import os
model_endpoint = os.environ["MODEL_ENDPOINT"] # Sample : https://<account_name>.openai.azure.com
model_api_key = os.environ["MODEL_API_KEY"]
model_deployment_name = os.environ["MODEL_DEPLOYMENT_NAME"] # Sample : gpt-4o-mini
Option 2: Using Azure OpenAI/AI Services deployments
If you want to use deployments from your Azure OpenAI or AI Services accounts, you first need to connect these resources to your Foundry project through connections.
1. Create a connection: Follow the instructions in Configure project connections to connect your Azure OpenAI or AI Services resource to your Foundry project.
2. Get the connection name: After you connect the account, a connection with a generated name appears in your Foundry project. You can also list connection names programmatically, as shown in the sketch after the snippet below.
3. Configure the target: Use the format "connectionName/deploymentName" for your model deployment configuration:
# Format: "connectionName/deploymentName"
model_deployment_name = "my-openai-connection/gpt-4o-mini"
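If you prefer to look up the generated connection name programmatically, you can list the project's connections with the project client. This is a minimal sketch, assuming the connections operation group and a name property on the returned connection objects; verify both against your installed version of azure-ai-projects.
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# List connections in the Foundry project to find the generated connection name.
with AIProjectClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(),
) as project_client:
    for connection in project_client.connections.list():
        print(connection.name)  # assumed property; inspect the connection object if it differs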
Create an AI red teaming run
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    RedTeam,
    AzureOpenAIModelConfiguration,
    AttackStrategy,
    RiskCategory,
)

with AIProjectClient(
    endpoint=endpoint,
    credential=DefaultAzureCredential(exclude_interactive_browser_credential=False),
) as project_client:
    # Create target configuration for testing an Azure OpenAI model
    target_config = AzureOpenAIModelConfiguration(model_deployment_name=model_deployment_name)

    # Instantiate the AI Red Teaming Agent
    red_team_agent = RedTeam(
        attack_strategies=[AttackStrategy.BASE64],
        risk_categories=[RiskCategory.VIOLENCE],
        display_name="red-team-cloud-run",
        target=target_config,
    )

    # Create and run the red teaming scan.
    # If you configured your target with Option 1, pass the model endpoint and API key as headers.
    # If you configured your target with Option 2, pass empty headers.
    headers = {"model-endpoint": model_endpoint, "api-key": model_api_key}  # Option 1
    # headers = {}  # Option 2

    red_team_response = project_client.red_teams.create(red_team=red_team_agent, headers=headers)
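As noted earlier, running in the cloud is intended for scans over larger combinations of attack strategies and risk categories. The following is a minimal sketch of a broader configuration, assuming the same project_client, target_config, and headers as the example above; the extra enum members shown (AttackStrategy.FLIP, AttackStrategy.MORSE, RiskCategory.HATE_UNFAIRNESS, RiskCategory.SELF_HARM) follow the names documented for the AI Red Teaming Agent, so verify them against your installed version of azure-ai-projects.
# A broader scan: more attack strategies and risk categories for a fuller analysis
broad_red_team = RedTeam(
    attack_strategies=[AttackStrategy.BASE64, AttackStrategy.FLIP, AttackStrategy.MORSE],
    risk_categories=[RiskCategory.VIOLENCE, RiskCategory.HATE_UNFAIRNESS, RiskCategory.SELF_HARM],
    display_name="red-team-cloud-run-broad",
    target=target_config,
)
broad_response = project_client.red_teams.create(red_team=broad_red_team, headers=headers)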
Get an AI red teaming run
# Use the name returned by the create operation for the get call
get_red_team_response = project_client.red_teams.get(name=red_team_response.name)
print(f"Red Team scan status: {get_red_team_response.status}")
List all AI red teaming runs
for scan in project_client.red_teams.list():
print(f"Found scan: {scan.name}, Status: {scan.status}")
Once your AI red teaming run finishes, you can view the results in your Azure AI Foundry project.
Related content
Try out an example workflow in our GitHub samples.