You cannot fully turn off Azure AI Foundry's content moderation for LLM endpoints by default; Microsoft enforces baseline filters for safety. You can create custom content filter configurations and lower the severity thresholds, but certain categories still apply "Annotate and Block."
To switch to "Annotate Only" or disable filtering for specific harm categories, your organization must request an exception through the Azure OpenAI Limited Access Review: Modified Content Filters form. Approval is typically granted only to managed enterprise customers, and Microsoft generally expects the request to come through your Azure account representative. In your request, include context about your pharmaceutical use case and concrete examples of the false positives to justify why you need "Annotate Only" mode. Once approved, you can configure content filters at a granular level and set them to "Annotate Only," or disable blocking entirely for certain categories, either through the Azure AI Foundry portal or by applying a custom filter configuration at runtime using the x-policy-id header.
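As a rough illustration of the runtime approach described above, the sketch below builds a chat-completions request that carries the x-policy-id header. The resource name, deployment name, API version, and policy name are all placeholders you would replace with your own values after approval; the sketch only constructs the request rather than sending it.

```python
import json
import urllib.request

# Hypothetical values -- substitute your own resource, deployment,
# API version, key, and the name of your approved filter configuration.
ENDPOINT = "https://YOUR-RESOURCE.openai.azure.com"
DEPLOYMENT = "YOUR-DEPLOYMENT"
API_VERSION = "2024-06-01"
API_KEY = "YOUR-API-KEY"
FILTER_POLICY = "annotate-only-policy"  # hypothetical custom filter name

url = (
    f"{ENDPOINT}/openai/deployments/{DEPLOYMENT}"
    f"/chat/completions?api-version={API_VERSION}"
)
headers = {
    "api-key": API_KEY,
    "Content-Type": "application/json",
    # Ask the service to apply the custom content filter configuration
    # to this request, per the exception process described above.
    "x-policy-id": FILTER_POLICY,
}
body = {
    "messages": [
        {
            "role": "user",
            "content": "Describe standard subcutaneous injection technique.",
        }
    ]
}

# Build (but do not send) the request so the sketch stays self-contained.
req = urllib.request.Request(
    url, data=json.dumps(body).encode("utf-8"), headers=headers, method="POST"
)
print(req.get_header("X-policy-id"))
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would then return annotations rather than a blocked response for the configured categories, assuming the exception has been approved for your subscription.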
For more information:
- Limited access for Azure OpenAI Service
- Limited Access features for Azure AI services
How can we turn off the content moderation policy for LLMs deployed as endpoints in Azure AI Foundry?
Hi,
Our company provides AI services to pharmaceutical clients. We recently migrated from the OpenAI API to serverless endpoints on Azure AI Foundry, and since the migration some prompts that previously had no issues seem to trigger Azure's content policy during inference.
I created a custom filter with the lowest moderation levels for each category, but the issue persists and the content policy still triggers (mentioning injections in a prompt seems to trip the "self-harm" filter). How do I set the filters to "Annotate Only" instead of "Annotate and Block"?
As a pharma company we'd like to file for an exception to this policy, since we get a lot of false triggers. We would appreciate any guidance on how to make this happen.
Thank you
Amo
Azure AI Content Safety
-
Pavankumar Purilla 10,425 Reputation points Microsoft External Staff Moderator
2025-08-05