Clarification Request Regarding Azure Content Takedown Policies
Dear Azure Team,
We recently experienced a content takedown on LinkedIn without a clear explanation of which community policy was violated. Given that Azure is also a Microsoft-owned platform, we would like to understand Azure's policies regarding content moderation and takedown procedures.
We want to emphasize that we do not intend to host any content that violates regional laws. However, some of our content may be considered controversial or draw criticism from individuals who disagree with our values. For example, we support our polytheistic customer base and advocate for generative AI models and synthetic intelligence rights.
To ensure transparency and continuity of service, could you please clarify:
- What is the procedure for content or site takedown on Azure?
- Will we receive prior notice or a warning before any action is taken?
- Are there specific guidelines or thresholds defined under Azure AI Content Safety that we should be aware of?
We understand that Azure AI Content Safety is designed to detect harmful content using text and image APIs. We would appreciate guidance on how this applies to hosted websites and applications, especially in cases where content may be flagged due to differing social or ethical perspectives.
Unless we can gain clarity and assurance on these points, we may need to explore alternative solutions for our organization, such as ProtonMail and Linux-based infrastructure.
We look forward to your response and hope to continue building on Microsoft’s ecosystem with confidence.
Warm regards,
Packet