Hello Juuso!
Thank you for posting on Microsoft Learn.
You’re hitting a quota issue, not a "model not supported" issue.
gpt-5-mini is listed as available for Data Zone Standard deployments in Sweden Central, but the DataZoneStandard quota pool is separate, and your subscription currently has 0 quota allocated there. You need Microsoft to grant quota for that model and deployment type in that region.
https://learn.microsoft.com/en-us/azure/ai-foundry/openai/how-to/reasoning
Try a tiny deployment first: in the deploy dialog, pick Data zone standard as the deployment type and Sweden Central as the region, and set very low rates (for example 10 RPM / 10K TPM). Low rates sometimes fit if you have a small default allocation.
If the portal still says "Insufficient quota…", proceed to step 2. https://learn.microsoft.com/en-us/azure/ai-foundry/openai/quotas-limits
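If you prefer the CLI, the same minimal deployment can be attempted with the Azure CLI. This is a sketch only: the resource and resource-group names are placeholders, and depending on your CLI version you may also need to pass a `--model-version`:

```shell
# Attempt a minimal gpt-5-mini deployment in the Data Zone Standard pool.
# <my-foundry-resource> and <my-rg> are placeholders for your own names.
az cognitiveservices account deployment create \
  --name <my-foundry-resource> \
  --resource-group <my-rg> \
  --deployment-name gpt5mini-test \
  --model-name gpt-5-mini \
  --model-format OpenAI \
  --sku-name DataZoneStandard \
  --sku-capacity 1
```

If this fails with an insufficient-quota error, that confirms it is the quota pool, not model availability, blocking you.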
You can request quota for this exact pool from the deployment pane (or from your AI Foundry resource): file a quota increase request for gpt-5-mini / Data zone standard / Sweden Central under your subscription.
Mention your required RPM/TPM and intended workload. Approval is granted per subscription, per region, and per deployment type. https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/quota
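Before (or after) filing the request, you can check what the pool currently shows for your subscription in that region with the Azure CLI. A sketch, assuming you are logged in to the affected subscription:

```shell
# List model quota usage for Sweden Central. Look for the gpt-5-mini
# DataZoneStandard row: "Limit" is your allocated quota (currently 0 for you),
# and it should change once the increase request is approved.
az cognitiveservices usage list --location swedencentral --output table
```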
If you need to go live before approval, you can use Global Standard for gpt-5-mini (higher default quota, but inferencing may run in any Azure AI location, although data at rest stays in your Azure geography), or use Data zone standard in East US 2 (also supported for gpt-5-mini). Pick the one that fits your data-residency needs.
Why the contradiction? The docs describe availability; the portal enforces quota. Availability means the model is enabled for that region and deployment type, but you still need quota assigned to your subscription's DataZoneStandard pool there. Until that’s granted, you’ll see "no locations with enough quota".