Hello Varma,
Thank you for reaching out and for sharing the detailed screenshots from the Azure portal along with the kubectl get nodes -o wide output. I understand you're seeing a difference between the CIDR range shown in your AKS networking configuration and the actual INTERNAL-IP addresses assigned to your nodes (e.g., 10.224.0.4, 10.224.0.5, etc.).
This behavior is completely expected and by design in Azure Kubernetes Service (AKS) when using Azure CNI. There is no issue with your cluster.
- The CIDR range shown in the AKS Networking section of the portal is the Kubernetes Service CIDR (or the Pod CIDR, if you are using Overlay mode). That range is used only for:
- Kubernetes Services (ClusterIP type).
- Pods (in Azure CNI Overlay mode).
- The node IPs you see in kubectl get nodes -o wide come from the Azure Virtual Network subnet that is explicitly assigned to your node pool. This is the actual subnet range configured for the underlying Azure VMs (or Virtual Machine Scale Set instances) that back your nodes.
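To make the distinction concrete, here is a minimal Python sketch using only the standard ipaddress module. The ranges are assumed examples (10.0.0.0/16 is the common AKS default Service CIDR; 10.224.0.0/16 matches the node IPs in your output); substitute the actual values from your cluster:

```python
import ipaddress

# Assumed example ranges; replace with the values from your own cluster.
service_cidr = ipaddress.ip_network("10.0.0.0/16")    # Service CIDR shown on the Networking blade
node_subnet  = ipaddress.ip_network("10.224.0.0/16")  # VNet subnet assigned to the node pool

# Node INTERNAL-IPs are allocated from the node subnet, not from the Service CIDR.
node_ip = ipaddress.ip_address("10.224.0.4")
print(node_ip in node_subnet)              # True  - the node IP belongs to the VNet subnet
print(node_ip in service_cidr)             # False - it is not part of the Service CIDR
print(service_cidr.overlaps(node_subnet))  # False - the two ranges must be disjoint
```

The last check matters in practice: AKS requires the Service CIDR not to overlap with any subnet used by the cluster.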
To confirm the node pool subnet CIDR matches your node IPs, please follow these steps:
- In the Azure portal, navigate to your AKS cluster → Node pools → select the node pool in question.
- Note the Subnet name listed under the networking details.
- Go to Virtual networks → select your VNet → Subnets → open the subnet you noted above.
- Verify that the Address range of this subnet exactly matches the range from which your node INTERNAL-IPs are assigned (e.g., 10.224.0.0/16).
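If you prefer to verify this programmatically, the portal steps above can be sketched with the standard ipaddress module. The subnet range and node IPs below are assumed examples; substitute the Address range from step 4 and the INTERNAL-IPs from kubectl get nodes -o wide:

```python
import ipaddress

# Assumed example values from the steps above.
node_pool_subnet = ipaddress.ip_network("10.224.0.0/16")        # Address range from the VNet Subnets blade
node_internal_ips = ["10.224.0.4", "10.224.0.5", "10.224.0.6"]  # INTERNAL-IPs from kubectl

# Check that every node INTERNAL-IP falls inside the node pool's subnet range.
for ip in node_internal_ips:
    assigned = ipaddress.ip_address(ip) in node_pool_subnet
    print(f"{ip}: {'in' if assigned else 'NOT in'} {node_pool_subnet}")
```

If every line prints "in", the node IPs and the subnet range match and the cluster is configured as expected.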
If you are using Azure CNI Overlay (the recommended and most common mode today), pods receive their IPs from a separate private overlay CIDR, not from your VNet subnet at all. This conserves VNet IP address space, since only the nodes consume addresses from the subnet.
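As a quick sanity check for Overlay mode, you can confirm that the overlay pod CIDR is disjoint from your node subnet. The sketch below assumes the common default overlay pod CIDR of 10.244.0.0/16; read your cluster's actual value from the network profile (for example with az aks show) before relying on it:

```python
import ipaddress

# Assumed example ranges; read the real ones from your cluster's network profile.
pod_cidr    = ipaddress.ip_network("10.244.0.0/16")  # private overlay range used only by pods
node_subnet = ipaddress.ip_network("10.224.0.0/16")  # VNet subnet used only by the nodes

# In Overlay mode, pod IPs never come from the VNet subnet.
print(pod_cidr.overlaps(node_subnet))  # False for a valid configuration
```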
Thanks,
Manish.