Azure Kubernetes Service (AKS) normally provisions one Standard Load Balancer (SLB) for all LoadBalancer Services in a cluster. Because each node NIC is limited to 300 inbound load-balancing rules and each SLB supports at most 8 private-link services, large clusters or port-heavy workloads can quickly exhaust these limits.
The multiple SLB preview removes that bottleneck by letting you create several SLBs inside the same cluster and shard nodes and Services across them. You define load-balancer configurations, each tied to a primary agent pool and optional namespace, label, or node selectors, and AKS automatically places nodes and Services on the appropriate SLB. Outbound SNAT behavior is unchanged: when `outboundType` is `loadBalancer`, outbound traffic still flows through the first SLB.
Use this feature to:
- Scale beyond 300 inbound rules without adding clusters.
- Isolate tenant or workload traffic by binding a dedicated SLB to its own agent pool.
- Distribute private‑link services across multiple SLBs when you approach the per‑SLB limit.
Prerequisites
- The `aks-preview` Azure CLI extension, version 18.0.0b1 or later.
- The `Microsoft.ContainerService/MultipleStandardLoadBalancersPreview` feature flag registered on your subscription.
- Kubernetes version 1.28 or later.
- A cluster created with `--load-balancer-backend-pool-type nodeIP`, or an existing cluster updated using `az aks update` (see the example below).
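For an existing cluster, the update might look like the following sketch. The resource names are placeholders, and the `--load-balancer-backend-pool-type` parameter may require the aks-preview extension, so confirm it's available in your CLI version before relying on it.

```azurecli
# Illustrative only: switch an existing cluster to IP-based (nodeIP) backend pools.
# Replace the resource group and cluster name with your own values.
az aks update \
  --resource-group rg-aks-multislb \
  --name aks-multi-slb \
  --load-balancer-backend-pool-type nodeIP
```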
Install the aks-preview Azure CLI extension
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:
Install the aks-preview extension using the `az extension add` command.

```azurecli
az extension add --name aks-preview
```
Update to the latest released version of the extension using the `az extension update` command.

```azurecli
az extension update --name aks-preview
```
Register the `MultipleStandardLoadBalancersPreview` feature flag
Register the `MultipleStandardLoadBalancersPreview` feature flag using the `az feature register` command.

```azurecli
az feature register --namespace "Microsoft.ContainerService" --name "MultipleStandardLoadBalancersPreview"
```
It takes a few minutes for the status to show Registered.
Verify the registration status using the `az feature show` command:

```azurecli
az feature show --namespace "Microsoft.ContainerService" --name "MultipleStandardLoadBalancersPreview"
```
When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider using the `az provider register` command.

```azurecli
az provider register --namespace Microsoft.ContainerService
```
How AKS chooses a load balancer (node & Service placement)
AKS uses multiple inputs to determine where to place nodes and expose LoadBalancer Services. These inputs are defined in each load balancer configuration and influence which SLB is selected for each resource.
| Input type | Applies to | Description |
|---|---|---|
| Primary agent pool (`--primary-agent-pool-name`) | Nodes | Required. All nodes in this pool are always added to the SLB's backend pool. Ensures each SLB has at least one healthy node. |
| Node selector (`--node-selector`) | Nodes | Optional. Adds any node with matching labels to the SLB, in addition to the primary pool. |
| Service namespace selector (`--service-namespace-selector`) | Services | Optional. Only Services in namespaces with matching labels are considered for this SLB. |
| Service label selector (`--service-label-selector`) | Services | Optional. Only Services with matching labels are eligible for this SLB. |
| Service annotation (`service.beta.kubernetes.io/azure-load-balancer-configurations`) | Services | Optional. Limits placement to one or more explicitly named SLB configurations. Without it, any matching configuration is eligible. |
Note
Selectors define eligibility. The annotation (if used) restricts the controller to a specific subset of SLBs.
How AKS uses these inputs
The AKS control plane continuously reconciles node and Service state using the rules above:
Node placement
When a node is added or updated, AKS checks which SLBs it qualifies for based on primary pool and node selector.
- If multiple SLBs match, the controller picks the one with the fewest current nodes.
- The node is added to that SLB’s backend pool.
Service placement
When a LoadBalancer Service is created or updated:
- AKS finds SLBs whose namespace and label selectors match the Service.
- If the Service annotation is present, only those named SLBs are considered.
- SLBs that have allowServicePlacement=false or that would exceed Azure limits (300 rules or 8 private-link services) are excluded.
- Among valid options, the SLB with the fewest rules is chosen.
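After a LoadBalancer Service is created, you can confirm which SLB the controller selected. The checks below are illustrative: `my-service` and `my-namespace` are placeholder names, and the `$RESOURCE_GROUP` and `$CLUSTER_NAME` variables are defined later in this article.

```azurecli
# Find the node resource group that holds the cluster's SLB resources.
NODE_RG=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query nodeResourceGroup -o tsv)

# List the standard load balancers AKS created for this cluster.
az network lb list --resource-group $NODE_RG --output table

# Inspect the Service's events to see when it was reconciled onto a load balancer.
az aks command invoke --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --command "kubectl describe service my-service -n my-namespace"
```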
externalTrafficPolicy (ETP) behavior
AKS handles Services differently depending on the value of `externalTrafficPolicy`.
| Mode | How load balancer selection works | How backend pool membership is built | Notes |
|---|---|---|---|
| Cluster (default) | The controller follows the standard placement rules described above. A single load-balancing rule targets the shared `kubernetes` backend pool on the chosen SLB. | All nodes in that SLB's `kubernetes` pool are healthy targets. Nodes without matching Pods are removed automatically by health probes. | Same behavior as today in single-SLB clusters. |
| Local | The controller still uses the selector-based algorithm to pick an SLB, but creates a dedicated backend pool per Service instead of using the shared pool. | Membership is synced from the Service's `EndpointSlice` objects, so only nodes that actually host ready Pods are added. Health probes continue to use `healthCheckNodePort` to drop unhealthy nodes. | Guarantees client IP preservation and avoids routing through nodes that lack Pods, even when nodes are sharded across multiple SLBs. |
Why a dedicated pool for ETP Local?
In multi-SLB mode, nodes that host Pods for a given Service may reside on different SLBs from the client-facing VIP. A shared backend pool would often contain zero eligible nodes, breaking traffic. By allocating a per-Service pool and syncing it from `EndpointSlice`, AKS ensures the Service's SLB always points at the correct nodes.
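For reference, here is a minimal sketch of an ETP Local Service that also pins itself to a named configuration. The Service, app, and configuration names are placeholders for this example.

```yaml
# Illustrative manifest; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: example-local-svc
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-configurations: "team1-lb"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # AKS builds a dedicated backend pool for this Service
  selector:
    app: example-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```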
Impact on quotas
- Each ETP Local Service adds one backend pool and one load-balancing rule to its SLB.
- These count toward the 300-rule limit, so monitor rule usage when you have many ETP Local Services.
No change to outbound traffic
Outbound SNAT still flows through the first SLB's `aksOutboundBackendPool` when `outboundType` is `loadBalancer`, independent of ETP settings.
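One illustrative way to confirm this is to inspect the cluster's effective outbound IPs and the backend pools on the first SLB, which AKS names `kubernetes` in the node resource group:

```azurecli
# Show the outbound IPs currently in effect for the cluster.
az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --query networkProfile.loadBalancerProfile.effectiveOutboundIPs -o json

# List backend pools on the first SLB; aksOutboundBackendPool handles outbound SNAT.
NODE_RG=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query nodeResourceGroup -o tsv)
az network lb address-pool list --resource-group $NODE_RG --lb-name kubernetes --output table
```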
Optional: Rebalancing
You can manually rebalance node distribution later using `az aks loadbalancer rebalance`.
This design lets you define flexible, label-driven routing for both infrastructure and workloads, while AKS handles placement automatically to maintain balance and avoid quota issues.
Add the first load balancer configuration
Add a configuration named `kubernetes` and bind it to a primary agent pool that always has at least one node. Removing every configuration switches the cluster back to single-SLB mode.
Important
To enable multiple-SLB mode, you must add a load-balancer configuration named `kubernetes` and attach it to a primary agent pool that always has at least one ready node.
The presence of this configuration toggles multi‑SLB support; in service selection, it has no special priority and is treated like any other load balancer configuration.
If you delete every load‑balancer configuration, the cluster automatically falls back to single‑SLB mode, which can briefly disrupt service routing or SNAT flows.
Set environment variables for use throughout this tutorial. You can replace all placeholder values with your own except `DEFAULT_LB_NAME`, which must remain `kubernetes`.

```azurecli
RESOURCE_GROUP="rg-aks-multislb"
CLUSTER_NAME="aks-multi-slb"
LOCATION="westus"
DEFAULT_LB_NAME="kubernetes"
PRIMARY_POOL="nodepool1"
```
Create a resource group using the `az group create` command.

```azurecli
az group create --name $RESOURCE_GROUP --location $LOCATION
```
Create an AKS cluster using the `az aks create` command.

```azurecli
az aks create --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME \
  --load-balancer-backend-pool-type nodeIP \
  --node-count 3
```
Add a default load balancer using the `az aks loadbalancer add` command.

```azurecli
az aks loadbalancer add --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME \
  --name $DEFAULT_LB_NAME \
  --primary-agent-pool-name $PRIMARY_POOL \
  --allow-service-placement true
```
Add additional load balancers
Create tenant‑specific configurations by specifying a different primary pool plus optional namespace, label, or node selectors.
Team 1 will run its own workloads in a separate node pool. Assign a `tenant=team1` label so workloads can be scheduled using selectors:

```azurecli
TEAM1_POOL="team1pool"
TEAM1_LB_NAME="team1-lb"
```
Create a second node pool for team 1 using the `az aks nodepool add` command.

```azurecli
az aks nodepool add --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME \
  --name $TEAM1_POOL \
  --labels tenant=team1 \
  --node-count 2
```
Create a load balancer for team 1 using the `az aks loadbalancer add` command.

```azurecli
az aks loadbalancer add --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME \
  --name $TEAM1_LB_NAME \
  --primary-agent-pool-name $TEAM1_POOL \
  --service-namespace-selector "tenant=team1" \
  --node-selector "tenant=team1"
```
Label the target namespace (for example, `team1-apps`) to match the selector using the `az aks command invoke` command.

```azurecli
az aks command invoke \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --command "
    kubectl create namespace team1-apps --dry-run=client -o yaml | kubectl apply -f -
    kubectl label namespace team1-apps tenant=team1 --overwrite
  "
```
You can now list the load balancers in the cluster to see the multiple configurations using the `az aks loadbalancer list` command.

```azurecli
az aks loadbalancer list --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME --output table
```
Example output:
```output
AllowServicePlacement    ETag     Name        PrimaryAgentPoolName    ProvisioningState    ResourceGroup
-----------------------  -------  ----------  ----------------------  -------------------  ---------------
True                     <ETAG>   kubernetes  nodepool1               Succeeded            rg-aks-multislb
True                     <ETAG>   team1-lb    team1pool               Succeeded            rg-aks-multislb
```
Deploy a Service to a specific load balancer
Add the annotation `service.beta.kubernetes.io/azure-load-balancer-configurations` with a comma-separated list of configuration names. If the annotation is omitted, the controller chooses automatically.
```azurecli
az aks command invoke \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --command "
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: lb-svc-1
  namespace: team1-apps
  labels:
    app: nginx-test
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-configurations: \"team1-lb\"
spec:
  selector:
    app: nginx-test
  ports:
  - name: port1
    port: 80
    targetPort: 80
    protocol: TCP
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  namespace: team1-apps
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: \"150m\"
            memory: \"300Mi\"
EOF
"
```
Rebalance nodes (optional)
Run a rebalance operation after scaling if rule counts become unbalanced using the `az aks loadbalancer rebalance` command. This command disrupts active flows, so schedule it during a maintenance window.

```azurecli
az aks loadbalancer rebalance --resource-group $RESOURCE_GROUP --cluster-name $CLUSTER_NAME
```
Monitoring and troubleshooting
- Watch controller events (`kubectl get events …`) to confirm that Services are reconciled.
- If external connectivity is blocked, open a node shell and curl the Service VIP to confirm kube-proxy routing. A sketch of both checks follows.
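These commands are an illustrative starting point rather than a prescribed procedure; the node name and Service VIP are placeholders, and the debug image shown is the one commonly used for AKS node access.

```bash
# Confirm the Service got an external IP and was reconciled (look for an EnsuredLoadBalancer event).
kubectl get service lb-svc-1 -n team1-apps -o wide
kubectl get events -n team1-apps --sort-by=.lastTimestamp | tail -n 20

# Open a shell on a node, then curl the Service VIP from the node.
# Replace <node-name> and <service-vip> with real values from your cluster.
kubectl debug node/<node-name> -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
#   chroot /host
#   curl -m 5 http://<service-vip>
```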
Limitations and known issues
| Limitation | Details |
|---|---|
| Outbound SNAT | Always uses the first SLB; outbound flows aren't sharded. |
| Backend pool type | The cluster must be created with, or updated to use, `nodeIP` backend pools. |
| Autoscaler scale to zero | A primary agent pool can't scale to 0 nodes. |
| ETP Local rule growth | Each ETP Local Service uses its own rule and backend pool, so rule counts can grow faster than with Cluster mode. |
| Rebalance disruption | Removing a node from a backend pool drops in-flight connections. Plan maintenance windows. |
| Configuration reload timing | After running `az aks loadbalancer` commands, changes may not take effect immediately. The AKS operation finishes quickly, but the cloud-controller-manager may take longer to apply updates. Wait for the `EnsuredLoadBalancer` event to confirm the changes are active. |
Clean up resources
Delete the resource group when you're finished to remove the cluster and load balancers using the `az group delete` command.

```azurecli
az group delete --name $RESOURCE_GROUP --yes --no-wait
```
Next steps
The multiple SLB feature helps scale and isolate workloads at the networking layer while maintaining simplicity through Azure-managed configuration. For more information, see the following resources:
Azure Kubernetes Service