The following are common questions asked about Azure Application Gateway for Containers.
General
What is Application Gateway for Containers?
Application Gateway for Containers offers various layer 7 load-balancing capabilities for your container applications. This service is highly available, scalable, and fully managed by Azure.
Where does Application Gateway for Containers store customer data?
Application Gateway for Containers stores and processes data in the region of its deployed resources. Configuration data may be replicated to its region pair, where applicable, for resiliency.
How does Application Gateway for Containers handle routine maintenance?
Routine maintenance and updates are designed to avoid service impact and require no customer intervention. For updates that might break existing configurations or change existing product functionality, notifications are published via Azure Service Health. These notifications are also sent via email to subscription service administrators.
Does Application Gateway for Containers support FIPS?
Yes, Application Gateway for Containers can run in a FIPS 140-2 approved mode of operation, commonly referred to as FIPS mode. FIPS mode calls a FIPS 140-2 validated cryptographic module that ensures FIPS-compliant algorithms for encryption, hashing, and signing are used. If necessary, open a support case via the Azure portal to request that FIPS mode be enabled.
Does Application Gateway for Containers support Kubenet?
No. Application Gateway for Containers doesn't support Kubenet; use Azure CNI Overlay instead. Learn more about migrating from Kubenet to CNI Overlay.
Why does the frontend resolve to an IP address with an ASN location outside the region Application Gateway for Containers is deployed to?
Application Gateway for Containers frontends resolve to an anycast IP address. Anycast IP addresses are advertised globally, so an ASN or geolocation lookup may place the address outside the region where Application Gateway for Containers is deployed. The resolved location of the IP address is cosmetic and has no bearing on availability or performance for connecting clients.
Performance
How does Application Gateway for Containers support high availability and scalability?
Application Gateway for Containers automatically ensures underlying components are spread across availability zones for increased resiliency, if the Azure region supports it. If the region doesn't support zones, fault domains and update domains are used to help mitigate impact during planned maintenance and unexpected failures.
Configuration - AKS
Can I upgrade an existing AKS cluster with Application Gateway for Containers from CNI to CNI Overlay?
Yes. Upgrade the AKS cluster from CNI to CNI Overlay, and Application Gateway for Containers automatically detects the change. It's recommended to schedule the upgrade during a maintenance window, as it may take a few minutes after the cluster upgrade to detect and configure support for CNI Overlay.
Warning
Ensure the Application Gateway for Containers subnet is a /24 prior to upgrading. Upgrading from CNI to CNI Overlay with a larger subnet (for example, a /23) leads to an outage and requires the Application Gateway for Containers subnet to be recreated with a /24 subnet size.
Configuration - TLS
Does Application Gateway for Containers support reencryption (end-to-end encryption) of traffic to backend targets?
Yes. Application Gateway for Containers supports TLS offloading and end-to-end TLS to backend targets.
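As a sketch of the offload side, a Gateway API listener can terminate TLS using a certificate stored in a Kubernetes secret. The resource names below are illustrative, and the gatewayClassName is assumed to match your ALB Controller installation; re-encryption toward the backend is configured separately with a backend TLS policy.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: contoso-gateway        # illustrative name
  namespace: default
spec:
  gatewayClassName: azure-alb-external   # assumed class name; check your ALB Controller install
  listeners:
  - name: https-listener
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate          # TLS is offloaded at the gateway
      certificateRefs:
      - kind: Secret
        name: contoso-tls      # TLS certificate and key stored as a Kubernetes secret
```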
Can I configure TLS protocol versions?
No. Application Gateway for Containers supports TLS 1.2 only; SSL 2.0, SSL 3.0, TLS 1.0, and TLS 1.1 are disabled and can't be enabled.
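Because only TLS 1.2 is negotiated, client TLS stacks must allow at least TLS 1.2. A minimal Python sketch that pins a client-side protocol floor of TLS 1.2:

```python
import ssl

# Build a client-side TLS context whose minimum protocol version is TLS 1.2,
# matching what Application Gateway for Containers negotiates. Older
# protocols (SSL 2.0/3.0, TLS 1.0/1.1) are refused by the service.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Any socket wrapped with this context refuses to negotiate below TLS 1.2.
print(ctx.minimum_version == ssl.TLSVersion.TLSv1_2)  # True
```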
Configuration - ALB Controller
Is it supported to install both Application Gateway Ingress Controller (AGIC) and ALB Controller in the same Kubernetes cluster?
Yes, both Application Gateway Ingress Controller (AGIC) and ALB Controller may run at the same time in the same Kubernetes cluster. Updates to AGIC or to ALB Controller don't interfere with each other.
Is it supported to install multiple ALB Controllers on the same Kubernetes cluster?
It's possible to install multiple ALB Controllers on the same cluster; however, this isn't recommended or supported, because gateways, routes, services, and other resources can't be partitioned between controllers.
Is it supported to increase the number of pods/replicas for ALB Controller?
No, a user-defined replica count isn't supported. ALB Controller is automatically provisioned with at least two replicas in an active/passive configuration to enable high availability.
Is it supported to use the same managed identity with multiple ALB Controllers?
No. Each ALB controller must use its own unique managed identity.
Can I share the same frontend resource between multiple Gateway and/or Ingress resources in Kubernetes?
No. A frontend should be unique to a single Ingress or Gateway resource. Multiple hostnames and routes can be defined in a given Gateway or Ingress resource to eliminate the need for numerous frontend resources.
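For example, one Gateway resource (and thus one frontend) can serve multiple hostnames through separate listeners. This is a sketch using standard Gateway API fields; the hostnames are illustrative and the gatewayClassName is assumed to match your ALB Controller installation.

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: contoso-gateway        # one Gateway, one frontend
spec:
  gatewayClassName: azure-alb-external   # assumed class name
  listeners:
  - name: shop                 # first hostname on the shared frontend
    hostname: shop.contoso.com
    port: 80
    protocol: HTTP
  - name: api                  # second hostname on the same frontend
    hostname: api.contoso.com
    port: 80
    protocol: HTTP
```

Each HTTPRoute can then attach to a specific listener via parentRefs and sectionName, so distinct routes share the single frontend without needing separate frontend resources.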