Azure Public LB traffic flow

Peter Stieber 180 Reputation points
2025-06-26T20:21:58.3266667+00:00

I am trying to understand this setup link

How is internet traffic forwarded by the NVA to an application server VM in Azure?

If a client outside of Azure initiates a connection to the public IP of an Azure Load Balancer, my understanding is that the Load Balancer performs DNAT and forwards the traffic directly to one of its backend pool members. So, how does the packet end up reaching the app server VM?

I know that with an internal Load Balancer, we can use it as a next hop in a user-defined route (UDR), but I'm unclear on how this works with a public Load Balancer. Specifically, why or how would a public Load Balancer be used as a next hop, if at all?
Diagram that shows internet traffic with Load Balancer integration.

Azure Load Balancer
An Azure service that delivers high availability and network performance to applications.

Accepted answer
  Marcin Policht 53,675 Reputation points MVP Volunteer Moderator
    2025-06-26T21:13:56.1866667+00:00

    AFAIK, when a client outside Azure connects to the Public IP of an Azure Load Balancer (PLB):

    1. The PLB receives the packet on its public IP.
    2. It performs DNAT (destination NAT) to the private IP of one of its backend pool members, usually an Azure VM or a Virtual Machine Scale Set (VMSS) instance.
    3. The selected backend VM receives the packet directly and responds; on the way back out, the Azure fabric reverses the NAT, so the client sees the reply coming from the load balancer's frontend IP. Effectively, there is no routing via a UDR or an NVA. The PLB does not support being used as a next hop in UDRs.

    If your goal is to force traffic through an NVA (e.g., for inspection, firewalling, etc.) before it reaches the app server VM, a different design pattern is needed, because Azure PLBs do not support UDRs or NVA redirection directly.

    There are two common patterns for this:

    Pattern 1: SNAT the incoming traffic through the NVA

    Architecture:

    • Public Load Balancer frontend → NVA VM in backend pool
    • NVA performs Layer 4 forwarding or proxying to internal app servers
    • UDRs on app server subnet ensure return traffic goes back through the NVA
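    The UDR piece of this pattern can be sketched with the Azure CLI. All names and addresses below are hypothetical placeholders (resource group rg-demo, VNet vnet-demo, app subnet snet-app, an NVA NIC named nva-nic with private IP 10.0.1.4); adjust to your environment.

    ```shell
    # Enable IP forwarding on the NVA's NIC so it may forward packets
    # that are not addressed to it (nva-nic is a placeholder name).
    az network nic update \
      --resource-group rg-demo \
      --name nva-nic \
      --ip-forwarding true

    # Route table that sends the app subnet's outbound traffic to the NVA.
    az network route-table create \
      --resource-group rg-demo \
      --name rt-app

    az network route-table route create \
      --resource-group rg-demo \
      --route-table-name rt-app \
      --name default-via-nva \
      --address-prefix 0.0.0.0/0 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.1.4

    # Associate the route table with the app server subnet,
    # forcing return traffic back through the NVA.
    az network vnet subnet update \
      --resource-group rg-demo \
      --vnet-name vnet-demo \
      --name snet-app \
      --route-table rt-app
    ```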

    Flow:

    1. The client connects to the PLB frontend IP.
    2. The NVA is in the PLB backend pool, so it receives the traffic.
    3. The NVA then forwards (or proxies) the traffic to the actual app VM (in another subnet).
    4. The app VM sends return traffic back to the NVA (enforced via the UDR).
    5. The NVA forwards the response to the original client.
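    The proxying role the NVA plays in step 3 can be illustrated with a minimal user-space TCP relay (a sketch only; a real NVA typically does this in its dataplane, not in Python):

    ```python
    import socket
    import threading

    def pipe(src, dst):
        """Copy bytes from src to dst until EOF, then close dst's write side."""
        while True:
            try:
                data = src.recv(4096)
            except OSError:
                break
            if not data:
                break
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

    def start_proxy(backend_host, backend_port):
        """Listen on an ephemeral local port and relay each inbound
        connection to the backend "app server", the way the NVA bridges
        the PLB-facing subnet to the app subnet.

        Returns the listening socket (call .getsockname() for the port).
        """
        listener = socket.socket()
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("127.0.0.1", 0))
        listener.listen()

        def serve():
            while True:
                client, _ = listener.accept()
                backend = socket.create_connection((backend_host, backend_port))
                # Relay both directions concurrently.
                threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
                threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

        threading.Thread(target=serve, daemon=True).start()
        return listener
    ```

    Note the performance caveat below: the proxy terminates and re-originates every byte of the connection, which is exactly the load an NVA carries in this pattern.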

    Pros:

    • Centralized control and inspection.
    • Works with PLB because the NVA is a backend.

    Cons:

    • Requires the NVA to handle the full connection (can be performance-intensive).
    • Slightly more complex to configure.

    Pattern 2: NAT + Internal Load Balancer + UDR

    In more advanced scenarios, you can separate roles:

    Architecture:

    • PLB DNATs traffic to Internal Load Balancer (ILB)
    • ILB forwards to NVA or app VMs
    • UDRs are used in the internal network, not at the PLB level
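    The ILB piece of this pattern can be sketched with the Azure CLI (resource names are placeholders; the backend pool would hold the NVA or the app VMs):

    ```shell
    # Internal (private-frontend) Standard Load Balancer in the app subnet.
    # Supplying --vnet-name/--subnet instead of a public IP makes it internal.
    az network lb create \
      --resource-group rg-demo \
      --name ilb-app \
      --sku Standard \
      --vnet-name vnet-demo \
      --subnet snet-app \
      --frontend-ip-name fe-internal \
      --backend-pool-name be-app

    # Rule distributing TCP 80 across the backend pool.
    az network lb rule create \
      --resource-group rg-demo \
      --lb-name ilb-app \
      --name rule-http \
      --protocol Tcp \
      --frontend-port 80 \
      --backend-port 80 \
      --frontend-ip-name fe-internal \
      --backend-pool-name be-app
    ```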

    Note that a Public Load Balancer itself cannot be a next hop in a UDR. UDRs take effect after the PLB's DNAT is complete; they control traffic leaving VM subnets, not the PLB's behavior.

    • A next hop in a UDR must be one of:
      • Virtual appliance (such as an NVA),
      • Virtual network gateway,
      • Virtual network,
      • Internet,
      • None.
    • A Public Load Balancer is a PaaS construct that cannot act as a next hop. It handles inbound NAT and distribution, not routing.

    If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.

    hth

    Marcin

