An introduction to Linux bridging commands and features
A Linux bridge is a kernel module that behaves like a network switch, forwarding packets between interfaces that are connected to it. It's usually used for forwarding packets on routers, on gateways, or between VMs and network namespaces on a host.
The Linux bridge has included basic support for the Spanning Tree Protocol (STP), multicast, and Netfilter since the 2.4 and 2.6 kernel series. Features that have been added in more recent releases include:
- Configuration via Netlink
- VxLAN tunnel mapping
- Internet Group Management Protocol version 3 (IGMPv3) and Multicast Listener Discovery version 2 (MLDv2)
In this article, you'll get an introduction to these features and some useful commands to enable and control them. You'll also briefly examine Open vSwitch as an alternative to Linux bridging.
Basic bridge commands
All the commands used in this article are part of the iproute2 package, which uses Netlink messages to configure the bridge. There are two iproute2 commands for setting up and configuring bridges: ip link and bridge.
ip link can add and remove bridges and set their options. bridge displays and manipulates the bridge's forwarding databases (FDBs), multicast databases (MDBs), and virtual local area networks (VLANs).
The listings that follow demonstrate some basic uses for the two commands. Both require administrator privileges, and therefore the listings are shown with the # root prompt instead of a regular user prompt.
Show help information about the bridge object:
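The stripped listing likely corresponds to the bridge command's built-in help:

```shell
# display usage for the bridge command and its objects (link, fdb, mdb, vlan, ...)
bridge help
```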
Create a bridge named br0:
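A minimal sketch of the missing listing (bringing the bridge up is assumed, since later examples use it):

```shell
# create the bridge device and bring it up
ip link add br0 type bridge
ip link set br0 up
```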
Show bridge details:
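For example:

```shell
# -d (details) prints bridge-specific attributes such as STP and VLAN-filtering state
ip -d link show br0
```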
Show bridge details in a pretty JSON format (which is a good way to get bridge key-value pairs):
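The same query with JSON output enabled:

```shell
# -j emits JSON, -p pretty-prints it
ip -j -p -d link show br0
```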
Add interfaces to a bridge:
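A representative listing; the veth interface names are illustrative and any existing interfaces would work:

```shell
# enslave two interfaces to br0
ip link set veth1 master br0
ip link set veth2 master br0
```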
Spanning Tree Protocol
The purpose of STP is to prevent a networking loop, which can lead to a traffic storm in the network. Figure 1 shows such a loop.
With STP enabled, the bridges will send each other Bridge Protocol Data Units (BPDUs) so they can elect a root bridge and block an interface, making the network topology loop-free (Figure 2).
Linux bridging has supported STP since the 2.4 and 2.6 kernel series. To enable STP on a bridge, enter:
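The enabling command is presumably:

```shell
# stp_state 1 turns on the kernel STP implementation for br0
ip link set br0 type bridge stp_state 1
```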
Note: The Linux bridge does not support the Rapid Spanning Tree Protocol (RSTP).
Now you can show the STP blocking state on the bridge:
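For example:

```shell
# per-port details include the STP state (forwarding, blocking, ...)
bridge -d link show
```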
The output shows that the veth2 interface is in a blocking state, as illustrated in Figure 3.
To change the STP hello time, enter:
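A sketch of the command; note that iproute2 takes the value in hundredths of a second, so 100 means 1 second:

```shell
# set the STP hello interval to 1 second
ip link set br0 type bridge hello_time 100
```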
You can use the same basic approach to change other STP parameters, such as maximum age, forward delay, ageing time, and so on.
VLAN filter
The VLAN filter was introduced in Linux kernel 3.8. Previously, to separate VLAN traffic on the bridge, the administrator needed to create multiple bridge/VLAN interfaces. As illustrated in Figure 4, three bridges (br0, br2, and br3) would be needed to support three VLANs and make sure that VLAN traffic went to the corresponding VLANs.
But with the VLAN filter, just one bridge device is enough to set all the VLAN configurations, as illustrated in Figure 5.
The following commands enable the VLAN filter and configure three VLANs:
Then the following command enables a VLAN filter on the br0 bridge:
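Likely:

```shell
ip link set br0 type bridge vlan_filtering 1
```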
This next command makes the veth1 bridge port transmit only VLAN 2 data:
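Presumably:

```shell
# veth1 carries tagged VLAN 2 traffic only
bridge vlan add dev veth1 vid 2
```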
The following command, similar to the previous one, makes the veth2 bridge port transmit VLAN 2 data. The pvid parameter causes untagged frames to be assigned to this VLAN at ingress ( veth2 to bridge), and the untagged parameter causes the packet to be untagged on egress (bridge to veth2 ):
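Presumably:

```shell
# pvid: untagged ingress frames join VLAN 2; untagged: strip the tag on egress
bridge vlan add dev veth2 vid 2 pvid untagged
```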
The next command carries out the same operation as the previous one, this time on veth3 . The master parameter indicates that the link setting is configured on the software bridge. However, because master is a default option, this command has the same effect as the previous one:
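Presumably:

```shell
# master is the default, so this matches the previous command's behavior
bridge vlan add dev veth3 vid 2 pvid untagged master
```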
The following command enables VLAN 2 and VLAN 3 traffic on eth1 :
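Presumably:

```shell
# eth1 acts as a trunk for VLANs 2 and 3
bridge vlan add dev eth1 vid 2
bridge vlan add dev eth1 vid 3
```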
To show the VLAN traffic state, enable VLAN statistics (added in kernel 4.7) as follows:
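Likely:

```shell
ip link set br0 type bridge vlan_stats_enabled 1
```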
The previous command enables just global VLAN statistics on the bridge, and is not fine grained enough to show each VLAN's state. To enable per-VLAN statistics when there are no port VLANs in the bridge, you also need to enable vlan_stats_per_port (added in kernel 4.20). You can run:
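Likely:

```shell
ip link set br0 type bridge vlan_stats_per_port 1
```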
Then you can show per-VLAN statistics like so:
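For example:

```shell
# -s adds per-VLAN RX/TX counters to the output
bridge -s vlan show
```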
VLAN tunnel mapping
VxLAN builds Layer 2 virtual networks on top of a Layer 3 underlay. A VxLAN tunnel endpoint (VTEP) originates and terminates VxLAN tunnels. VxLAN bridging is the function provided by VTEPs to terminate VxLAN tunnels and map the VxLAN network identifier (VNI) to the traditional end host's VLAN.
Previously, to achieve VLAN tunnel mapping, administrators needed to add local ports and VxLAN network devices (netdevs) to a VLAN filtering bridge. The local ports were configured as trunk ports carrying all VLANs. A VxLAN netdev for each VNI would then be added to the bridge, and VLAN-to-VNI mapping was achieved by configuring each VLAN as the port VLAN identifier (pvid) of the corresponding VxLAN netdev, as shown in Figure 6.
Since 4.11, the kernel has provided a native way to support VxLAN bridging. The topology for this looks like Figure 7. The vxlan0 endpoint in this figure was added with lightweight tunnel (LWT) support to handle multiple VNIs.
To create a tunnel, you must first add related VIDs to the interfaces:
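A sketch of the missing listing; VID 1000 and the interface names are illustrative:

```shell
# add the VID on both the local port and the VxLAN netdev
bridge vlan add dev veth1 vid 1000
bridge vlan add dev vxlan0 vid 1000
```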
Now enable a VLAN tunnel mapping on a bridge port:
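Presumably:

```shell
bridge link set dev vxlan0 vlan_tunnel on
```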
Alternatively, you can enable the tunnel with this command:
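The equivalent ip link form:

```shell
ip link set vxlan0 type bridge_slave vlan_tunnel on
```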
Then add VLAN tunnel mapping:
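For example, mapping VLAN 1000 to VNI 1000 (the values are illustrative):

```shell
bridge vlan add dev vxlan0 vid 1000 tunnel_info id 1000
```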
Multicast
Linux bridging has included IGMPv2 and MLDv1 support since kernel version 2.6. IGMPv3/MLDv2 support was added in kernel 5.10.
To use multicast, enable bridge multicast snooping, querier, and statistics as follows:
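Likely:

```shell
ip link set br0 type bridge mcast_snooping 1 mcast_querier 1 mcast_stats_enabled 1
```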
By default, when snooping is enabled, the bridge uses IGMPv2/MLDv1. You can change the versions with these commands:
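Presumably:

```shell
# switch to IGMPv3 and MLDv2
ip link set br0 type bridge mcast_igmp_version 3 mcast_mld_version 2
```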
After a port joins a group, you can show the multicast database (mdb) like so:
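For example:

```shell
bridge mdb show
```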
Bridging also supports multicast snooping and querier on a single VLAN. Set them as follows:
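A sketch using the per-VLAN global options (requires a recent kernel and iproute2; VID 2 is illustrative):

```shell
bridge vlan global set vid 2 dev br0 mcast_snooping 1 mcast_querier 1
```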
You can show bridge xstats (multicast RX/TX information) with this command:
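Presumably:

```shell
# per-port statistics are available with "type bridge_slave" instead
ip link xstats type bridge
```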
There are other multicast parameters you can configure, including mcast_router , mcast_query_interval , and mcast_hash_max .
Switchdev
Linux bridging is commonly used to connect virtual machines (VMs) to physical networks through the virtio tap driver. You can also attach a Single Root I/O Virtualization (SR-IOV) virtual function (VF) to a VM guest to get better performance (Figure 8).
But the way Linux used to deal with SR-IOV embedded switches limited their expressiveness and flexibility. And the kernel model for controlling the SR-IOV eSwitch did not allow any forwarding unless it was based on MAC/VLAN.
To make VFs also support dynamic FDB (as in Figure 9) and maintain the benefits of the VLAN filter while still providing optimal performance, Linux bridging added switchdev support in kernel version 4.9. Switchdev allows the offloading of Layer 2 forwarding to a hardware switch such as Mellanox Spectrum devices, DSA-based switches, and MLX5 CX6 Dx cards.
In switchdev mode, the bridge is up and its related configuration is enabled, e.g., MLX5_BRIDGE for an MLX5 SRIOV eSwitch. Once in switchdev mode, you can connect the VF's representors to the bridge, and frames that are supposed to be transmitted by the bridge are transmitted by hardware only. Their routing will be done in the switch at the network interface controller (NIC).
Once a frame passes through the VF to its representor, the bridge learns that the source MAC of the VF is behind a particular port. The bridge adds an entry with the MAC address and port to its FDB. Immediately afterward, the bridge sends a message to the mlx5 driver, and the driver adds a relevant rule or line to two tables located in the eSwitch on the NIC. Later, frames with the same destination MAC address that come from the VF don't go through the kernel; instead, they go directly through the NIC to the appropriate port.
Switchdev support for embedded switches in NICs is simple, but for full-featured switches such as Mellanox Spectrum, the offloading capabilities are much richer, with support for link aggregation group (LAG) hashing (team, bonding), tunneling (VxLAN, etc.), routing, and TC offloading. Routing and TC offloading are out of scope for bridging, but LAGs can be attached to the bridge as well as to VxLAN tunnels, with full support for offloading.
Bridging with Netfilter
By default, the traffic forwarded by the bridge does not go through an iptables firewall. To let the iptables forward rules filter Layer 2 traffic, enter:
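Likely (the br_netfilter module must be loaded for the sysctl to exist):

```shell
# hook bridged traffic into Netfilter, then enable filtering by iptables
modprobe br_netfilter
sysctl -w net.bridge.bridge-nf-call-iptables=1
```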
The same procedure works for ip6tables and arptables.
Bridge ageing time
Ageing determines the number of seconds a MAC address is kept in the FDB after a packet has been received from that address. After this time has passed, entries are cleaned up. To change the timer, enter:
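For example; iproute2 takes the value in hundredths of a second, so 6000 sets an ageing time of 60 seconds:

```shell
ip link set br0 type bridge ageing_time 6000
```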
Bridging versus Open vSwitch
Linux bridging is very useful and has become popular over the past few years. It supplies Layer 2 forwarding, and connects VMs and networks with VLAN/multicast support. Bridging on Linux is stable, reliable, and easy to set up and configure.
On the other hand, Linux bridging also has some limitations. It's missing some types of tunnel support, for instance. If you want to get easier network management, more tunnel support (GRE, VXLAN, etc.), Layer 3 forwarding, and integration with software-defined networking (SDN), you can try Open vSwitch (OVS).
To learn more about Linux network interfaces and other networking topics, check out these articles from Red Hat Developer:
- An introduction to Linux virtual interfaces: Tunnels
- Introduction to Linux interfaces for virtual networking
- Get started with XDP
Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide, Release 7.x
- New and Changed Information
- Configuring VXLAN BGP EVPN
- Configuring VXLAN OAM
- Configuring VXLAN EVPN Multihoming
- Configuring VIP/PIP
- Configuring VXLAN EVPN Multi-Site
- Configuring Tenant Routed Multicast
- Configuring VXLAN QoS
- VXLAN Bud Node Over VPC
- DHCP Relay in VXLAN BGP EVPN
- EVPN with Transparent Firewall Insertion
- IPv6 Across a VXLAN EVPN Fabric
Chapter: Configuring VXLAN
This chapter contains the following sections:
Information About VXLAN
Guidelines and Limitations for VXLAN
VXLAN has the following guidelines and limitations:
Non-blocking Multicast (NBM) running on a VXLAN enabled switch is not supported. Feature nbm may disrupt VXLAN underlay multicast forwarding.
The lacp vpc-convergence command can be configured in VXLAN and non-VXLAN environments that have vPC port channels to hosts that support LACP.
When entering the no feature pim command, NVE ownership on the route is not removed so the route stays and traffic continues to flow. Aging is done by PIM. PIM does not age out entries having a VXLAN encap flag.
Beginning with Cisco NX-OS Release 7.0(3)I7(3), Fibre Channel over Ethernet (FCoE) N-port virtualization (NPV) can co-exist with VXLAN on different fabric uplinks but on same or different front panel ports on the Cisco Nexus 93180YC-EX and 93180YC-FX switches.
Fibre Channel N-port virtualization (NPV) can co-exist with VXLAN on different fabric uplinks but on same or different front panel ports on the Cisco Nexus 93180YC-FX switches. VXLAN can only exist on the Ethernet front panel ports, but not on the FC front panel ports.
Beginning with Cisco NX-OS Release 7.0(3)I7(3), VXLAN is supported on the Cisco Nexus 9348GC-FXP switch.
When SVI is enabled on a VTEP (flood and learn, or EVPN) regardless of ARP suppression, make sure that ARP-ETHER TCAM is carved using the hardware access-list tcam region arp-ether 256 double-wide command. This is not applicable to the Cisco Nexus 9200 and 9300-EX platform switches and Cisco Nexus 9500 platform switches with 9700-EX line cards.
IP Unnumbered for the VXLAN underlay is supported starting with Cisco NX-OS Release 7.0(3)I7(2). Only a single unnumbered link between the same devices (for example, spine to leaf) is supported. If multiple physical links connect the same leaf and spine, you must use a single Layer 3 port channel with an unnumbered link.
For information about the load-share keyword usage for the PBR with VXLAN feature, see the Guidelines and Limitations section of the Configuring Policy-Based Routing chapter of the Cisco Nexus 9000 Series NX-OS Unicast Routing Configuration Guide, Release 7.x .
For Cisco NX-OS Release 7.0(3)F3(3) the following features are not supported:
VXLAN with vPC is not supported.
DHCP snooping, ACL, and QoS policies are not supported on VXLAN VLANs.
IGMP snooping is not supported on VXLAN enabled VLANs.
Beginning with Cisco NX-OS Release 7.0(3)F3(3), VXLAN Layer 2 Gateway is supported on the 9636C-RX line card. VXLAN and MPLS cannot be enabled on the Cisco Nexus 9508 switch at the same time.
Beginning with Cisco NX-OS Release 7.0(3)F3(3), if VXLAN is enabled, the Layer 2 Gateway cannot be enabled when there is any line card other than the 9636C-RX.
Beginning with Cisco NX-OS Release 7.0(3)F3(3), PIM/ASM is supported on the underlay ports. PIM-BiDir is not supported. For more information, see the Cisco Nexus 9000 Series NX-OS Multicast Routing Configuration Guide, Release 7.x.
Beginning with Cisco NX-OS Release 7.0(3)F3(3), IPv6 hosts routing in the overlay is supported.
Beginning with Cisco NX-OS Release 7.0(3)F3(3), ARP suppression is supported.
Beginning with Cisco NX-OS Release 7.0(3)I7(1), the keyword has been added to the Configuring a Route Policy procedure for the PBR over VXLAN feature.
For more information, see the Cisco Nexus 9000 Series NX-OS Unicast Routing Configuration Guide, Release 7.x.
Beginning with Cisco NX-OS Release 7.0(3)I6(1), a new CLI command lacp vpc-convergence is added for better convergence of Layer 2 EVPN VXLAN:
Beginning with Cisco NX-OS Release 7.0(3)I6(1), port-VLAN with VXLAN is supported on Cisco Nexus 9300-EX and 9500 Series switches with 9700-EX line cards with the following exceptions:
Only Layer 2 (no routing) is supported with port-VLAN with VXLAN on these switches.
No inner VLAN mapping is supported.
Beginning with Cisco NX-OS Release 7.0(3)I6(1), VXLAN is supported on Cisco Nexus 3232C and 3264Q switches. Cisco Nexus 3232C and 3264Q switches do not support inter-VNI routing.
IGMP snooping on VXLAN enabled VLANs is not supported in Cisco Nexus 3232C and 3264Q switches. VXLAN with flood and learn and Layer 2 EVPN is supported in Cisco Nexus 3232C and 3264Q switches.
The system nve ipmc CLI command is not applicable to the Cisco 9200 and 9300-EX platform switches and Cisco 9500 platform switches with 9700-EX line cards.
Bind NVE to a loopback address that is separate from other loopback addresses that are required by Layer 3 protocols. A best practice is to use a dedicated loopback address for VXLAN. This best practice should be applied not only for the VPC VXLAN deployment, but for all VXLAN deployments.
To remove configurations from an NVE interface, we recommend manually removing each configuration rather than using the default interface nve command.
When SVI is enabled on a VTEP (flood and learn or EVPN), make sure that ARP-ETHER TCAM is carved using the hardware access-list tcam region arp-ether 256 CLI command. This is not applicable to Cisco 9200 and 9300-EX Series switches and Cisco 9500 Series switches with 9700-EX line cards.
show commands with the internal keyword are not supported.
FEX ports do not support IGMP snooping on VXLAN VLANs.
Beginning with Cisco NX-OS Release 7.0(3)I4(2), VXLAN is supported for the Cisco Nexus 93108TC-EX and 93180YC-EX switches and for Cisco Nexus 9500 Series switches with the X9732C-EX line card.
DHCP snooping (Dynamic Host Configuration Protocol snooping) is not supported on VXLAN VLANs.
RACLs are not supported on Layer 3 uplinks for VXLAN traffic. Egress VACLs support is not available for de-capsulated packets in the network to access direction on the inner payload.
As a best practice, use PACLs/VACLs for the access to the network direction.
QoS classification is not supported for VXLAN traffic in the network to access direction on the Layer 3 uplink interface.
The QoS buffer-boost feature is not applicable for VXLAN traffic.
For 7.0(3)I1(2), Cisco Nexus 9500 platform switches do not support VXLAN tunnel endpoint functionality, however they can be used as spines.
SVI and subinterfaces as uplinks are not supported.
VTEPs do not support VXLAN encapsulated traffic over Parent-Interfaces if subinterfaces are configured. This is regardless of VRF participation.
VTEPs do not support VXLAN encapsulated traffic over subinterfaces. This is regardless of VRF participation or IEEE 802.1q encapsulation.
Mixing Sub-Interfaces for VXLAN and non-VXLAN enabled VLANs is not supported.
Point to multipoint Layer 3 and SVI uplinks are not supported.
For 7.0(3)I2(1) and later, a FEX HIF (FEX host interface port) is supported for a VLAN that is extended with VXLAN.
In an ingress replication VPC setup, Layer 3 connectivity is needed between vPC peer devices. This aids the traffic when the Layer 3 uplink (underlay) connectivity is lost for one of the vPC peers.
Rollback is not supported on VXLAN VLANs that are configured with the port VLAN mapping feature.
The VXLAN UDP port number is used for VXLAN encapsulation. For Cisco Nexus NX-OS, the UDP port number is 4789. It complies with IETF standards and is not configurable.
For 7.0(3)I2(1) and later, VXLAN is supported on Cisco Nexus 9500 Series switches with the following line cards:
Cisco Nexus 9300 Series switches with 100G uplinks only support VXLAN switching/bridging. (7.0(3)I2(1) and later)
Cisco Nexus 9200, Cisco Nexus 9300-EX, and Cisco Nexus 9300-FX platform switches do not have this restriction.
For 7.0(3)I2(1) and later, MDP is not supported for VXLAN configurations.
For 7.0(3)I2(1) and later, bidirectional PIM is not supported for underlay multicast.
Consistency checkers are not supported for VXLAN tables.
ARP suppression is supported for a VNI only if the VTEP hosts the First-Hop Gateway (Distributed Anycast Gateway) for this VNI. The VTEP and SVI for this VLAN must be properly configured for the Distributed Anycast Gateway operation (for example, global anycast gateway MAC address configured and anycast gateway with the virtual IP address on the SVI).
ARP suppression is a per-L2VNI fabric-wide setting in the VXLAN fabric. Enable or disable this feature consistently across all VTEPs in the fabric. Inconsistent ARP suppression configuration across VTEPs is not supported.
Cisco Nexus 9200 platform switches that have the Application Spine Engine (ASE2) have a Layer 3 VXLAN (SVI) throughput issue: data loss occurs for packets of sizes 99–122 bytes. (7.0(3)I3(1) and later)
For the NX-OS 7.0(3)I2(3) release, the VXLAN network identifier (VNID) 16777215 is reserved and should not be configured explicitly.
For 7.0(3)I4(1) and later, VXLAN supports In Service Software Upgrade (ISSU).
VXLAN does not support co-existence with the GRE tunnel feature or the MPLS (static or segment-routing) feature on Cisco Nexus 9000 Series switches with a Network Forwarding Engine (NFE).
VTEP connected to FEX host interface ports is not supported (7.0(3)I2(1) and later).
In Cisco NX-OS Release 7.0(3)I4(1), resilient hashing (port-channel load-balancing resiliency) and VXLAN configurations are not compatible with VTEPs using ALE uplink ports.
If multiple VTEPs use the same multicast group address for underlay multicast but have different VNIs, the VTEPs should have at least one VNI in common. Doing so ensures that NVE peer discovery occurs and underlay multicast traffic is forwarded correctly. For example, leafs L1 and L4 could have VNI 10 and leafs L2 and L3 could have VNI 20, and both VNIs could share the same group address. When leaf L1 sends traffic to leaf L4, the traffic could pass through leaf L2 or L3. Because NVE peer L1 is not learned on leaf L2 or L3, the traffic is dropped. Therefore, VTEPs that share a group address need to have at least one VNI in common so that peer learning occurs and traffic is not dropped. This requirement applies to VXLAN bud-node topologies.
NVE source interface loopback for VTEP should only be IPv4 address. Use of IPv6 address for NVE source interface is not supported.
Next hop address in overlay (in bgp l2vpn evpn address family updates) should be resolved in underlay URIB to the same address family. For example, the use of VTEP (NVE source loopback) IPv4 addresses in fabric should only have BGP l2vpn evpn peering over IPv4 addresses.
The following features are not supported:
DHCP snooping and DAI features are not supported on VXLAN VLANs.
IPv6 for VXLAN EVPN ESI MH is not supported.
Native VLANs for VXLAN are not supported. All traffic on VXLAN Layer 2 trunks needs to be tagged. This limitation is applicable to Cisco Nexus 9300 and 9500 switches with 95xx line cards. This is not applicable to Cisco Nexus 9200, 9300-EX, 9300-FX, and 9500 platform switches with -EX or -FX line cards.
QoS buffer-boost is not applicable for VXLAN traffic.
QoS classification is not supported for VXLAN traffic in the network-to-host direction as ingress policy on uplink interface.
Static MAC pointing to remote VTEP (VXLAN Tunnel End Point) is not supported with BGP EVPN (Ethernet VPN).
TX SPAN (Switched Port Analyzer) for VXLAN traffic is not supported for the access-to-network direction.
VXLAN routing and VXLAN Bud Nodes features on the 3164Q platform are not supported.
The following ACL related features are not supported:
Egress RACL that is applied on an uplink Layer 3 interface that matches on the inner or outer payload in the access-to-network direction (encapsulated path).
Ingress RACL that is applied on an uplink Layer 3 interface that matches on the inner or outer payload in the network-to-access direction (decapsulated path).
When configuring VXLAN BGP EVPN, only the "System Routing Mode: Default" is applicable for the following hardware platforms:
Cisco Nexus 9200/9300-EX/FX/FX2
Cisco Nexus 9300 platform switches
Cisco Nexus 9500 platform switches with X9500 line cards
Cisco Nexus 9500 platform switches with X9700-EX/FX/FX2 line cards
The “System Routing Mode: template-vxlan-scale” is not applicable to Cisco NX-OS Release 7.0(3)I5(2) and later.
When using VXLAN BGP EVPN in combination with Cisco NX-OS Release 7.0(3)I4(x) or NX-OS Release 7.0(3)I5(1), the “System Routing Mode: template-vxlan-scale” is required on the following hardware platforms:
Cisco Nexus 9300-EX Switches
Cisco Nexus 9500 Switches with X9700-EX line cards
Changing the “System Routing Mode” requires a reload of the switch.
A loopback address is required when using the source-interface config command. The loopback address represents the local VTEP IP.
During boot-up of a switch (7.0(3)I2(2) and later), you can use the source-interface hold-down-time hold-down-time command to suppress advertisement of the NVE loopback address until the overlay has converged. The range for the hold-down-time is 0 - 2147483647 seconds. The default is 300 seconds.
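A sketch of this configuration; the loopback number and timer value are illustrative:

```
interface nve1
  source-interface loopback1
  source-interface hold-down-time 300
```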
To establish IP multicast routing in the core, IP multicast configuration, PIM configuration, and RP configuration is required.
VTEP to VTEP unicast reachability can be configured through any IGP protocol.
In VXLAN flood and learn mode (7.0(3)I1(2) and earlier), the default gateway for VXLAN VLANs should be provisioned on external routing devices.
In VXLAN flood and learn mode (7.0(3)I2(1) and later), the default gateway for VXLAN VLAN is recommended to be a centralized gateway on a pair of VPC devices with FHRP (First Hop Redundancy Protocol) running between them.
In BGP EVPN, it is recommended to use the anycast gateway feature on all VTEPs.
For flood and learn mode (7.0(3)I2(1) and later), only a centralized Layer 3 gateway is supported. Anycast gateway is not supported. The recommended Layer 3 gateway design would be a pair of switches in vPC as the Layer 3 centralized gateway, with an FHRP protocol running on the SVIs. The same SVIs cannot span across multiple VTEPs, even with different IP addresses used in the same subnet.
When configuring ARP suppression with BGP-EVPN, use the hardware access-list tcam region arp-ether size double-wide command to accommodate ARP in this region. (You must decrease the size of an existing TCAM region before using this command.)
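As a hedged illustration only; the racl resize is just one example of freeing TCAM space before carving the arp-ether region:

```
! shrink another region first to free space (example value)
hardware access-list tcam region racl 512
! carve the arp-ether region as double-wide
hardware access-list tcam region arp-ether 256 double-wide
```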
VXLAN tunnels cannot have more than one underlay next hop on a given underlay port. For example, on a given output underlay port, only one destination MAC address can be derived as the outer MAC on a given output port.
This is a per-port limitation, not a per-tunnel limitation. This means that two tunnels that are reachable through the same underlay port cannot drive two different outer MAC addresses.
When changing the IP address of a VTEP device, you must shut the NVE interface before changing the IP address.
As a best practice, the RP for the multicast group should be configured only on the spine layer. Use the anycast RP for RP load balancing and redundancy.
Static ingress replication and BGP EVPN ingress replication do not require any IP Multicast routing in the underlay.
As a best practice when feature vPC is added or removed from a VTEP, the NVE interfaces on both the vPC primary and the vPC secondary should be shut before the change is made.
Bind NVE to a loopback address that is separate from other loopback addresses that are required by Layer 3 protocols. A best practice is to use a dedicated loopback address for VXLAN.
On vPC VXLAN, it is recommended to increase the delay restore interface-vlan timer under the vPC configuration, if the number of SVIs are scaled up. For example, if there are 1000 VNIs with 1000 SVIs, it is recommended to increase the delay restore interface-vlan timer to 45 Seconds.
If a ping is initiated to the attached hosts on VXLAN VLAN from a vPC VTEP node, the source IP address used by default is the anycast IP that is configured on the SVI. This ping can fail to get a response from the host in case the response is hashed to the vPC peer node. This issue can happen when a ping is initiated from a VXLAN vPC node to the attached hosts without using a unique source IP address. As a workaround for this situation, use VXLAN OAM or create a unique loopback on each vPC VTEP and route the unique address via a backdoor path.
The loopback address used by NVE needs to be configured to have a primary IP address and a secondary IP address.
The secondary IP address is used for all VXLAN traffic that includes multicast and unicast encapsulated traffic.
vPC peers must have identical configurations.
Consistent VLAN to VN-segment mapping.
Consistent NVE1 binding to the same loopback interface
Using the same secondary IP address.
Using different primary IP addresses.
Consistent VNI to group mapping.
For multicast, the vPC node that receives the (S, G) join from the RP (rendezvous point) becomes the DF (designated forwarder). On the DF node, encap routes are installed for multicast.
Decap routes are installed based on the election of a decapper from between the vPC primary node and the vPC secondary node. The winner of the decap election is the node with the least cost to the RP. However, if the cost to the RP is the same for both nodes, the vPC primary node is elected.
The winner of the decap election has the decap mroute installed. The other node does not have a decap route installed.
On a vPC device, BUM traffic (broadcast, unknown-unicast, and multicast traffic) from hosts is replicated on the peer-link. A copy is made of every native packet and each native packet is sent across the peer-link to service orphan-ports connected to the peer vPC switch.
To prevent traffic loops in VXLAN networks, native packets ingressing the peer-link cannot be sent to an uplink. However, if the peer switch is the encapper, the copied packet traverses the peer-link and is sent to the uplink.
When peer-link is shut, the loopback interface used by NVE on the vPC secondary is brought down and the status is Admin Shut. This is done so that the route to the loopback is withdrawn on the upstream and that the upstream can divert all traffic to the vPC primary.
When the vPC domain is shut, the loopback interface used by NVE on the VTEP with shutdown vPC domain is brought down and the status is Admin Shut. This is done so that the route to the loopback is withdrawn on the upstream and that the upstream can divert all traffic to the other vPC VTEP.
When peer-link is no-shut, the NVE loopback address is brought up again and the route is advertised upstream, attracting traffic.
For vPC, the loopback interface has 2 IP addresses: the primary IP address and the secondary IP address.
The primary IP address is unique and is used by Layer 3 protocols.
The secondary IP address on loopback is necessary because the interface NVE uses it for the VTEP IP address. The secondary IP address must be same on both vPC peers.
The vPC peer-gateway feature must be enabled on both peers.
As a best practice, use peer-switch, peer gateway, ip arp sync, ipv6 nd sync configurations for improved convergence in vPC topologies.
In addition, increase the STP hello timer to 4 seconds to avoid unnecessary TCN generations when vPC role changes occur.
The following is an example (best practice) of a vPC configuration:
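The referenced example is not reproduced here; a minimal sketch based on the practices listed above (the domain ID and VLAN range are illustrative):

```
vpc domain 1
  peer-switch
  peer-gateway
  ip arp synchronize
  ipv6 nd synchronize
!
! raise the STP hello timer to 4 seconds to limit TCNs on vPC role changes
spanning-tree vlan 1-100 hello-time 4
```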
When the NVE or loopback is shut in vPC configurations:
If the NVE or loopback is shut only on the primary vPC switch, the global VXLAN vPC consistency checker fails. Then the NVE, loopback, and vPCs are taken down on the secondary vPC switch.
If the NVE or loopback is shut only on the secondary vPC switch, the global VXLAN vPC consistency checker fails. Then the NVE, loopback, and secondary vPC are brought down on the secondary. Traffic continues to flow through the primary vPC switch.
As a best practice, you should keep both the NVE and loopback up on both the primary and secondary vPC switches.
- Redundant anycast RPs configured in the network for multicast load-balancing and RP redundancy are supported on vPC VTEP topologies.
Enabling the vpc peer-gateway configuration is mandatory. For peer-gateway functionality, at least one backup routing SVI must be enabled across the peer-link and also configured with PIM. This provides a backup routing path in the case when a VTEP loses complete connectivity to the spine; remote peer reachability is rerouted over the peer-link in this case. In BUD node topologies, the backup SVI must be added as a static OIF for each underlay multicast group.
The following is an example of backup SVI with PIM enabled:
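In the absence of the original listing, a backup SVI with PIM enabled might be sketched as follows (the VLAN number and addressing are illustrative assumptions; the system nve infra-vlans line applies to the platforms noted below):

```
system nve infra-vlans 10

vlan 10

interface vlan 10
  no shutdown
  ip address 10.10.10.1/30
  ip pim sparse-mode
```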
As a best practice when changing the secondary IP address of an anycast vPC VTEP, the NVE interfaces on both the vPC primary and the vPC secondary should be shut before the IP changes are made.
Using the ip forward command enables the VTEP to forward the VXLAN de-capsulated packet destined to its router IP to the SUP/CPU.
Before configuring it as an SVI, the backup VLAN needs to be configured on Cisco Nexus 9200, 9300-EX, 9300-FX, and 9300-FX2 platform switches as an infra-VLAN with the system nve infra-vlans command.
When ARP suppression is enabled or disabled in a vPC setup, a down time is required because the global VXLAN vPC consistency checker will fail and the VLANs will be suspended if ARP suppression is disabled or enabled on only one side.
MTU Size in the Transport Network
Due to the MAC-to-UDP encapsulation, VXLAN introduces 50-byte overhead to the original frames. Therefore, the maximum transmission unit (MTU) in the transport network must be increased by 50 bytes. If the overlays use a 1500-byte MTU, the transport network must be configured to accommodate 1550-byte packets at a minimum. Jumbo-frame support in the transport network is required if the overlay applications tend to use larger frame sizes than 1500 bytes.
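For example, jumbo frames can be enabled on a transport-facing interface as follows (the interface number and MTU value are illustrative; 9216 is a common jumbo-frame setting that comfortably covers the 50-byte VXLAN overhead):

```
interface ethernet 1/1
  mtu 9216
```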
ECMP and LACP Hashing Algorithms in the Transport Network
As described in a previous section, Cisco Nexus 9000 Series Switches introduce a level of entropy in the source UDP port for ECMP and LACP hashing in the transport network. To take advantage of this implementation, the transport network should use an ECMP or LACP hashing algorithm that takes the UDP source port as input, which achieves the best load-sharing results for VXLAN encapsulated traffic.
Multicast Group Scaling
The VXLAN implementation on Cisco Nexus 9000 Series Switches uses multicast tunnels for broadcast, unknown unicast, and multicast traffic forwarding. Ideally, mapping one VXLAN segment to one IP multicast group provides optimal multicast forwarding. It is possible, however, to have multiple VXLAN segments share a single IP multicast group in the core network. VXLAN can support up to 16 million logical Layer 2 segments, using the 24-bit VNID field in the header. With one-to-one mapping between VXLAN segments and IP multicast groups, an increase in the number of VXLAN segments causes a parallel increase in the required multicast address space and the number of forwarding states on the core network devices. At some point, multicast scalability in the transport network can become a concern. In this case, mapping multiple VXLAN segments to a single multicast group can help conserve multicast control plane resources on the core devices and achieve the desired VXLAN scalability. However, this mapping comes at the cost of suboptimal multicast forwarding. Packets forwarded to the multicast group for one tenant are now sent to the VTEPs of other tenants that are sharing the same multicast group. This causes inefficient utilization of multicast data plane resources. Therefore, this solution is a trade-off between control plane scalability and data plane efficiency.
Despite the suboptimal multicast replication and forwarding, having multitenant VXLAN networks to share a multicast group does not bring any implications to the Layer 2 isolation between the tenant networks. After receiving an encapsulated packet from the multicast group, a VTEP checks and validates the VNID in the VXLAN header of the packet. The VTEP discards the packet if the VNID is unknown to it. Only when the VNID matches one of the VTEP’s local VXLAN VNIDs, does it forward the packet to that VXLAN segment. Other tenant networks will not receive the packet. Thus, the segregation between VXLAN segments is not compromised.
The following are considerations for the configuration of the transport network:
On the VTEP device:
Enable and configure IP multicast.*
Create and configure a loopback interface with a /32 IP address.
(For vPC VTEPs, you must configure primary and secondary /32 IP addresses.)
Enable IP multicast on the loopback interface. *
Advertise the loopback interface /32 addresses through the routing protocol (or static routes) that runs in the transport network.
Enable IP multicast on the uplink outgoing physical interface. *
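The VTEP checklist above might translate into a configuration sketch such as the following (the addresses, interface numbers, RP address, and routing-protocol tag are illustrative assumptions, not values from this guide):

```
feature pim
ip pim rp-address 192.168.100.1 group-list 239.0.0.0/8

interface loopback0
  ip address 192.168.1.1/32
  ip pim sparse-mode

interface ethernet 1/1
  ip address 10.1.1.1/30
  ip pim sparse-mode

router ospf UNDERLAY
```

The loopback0 /32 address is then advertised into the underlay routing protocol so that remote VTEPs and transit devices can reach it.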
Throughout the transport network:
With the Cisco Nexus 9200, 9300-EX, 9300-FX, and 9300-FX2, the use of the system nve infra-vlans command is required; otherwise, VXLAN traffic (IP/UDP 4789) is actively treated by the switch. The following scenarios are a non-exhaustive list of the most common cases in which a system nve infra-vlans definition is required.
Every VLAN that is not associated with a VNI (vn-segment) is required to be configured as system nve infra-vlans in the following cases:
In the case of VXLAN flood and learn as well as VXLAN EVPN, the presence of non-VXLAN VLANs could be related to:
An SVI related to a non-VXLAN VLAN is used for backup underlay routing between vPC peers via a vPC peer-link (backup routing).
An SVI related to a non-VXLAN VLAN is required for connecting downstream routers (external connectivity, dynamic routing over vPC).
An SVI related to a non-VXLAN VLAN is required for per Tenant-VRF peering (L3 route sync and traffic between vPC VTEPs in a Tenant VRF).
An SVI related to a non-VXLAN VLAN is used for first-hop routing toward endpoints (Bud-Node).
In the case of VXLAN flood and learn, the presence of non-VXLAN VLANs could be related to:
An SVI related to a non-VXLAN VLAN is used for an underlay uplink toward the spine (Core port).
The rule of defining VLANs as system nve infra-vlans can be relaxed for special cases such as:
An SVI related to a non-VXLAN VLAN that does not transport VXLAN traffic (IP/UDP 4789).
Non-VXLAN VLANs that are not associated with an SVI or not transporting VXLAN traffic (IP/UDP 4789).
DC Fabrics with VXLAN BGP EVPN are becoming the transport infrastructure for overlays. These overlays, often originated on the server (Host Overlay), require integration or transport over the top of the existing transport infrastructure (Network Overlay).
Nested VXLAN (Host Overlay over Network Overlay) support has been added starting with Cisco NX-OS Release 7.0(3)I7(4) on the Cisco Nexus 9200, 9300-EX, 9300-FX, 9300-FX2, 9500-EX, and 9500-FX platform switches.
To provide Nested VXLAN support, the switch hardware and software must differentiate between two different VXLAN profiles:
VXLAN originated behind the Hardware VTEP for transport over VXLAN BGP EVPN (nested VXLAN)
VXLAN originated behind the Hardware VTEP to integrate with VXLAN BGP EVPN (BUD Node)
The detection of the two different VXLAN profiles is automatic and no specific configuration is needed for nested VXLAN. As soon as VXLAN encapsulated traffic arrives in a VXLAN enabled VLAN, the traffic is transported over the VXLAN BGP EVPN enabled DC Fabric.
The following attachment modes are supported for Nested VXLAN:
Untagged traffic (in native VLAN on a trunk port or on an access port)
Tagged traffic (tagged VLAN on an IEEE 802.1Q trunk port)
Untagged and tagged traffic that is attached to a vPC domain
Untagged traffic on a Layer 3 interface or a Layer 3 port-channel interface
Port VLAN mapping has the following guidelines and limitations:
Before removing a port-channel which has VLAN mapping configured, VLAN mappings on the interface must be removed.
CoS (QoS) marking is not applicable for the VLANs which are translated on a port.
Do not configure translation on the native VLAN.
When SPAN / Ethanalyzer is used to capture the traffic on PV enabled ports, only the incoming 802.1q tag is seen in the captured traffic.
On a port VLAN translation-enabled port, traffic should not be received on the translated VLAN. If traffic is received on a translated VLAN on such a port, that traffic fails.
Overlapping VLAN mapping is supported, for example, switchport vlan mapping 10 20 and switchport vlan mapping 20 30. Traffic can hit the port with VLAN 10 and VLAN 20, but not with VLAN 30, because it is a translated VLAN.
Port VLAN mapping is not supported on FEX ports.
Control packets supported for translation are ARP, IPv6 neighbor discovery, and IPv6 neighbor solicitation packets.
You can configure VLAN translation between the ingress (incoming) VLAN and a local (translated) VLAN on a port. For the traffic arriving on the interface where VLAN translation is enabled, the incoming VLAN is mapped to a translated VLAN that is VXLAN enabled.
On the underlay, this is mapped to a VNI, the inner dot1q tag is deleted, and the traffic is switched over to the VXLAN network. On the egress switch, the VNI is mapped to a translated VLAN. On the outgoing interface, where VLAN translation is configured, the traffic is converted to the original VLAN and egresses. Refer to the VLAN counters on the translated VLAN, not on the ingress VLAN, for the traffic counters. Port VLAN (PV) mapping is an access-side feature and is supported with both multicast and ingress replication for flood and learn and BGP EVPN mode for VXLAN.
VLAN mapping helps with VLAN localization to a port, scoping the VLANs per port. A typical use case is in the service provider environment where the service provider leaf switch has different customers with overlapping VLANs that come in on different ports. For example, customer A has VLAN 10 coming in on Eth 1/1 and customer B has VLAN 10 coming in on Eth 2/2.
In this scenario, you can map the customer VLAN to a provider VLAN and map that to an L2 VNI. There is an operational benefit of terminating different customer VLANs and mapping them to the fabric-managed-VLANs, L2 VNIs.
Notes for Port VLAN Mapping:
Beginning with Cisco NX-OS Release 7.0(3)I7(5), routing is supported on translated VLANs with port VLAN mapping configured on trunk ports. This is supported on Cisco Nexus 9300-EX, 9300-FX, and 9300-FX2 platform switches.
Port VLAN mapping is supported on Cisco Nexus 9300 platform switches. Beginning with Cisco NX-OS Release 7.0(3)I6(1), port VLAN mapping is supported on Cisco Nexus 9300-EX and 9500 platform switches with 9700-EX line cards with the following exceptions:
Only Layer 2 (no routing) is supported with port VLAN on these switches.
Beginning with Release 7.0(3)I7(4), Cisco Nexus 9300, and 9500 switches support switching on overlapped VLAN interfaces; only VLAN-mapping switching is applicable for Cisco Nexus 9500 with EX/FX line cards.
Beginning with Cisco NX-OS 7.0(3)I7(3), port VLAN switching is supported on 9300-FX2 platform switches.
Beginning with Cisco NX-OS 7.0(3)I7(1), port VLAN switching is supported on 9300-FX platform switches.
Beginning with Cisco NX-OS Release 7.0(3)I2(1), port VLAN switching is supported on Cisco Nexus 9300 platform switches with the NFE ASIC.
Beginning with Cisco NX-OS Release 7.0(3)I1(2), port VLAN routing is supported on Cisco Nexus 9300 platform switches with the NFE ASIC.
The ingress (incoming) VLAN does not need to be configured on the switch as a VLAN. The translated VLAN needs to be configured and given a vn-segment mapping. An NVE interface with the corresponding VNI mapping is also required.
All Layer 2 source address learning and Layer 2 MAC destination lookup occurs on the translated VLAN. Refer to the VLAN counters on the translated VLAN and not on the ingress (incoming) VLAN.
On Cisco Nexus 9300 Series switches with NFE ASIC, PV routing is not supported on 40 G ALE ports.
PV routing supports configuring an SVI on the translated VLAN for flood and learn and BGP EVPN mode for VXLAN.
VLAN translation (mapping) is supported on Cisco Nexus 9000 Series switches with a Network Forwarding Engine (NFE).
When changing a property on a translated VLAN, the port that has mapping configuration with that VLAN as the translated VLAN, should be flapped to ensure correct behavior.
The following is an example of overlapping VLANs for PV translation. In the first statement, VLAN-102 is a translated VLAN with VNI mapping. In the second statement, VLAN-102 is the original VLAN that is translated to VLAN-103 with VNI mapping.
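As the original listing is not reproduced here, a sketch of those two statements might look like the following (the interface number, the ingress VLAN 101, and the VNI values are illustrative assumptions):

```
interface ethernet 1/1
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 101 102
  switchport vlan mapping 102 103

vlan 102
  vn-segment 10102
vlan 103
  vn-segment 10103
```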
When adding a member to an existing port channel using the force command, the "mapping enable" configuration must be consistent.
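For instance, consider the following state (the port-channel and member interface numbers are illustrative assumptions):

```
interface port-channel 101
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 10 20

interface ethernet 1/8
  switchport mode trunk
```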
Now int po 101 has the "switchport vlan mapping enable" configuration, while eth 1/8 does not. If you want to add eth 1/8 to port channel 101, you first need to apply the "switchport vlan mapping enable" configuration on eth 1/8, and then use the force command.
Port VLAN mapping is not supported on Cisco Nexus 9200 Series switches.
Before you begin
Ensure that the physical or port channel on which you want to implement VLAN translation is configured as a Layer 2 trunk port.
Ensure that the translated VLANs are created on the switch and are also added to the Layer 2 trunk port's allowed VLAN list.
Ensure that all translated VLANs are VXLAN enabled.
[ no ] switchport vlan mapping vlan-id translated-vlan-id
Translates a VLAN to another VLAN.
The range for both the vlan-id and translated-vlan-id arguments is from 1 to 4094.
On the underlay, this is mapped to a VNI, the inner dot1q tag is deleted, and the traffic is switched over to the VXLAN network. On the egress switch, the VNI is mapped to a translated VLAN. On the outgoing interface, where VLAN translation is configured, the traffic is converted to the original VLAN and egresses.
[ no ] switchport vlan mapping all
Removes all VLAN mappings configured on the interface.
(Optional) copy running-config startup-config
Copies the running configuration to the startup configuration.
(Optional) show interface [ if-identifier ] vlan mapping
Displays VLAN mapping information for a range of interfaces or for a specific interface.
This example shows how to configure VLAN translation between (the ingress) VLAN 10 and (the local) VLAN 100. The show vlan counters command output shows the statistics counters for the translated VLAN instead of the customer VLAN.
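Since the original listing is not reproduced here, the translation described above might be sketched as follows (the interface number and VNI value are illustrative assumptions):

```
interface ethernet 1/1
  switchport mode trunk
  switchport vlan mapping enable
  switchport vlan mapping 10 100
  switchport trunk allowed vlan 100

vlan 100
  vn-segment 10100
```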
You can configure VLAN translation from an inner VLAN and an outer VLAN to a local (translated) VLAN on a port. For the double tag VLAN traffic arriving on the interfaces where VLAN translation is enabled, the inner VLAN and outer VLAN are mapped to a translated VLAN that is VXLAN enabled.
Notes for configuring inner VLAN and outer VLAN mapping:
The inner and outer VLANs cannot be on the trunk allowed list of a port where inner-outer VLAN mapping is configured.
On the same port, no two mapping (translation) configurations can have the same outer (or original) or translated VLAN. Multiple inner VLAN and outer VLAN mapping configurations can have the same inner VLAN.
When a packet comes double-tagged on a port which is enabled with the inner option, only bridging is supported.
VXLAN PV routing is not supported for double-tagged frames.
switchport vlan mapping outer-vlan-id inner inner-vlan-id translated-vlan-id
Translates inner VLAN and outer VLAN to another VLAN.
This example shows how to configure translation of double tag VLAN traffic (inner VLAN 12; outer VLAN 11) to VLAN 111.
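In the absence of the original listing, a sketch of that double-tag translation might look like the following (the interface number and VNI value are illustrative assumptions):

```
interface ethernet 1/1
  switchport mode trunk
  switchport vlan mapping 11 inner 12 111
  switchport trunk allowed vlan 111

vlan 111
  vn-segment 10111
```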
An NVE interface is the overlay interface that terminates VXLAN tunnels.
You can create and configure an NVE (overlay) interface with the following:
The source interface must be a loopback interface that is configured on the switch with a valid /32 IP address. This /32 IP address must be known by the transient devices in the transport network and the remote VTEPs. This is accomplished by advertising it through a dynamic routing protocol in the transport network.
member vni vni
Associate VXLAN VNIs (Virtual Network Identifiers) with the NVE interface.
mcast-group start-address [ end-address ]
Assign a multicast group to the VNIs.
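Putting the pieces above together, a minimal NVE interface sketch might look like the following (the loopback address, VNI, and multicast group are illustrative assumptions):

```
interface loopback0
  ip address 192.168.1.1/32

interface nve 1
  no shutdown
  source-interface loopback0
  member vni 10100
    mcast-group 239.1.1.1
```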
Static MAC for VXLAN VTEP is supported on Cisco Nexus 9300 Series switches with flood and learn. This feature enables the configuration of static MAC addresses behind a peer VTEP.
The following example shows the output for a static MAC address configured for VXLAN VTEP:
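The original listing is not reproduced here; the configuration side of a static MAC entry behind a peer VTEP might be sketched as follows (the MAC address, VNI, and peer IP are illustrative assumptions):

```
mac address-table static 0000.1111.2222 vni 10100 interface nve 1 peer-ip 192.168.1.2
```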
The following enables BGP EVPN with ingress replication for peers.
ingress-replication protocol bgp
Enables BGP EVPN with ingress replication for the VNI.
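A sketch of BGP EVPN ingress replication under the NVE interface might look like the following (the VNI value is an illustrative assumption):

```
interface nve 1
  source-interface loopback0
  host-reachability protocol bgp
  member vni 10100
    ingress-replication protocol bgp
```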
The following enables static ingress replication for peers.
member vni [ vni-id | vni-range ]
Maps VXLAN VNIs to the NVE interface.
ingress-replication protocol static
Enables static ingress replication for the VNI.
Configures a peer VTEP IP address for static ingress replication.
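A static ingress replication sketch, assuming illustrative VNI and peer addresses, might look like this:

```
interface nve 1
  source-interface loopback0
  member vni 10100
    ingress-replication protocol static
      peer-ip 192.168.1.2
      peer-ip 192.168.1.3
```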
Q-in-VNI has the following limitations:
Q-in-VNI and Selective Q-in-VNI are supported only with VXLAN Flood and Learn.
Q-in-VNI, Selective Q-in-VNI, and QinQ-QinVNI features are not supported with Multicast underlay on Nexus 9000 EX platforms.
It is recommended that you enter the system dot1q-tunnel transit command when running these features on vPC VTEPs.
For proper operation during Layer 3 uplink failure scenarios on vPC VTEPs, configure a backup SVI and enter the system nve infra-vlans command for the backup SVI VLAN. On Cisco Nexus 9000-EX platform switches, the backup SVI VLAN needs to be the native VLAN on the peer-link.
Single tag is supported on Cisco Nexus 9300 platform switches. It can be enabled by removing the overlay-encapsulation vxlan-with-tag command from the NVE interface.
Single tag is not supported on Cisco Nexus 9500 platform switches; only double tag is supported.
Double tag is not supported on Cisco Nexus 9300-EX platform switches, only single tag is supported.
When upgrading from Cisco NX-OS Release 7.0(3)I3(1) or 7.0(3)I4(1) to Cisco NX-OS Release 7.0(3)I7(5) on Cisco Nexus 9300 platform switches without the overlay-encapsulation vxlan-with-tag command under the NVE interface, add overlay-encapsulation vxlan-with-tag under the NVE interface in the older release before starting the ISSU upgrade. Only double tag was supported in Cisco NX-OS Releases 7.0(3)I3(1) and 7.0(3)I4(1); single tag is also supported in Cisco NX-OS Release 7.0(3)I7(5).
We do not support traffic between ports that are configured for Q-in-VNI and ports that are configured for trunk on Cisco Nexus 9300-EX platform switches.
Q-in-VNI is supported only with flood and learn.
The Q-in-VNI feature cannot coexist with a VTEP which has Layer 3 subinterfaces configured.
The Q-in-VNI or selective Q-in-VNI feature is not supported with VXLAN or VXLAN EVPN on Cisco Nexus 9000-EX platform switches when Multicast is used for BUM replication (L2VNI).
Using Q-in-VNI provides a way for you to segregate traffic by mapping to a specific port. In a multi-tenant environment, you can specify a port to a tenant and send/receive packets over the VXLAN overlay.
Notes about configuring a Q-in-VNI:
Q-in-VNI only supports VXLAN bridging. It does not support VXLAN routing.
Q-in-VNI does not support FEX.
When configuring access ports and trunk ports:
For NX-OS 7.0(3)I2(2) and earlier releases, when a switch is in dot1q mode, you cannot have access ports or trunk ports configured on any other interface on the switch.
For NX-OS 7.0(3)I3(1) and later releases running on a Network Forwarding Engine (NFE), you can have access ports, trunk ports and dot1q ports on different interfaces on the same switch.
For NX-OS 7.0(3)I5(1) and later releases running on a Leaf Spine Engine (LSE), you can have access ports, trunk ports and dot1q ports on different interfaces on the same switch.
For NX-OS 7.0(3)I3(1) and later releases, you cannot have the same VLAN configured for both dot1q and trunk ports/access ports.
Configuring the Q-in-VNI feature requires:
The base port mode must be a dot1q tunnel port with an access VLAN configured.
VNI mapping is required for the access VLAN on the port.
If you have Q-in-VNI on one Cisco Nexus 9300-EX Series switch VTEP and trunk on another Cisco Nexus 9300-EX Series switch VTEP, the bidirectional traffic will not be sent between the two ports.
On Cisco Nexus 9300-EX Series switches performing VXLAN and Q-in-Q, a mix of provider interfaces and VXLAN uplinks is not supported. The VXLAN uplinks have to be separated from the Q-in-Q provider or customer interfaces.
For VPC use cases, the following considerations must be made when VXLAN and Q-in-Q are used on the same switch.
The VPC peer-link has to be specifically configured as a provider interface to ensure orphan-to-orphan port communication. In these cases, the traffic is sent with two IEEE 802.1q tags (double dot1q tagging). The inner dot1q is the customer VLAN ID while the outer dot1q is the provider VLAN ID (access VLAN).
The VPC peer-link is used as backup path for the VXLAN encapsulated traffic in the case of an uplink failure. In Q-in-Q, the VPC peer-link also acts as the provider interface (orphan-to-orphan port communication). In this combination, use the native VLAN as the backup VLAN for traffic to handle uplink failure scenarios. Also make sure the backup VLAN is configured as a system infra VLAN (system nve infra-vlans).
The following is an example of configuring a Q-in-VNI (NX-OS 7.0(3)I2(2) and earlier releases):
The following is an example of configuring a Q-in-VNI (NX-OS 7.0(3)I3(1) and later releases):
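The original listings are not reproduced here; in both release trains, the port-side configuration follows the requirements above (dot1q tunnel mode with an access VLAN that has a VNI mapping). A sketch with illustrative interface, VLAN, and VNI values:

```
vlan 100
  vn-segment 10100

interface ethernet 1/1
  switchport mode dot1q-tunnel
  switchport access vlan 100
```

The release-dependent difference described earlier is whether access, trunk, and dot1q ports may coexist on other interfaces of the same switch, not the per-port commands themselves.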
Selective Q-in-VNI is a VXLAN tunneling feature that allows a user specific range of customer VLANs on a port to be associated with one specific provider VLAN. Packets that come in with a VLAN tag that matches any of the configured customer VLANs on the port are tunneled across the VXLAN fabric using the properties of the service provider VNI. The VXLAN encapsulated packet carries the customer VLAN tag as part of the L2 header of the inner packet.
The packets that come in with a VLAN tag that is not present in the range of the configured customer VLANs on a selective Q-in-VNI configured port are dropped. This includes the packets that come in with a VLAN tag that matches the native VLAN on the port. Packets coming untagged or with a native VLAN tag are L3 routed using the native VLAN’s SVI that is configured on the selective Q-in-VNI port (no VXLAN).
Beginning with Cisco NX-OS Release 7.0(3)I5(2), selective Q-in-VNI is supported on both vPC and non-vPC ports on Cisco Nexus 9300-EX Series switches. This feature is not supported on Cisco Nexus 9300 Series and 9200 Series switches.
This feature is also supported with flood and learn in IR mode.
See the following guidelines for selective Q-in-VNI:
Beginning with Cisco NX-OS Release 7.0(3)I5(2), configuring selective Q-in-VNI on one VXLAN and configuring plain Q-in-VNI on the VXLAN peer is supported. Configuring one port with selective Q-in-VNI and the other port with plain Q-in-VNI on the same switch is supported.
Selective Q-in-VNI is an ingress VLAN tag-policing feature. Only ingress VLAN tag policing is performed with respect to the selective Q-in-VNI configured range.
For example, selective Q-in-VNI customer VLAN range of 100-200 is configured on VTEP1 and customer VLAN range of 200-300 is configured on VTEP2. When traffic with VLAN tag of 175 is sent from VTEP1 to VTEP2, the traffic is accepted on VTEP1, since the VLAN is in the configured range and it is forwarded to the VTEP2. On VTEP2, even though VLAN tag 175 is not part of the configured range, the packet egresses out of the selective Q-in-VNI port. If a packet is sent with VLAN tag 300 from VTEP1, it is dropped because 300 is not in VTEP1’s selective Q-in-VNI configured range.
Configure the system dot1q-tunnel transit CLI on the vPC switches with selective Q-in-VNI configurations. This CLI configuration is required to retain the inner Q-tag as the packet goes over the vPC peer link when one of the vPC peers has an orphan port. With this CLI configuration, the vlan dot1Q tag native functionality does not work.
The native VLAN configured on the selective Q-in-VNI port cannot be a part of the customer VLAN range. If the native VLAN is part of the customer VLAN range, the configuration is rejected.
The provider VLAN can overlap with the customer VLAN range. For example, switchport vlan mapping 100-1000 dot1q-tunnel 200
By default, the native VLAN on any port is VLAN 1. If VLAN 1 is configured as part of the customer VLAN range using the switchport vlan mapping < range > dot1q-tunnel < sp-vlan > CLI command, the traffic with customer VLAN 1 is not carried over as VLAN 1 is the native VLAN on the port. If customer wants VLAN 1 traffic to be carried over the VXLAN cloud, they should configure a dummy native VLAN on the port whose value is outside the customer VLAN range.
To remove some VLANs or a range of VLANs from the configured switchport VLAN mapping range on the selective Q-in-VNI port, use the no form of the switchport vlan mapping < range > dot1q-tunnel < sp-vlan > CLI command.
For example, VLAN 100-1000 is configured on the port. To remove VLAN 200-300 from the configured range, use the no switchport vlan mapping < 200-300 > dot1q-tunnel < sp-vlan > CLI command.
Only the native VLANs and the service provider VLANs are allowed on the selective Q-in-VNI port. No other VLANs are allowed on the selective Q-in-VNI port and even if they are allowed, the packets for those VLANs are not forwarded.
See the following configuration examples.
See the following example for the provider VLAN configuration:
See the following example for configuring VXLAN Flood and Learn with Ingress Replication:
See the following example for the interface nve configuration:
See the following example for the native VLAN configuration:
See the following example for configuring selective Q-in-VNI on a port. In this example, native VLAN 150 is used for routing the untagged packets. Customer VLANs 200-700 are carried across the dot1q tunnel. The native VLAN 150 and the provider VLAN 50 are the only VLANs allowed.
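As the original listings are not reproduced here, the scenario just described might be sketched as follows (the interface number and the VNI for provider VLAN 50 are illustrative assumptions):

```
vlan 50
  vn-segment 10050

interface ethernet 1/1
  switchport mode trunk
  switchport trunk native vlan 150
  switchport trunk allowed vlan 50,150
  switchport vlan mapping 200-700 dot1q-tunnel 50
```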
Q-in-VNI can be configured to tunnel LACP packets.
The following is an example of configuring a Q-in-VNI for LACP tunneling (NX-OS 7.0(3)I2(2) and earlier releases):
The following is an example of configuring a Q-in-VNI for LACP tunneling (NX-OS 7.0(3)I3(1) and later releases):
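The original listings are not reproduced here. A hedged sketch, assuming the tunnel-control-frames option of the overlay-encapsulation command is what enables LACP tunneling, with illustrative interface, VLAN, VNI, and group values:

```
interface nve 1
  source-interface loopback0
  overlay-encapsulation vxlan-with-tag tunnel-control-frames
  member vni 10100
    mcast-group 239.1.1.1

interface ethernet 1/1
  switchport mode dot1q-tunnel
  switchport access vlan 100
```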
The following is an example topology that pins each port of a port-channel pair to a unique VM. The port-channel is stretched from the CE perspective. There is no port-channel on VTEP. The traffic on P1 of CE1 transits to P1 of CE2 using Q-in-VNI.
QinQ-QinVNI is a VXLAN tunneling feature that allows you to configure a trunk port as a multi-tag port to preserve the customer VLANs that are carried across the network.
On a port that is configured as multi-tag, packets are expected with multiple-tags or at least one tag. When multi-tag packets ingress on this port, the outer-most or first tag is treated as provider-tag or provider-vlan. The remaining tags are treated as customer-tag or customer-vlan.
This feature is supported on both vPC and non-vPC ports.
Ensure that the switchport trunk allow-multi-tag command is configured on both of the vPC-peers. It is a type 1 consistency check.
This feature is supported with VXLAN Flood and Learn and VXLAN EVPN.
This feature is supported on the Cisco Nexus 9300-FX and Cisco Nexus 9300-FX2 switches.
QinQ-QinVNI has the following guidelines and limitations:
On a multi-tag port, provider VLANs must be a part of the port. They are used to derive the VNI for that packet.
Untagged packets are associated with the native VLAN. If the native VLAN is not configured, the packet is associated with the default VLAN (VLAN 1).
Packets coming in with an outermost VLAN tag (provider-vlan), not present in the range of allowed VLANs on a multi-tag port, are dropped.
Packets coming in with an outermost VLAN tag (provider-vlan) matching the native VLAN are routed or bridged in the native VLAN's domain.
This feature is supported with VXLAN bridging. It does not support VXLAN routing.
Multicast data traffic with more than two Q-Tags is not supported when snooping is enabled on the VXLAN VLAN.
You need at least one multi-tag trunk port allowing the provider VLANs in up state on both the vPC peers. Otherwise, traffic traversing via the peer-link for these provider VLANs will not carry all inner C-Tags.
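A multi-tag trunk port for QinQ-QinVNI might be sketched as follows (the interface number and VLAN ranges are illustrative assumptions; on vPC peers, allow-multi-tag must match on both switches):

```
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allow-multi-tag
  switchport trunk allowed vlan 100-110
  switchport trunk native vlan 99
```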
Use this procedure to remove a VNI.
Configuring FHRP Over VXLAN
Overview of FHRP
Starting with Release 7.0(3)I5(1), you can configure First Hop Redundancy Protocol (FHRP) over VXLAN on Cisco Nexus 9000 Series switches. The FHRP provides a redundant Layer 3 traffic path. It provides fast failure detection and transparent switching of the traffic flow. The FHRP avoids the use of the routing protocols on all the devices. It also avoids the traffic loss that is associated with the routing or the discovery protocol convergence. It provides an election mechanism to determine the next best gateway. Current FHRP supports HSRPv1, HSRPv2, VRRPv2, and VRRPv3.
FHRP over VXLAN
The FHRP serves as the Layer 3 VXLAN redundant gateway for the hosts in the VXLAN. The Layer 3 VXLAN gateway provides routing between VXLAN segments and between VXLAN and VLAN segments. The Layer 3 VXLAN gateway also serves as a gateway for the external connectivity of the hosts.
See the following guidelines and limitations for configuring FHRP over VXLAN:
Configuring FHRP over VXLAN allows the FHRP protocols to peer using the hello packets that are flooded on the VXLAN overlay. The ACLs have been programmed into the Cisco Nexus 9500 Series switches that allow the HSRP packets that are flooded on the overlay to be punted to the supervisor module.
When using FHRP with VXLAN, ARP-ETHER TCAM must be carved using the hardware access-list tcam region arp-ether 256 CLI command.
Configuring FHRP over VXLAN is supported for both IR and multicast flooding of the FHRP packets. The FHRP protocol working does not change for configuring FHRP over VXLAN.
The FHRP over VXLAN feature is supported for flood and learn only.
For Layer 3 VTEPs in BGP EVPN, only anycast GW is supported.
Beginning with Cisco NX-OS Release 7.0(3)I5(2), configuring FHRP over VXLAN is supported on the Cisco Nexus 9200, 9300, and 9300-EX Series switches.
See the following illustrations for the only supported deployments of FHRP over VXLAN.
See the following configuration example for FHRP over VXLAN Leafs as Layer 3 Gateway (Figure 2) and FHRP over VXLAN Spine as Layer 3 Gateway (Figure 3):
Configuring FHRP over VXLAN is supported on the following Cisco Nexus 9000 Series switches and line cards:
Cisco Nexus 9300 Series switches
N9K-X9536PQ line cards
N9K-X9564TX line cards
N9K-X9564PX line cards
See the following new supported topology for configuring FHRP over VXLAN:
In the above topology, FHRP can be configured on the spine layer. The FHRP protocols synchronize their state with the hellos that get flooded on the overlay, without needing a dedicated Layer 2 link between the peers. The FHRP operates in an active/standby state because no vPC is deployed.
See the following configuration example for the topology:
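The original listing is not reproduced here; an HSRP gateway on a VXLAN-enabled SVI might be sketched as follows (the VLAN, VNI, group number, and addresses are illustrative assumptions):

```
feature hsrp

vlan 100
  vn-segment 10100

interface vlan 100
  no shutdown
  ip address 10.1.100.2/24
  hsrp 10
    ip 10.1.100.1
```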
Starting with Cisco NX-OS Release 7.0(3)F3(4), you can configure IGMP snooping over VXLAN. This feature is available on the Cisco Nexus 9508 switch with 9636-RX line cards.
Starting with Cisco NX-OS Release 7.0(3)I5(1), you can configure IGMP snooping over VXLAN. The configuration of IGMP snooping is the same in VXLAN as in a regular VLAN domain. For more information on IGMP snooping, see the Configuring IGMP Snooping section in the Cisco Nexus 9000 Series NX-OS Multicast Routing Configuration Guide, Release 7.x.
See the following guidelines and limitations for IGMP snooping over VXLAN:
For IGMP snooping over VXLAN, all the guidelines and limitations of VXLAN apply.
Beginning with Cisco NX-OS Release 7.0(3)I7(6), IGMP snooping on VXLAN VLANs is supported on N9K-C9364C, N9K-C93180-FX, and N9K-C9336C-FX2 platform switches.
Beginning with Cisco NX-OS Release 7.0(3)I6(1), IGMP snooping on VXLAN VLANs is supported for Cisco Nexus 9300 and 9300-EX platform switches with multicast overlay networks and ingress replication underlay networks.
Beginning with Cisco NX-OS Release 7.0(3)I5(1), IGMP snooping on VXLAN VLANs is supported for Cisco Nexus 9300 and 9300-EX platform switches and only with multicast underlay networks (not with ingress replication underlay networks).
Beginning with Cisco NX-OS Release 7.0(3)I5(2), VXLAN IGMP snooping is supported on Cisco Nexus 9300 platform switches and Cisco Nexus 9500 platform switches with N9K-X9732C-EX line cards.
By default, unknown multicast traffic gets flooded to the VLAN domains on Cisco Nexus 9300 platform switches.
IGMP snooping over VXLAN is not supported on any FEX enabled platforms and FEX ports.
For VXLAN IGMP snooping functionality, the ARP-ETHER TCAM must be configured in the double-wide mode using the hardware access-list tcam region arp-ether 256 double-wide command for Cisco Nexus 9300 switches. This command is not required for Cisco Nexus 9300-EX switches.
switch(config)# ip igmp snooping vxlan-umc drop vlan ?
Configures IGMP snooping over VXLAN to drop all unknown multicast traffic on a per-VLAN basis using this global CLI command. On Cisco Nexus 9000 Series switches with a Network Forwarding Engine (NFE), the default behavior is to flood all unknown multicast traffic to the bridge domain.
This procedure applies only to the Cisco Nexus 9508 switch.
This procedure configures line cards for either VXLAN or MPLS. All line cards in the chassis must be either VXLAN or MPLS; they cannot be mixed.
Reloads the Cisco NX-OS software.
switch(config)# show hardware profile module [ module | all ]
Displays the line cards that are configured with VXLAN.
Centralized VRF Route Leaking using Default-Routes and Aggregates
Centralizing VRF route leaks using default-routes facilitates installation and configuration of new hardware or software that must coexist with legacy systems, without any additional configuration overhead on the legacy nodes. However, enabling shared services and default-VRF access scenarios may require one additional configuration at the per-VRF-AF level on the Border Leaf (BL). Though the leaf nodes may not require configuration changes, the BLs must have knowledge of all VRFs, as well as the fabric entry and exit points. EVPN enables multi-tenancy support by segregating traffic among tenants. While segregation among different tenants is maintained in most cases, supporting cross-tenant traffic is equally important for tenants to access common services. In order to achieve traffic segregation, a tenant's routes are typically placed in different VRFs in an EVPN deployment.
When an EVPN solution is deployed in an existing datacenter, legacy switches that do not have EVPN support coexist with EVPN-capable VTEPs. The VTEPs support tenant traffic segregation: tenant routes are placed in VRFs, while the legacy switches are typically placed in the global VRF. Existing servers remain connected to legacy switches. The hosts in a tenant's VRF must have access to servers placed under the legacy switches in the global VRF. Access to the default-VRF is enabled by allowing routes that are already imported into a non-default VRF to be re-imported into the default-VRF. That in turn advertises the VPN-learnt prefixes outside of the fabric. Because EVPN has no support, similar to VPNv4, for advertising default-routes directly via the VPN session, the default-route must be originated from the VRF AF. Preferably, use route-maps to control prefix leaking from the VRFs into the default-VRF.
EVPN cross-VRF connectivity between leaves is achieved by packet re-encapsulation on the BL, which will be the VTEP for all VNIs requiring cross-VRF reachability. Default routes provide cross-VRF reachability to the legacy nodes.
Routes are not imported directly from VPN into the default-VRF. You must configure a VRF to import and hold those routes, which are then evaluated for importing into the default-VRF after configuring the knob. Because all VRFs may be importing the other VRFs' routes, only one VRF may be needed to leak its routes to the default-VRF to provide full VPN-to-default-VRF reachability.
Centralized VRF Route Leaking is supported only on Cisco Nexus 9200 and 9300-EX platform switches.
Each prefix needs to be imported into each VRF for full EVPN Cross-VRF Reachability.
Memory complexity of the deployment can be described by an O(N×M) formula, where N is the number of prefixes and M is the number of EVPN VRFs.
You must configure "feature bgp" to have access to the "export vrf default" command. In order to achieve full Centralized Route Leaking on EVPN, downstream VNI assignment must be supported.
Centralized route leaking applies longest-prefix matching. A leaf with a less specific local route may not be able to reach a more specific address within that route's subnet from another VNI, unless you manually configure the border leaf switch to generate those advertisements.
Hardware support for VXLAN packet re-encapsulation at BL is required for this functionality to work in EVPN.
The following example shows how to leak routes from tenant VRF to default VRF.
The following example shows how to leak routes from default VRF to tenant VRF.
The following is an example configuration on a border-leaf switch to leak routes from one tenant VRF (VRF150) to another tenant VRF (VRF250). In these examples, BL-11 is used as the border-leaf switch. The aggregate-address is used on the BL switches to advertise VRF250's addresses to the leaf switches, so that a leaf switch can send routes destined to VRF250 to the BL.
VXLAN Tunnel Egress QoS Policy
This feature applies the QoS policy to VXLAN tunnel-terminated packets coming into this site. The configuration is applied to the NVE interface. You can apply all input policies, such as policing, scheduling, and marking, to decapsulated packets coming from the VXLAN tunnel.
The QoS policy is applied end to end, that is, as an ingress QoS policy on the access ports as well as on the ingress NVE interface on the remote side.
The uniform mode is the default. You can change the QoS mode by entering the qos-mode pipe command.
VXLAN Tunnel Egress QoS Policy has the following guidelines and limitations:
Beginning with Cisco NX-OS Release 7.0(3)I7(5), support is added for this feature.
This feature is supported only on Cisco Nexus 9300-EX, 9300-FX, and 9300-FX2 platform switches.
This feature is supported only in the EVPN fabric.
This procedure configures the VXLAN Tunnel Egress QoS Policy.
VXLAN configuration must be present.
Enter the show running-config command to determine the current state.
service-policy type qos input policy-map-name
Applies the input service policy. Uniform mode is the default.
(Optional) qos-mode pipe
Defines the QoS mode as uniform or pipe. Default mode is uniform.
Negates the shutdown command to bring the interface up.
host-reachability protocol bgp
Defines BGP as the mechanism for host reachability advertisement.
Configures ARP suppression under the Layer 2 VNI.
To display the VXLAN configuration information, enter one of the following commands:
An example of a loopback interface configuration and routing protocol configuration:
Nexus 9000 VTEP-1 configuration:
Nexus 9000 VTEP-2 configuration:
An example of an ingress replication topology:
For a vPC VTEP configuration, the loopback address requires a secondary IP.
An example of a vPC VTEP configuration:
Nexus 9000 VTEP-3 configuration:
VXLAN vs VLAN: a Definitive Guide
With the widespread adoption of cloud technology, data centers play a huge role in running key applications and business processes for organizations around the world. IT spending on data centers is expected to reach $227 billion in 2022 and $237 billion by 2023.
To run these data centers profitably and deliver quality service to their customers, organizations are constantly trying to squeeze maximum performance out of the hardware. Two technologies, VLANs (Virtual Local Area Networks) and VXLANs (Virtual eXtensible Local Area Networks), improve network efficiency and contribute to improved security. In this article, we explore what they are, how they work, and the differences in VXLAN vs VLAN.
What is VLAN?
VLAN stands for Virtual LAN or Virtual Local Area Network. VLANs essentially create virtual networks within a local area network and let you group devices together logically. For example, in a LAN in an office or a school, all devices come under one network, with a switch (usually) connecting them. And all of these devices come under one broadcast domain, and maybe even under a single collision domain.
This presents a couple of problems. The packets from different devices may collide and have to be sent again, creating network inefficiencies. This can be avoided by using multiple switches, but that still keeps the devices under the same broadcast domain. Network efficiency decreases further as the number of devices increases.
With a VLAN, you can create multiple networks and broadcast domains of smaller sizes. And you can use these virtual LANs for grouping together devices that frequently communicate with each other. For example, instead of connecting all devices in an office under a single broadcast domain or a single LAN, you can create virtual LANs for the finance department, the HR department, and the marketing department.
How do VLANs work?
They work by creating multiple virtual switches over a single physical switch, with each virtual switch handling the communication for a single VLAN. You can configure individual ports on a physical switch to handle communication only for a single VLAN.
And you can connect these virtual switches to other virtual switches in the same virtual LAN, even if they are on another physical switch.
As you can imagine, this is not scalable; for every VLAN, you'll need a physical connection between the physical switches. For example, let's say there are three VLANs and two switches involved. To connect the virtual switches on these three VLANs, you'll need three physical connections. And there are only so many ports on a physical switch.
To solve this, a method was devised to connect multiple switches over a single link, called a trunk port. Here, data packets for multiple VLANs are carried over a single port on each physical switch.
As we know, every data packet contains a layer 3 header with destination and source IP addresses, and a layer 2 header containing the MAC addresses. When data is sent over this trunk port, information about the VLAN it belongs to is added to the layer 2 header. This tag is called the VID or VLAN ID, and it identifies the VLAN to which each frame belongs. This ensures that the data packets in a single VLAN reach only the devices in that virtual LAN.
The VID is a 12-bit field that can encode 4096 IDs. But 0 and 4095 are reserved, which means you can have up to 4094 VLANs in a single network.
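To make the tagging concrete, here is a short Python sketch (purely illustrative, not tied to any particular switch) that inserts an 802.1Q tag into an Ethernet frame and enforces the 1-4094 VID range:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def tag_frame(frame: bytes, vid: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination/source MACs (bytes 0-11)."""
    if not 1 <= vid <= 4094:          # 0 and 4095 are reserved
        raise ValueError("VID must be 1-4094")
    tci = (pcp << 13) | vid           # priority (3 bits), DEI (1 bit, 0), VID (12 bits)
    tag = struct.pack("!HH", TPID, tci)
    return frame[:12] + tag + frame[12:]

# dst MAC + src MAC + EtherType (IPv4) + data
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = tag_frame(untagged, vid=215)
assert len(tagged) == len(untagged) + 4
```

The 4-byte tag sits between the source MAC and the original EtherType, which is how switches on a trunk recognize which VLAN each frame belongs to.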
What is VXLAN?
VXLAN or Virtual eXtensible Local Area Network is a tunneling protocol that carries layer 2 packets over a layer 3 network, that is, Ethernet over IP.
The need for VXLANs came from the limitations of VLANs, as well as the arrival of server virtualization. Due to its 12-bit identifier, you can only have up to 4094 virtual networks with VLAN. With VXLAN, a 24-bit identifier, called the VXLAN network identifier, is used, with which you can have around 16 million VXLANs.
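The arithmetic behind those two limits is simple enough to check directly:

```python
# Usable VLANs: 12-bit VID minus the two reserved values (0 and 4095).
usable_vlans = 2**12 - 2      # 4094

# VXLAN segments: the full 24-bit VNI space.
vxlan_segments = 2**24        # 16777216, roughly 16 million

print(usable_vlans, vxlan_segments)
```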
With server virtualization, each physical server can host multiple virtual servers, each with its own IP address and operating system. Different customers or clients may use these virtual servers, and to maintain these servers effectively, maintain service continuity, and manage resources efficiently, you need dynamic VM migration. That is, in a data center, you should be able to move virtual machines from one physical server to another without affecting the user.
For this to happen, the IP address must remain unchanged, so the change has to be made within the data link layer. And due to the constraints of the VID, you can only create a limited number of VLANs.
How VXLAN works
VXLAN creates layer 2 networks that span layer 3 infrastructure, that is, Ethernet over IP. The Ethernet layer works as an overlay network, and IP works as the underlay network. Here, a layer 2 Ethernet frame is encapsulated into a VXLAN packet by a VTEP, or VXLAN Tunnel End Point, which adds a VXLAN header and a UDP header. The VXLAN header contains the VXLAN Network Identifier, which identifies the tenant or the virtual server, or essentially the specific VXLAN.
The frames from the source server, encapsulated by a VTEP, are received across the tunnel by another VTEP, which decapsulates them and sends them to the destination server. A VTEP can be either a physical device or software deployed on a server.
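As a rough illustration of the encapsulation step, the following Python sketch builds the 8-byte VXLAN header defined in RFC 7348: a flags byte of 0x08 (the VNI-valid bit), reserved bytes, and the 24-bit VNI. The outer MAC, IP, and UDP headers (destination port 4789) would be added by the sending VTEP; this sketch only shows the VXLAN header itself.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned destination UDP port for VXLAN

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner Ethernet frame.

    Header layout: flags byte 0x08 (VNI-valid), 3 reserved bytes,
    24-bit VNI, 1 reserved byte.
    """
    if not 0 < vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # The VNI occupies the upper 24 bits of the second 32-bit word.
    header = struct.pack("!B3xI", 0x08, vni << 8)
    return header + inner_frame

packet = vxlan_encapsulate(b"\x00" * 14, vni=21500)
assert len(packet) == 8 + 14
```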
VLAN vs. VXLAN: What are the differences?
It's time to put VLAN and VXLAN side by side and take a closer look at their differences. First of all, while VXLANs were developed to overcome the limitations of VLANs, their applications are different, and sometimes VLAN isn't even mentioned when you're discussing VXLAN. That said, here are the main differences between VXLAN and VLAN.
VLAN has a 12-bit identifier called the VID, while VXLAN has a 24-bit identifier called the VXLAN network identifier. This means that with VLAN you can create only 4094 networks over Ethernet, while with VXLAN, you can create up to 16 million. In terms of the overall infrastructure, you can further isolate networks and improve their efficiency.
In VLAN, a layer 2 network is divided into subnetworks using virtual switches and creating multiple broadcast domains within a single LAN network. In VXLAN, a layer 2 network is overlaid on an IP underlay, and the layer 2 ethernet frame is encapsulated in a UDP packet and sent over a VXLAN tunnel.
VLAN is often used by large businesses to better group devices for improved network performance and security. VXLAN does network segmentation just like VLAN, but it's mainly used in data centers for dynamic migration.
Another difference is that VLAN uses the Spanning Tree Protocol, which means half the ports may be blocked from use, while you can use all the ports in the case of VXLAN, further improving efficiency.
What are the advantages of VLANs?
- Improved security: VLANs let you create more networks with fewer devices. With this, you can segment and group your devices and prevent unauthorized access. Network managers can detect any security issues, set up firewalls, and restrict access to these individual networks. For example, you can keep sensitive data under a private VLAN while opening up a separate VLAN for public use. And even within an organization, segmenting the devices improves security.
- Improved performance: When all the devices are receiving all the messages, it creates congestion over the network. It reduces the bandwidth for communication. With VLAN, you can group together devices that communicate frequently, reduce the broadcast domain, and keep the bandwidth clear. Small broadcast domains are also easy to handle.
- Improved network flexibility: With VLAN, you're not limited by the physical location of the devices. You can group devices based on their function or the department they belong to, instead of their physical location. If employees switch to a different location in the company, they can still connect to the same VLAN to work.
- Reduced cost: Switches can usually only reduce the collision domain; you need routers to reduce the broadcast domain, which tends to be expensive. With VLANs, you can segment the network in multiple broadcast domains at a low cost.
- Simplified IT management: For the IT department, small networks with fewer devices are easier to manage and troubleshoot than a single large network. They provide more granular control over the networks; depending on the specific use case, you can configure the security for these individual networks.
What are the advantages of VXLANs?
- Improved scalability: Compared to VLAN, VXLAN is highly scalable, allowing 16 million isolated networks. This makes it easy to scale and highly useful in data centers, letting them accommodate more tenants.
- Supports dynamic VM migration: This is very important for continuity of services and efficient utilization of resources in a data center. It lets managers upgrade or maintain servers by shifting the VMs to another server without interrupting services or the user noticing. If businesses want to add redundant servers at a different geographical location, they can manage the VMs using VXLANs. It keeps the data center robust and reliable.
- VXLAN can be easily configured and managed: VXLAN is a software-defined network (even though vendors have developed ASICs for VXLANs), and works as an overlay over an underlying IP network. This means the network can be managed and monitored with a centralized controller.
Being an overlay network brings a lot of additional advantages for VXLAN.
VXLAN: overlay over an underlying IP network
As we discussed earlier, VXLAN is a layer 2 virtual network over a layer 3 IP network. This is possible due to the encapsulation and decapsulation process; at the edges, the layer-2 frames are encapsulated into layer 3 packets which are then routed through the IP network.
This means that the overlay and the physical IP network are decoupled, and you can make changes to either network without making changes to the other. This doesn't mean there won't be any impact; if the underlying network can't handle the traffic, it will affect the performance of the overlay network.
Another benefit is that the possibility of duplicates causing a problem is greatly reduced. With multiple VMs, if two VMs have the same MAC address, it can create networking problems, as the switches won't know where to send the data packets. But a VXLAN deployment can have duplicate MAC addresses without a problem, as long as they're in different VXLAN segments.
The decoupled physical and virtual layers also mean tenants are not limited by the IP addresses or broadcast domains of the underlying IP network when planning their virtual networks.
In the MAC address table, a switch has to store the MAC addresses of all the devices it is connected to and keep them updated. This means the more devices a switch is connected to, the more memory it needs and the higher the cost. With this overlay network, not all devices have to identify the MAC addresses of the VMs, and each switch has to learn fewer MAC addresses.
How to deploy VXLAN? Three different methods
The different methods of deployment come down to where the VTEP is located: in software or in hardware.
1. Host-based VXLAN
As the name suggests, here the VXLAN runs on the host. In this case, a virtual switch acts as a VTEP encapsulating and decapsulating the data packets — and is also referred to as a software VTEP.
The virtual switch encapsulates the data before it goes to the physical network, and is only decapsulated at the destination VTEP. These VTEPs can even be inside hypervisor hosts. And because of this, there’s only IP traffic in the physical network.
2. Gateway-based VXLAN
In a gateway-based VXLAN or a hardware VXLAN, the VTEP is within a switch or a router. These devices will then be referred to as VXLAN gateways.
Here, the switches encapsulate and decapsulate the data packets and create tunnels with other VTEPs. The traffic from the hosts to the gateways will be layer 2, while the rest of the network will see only IP traffic.
3. Hybrid VXLAN
In a hybrid implementation, some of the VTEPs are on hardware while some are on hosts in virtual switches. Here, the traffic flows from the source VTEP to the destination VTEP and either of them may be hardware or software.
Frequently asked questions
What exactly is a VLAN?
VLAN or Virtual Local Area Network creates multiple smaller broadcast domains over a single Ethernet network. They are used to logically group together devices and improve network efficiency and security.
What exactly is VXLAN?
VXLAN or Virtual eXtensible Local Area Network overlays a layer 2 network on an underlying layer 3 IP network. It's used for large-scale segmentation and isolation and for handling multiple VMs in data centers.
Mapping a VNI to a VLAN | EVPN VXLAN
I have some questions regarding EVPN VXLAN.
So in a "traditional" network (just a normal three-tier network) we use normal VLANs (1-4096).
In an EVPN VXLAN network, we use VNIs. Let's say in a very big data center, we exceed 4096 VLANs. The good thing is we can use VNIs, and what I understand (correct me if I'm wrong) of how EVPN VXLAN works is that you map a unique VNI to a VLAN, so for example VNI 21500 to VLAN 215. If we have used all the VLANs, so all 1-4096 VLANs, how can we then use VNIs?
So my questions:
If we use all 4096 VLANs, how will we then use the VNIs? How does it work if all 4096 VLANs are used and there are no more left? Do we just use the VNI itself?
When using a VNI, do we assign the VNI to an access port as we do with traditional VLANs? So for example "switchport mode access vni 21500", or how does it work? Why/why not?
Do we "trunk" VNIs? Why/why not?
Regarding point number 2, can we use multiple VLANs and map it to a single VNI? Why/why not?
Regarding point number 2 (again), let's say we have a server with several VMs and several VLANs, and we use VXLAN EVPN. What will the configuration look like for that server? Do we map that server, so for example VNI 21500 to VLAN 215? How will the physical port on the switch be configured, access port or trunk port? Obviously, we usually trunk that port if the server has several VMs on it, but how would that work in a VXLAN EVPN environment?
Sorry for my english, it is not my native language. If you want me to clarify, please do.
The switches are still limited to 4096 VLANs. The difference is that you can set up switch pairs with different purposes. You can dedicate one switch pair to a tenant and give that tenant VNIs 1XXXX, where XXXX is the VLAN ID. Tenant 2 gets VNI range 2XXXX, tenant 3 VNI range 3XXXX, etc. By separating your tenants into their own switches, you work around the 4096-VLAN limitation.
No, the VNI is just the tag that VXLAN sets when forwarding the packet. In fact, whatever VLAN tag belongs to that packet is stripped and replaced by the VXLAN VNI. So one switch can associate VNI 100 with VLAN 200, and another switch can associate VNI 100 with VLAN 300. This is possible because when the first switch receives traffic on VLAN 200 and sends it to the other switch, the VLAN header is replaced with VXLAN VNI 100. The other switch receives the VXLAN packet, realizes it is locally mapped to VLAN 300, and forwards the packet out on that VLAN.
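To make the local-significance point concrete, here's a quick Python sketch (made-up mapping tables, not any vendor's CLI) of how the VLAN tag is swapped for a VNI on one side and translated back on the other:

```python
# Hypothetical per-switch tables: VLAN<->VNI mappings are locally significant.
switch_a = {200: 100}                 # VLAN 200 -> VNI 100 on switch A
switch_b = {300: 100}                 # VLAN 300 -> the same VNI 100 on switch B

def forward(src_map: dict, dst_map: dict, vlan: int) -> int:
    """Strip the local VLAN, carry the VNI across the tunnel,
    then translate back to the remote switch's local VLAN."""
    vni = src_map[vlan]                             # encapsulate: VLAN tag -> VNI
    remote = {v: k for k, v in dst_map.items()}     # decapsulate: VNI -> local VLAN
    return remote[vni]

# Same L2 segment, different local VLAN IDs on each switch.
assert forward(switch_a, switch_b, 200) == 300
```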
Not sure what you mean here. I guess the answer is "yes"?
There is something called VLAN-aware bundles that bind multiple VLANs to a single VNI, I haven't looked into exactly what this does since we don't use it in our environment.
The serverport is a normal trunk switchport with whatever VLANs you want to allow on those ports.
Note that I'm basing these responses on my experience with Arista, other vendors may use other implementations.
In addition, with VLAN mapping you could also have VLAN 10 on port 1 belong to VNI 1XXXX and VLAN 10 on port 2 belong to VNI 2XXXX.
Despite this feature, you're still limited to 4096 VLANs per switch.
Thank you for the reply! I think I get it now, need just to study and lab VXLAN EVPN more.
In some (cloud) environments they also extend the EVPN fabric to the hypervisors themselves, so they can attach VMs directly to the VNIs without needing to breakout to VLANs in between.
So if I understand you correctly, you mean that we can configure the port where the server is connected to as an access port to a specific VNI?
Outside of some fabric management platforms, like ACI, VLANs are only locally significant to a switch. That is, when you bind a VLAN to a VNI, you do it locally on one leaf. This means one side can be VLAN 100 and the other side VLAN 456. As long as they’re attached to the same VNI, they’re in the same “network”.
This has some implications
1. You are now limited to 4096 VLANs per rack (assuming one leaf per rack). The odds of you consuming all 4096 in a rack's worth of gear are unlikely.
2. Management overhead increases, as you need a way to stitch local VLAN assignments to hosts, particularly with any virtualized environment. Consider that a vMotion that moves racks might need a different VLAN on the new target.
At the end of the day, this becomes something to orchestrate, managed by a computer. An abstraction so you don't have to worry about VNI assignments and stitching them together. Effectively, you're no longer managing the details of segmentation, just ensuring there's a manager doing it in concert with hosting solutions.
Otherwise, as you point out, you’re mostly back to managing 4096 VLANs across your fabric.
Multi-tenancy, assuming physical separation such that you eliminate any possible tenant-VLAN collisions, would then allow you to have 4096 VLANs per tenant. Logical tenancy, such as VRFing your DMZ, but maintaining the same physical compute, would require a converged VLAN-VNI assignment, assuming the same VLAN is used on both ends (for simplicity).
At the end of the day, it's your requirements and how much complexity you're willing to adopt before it becomes unmanageable. VNI and EVPN open a lot of doors, but also push the boundaries of human-manageable complexity. Abstracting away the details, while knowing the fundamentals well enough to triage, is paramount. Like computer memory: you don't worry about where in memory your app goes, just that you have enough space. The details of allocation, and even garbage collection in many modern languages, are abstracted and safe enough to almost ignore completely. In many ways we're headed there in networking.
Like others said, a switch usually only supports 4096 VLANs. However, some devices can create multiple bridges and have 4096 VLANs per bridge.
You still assign VLANs to ports (tagged/untagged). The VXLAN mapping is done after switching (an EVPN route lookup for the MAC address next hop). If the next hop is a remote VTEP, the switch encapsulates the frame in VXLAN and sends it to the remote VTEP address, where it is decapsulated.
Yes, you CAN encapsulate a VLAN tag in a VNI. This is done in JunOS like this: https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/ref/statement/encapsulate-inner-vlan.html Or in a more granular way, like this QinQ in EVPN/VXLAN: https://www.juniper.net/documentation/us/en/software/junos/evpn-vxlan/topics/topic-map/evpn-vxlan-flexible-vlan-tag.html
Thank you for the reply!
Will look into the links you provided.
In physical switches, you must map a globally significant VNI to a locally significant VLAN.
The mappings don't need to be consistent between switches.
To maintain sanity, they usually should be treated as if they were. I.e., consistent VNI to VLAN mappings throughout your network.
The only exception is if you need more than 4000 VLANs (most places don't). In that case, you'll have something like what ACI does with VLAN domains. You would have a VLAN domain A and VLAN domain B, A has one set of VNI to VLAN mappings, VLAN domain B would have a different. As long as the domain A and B switches don't overlap, it's pretty easy.
If you need them to overlap, then you'll need to have something to keep track of ephemeral VLAN mappings between VLANs and VNI (which is what ACI does).
VLANs are locally significant (per switch) in a VXLAN setup. This means you can map VLAN 10 to VNI 10010 on one leaf, and VLAN 10 to VNI 20020 on another leaf. Each switch is still limited to its 4096 VLANs, but now your global VNI namespace (per fabric, not per switch) is as many VNIs as the header can provide (16.7 million). This means your DC as a whole can carry 16.7M L2 segments, and you can reach that by reusing the VLAN IDs across different leafs.
Your downlinks will be configured with standard 802.1Q tagging, since that's what servers and network appliances understand. The fact that it's mapped to a VNI in the background doesn't matter to the server/appliance.
No, the VNI is only needed in the VXLAN header which is exchanged in the data plane between leafs. VNIs aren't "trunkable" since the whole point of VXLAN is to break free of needing to stretch your broadcast domains over L2 across the network. If your question was regarding whether we can configure a "VNI trunk" on a server-facing port, that would require the server to be capable of VNI tagging and I haven't seen that myself before.
Yes, you can map different VLANs (say 20 and 30) to the same VNI (say 10010). This can be done on the same leaf or on different leafs AFAIK. Why? Because now your VLAN ID is only relevant for classifying traffic into a VNI, and it doesn't define your broadcast domain -- the VNI does that now. This gives you extra flexibility at times.
Always keep in mind that the server isn't (and shouldn't be) aware of the network topology or features. The fact that you run VXLAN or any other solution should be transparent to the server (and it is). The server will only use a standard trunk link, and you (the VXLAN fabric admin) will map each VLAN to a VNI as needed.
Appreciate your time answering me.
If I map VLAN 10 to VNI 10010 and again the same VLAN 10 to another VNI 20020, how exactly will that work? Will the VLAN 10s "know" each other? They are both in different VNIs, so I assume not. What is the point of using the same VLAN ID, for example VLAN 10, and mapping it to two different VNIs? Is it because of the 4096-VLAN limitation, so we reuse the VLANs and just map them to another VNI, and that makes that VLAN "unique" from the other "same" VLAN 10?
The reason I was thinking if we needed more than 4096 VLANs was that I was thinking about big data centers such as Facebook. The amount of servers they have and stuff..
Thank you for the reply, need to look more into this.
I see now, so if we map VLAN 10 and VLAN 20 to VNI 250, they are in the same network.
VNI is a different identifier from VLAN ID, and they are not necessarily bound, at least not 1:1.
A VNI can bind to other services besides Ethernet, like IPv4, IPv6, or maybe something X in the future.
Network guys need to keep multiple services in mind these days. Multi-service was the driver for MPLS and ATM, and today's overlays need to carry this requirement as well.
EOS 4.29.1F User Manual
- " onclick="window.open(this.href,'win2','status=no,toolbar=no,scrollbars=yes,titlebar=no,menubar=no,resizable=yes,width=640,height=480,directories=no,location=no'); return false;" rel="nofollow"> Print
Contents: VXLAN Architecture, VXLAN Gateway, VXLAN Processes, Multicast and Broadcast over VXLAN, VXLAN and MLAG, VXLAN Bridging and Routing Support, Data Structures.
The VXLAN architecture extends a Layer 2 network by connecting VLANs from multiple hosts through UDP tunnels called VXLAN segments. VXLAN segments are identified by a 24-bit Virtual Network Identifier (VNI). Within a host, each VLAN whose network is extended to other hosts is associated with a VNI. An extended Layer 2 network comprises the devices attached to VLANs from all hosts that are on VLANs that are associated with the same VNI.
The following figure displays the data objects that comprise a VXLAN implementation on a local host.
- VXLAN Tunnel End Point (VTEP): a host with at least one VXLAN Tunnel Interface (VTI).
- VXLAN Tunnel Interface (VTI): a switchport linked to a UDP socket that is shared with VLANs on various hosts. Packets bridged from a VLAN to the VTI are sent out the UDP socket with a VXLAN header. Packets arriving on the VTI through the UDP socket are demuxed to VLANs for bridging.
- Virtual Network Identifier (VNI): a 24-bit number that distinguishes between the VLANs carried on a VTI. It facilitates the multiplexing of several VLANs over a single VTI.
VNIs can be expressed in decimal or dotted decimal format. VNI values range from 1 to 16777215, or from 0.0.1 to 255.255.255.
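A quick way to convert between the two formats (a Python sketch; the helper functions are illustrative, not part of EOS): the dotted form is simply the three bytes of the 24-bit VNI.

```python
def vni_to_dotted(vni):
    """Render a 24-bit VNI in dotted decimal form (three octets)."""
    if not 1 <= vni <= 0xFFFFFF:
        raise ValueError("VNI must be in 1..16777215")
    return f"{(vni >> 16) & 0xFF}.{(vni >> 8) & 0xFF}.{vni & 0xFF}"

def dotted_to_vni(dotted):
    """Parse the dotted decimal form back into the integer VNI."""
    a, b, c = (int(x) for x in dotted.split("."))
    return (a << 16) | (b << 8) | c

print(vni_to_dotted(1))          # 0.0.1
print(vni_to_dotted(16777215))   # 255.255.255
print(dotted_to_vni("0.4.210"))  # 1234
```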
- VTEP IP address of 10.10.1.1 .
- UDP port of 4789 .
- One VTI that supports three VXLAN segments (UDP tunnels): VNI 200 , VNI 2000 , and VNI 20000
- Five VLANs, of which three VLANs can communicate with remote devices over Layer 2.
A VXLAN gateway is a service that exchanges VXLAN data and packets with devices connected to different network segments. VXLAN traffic must pass through a VXLAN gateway to access services on physical devices in a distant network.
- An IP address that is designated as the VXLAN interface source.
- VLAN to VNI mapping.
- VTEP list for each VNI.
- A method for handling broadcast, unknown unicast, and multicast (BUM) packets.
Arista switches perform VXLAN gateway services through manual configuration. The switch connects to VXLAN gateways that serve other network segments. MAC address learning is performed in hardware from inbound VXLAN packets.
When a packet enters a VLAN from a member (ingress) port, the VLAN learns the source address by adding an entry to the MAC address table that associates the source to the ingress port. The VLAN then searches the table for the destination address. If the MAC address table lists the address, the packet is sent out the corresponding port. If the MAC address table does not list the address, the packet is flooded to all ports except the ingress port.
VXLANs extend VLANs through the addition of a VXLAN address table that correlates remote MAC addresses to their port and resident host IP address. Packets that are destined to a remote device are sent to the VXLAN tunnel interface (VTI), which is the switchport that is linked to the UDP socket. The packet is encapsulated with a VXLAN header which includes the VNI associated with the VLAN and the IP mapping of the destination host. The packet is sent through a UDP socket to the destination VTEP IP. The VTI on the remote host extracts the original packet and bridges it to the VLAN associated with the VNI on the remote host.
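As a rough sketch of the two lookups described above (Python, with made-up MAC addresses, ports, and VTEP IPs; not an Arista API): a destination is either a locally learned MAC, a remote MAC learned through VXLAN, or unknown and flooded.

```python
# Minimal model of the forwarding decision (all entries are hypothetical).
mac_table = {            # MAC address -> local egress port
    "aa:aa:aa:00:00:01": "Ethernet1",
}
vxlan_table = {          # remote MAC -> (VNI, resident host VTEP IP)
    "bb:bb:bb:00:00:02": (2000, "10.20.2.2"),
}

def forward(dst_mac):
    if dst_mac in mac_table:
        return ("bridge", mac_table[dst_mac])       # local port, plain bridging
    if dst_mac in vxlan_table:
        vni, vtep = vxlan_table[dst_mac]
        return ("encapsulate", vni, vtep)           # send out the VTI toward the VTEP
    return ("flood",)                               # unknown unicast: flood the VLAN

print(forward("aa:aa:aa:00:00:01"))  # ('bridge', 'Ethernet1')
print(forward("bb:bb:bb:00:00:02"))  # ('encapsulate', 2000, '10.20.2.2')
print(forward("cc:cc:cc:00:00:03"))  # ('flood',)
```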
UDP port 4789 is recognized as the VXLAN socket and listed as the destination port on the UDP packets. The UDP source port field is filled with a hash of the inner header to facilitate load balancing.
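A small illustration of that source-port derivation (a Python sketch; the CRC32 hash and the exact port range here are assumptions for demonstration, as real switches use implementation-specific hardware hashing):

```python
import zlib

def vxlan_source_port(inner_frame: bytes) -> int:
    """Derive the outer UDP source port from a hash of the inner Ethernet
    header so that different flows spread across ECMP paths, while packets
    of the same flow always pick the same port."""
    h = zlib.crc32(inner_frame[:14])   # inner dst MAC, src MAC, EtherType
    return 49152 + (h % 16384)         # keep the result in the ephemeral range

frame = b"\xaa" * 6 + b"\xbb" * 6 + b"\x08\x00"
port = vxlan_source_port(frame)
print(49152 <= port <= 65535)              # True
print(port == vxlan_source_port(frame))    # True: deterministic per flow
```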
- VNI 200 : VTEP 10.20.2.2: VLAN 1200 and VTEP 10.30.3.3: VLAN 200
- VNI 2000 : VTEP 10.10.1.1: VLAN 300 , VTEP 10.20.2.2: VLAN 1400 , and VTEP 10.30.3.3: VLAN 300
- VNI 20000 : VTEP 10.10.1.1: VLAN 200 , and VTEP 10.20.2.2: VLAN 1600
VXLAN routing is enabled by creating a VLAN interface on the VXLAN-enabled VLAN and assigning an IP address to the VLAN interface. The IP address serves as VXLAN gateway for devices that are accessible from the VXLAN-enabled VLAN.
These sections describe multicast and broadcast over VXLANs. Multicast packet flooding describes broadcast and multicast transmission by associating a multicast group to a VTI through a configuration command.
Multicast Packet Flooding
Multicast packet flooding is supported with VXLAN bridging without MLAG. A VTI is associated with a multicast group through a configuration command.
VXLAN and Broadcast
When a VLAN receives or sends a broadcast packet the VTI is treated as a bridging domain L2 interface. The packet is sent from this interface on the multicast group associated with the VTI. The VTIs on remote VTEPs that receive this packet extract the original packet, which is then handled by the VLAN associated with the packet’s VNI. The VLAN floods the packet, excluding the VTI. When the broadcast results in a response, the resulting packet can be unicast back to the originating VTEP because the VXLAN address table obtained the host MAC to VTEP association from the broadcast packet.
VXLAN and Multicast
A VTI is treated as an L2 interface in the VLAN for handling multicast traffic, which is mapped from the VLAN to the multicast group associated with the VTI. All VTEPs join the configured multicast group for inter-VTEP communication within a VXLAN segment; this multicast group is independent of any other multicast groups that the hosts in the VLAN join.
The IP address space for the inter-host VXLAN communication may be sourced from a different VRF than the address space of the hosts in the VLAN. The multicast group for inter-VTEP transmissions must not be used for other purposes by any device in the VXLAN segment space.
Head-end replication uses a flood list to support broadcast, unknown unicast, and multicast (BUM) traffic over VXLAN. The flood list specifies a list of remote VTEPs. The switch replicates BUM data locally for bridging across the remote VTEPs specified by the flood list. This data flooding facilitates remote MAC address learning by forwarding data with unknown MAC addresses.
Head-end replication is required for VXLAN routing and to support VXLANs over MLAG.
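The replication step can be sketched in a few lines (Python; `head_end_replicate` and the stand-in `encapsulate` callback are invented names, not an Arista API): one unicast copy of the BUM frame is produced per VTEP on the flood list.

```python
def head_end_replicate(frame: bytes, flood_vtep_list, encapsulate):
    """Replicate a BUM frame locally: unicast one VXLAN-encapsulated copy
    to every remote VTEP named on the flood list."""
    return [encapsulate(frame, vtep) for vtep in flood_vtep_list]

copies = head_end_replicate(
    b"\xff" * 6 + b"\xaa" * 6,          # broadcast destination MAC + source MAC
    ["10.20.2.2", "10.30.3.3"],         # hypothetical flood VTEP list
    lambda frame, vtep: (vtep, frame),  # stand-in for the real VXLAN encap
)
print(len(copies))  # 2: one unicast copy per remote VTEP
```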
VXLAN over MLAG provides redundancy in hardware VTEPs. VTI configuration must be identical on each MLAG peer for them to act as a single VTEP. This also prevents the remote MAC from flapping between the remote VTEPs by ensuring that the rest of the network sees a host that is connected to the MLAG interface as residing behind a single VTEP.
- VXLAN routing recirculates a packet twice, with the first iteration performing the routing action involving an L2 header rewrite, and the second recirculation performing VXLAN encap and decap operations. Recirculation is achieved by MAC loopback on dedicated loopback interfaces.
- The configuration for VXLAN routing on an MLAG VTEP includes separate Recirc-Channel configuration on both peers. The virtual IP, virtual MAC, and virtual VARP VTEP IP addresses are identical on both peers.
- VLAN-VNI mappings
- VTEP IP address of the source loopback interface
- Flood VTEP list used for head-end replication
If OSPF is also in use, configure the OSPF router ID manually to prevent the switch from using the common VTEP IP address as the router ID.
- Only the MLAG peer that receives a packet performs VXLAN encapsulation on it.
- Packets are not VXLAN encapsulated if they are received from the peer link.
- If a packet is decapsulated and sent over the peer link, it should not be flooded to active MLAG interfaces.
- If a packet is sent over the peer link to the CPU, it is not head-end replicated to other remote VTEPs.
- If a packet’s destination is the VTEP IP address, it is terminated by the MLAG peer that receives it.
These commands complete the configuration required for a VXLAN routing deployment.

switch(config)# interface Vxlan1
switch(config-if-Vx1)# vxlan source-interface Loopback0
switch(config-if-Vx1)# vxlan udp-port 4789
switch(config-if-Vx1)# vxlan vlan 2417 vni 8358534
switch(config-if-Vx1)# vxlan flood vtep 22.214.171.124 126.96.36.199
switch(config-if-Vx1)# interface Vlan2417
switch(config-if-Vl2417)# ip address 188.8.131.52/24
switch(config-if-Vl2417)# interface Loopback0
switch(config-if-Lo0)# ip address 184.108.40.206/32
switch(config-if-Lo0)# ip routing
switch(config)# interface Recirc-Channel627
switch(config-if-Re627)# switchport recirculation features vxlan
switch(config-if-Re627)# interface Ethernet 1
switch(config-if-Et1)# traffic-loopback source system device mac
switch(config-if-Et1)# channel-group recirculation 627
switch(config-if-Et1)# exit
switch(config)# interface Ethernet 2
switch(config-if-Et2)# traffic-loopback source system device mac
switch(config-if-Et2)# channel-group recirculation 627
switch(config-if-Et2)#
Two remote VTEPs are configured: 220.127.116.11 and 18.104.22.168. The remote VTEP 22.214.171.124 is reachable through Ethernet54/1.4095, and the remote VTEP 126.96.36.199 is reachable through Port-Channel1.4095.
Configuring Unconnected Ethernet Interfaces for Recirculation
On systems where bandwidth is not fully used by the front panel ports, unused bandwidth is used for recirculation.
The following example is applicable to the DCS-7050X series platform.
These commands expose unconnected Ethernet interfaces for use in recirculation, so that they can replace, or be used alongside, the front-panel Ethernet interfaces.
The following example enables display of the inactive interfaces using the show command.
Running a show command generates the following output:
In previous releases, Ethernet 21/2 and 21/4 do not exist, and the output would be the following:
Describes the support of VXLAN Bridging and Routing on the R3 series of DCS 7280, 7500, and 7800 Arista switches.
Differences with DCS-7500R2 Implementation
- There is no need to configure the VXLAN-routing TCAM profile to enable VXLAN routing on the R3 Series switches. The command is still accepted for backward compatibility reasons.
- CPU-bound traffic after VXLAN decapsulation (such as routing protocol packets) uses the same CoPP queues as non-VXLAN-decapsulated packets. This is an improvement over the R2 series behavior, where CPU-bound traffic after VXLAN decapsulation took a different CoPP queue that was shared with other IP unicast packets.
There is no EVPN VXLAN Multicast (Type 6/7/8 NLRI) support.
The VXLAN implementation requires two VXLAN tables and an accommodation in the MAC address table.
MAC Address Table VXLAN Support
MAC address table entries correlate MAC addresses with the port upon which packets arrive. In addition to Ethernet and port channels, the port column may specify a VTI for packets that arrive on a VLAN from a remote port through the VXLAN segment.
VTEP-MAC Address Table
VTEP-MAC address table entries correlate MAC addresses with the IP address of the VTEP from which packets bearing the MAC address arrive. The VTI uses this table to determine the destination address for packets that are sent to remote hosts.
The VNI-VLAN map displays the one-to-one correspondence between the VNIs assigned on the switch and the VLANs to which they are assigned. Each VNI can be assigned to only one VLAN; each VLAN can be assigned a maximum of one VNI. Each VNI-VLAN assignment constitutes a VXLAN segment.
Citrix ADC appliances support Virtual eXtensible Local Area Networks (VXLANs). A VXLAN overlays Layer 2 networks onto a Layer 3 infrastructure by encapsulating Layer 2 frames in UDP packets. Each overlay network is known as a VXLAN segment and is identified by a unique 24-bit identifier called the VXLAN Network Identifier (VNI). Only network devices within the same VXLAN can communicate with each other.
VXLANs provide the same Ethernet Layer 2 network services that VLANs do, but with greater extensibility and flexibility. The two main benefits of using VXLANs are the following:
- Higher scalability. Server virtualization and cloud computing architectures have dramatically increased the demand for isolated Layer 2 networks in a datacenter. The VLAN specification uses a 12-bit VLAN ID to identify a Layer 2 network, so you cannot scale beyond 4094 VLANs. That number can be inadequate when the requirement is for thousands of isolated Layer 2 networks. The 24-bit VNI accommodates up to 16 million VXLAN segments in the same administrative domain.
- Higher flexibility. Because VXLAN carries Layer 2 data frames over Layer 3 packets, VXLANs extend L2 networks across different parts of a datacenter and across geographically separated datacenters. Applications that are hosted in different parts of a datacenter and in different datacenters but are part of the same VXLAN appear as one contiguous network.
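The arithmetic behind the scalability claim above can be checked directly:

```python
# ID-space sizes: 12-bit VLAN ID vs. 24-bit VNI.
vlan_ids = 2 ** 12 - 2      # VLAN IDs 0 and 4095 are reserved, leaving 4094
vni_ids = 2 ** 24 - 1       # VNI values 1..16777215 (about 16 million)
print(vlan_ids)             # 4094
print(vni_ids)              # 16777215
print(vni_ids // vlan_ids)  # 4098: roughly 4000x more isolated L2 networks
```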
- How VXLANs Work
VXLAN segments are created between VXLAN Tunnel End Points (VTEPs). VTEPs support the VXLAN protocol and perform VXLAN encapsulation and decapsulation. You can think of a VXLAN segment as a tunnel between two VTEPs, where one VTEP encapsulates a Layer 2 frame with a UDP header and an IP header and sends it through the tunnel. The other VTEP receives and decapsulates the packet to get the Layer 2 frame. A Citrix ADC is one example of a VTEP. Other examples are third-party hypervisors, VXLAN-aware virtual machines, and VXLAN-capable switches.
The following illustration displays virtual machines and physical servers connected through VXLAN tunnels.
The following illustration displays the format of a VXLAN packet.
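The illustration is not reproduced here, but the 8-byte VXLAN header defined by RFC 7348 is simple enough to sketch (Python; a simplified encoder for illustration, not the Citrix implementation): a flags byte with the I bit set, three reserved bytes, the 24-bit VNI, and a final reserved byte.

```python
import struct

def pack_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348): flags word with the
    I bit (0x08) set, then the 24-bit VNI followed by a reserved byte."""
    return struct.pack("!I", 0x08000000) + struct.pack("!I", vni << 8)

def unpack_vni(header: bytes) -> int:
    """Recover the VNI from a VXLAN header, checking the I flag."""
    flags, vni_field = struct.unpack("!II", header)
    assert flags & 0x08000000, "I flag must be set for the VNI to be valid"
    return vni_field >> 8

hdr = pack_vxlan_header(9000)
print(len(hdr))         # 8
print(unpack_vni(hdr))  # 9000
```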
VXLANs on a Citrix ADC use a Layer 2 mechanism for sending broadcast, multicast, and unknown unicast frames. A VXLAN supports the following modes for sending these L2 frames.
- Unicast mode : In this mode, you specify the IP addresses of VTEPs while configuring a VXLAN on a Citrix ADC. The Citrix ADC sends broadcast, multicast, and unknown unicast frames over Layer 3 to all VTEPs of this VXLAN.
- Multicast mode : In this mode, you specify a multicast group IP address while configuring a VXLAN on a Citrix ADC. Citrix ADCs do not support Internet Group Management Protocol (IGMP) protocol. Citrix ADCs rely on the upstream router to join a multicast group, which shares a common multicast group IP address. The Citrix ADC sends broadcast, multicast, and unknown unicast frames over Layer 3 to the multicast group IP address of this VXLAN.
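The difference between the two modes comes down to which outer destination IPs receive a BUM frame. A small sketch, with invented addresses:

```python
# Hypothetical model: the set of outer destination IPs for a broadcast,
# multicast, or unknown unicast frame depends on the VXLAN's configured mode.
def bum_destinations(vxlan_config):
    if vxlan_config["mode"] == "unicast":
        return vxlan_config["vteps"]   # one unicast copy per configured VTEP
    return [vxlan_config["group"]]     # single copy to the multicast group IP

unicast = {"mode": "unicast", "vteps": ["203.0.113.1", "203.0.113.2"]}
multicast = {"mode": "multicast", "group": "239.1.1.1"}
print(bum_destinations(unicast))    # ['203.0.113.1', '203.0.113.2']
print(bum_destinations(multicast))  # ['239.1.1.1']
```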
Similar to a Layer 2 bridge table, Citrix ADCs maintain VXLAN mapping tables based on the inner and outer headers of the received VXLAN packets. This table maps remote host MAC addresses to VTEP IP addresses for a particular VXLAN. The Citrix ADC uses the VXLAN mapping table to look up the destination MAC address of a Layer 2 frame. If an entry for this MAC address is present in the VXLAN table, the Citrix ADC sends the Layer 2 frame over Layer 3, using the VXLAN protocol, to the mapped VTEP IP address specified in the mapping entry for a VXLAN.
Because VXLANs function similarly to VLANs, most of the Citrix ADC features that support VLAN as a classification parameter support VXLAN. These features include an optional VXLAN parameter setting, which specifies the VXLAN VNI.
In a high availability (HA) configuration, the VXLAN configuration is propagated or synchronized to the secondary node.
- VXLAN Use Case: Load Balancing across Datacenters
To understand the VXLAN functionality of a Citrix ADC, consider an example in which Example Corp hosts a site at www.example.com. To ensure application availability, the site is hosted on three servers, S0, S1, and S2. A load balancing virtual server, LBVS, on Citrix ADC NS-ADC is used to load balance these servers. S0, S1, and S2 reside in datacenters DC0, DC1, and DC2, respectively. In DC0, server S0 is connected to NS-ADC.
S0 is a physical server, and S1 and S2 are virtual machines (VMs). S1 runs on virtualization host device Dev-VTEP-1 in datacenter DC1, and S2 runs on host device Dev-VTEP-2 in DC2. NS-ADC, Dev-VTEP-1, and Dev-VTEP-2 support the VXLAN protocol.
S0, S1, and S2 are part of the same private subnet, 188.8.131.52/24. So that S0, S1, and S2 can be part of a common broadcast domain, VXLAN 9000 is configured on NS-ADC, Dev-VTEP-1, and Dev-VTEP-2. Servers S1 and S2 are made part of VXLAN 9000 on Dev-VTEP-1 and Dev-VTEP-2, respectively.
The following table lists the settings used in this example: VXLAN settings .
Services SVC-S0, SVC-S1, and SVC-S2 on NS-ADC represent S0, S1, and S2. As soon as these services are configured, NS-ADC broadcasts ARP requests for S0, S1, and S2 to resolve IP-to-MAC mapping. These ARP requests are also sent over VXLAN 9000 to Dev-VTEP-1 and Dev-VTEP-2.
Following is the traffic flow for resolving the ARP request for S2:
- Source IP address = Subnet IP address SNIP-for-Servers (184.108.40.206)
- Source MAC address = MAC address of the NS-ADC’s interface from which the packet is sent out = NS-MAC-1
- VXLAN header with an ID (VNI) of 9000
- Standard UDP header, UDP checksum set to 0x0000, and destination port set to 4789.
- Source IP address = SNIP-VTEP-0 (220.127.116.11).
- Dev-VTEP-2 receives the UDP packet and decapsulates the UDP header, from which Dev-VTEP-2 learns that the packet is a VXLAN related packet. Dev-VTEP-2 then decapsulates the VXLAN header and learns the VXLAN ID of the packet. The resulting packet is the ARP request packet for S2, which is same as in step 1.
- From the inner and outer header of the VXLAN packet, Dev-VTEP-2 makes an entry in its VXLAN mapping table that shows the mapping of MAC address (NS-MAC-1) and SNIP-VTEP-0 (18.104.22.168) for VXLAN9000.
- Dev-VTEP-2 sends the ARP packet to S2.
- Destination IP address = Subnet IP address SNIP-for-Servers (126.96.36.199)
- Destination MAC address = NS-MAC-1
- S2’s response packet reaches Dev-VTEP-2. Dev-VTEP-2 performs a lookup in its VXLAN mapping table and gets a match for the destination MAC address NS-MAC-1. The Dev-VTEP-2 now knows that NS-MAC-1 is reachable through SNIP-VTEP-0 (188.8.131.52) over VXLAN 9000. Dev-VTEP-2 encapsulates the ARP response with VXLAN and UDP headers, and sends the resultant packet to SNIP-VTEP-0 (184.108.40.206) of NS-ADC.
- NS-ADC, on receiving the packet, decapsulates it by removing the VXLAN and UDP headers. The resultant packet is S2’s ARP response. NS-ADC updates its VXLAN mapping table for S2’s MAC address (MAC-S2) with Dev-VTEP-2’s IP address (220.127.116.11) for VXLAN 9000. NS-ADC also updates its ARP table for S2’s IP address (18.104.22.168) with S2’s MAC address (MAC-S2).
Following is the traffic flow for load balancing virtual server LBVS in this example:
- Source IP address = IP address of client CL (198.51.100.90)
- Destination IP address = IP address (VIP) of LBVS = 22.214.171.124
- LBVS of NS-ADC receives the request packet, and its load balancing algorithm selects server S2 of datacenter DC2.
- Source IP address = Subnet IP address on NS-ADC= SNIP-for-Servers (126.96.36.199)
- Destination IP address = IP address of S2 (188.8.131.52)
- NS-ADC finds a VXLAN mapping entry for S2 in its bridge table. This entry indicates that S2 is reachable through Dev-VTEP-2 over VXLAN 9000.
- Source IP address = SNIP address = SNIP-VTEP-0 (184.108.40.206)
- Destination IP address = IP address of Dev-VTEP-2 (220.127.116.11)
- Dev-VTEP-2 receives the UDP packet and decapsulates the UDP header, from which Dev-VTEP-2 learns that the packet is a VXLAN related packet. Dev-VTEP-2 then decapsulates the VXLAN header and learns the VXLAN ID of the packet. The resulting packet is the same packet as in step 3.
- Dev-VTEP-2 then forwards the packet to S2.
- Source IP address = IP address of S2 (18.104.22.168)
- Destination IP address = Subnet IP address on NS-ADC= SNIP-for-Servers (22.214.171.124)
- Dev-VTEP-2 encapsulates the response packet in the same way that NS-ADC encapsulated the request packet in steps 4 and 5. Dev-VTEP-2 then sends the encapsulated UDP packet to SNIP address SNIP-for-Servers (126.96.36.199) of NS-ADC.
- NS-ADC, upon receiving the encapsulated UDP packet, decapsulates the packet by removing the UDP and VXLAN headers in the same way that Dev-VTEP-2 decapsulated the packet in step 7. The resultant packet is the same response packet as in step 9.
- Destination IP address = IP address (VIP) of LBVS (188.8.131.52)
- Points to Consider for Configuring VXLANs
Consider the following points before configuring VXLANs on a Citrix ADC:
A maximum of 2048 VXLANs can be configured on a Citrix ADC.
VXLANs are not supported in a cluster.
Link-local IPv6 addresses cannot be configured for each VXLAN.
Citrix ADCs do not support the Internet Group Management Protocol (IGMP). Citrix ADCs rely on the IGMP protocol of the upstream router to join a multicast group, which shares a common multicast group IP address. You can specify a multicast group IP address while creating VXLAN bridge table entries, but the multicast group must be configured on the upstream router. The Citrix ADC sends broadcast, multicast, and unknown unicast frames over Layer 3 to the multicast group IP address of this VXLAN. The upstream router then forwards the packet to all the VTEPs that are part of the multicast group.
VXLAN encapsulation adds an overhead of 50 bytes to each packet:
Outer Ethernet Header (14) + UDP header (8) + IP header (20) + VXLAN header (8) = 50 bytes
To avoid fragmentation and performance degradation, you must adjust the MTU settings of all network devices in a VXLAN pathway, including the VXLAN VTEP devices, to handle the 50 bytes of overhead in the VXLAN packets.
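The overhead arithmetic, and the underlay MTU it implies for a standard 1500-byte inner frame:

```python
# The 50-byte VXLAN overhead from the breakdown above, and the MTU a
# transit device needs to carry a 1500-byte inner frame unfragmented.
overhead = 14 + 20 + 8 + 8        # outer Ethernet + IP + UDP + VXLAN headers
print(overhead)                   # 50

inner_mtu = 1500                  # standard Ethernet payload size
required_underlay_mtu = inner_mtu + overhead
print(required_underlay_mtu)      # 1550
```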
Important: Jumbo frames are not supported on the Citrix ADC VPX virtual appliances, Citrix ADC SDX appliances, and Citrix ADC MPX 15000/17000 appliances. These appliances support an MTU size of only 1500 bytes and cannot be adjusted to handle the 50 bytes overhead of VXLAN packets. VXLAN traffic might be fragmented or suffer performance degradation, if one of these appliances is in the VXLAN pathway or acts as a VXLAN VTEP device.
On Citrix ADC SDX appliances, VLAN filtering does not work for VXLAN packets.
You cannot set a MTU value on a VXLAN.
You cannot bind interfaces to a VXLAN.
- Configuration Steps
Configuring a VXLAN on a Citrix ADC appliance consists of the following tasks.
- Add a VXLAN entity. Create a VXLAN entity uniquely identified by a positive integer, which is also called the VXLAN Network Identifier (VNI). In this step, you can also specify the destination UDP port of the remote VTEP on which the VXLAN protocol is running. By default, the destination UDP port parameter is set to 4789 for the VXLAN entity. This UDP port setting must match the settings on all remote VTEPs for this VXLAN. You can also bind VLANs to this VXLAN. The traffic (which includes broadcasts, multicasts, and unknown unicasts) of all bound VLANs is allowed over this VXLAN. If no VLANs are bound to the VXLAN, the Citrix ADC allows, on this VXLAN, traffic of all VLANs that are not part of any other VXLAN.
- Bind the local VTEP IP address to the VXLAN entity. Bind one of the configured SNIP addresses to the VXLAN to source outgoing VXLAN packets.
- Add a bridgetable entry . Add a bridgetable entry specifying the VXLAN ID and the remote VTEP IP address for the VXLAN to be created.
- (Optional) Bind different feature entities to the configured VXLAN. Because VXLANs function similarly to VLANs, most of the Citrix ADC features that support VLAN as a classification parameter also support VXLAN. These features include an optional VXLAN parameter setting, which specifies the VXLAN VNI.
- (Optional) Display the VXLAN mapping table. Display the VXLAN mapping table, which includes mapping entries for remote host MAC addresses to VTEP IP addresses for a particular VXLAN. In other words, a VXLAN mapping states that a host is reachable through that VTEP on a particular VXLAN. The Citrix ADC learns VXLAN mappings and updates its mapping table from the VXLAN packets it receives. The Citrix ADC uses the VXLAN mapping table to look up the destination MAC address of a Layer 2 frame. If an entry for this MAC address is present in the VXLAN table, the Citrix ADC sends the Layer 2 frame over Layer 3, using the VXLAN protocol, to the mapped VTEP IP address specified in the mapping entry for a VXLAN.
To add a VXLAN entity by using CLI:
At the command prompt, type
- add vxlan <id>
- show vxlan <id>
To bind the local VTEP IP address to the VXLAN by using CLI:
- bind vxlan <id> -SrcIP <IPaddress>
To add a bridgetable by using CLI:
- add bridgetable -mac <macaddress> -vxlan <ID> -vtep <IPaddress>
- show bridgetable
To display the VXLAN forwarding table by using the command line:
At the command prompt, type:
To add a VXLAN entity and bind a local VTEP IP address by using the GUI:
Navigate to System > Network > VXLANs , and add a new VXLAN entity or modify an existing VXLAN entity.
To add a bridgetable by using the GUI:
Navigate to System > Network > Bridge Table , and set the following parameters while adding or modifying a VXLAN bridge table entry:
To display the VXLAN forwarding table by using the GUI:
Navigate to System > Network > Bridge Table .
- Support of IPv6 Dynamic Routing Protocols on VXLANs
The Citrix ADC appliance supports IPv6 dynamic routing protocols for VXLANs. You can configure various IPv6 dynamic routing protocols (for example, OSPFv3, RIPng, and BGP) on VXLANs from the VTYSH command line. An IPv6 Dynamic Routing Protocol option has been added to the VXLAN command set for enabling or disabling IPv6 dynamic routing protocols on a VXLAN. After enabling IPv6 dynamic routing protocols on a VXLAN, the processes related to them must be started on the VXLAN by using the VTYSH command line.
To enable IPv6 Dynamic routing protocols on a VXLAN by using the CLI:
- add vxlan <ID> [-ipv6DynamicRouting ( ENABLED | DISABLED )]
- Extending VLANs from Multiple Enterprises to a Cloud using VXLAN-VLAN Maps
CloudBridge Connector tunnels are used to extend an enterprise’s VLANs to a cloud. VLANs extended from multiple enterprises can have overlapping VLAN IDs. You can isolate each enterprise’s VLANs by mapping them to a unique VXLAN in the cloud. On the Citrix ADC appliance that is the CloudBridge Connector endpoint in the cloud, you can configure a VXLAN-VLAN map that links an enterprise’s VLANs to a unique VXLAN in the cloud. VXLANs support VLAN tagging, so multiple VLANs of an enterprise can be extended from the CloudBridge Connector to the same VXLAN.
Perform the following tasks for extending VLANs of multiple enterprises to a cloud:
- Create a VXLAN-VLAN map.
- Bind the VXLAN-VLAN map to a network bridge based or PBR based CloudBridge Connector tunnel configuration on the Citrix ADC appliance on cloud.
- (Optional) Enable VLAN tagging in a VXLAN configuration.
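The isolation a VXLAN-VLAN map provides can be sketched as follows (Python; the map names, VNIs, and VLAN ranges are invented for illustration): each enterprise's possibly overlapping VLAN IDs resolve to a distinct VXLAN in the cloud.

```python
# Hypothetical model of two VXLAN-VLAN maps: both enterprises extend
# VLANs 10-20, but each map links them to a different VXLAN.
vxlan_vlan_maps = {
    "map-enterprise-a": {"vxlan": 3000, "vlans": range(10, 21)},
    "map-enterprise-b": {"vxlan": 4000, "vlans": range(10, 21)},
}

def resolve_vni(map_name, vlan_id):
    """Resolve a tagged frame's VLAN ID to the VXLAN bound in its map."""
    entry = vxlan_vlan_maps[map_name]
    if vlan_id in entry["vlans"]:
        return entry["vxlan"]
    return None   # VLAN not covered by this map

print(resolve_vni("map-enterprise-a", 10))  # 3000
print(resolve_vni("map-enterprise-b", 10))  # 4000: same VLAN ID, isolated
```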
To add a VXLAN-VLAN map by using the CLI:
- add vxlanVlanMap <name>
- show vxlanVlanMap <name>
To bind a VXLAN and VLANS to a VXLAN-VLAN map by using the CLI:
- bind vxlanVlanMap <name> [-vxlan <positive_integer> -vlan <int[-int]> …]
To bind a VXLAN-VLAN map to a network bridge based CloudBridge Connector tunnel by using the CLI:
At the command prompt, type one of the following sets of commands.
if adding a new network bridge:
- add netbridge <name> [-vxlanVlanMap <string>]
- show netbridge <name>
if reconfiguring an existing network bridge:
- set netbridge <name> [-vxlanVlanMap <string>]
To bind a VXLAN-VLAN map to a PBR based CloudBridge Connector tunnel by using the CLI:
If adding a new PBR:
- add pbr <name> ALLOW (-ipTunnel <ipTunnelName> [-vxlanVlanMap <name>])
- show pbr <name>
If reconfiguring an existing PBR:
- set pbr <name> ALLOW (-ipTunnel <ipTunnelName> [-vxlanVlanMap <name>])
To include VLAN tags in packets related to a VXLAN by using the CLI:
If adding a new VXLAN:
- add vxlan <vnid> -vlanTag ( ENABLED | DISABLED )
- show vxlan <vnid>
If reconfiguring an existing VXLAN:
- set vxlan <vnid> -vlanTag ( ENABLED | DISABLED )
To add a VXLAN-VLAN map by using the GUI:
Navigate to System > Network > VXLAN VLAN Map, and add a VXLAN VLAN map.
To bind a VXLAN-VLAN map to a netbridge-based CloudBridge Connector tunnel by using the GUI:
Navigate to System > CloudBridge Connector > Network Bridge, and select a VXLAN-VLAN map from the VXLAN VLAN drop-down list while adding a new network bridge or reconfiguring an existing network bridge.
To bind a VXLAN-VLAN map to a PBR-based CloudBridge Connector tunnel by using the GUI:
Navigate to System > Network > PBRs. On the Policy Based Routing (PBRs) tab, select a VXLAN-VLAN map from the VXLAN VLAN drop-down list while adding a new PBR or reconfiguring an existing PBR.
To include VLAN tags in packets related to a VXLAN by using the GUI:
Navigate to System > Network > VXLANs, and enable Inner VLAN Tagging while adding a new VXLAN or reconfiguring an existing VXLAN.
What Is 802.1Q-in-802.1Q (QinQ)?
802.1Q-in-802.1Q (QinQ), defined by IEEE 802.1ad, expands VLAN space by adding an additional 802.1Q tag to 802.1Q-tagged packets. It is also called VLAN stacking or double VLAN. QinQ is widely used on carriers' backbone networks. By encapsulating the VLAN tag of a private network in the VLAN tag of a public network, QinQ enables packets with double VLAN tags to traverse the backbone network (public network) of a carrier, so as to expand VLAN space and implement refined user management.
- Why Do We Need QinQ?
- What Are Application Scenarios of QinQ?
- What Is the QinQ Packet Format?
- How Can QinQ Be Used?
- How Does QinQ Work?
- What Are Technologies Related to QinQ?
IEEE 802.1Q defines a 12-bit VLAN ID field, which can identify only 4096 VLANs (4094 of them usable). With the growth of networks, this limitation has become more acute. IEEE 802.1ad, as an amendment to IEEE 802.1Q, adds an additional 802.1Q tag (also known as a VLAN tag) to single-tagged 802.1Q packets, expanding VLAN space to 4094 x 4094. Such double-tagged packets are called QinQ packets.
As Ethernet networks develop and carriers need to refine their service operations, QinQ is applied in scenarios other than simply to expand VLAN space. Inner and outer VLAN tags can be used to differentiate packets based on users and services. For example, the inner tag can represent a user and the outer tag can represent a service. In addition, QinQ can provide simple VPNs because the inner tag of QinQ packets can be transparently transmitted over a carrier network.
In summary, QinQ is developed to expand VLAN space and allow refined service management.
On an enterprise network, outer VLAN tags can be added to packets based on their service type. For example, in the following figure, PC, VoIP, and IPTV users belong to different VLANs. Different outer VLAN tags are added to the packets they send for Internet access.
- Packets from PC users: have an inner VLAN tag with VLAN ID 101 and an outer VLAN tag with VLAN ID 1001.
- Packets from VoIP users: have an inner VLAN tag with VLAN ID 301 and an outer VLAN tag with VLAN ID 2001.
- Packets from IPTV users: have an inner VLAN tag with VLAN ID 501 and an outer VLAN tag with VLAN ID 3001.
Adding different inner VLAN tags helps differentiate packets from different departments. In addition, packets from different departments can carry the same outer VLAN tag to save VLANs on the carrier's public network. In the following figure, users in different departments need to communicate with each other across the carrier network. To save VLANs on the carrier network, all packets traveling on the carrier network carry the same outer VLAN tag, with VLAN ID 3.
A QinQ packet has a fixed format, in which a second 802.1Q tag is inserted in front of the first tag of the single-tagged 802.1Q packet. As such, a QinQ packet has 4 more bytes than a single-tagged 802.1Q packet. This additional 4-byte tag is used as the outer tag, that is, the public VLAN tag of a carrier network. The original 802.1Q tag is used as the inner tag, that is, the private VLAN tag. The following figure shows the encapsulation format of a QinQ packet.
As the comparison between an 802.1Q packet and a QinQ packet in the following figure shows, the QinQ packet carries an additional 802.1Q tag.
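On a Linux box, the same double-tag encapsulation can be reproduced with iproute2 by stacking an 802.1Q VLAN device on top of an 802.1ad one. This is only a sketch; eth0 and the VLAN IDs are assumptions, not values from the figure:

```shell
# Outer (carrier/S-VLAN) tag uses the 802.1ad EtherType 0x88a8
ip link add link eth0 name eth0.100 type vlan protocol 802.1ad id 100
# Inner (customer/C-VLAN) tag is a normal 802.1Q tag stacked on top
ip link add link eth0.100 name eth0.100.10 type vlan protocol 802.1Q id 10
ip link set eth0.100 up
ip link set eth0.100.10 up
```

Frames sent through eth0.100.10 leave eth0 carrying both tags, with the 802.1ad tag outermost, matching the QinQ packet format described above.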
Depending on how packets are identified and where the outer VLAN tags are added, QinQ can be implemented in the following two modes:
- Basic QinQ (also called QinQ tunneling): encapsulates all packets arriving at an interface with the same outer VLAN tag.
- Selective QinQ: classifies packets arriving at an interface into different flows based on specific rules, and then determines which outer VLAN tag to add based on the packet type.
Assume that an enterprise uses different VLANs to identify services. Selective QinQ can be used to classify service packets based on their VLAN IDs. For example, VLANs 101 to 200 are allocated to Internet access PC users, VLANs 201 to 300 to IPTV users, and VLANs 301 to 400 to VIP users. After receiving service packets, a device adds an outer tag with VLAN ID 100 to packets from PC users, an outer tag with VLAN ID 300 to packets from IPTV users, and an outer tag with VLAN ID 500 to packets from VIP users.
Selective QinQ classifies packets in the following ways:
- Adds outer VLAN tags based on inner VLAN IDs.
- Adds outer VLAN tags based on 802.1p priorities in inner VLAN tags.
- Adds outer VLAN tags based on traffic policies so that differentiated services can be provided based on service types.
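Under the example allocation above (PC users on VLANs 101 to 200, and so on), a selective mapping based on inner VLAN IDs could be sketched on Linux with stacked VLAN devices. The interface name eth0 and the helper function are hypothetical:

```shell
# Create one 802.1ad outer (service) device, then stack the inner
# 802.1Q devices for the given inner-VID range on top of it.
add_service() {  # $1 = outer S-VLAN ID, $2 = first inner VID, $3 = last inner VID
  ip link add link eth0 name svl"$1" type vlan protocol 802.1ad id "$1"
  for vid in $(seq "$2" "$3"); do
    ip link add link svl"$1" name svl"$1"."$vid" type vlan protocol 802.1Q id "$vid"
  done
}
add_service 100 101 200   # PC users
add_service 300 201 300   # IPTV users
add_service 500 301 400   # VIP users
```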
On a typical QinQ network, there are two key device roles: customer edge (CE) and provider edge (PE). A CE is connected to users and adds inner VLAN tags to user packets, whereas a PE is connected downstream to a CE and adds outer VLAN tags to packets received from the CE.
In the following figure, departments A and B are located in different offices and use VLANs 10 and 20, respectively. They communicate with each other across the carrier network using the public VLAN 3. When a user connected to CE1 sends a packet to a user connected to CE3:
- CE1 adds a tag with VLAN ID 10 to the packet received from its connected user.
- After receiving the single-tagged packet from CE1, PE1 adds an additional VLAN tag with VLAN ID 3 to the packet.
- PE1 sends the double-tagged packet (whose inner VLAN tag has a VLAN ID of 10 and outer VLAN tag has a VLAN ID of 3) to PE2.
- After receiving the packet, PE2 removes the outer VLAN tag with VLAN ID 3 from the packet and then sends the packet with only the inner VLAN tag to CE3.
- After receiving the single-tagged packet, CE3 removes the remaining VLAN tag and then sends it to the destination user.
When a user connected to CE3 needs to communicate with a user connected to CE1, the same process is implemented in reverse.
As described above, QinQ technology connects two Layer 2 networks in the same VLAN through a backbone network. It, however, requires an extra packet overhead, that is, an additional VLAN tag. VLAN mapping can achieve the same goal without adding an extra VLAN tag.
When VLAN-tagged packets from a user network arrive at a backbone network, an edge device on the backbone network changes the customer VLAN (C-VLAN) ID to the service provider VLAN (S-VLAN) ID that can be identified and carried by the backbone network. After the packets arrive at the edge device connected to the destination user network, the edge device retrieves the C-VLAN ID to ensure seamless interworking between the two user networks.
If VLAN IDs on two directly connected Layer 2 networks differ because of different VLAN plans, you can configure VLAN mapping on the devices connecting the two networks to map the VLAN IDs between them. The two networks can then be managed as a single Layer 2 network, which enables Layer 2 communication between users and unified deployment of Layer 2 protocols. For details about VLAN mapping, see VLAN Mapping Configuration (S Series Switch).
Virtual eXtensible Local Area Network ( VXLAN ) is a network virtualization technology that extends VLANs. As a Network Virtualization over Layer 3 (NVO3) technology, VXLAN is essentially a VPN technology and can be used to build a Layer 2 virtual network over any networks with reachable routes. VXLAN uses VXLAN gateways to implement communication within a VXLAN network and communication between a VXLAN network and a non-VXLAN network. VXLAN uses a VXLAN Network Identifier (VNI) field similar to the VLAN ID field. The VNI field has 24 bits and can identify up to 16M VXLAN segments, effectively isolating massive tenants in cloud computing scenarios. For details about VXLAN, see VXLAN Mapping Configuration (S Series Switch) .
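For comparison, creating a VXLAN segment and attaching it to a bridge takes only a few iproute2 commands on Linux. The addresses and the VNI below are made-up examples:

```shell
# VTEP with VNI 100100; 192.0.2.1 and 192.0.2.2 are the local and remote underlay IPs
ip link add vxlan100 type vxlan id 100100 local 192.0.2.1 remote 192.0.2.2 dstport 4789
ip link add br0 type bridge
ip link set vxlan100 master br0   # interfaces enslaved to br0 now share the VXLAN segment
ip link set br0 up
ip link set vxlan100 up
```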
- Author: Gu Suqin
- Updated on: 2022-05-07
VXLAN over Wireguard with VLAN-VNI mapping [closed]
I am currently attempting to set up an L2 bridge between two sites, using VXLAN to provide the L2 connectivity and Wireguard as the L3 transport, eventually with plans to expand it to multiple sites. I've previously done a Layer 2 bridge between two sites using GRE over Wireguard and it's been rock-solid, but I'm trying to better understand VXLAN now, and am looking to replace the GRE with VXLAN.
I've been trying to make use of the info both here and here but for the life of me I can't get traffic to pass over the bridge between sites.
I have two Debian machines with bridge-utils installed. They're also running nftables with rules to drop all DHCP traffic, as when I first set up the GRE tunnel I ended up with machines getting assigned IPs from the remote network...
Host A is set up with:
Host B is set up with:
The AllowedIPs on the Wireguard configs is only for the Wireguard subnet 172.30.100.0/24. This worked with the GRE config, and I'd assume it would work with VXLAN too, as the VXLAN traffic is encapsulated within the Wireguard tunnel.
The bridges both have port ens18 , vlan-aware yes and bridge-vids 1-4096 in /etc/network/interfaces
I have a script based on 'Recipe 2' from the first link I posted above, i.e. a single tunnel with multiple VNIs. The script adds the VXLAN interface vx0 to br0 (it waits until after wg0 is up), and loops to do the VLAN/VNI mapping:
I may be completely on the wrong track here, and it might just be down to routing. But if there's anything that looks off in the above, any guidance in the right direction would be greatly appreciated!
- VXLAN creates VLAN-enabled L2 tunneling links over an IP/L3 network. Accordingly, if there's already an L2 tunnel over VPN then you don't really need VXLAN. However, using primarily L3 VPN links with VXLAN to create L2 connectivity where required might be what you're really after, but that's not clear from your question. – Zac67 ♦ May 22, 2022 at 10:31
- That's right, the L2 connectivity over L3 VPN is what I'm trying to achieve. I've got it working with GRE over Wireguard already but I'd like to completely replace this with VXLAN over Wireguard – ChownAlone May 22, 2022 at 10:44
- Added a little bit of clarification to the original post :) – ChownAlone May 22, 2022 at 10:50
- Unfortunately, questions about host/server configurations are off-topic here. You could try to ask this question on Server Fault for a business network. – Ron Maupin ♦ May 22, 2022 at 13:49
- Setup your WG "mesh". Then setup the VXLAN VTEP's to use the protected links to reach each other. At that point, it's just simple routing. – Ricky May 23, 2022 at 21:42
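The poster's script itself is not shown, but the setup described (a single VXLAN device on br0 with per-VLAN VNI mapping, carried over wg0) could be sketched with iproute2 as follows. The VLAN IDs, VNIs, and peer address here are hypothetical:

```shell
# One metadata-based ("external") VXLAN device carries all VNIs
ip link add vx0 type vxlan external dstport 4789 local 172.30.100.1 dev wg0
ip link set vx0 master br0
ip link set vx0 up
bridge link set dev vx0 vlan_tunnel on    # enable per-VLAN tunnel mapping
for vid in 10 20 30; do
  vni=$((10000 + vid))
  bridge vlan add dev vx0 vid "$vid"
  bridge vlan tunnel_info add dev vx0 vid "$vid" id "$vni"
  # flood unknown/broadcast traffic for this VNI to the remote VTEP
  # (the peer's WireGuard address)
  bridge fdb append 00:00:00:00:00:00 dev vx0 vni "$vni" dst 172.30.100.2
done
```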
Configuring EVPN with VLAN-Based Service
VLAN-based service supports the mapping of one or more routing instances of type EVPN to only one VLAN. There is only one bridge table that corresponds to the one VLAN. If the VLAN consists of multiple VLAN IDs (VIDs)—for example, there is a different VID per Ethernet segment on a provider edge device—then VLAN translation is required for packets that are destined to the Ethernet segment.
To configure VLAN-based service and Layer 3 routing with two EVPN routing instances on a provider edge device.
PV routing supports configuring an SVI on the translated VLAN for flood and learn and BGP EVPN mode for VXLAN. VLAN translation (mapping) is supported on Cisco Nexus 9000 Series switches with a Network Forwarding Engine (NFE).
VLAN-oriented mode; in this case you can define the mapping in two ways:
- per-switch mapping: you can have at least 4K L2 domains per switch
- per-port mapping (I think this replies to your question): you can have at least 4K L2 domains per port
VLAN tunnel mapping. VxLAN builds Layer 2 virtual networks on top of a Layer 3 underlay. A VxLAN tunnel endpoint (VTEP) originates and terminates VxLAN tunnels. VxLAN bridging is the function provided by VTEPs to terminate VxLAN tunnels and map the VxLAN network identifier (VNI) to the traditional end host's VLAN.
Only Layer 2 (no routing) is supported with port-VLAN with VXLAN on these switches. No inner VLAN mapping is supported. Beginning with Cisco NX-OS Release 7.0 (3)I6 (1), VXLAN is supported on Cisco Nexus 3232C and 3264Q switches. Cisco Nexus 3232C and 3264Q switches do not support inter-VNI routing.
A VLAN ID is mapped to a VNI to extend a VLAN across a Layer 3 infrastructure, encapsulating Layer 2 frames into IP packets and routing them across the network. So a VLAN ID is associated with a VNI in a VXLAN environment, and this is a method to transport a VLAN across an underlying L3 infrastructure.
VXLAN, or Virtual eXtensible Local Area Network, is a tunneling protocol that carries Layer 2 packets over a Layer 3 network, that is, Ethernet over IP. The need for VXLANs came from the limitations of VLANs, as well as the arrival of server virtualization. Due to its 12-bit identifier, you can only have up to 4094 virtual networks with VLAN.
VLANs are locally significant (per switch) in a VXLAN setup. This means you can map VLAN 10 to VNI 10010 on one leaf, and VLAN 10 to VNI 20020 on another leaf. Each switch is still limited to its 4096 VLANs, but now your global VNI namespace (per fabric, not per switch) is as many VNIs as the header can provide (16.7 million).
VxLAN to VLAN mapping: I am a newbie to VMware NSX. Started learning and went through many videos of NSX VxLAN, but still not able to understand the mapping of VLAN with VxLAN. VLAN > 12 bits > 4096 addresses. VxLAN > 24 bits > 16 M addresses. Taking an e.g.: (1) 2 Clusters
The switches connected to these devices, acting as VTEPs, can map that VLAN to the same VXLAN, and the VXLAN traffic can then be routed between the two networks. (QFX5110 and QFX5120 switches with EVPN-VXLAN) Act as a Layer 3 gateway to route traffic between different VXLANs in the same data center.
A VXLAN gateway is a service that exchanges VXLAN data and packets with devices connected to different network segments. VXLAN traffic must pass through a VXLAN gateway to access services on physical devices in a distant network. An IP address that is designated as the VXLAN interface source. VLAN to VNI mapping.
Double tag mapping, in which an outer VLAN + inner VLAN tag pair is mapped to a VNI on an IEEE 802.1ad port (traffic is received double-tagged in ingress and is marked with two tags in egress after VXLAN decapsulation). This type of mapping can be used on multi-VLAN ports (sometimes called QinQ trunks) facing for example an external cloud provider.
VxLAN VNI Interface Mapping: Hello, I'm testing VxLAN/EVPN features via an EVE-NG topology. I would like to assign a VNI to a physical port instead of a VLAN ID. Doing a 1:1 VNI-VLAN mapping is a very big limitation, because on a leaf switch we are still limited to 4096 VLANs.