In last year’s VMware homelab NSX series, I showed how to set up NSX with BGP and later with OSPF. This time, I’m going to deploy and configure NSX-T with a static routing setup using single Edge uplinks. NSX-T 3.2.2 is used in the lab environment.
In this lab, we have two ToR switches configured with VRRP. The ESXi server is physically connected to ToR-1 with uplink “Uplink1” and to ToR-2 with uplink “Uplink2”.
The Edge Node VM design in the environment is driven by the following goals:
- One virtual uplink (redundancy is provided by the ESXi pNICs)
- A single N-VDS per Edge node carrying both overlay and external traffic
The Tier-0 gateway is configured with an HA VIP and sets its default route to the ToR virtual router group IP address. The ToRs route all traffic destined for our overlay segment to the Tier-0 HA VIP.
The overall topology can be seen in the following diagram.
Prerequisites
Before we can configure NSX-T to use a vSphere Distributed Switch (VDS), we must ensure that a VDS created on vCenter Server 7.0 or later is available to carry NSX-T traffic. We name the VDS “vds1”.
We then configure two uplinks per ESXi host, named Uplink1 and Uplink2 (active-active teaming for simplicity in this lab).
The VDS has port groups for at least management traffic, vMotion, and, depending on the storage backing, vSAN traffic. Each ESXi host participating in the VDS has a VMkernel interface in each of these port groups.
In this setup, we have a VI management network configured as VLAN 1611 with 172.16.11.0/24 as the network and 172.16.11.253 as the gateway (a virtual address on the two ToR switches).
The underlay network for NSX is 172.16.14.0/24 in VLAN 1614.
NSX Manager Setup
The NSX Manager provides a web-based user interface to manage the NSX-T environment. In production environments, VMware recommends deploying a cluster of three NSX Manager nodes for high availability. In the lab environment, the NSX-T Manager is deployed as a medium-sized standalone appliance.
The NSX-T Manager has been configured as follows:
- Hostname: nsx1a.poc.corp
- Rolename: NSX Manager (default)
- Default IPv4 Gateway: 172.16.11.253
- Management Network IPv4 Address: 172.16.11.101
- Management Network Netmask: 255.255.255.0
- DNS Server: 172.16.11.4
- Domain Search List: poc.corp
- NTP server: 172.16.11.4
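For a repeatable lab build, the same appliance can also be deployed unattended with ovftool. This is a minimal sketch, assuming the OVA property names documented for the NSX unified appliance (nsx_ip_0, nsx_gateway_0, and so on); the datastore, port group, passwords, and vCenter inventory path are lab-specific placeholders you must fill in.

```
# Hedged sketch: deploy the NSX Manager OVA with ovftool.
# Property names follow the NSX-T install docs; placeholders are lab-specific.
ovftool --name=nsx1a --deploymentOption=medium \
  --X:injectOvfEnv --allowExtraConfig --diskMode=thin --powerOn \
  --datastore='<datastore>' --network='<mgmt-portgroup>' --noSSLVerify \
  --prop:nsx_role='NSX Manager' \
  --prop:nsx_hostname=nsx1a.poc.corp \
  --prop:nsx_ip_0=172.16.11.101 \
  --prop:nsx_netmask_0=255.255.255.0 \
  --prop:nsx_gateway_0=172.16.11.253 \
  --prop:nsx_dns1_0=172.16.11.4 \
  --prop:nsx_domain_0=poc.corp \
  --prop:nsx_ntp_0=172.16.11.4 \
  --prop:nsx_passwd_0='<admin-password>' \
  --prop:nsx_cli_passwd_0='<cli-password>' \
  nsx-unified-appliance-3.2.2.ova \
  'vi://administrator@vsphere.local@vc1.poc.corp/<datacenter>/host/<cluster>/'
```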
When the installation is complete, the NSX Manager VM must be powered on. After a few minutes, you can log in to the NSX-T Manager web UI at https://nsx1a.poc.corp.
Connect NSX-T Manager with a Compute Manager
A compute manager, for example, vCenter Server, is an application that manages resources such as hosts and VMs.
NSX-T Data Center polls compute managers to collect cluster information from vCenter Server.
In NSX-T Manager, select System → Fabric → Compute Managers → Add Compute Manager. Configure as follows:
- Name: vc1
- FQDN: vc1.poc.corp
- HTTPS port: 443
- Username/Password as needed
- Enable Trust: yes
- Access Level: Full access
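The same registration can be scripted against the NSX Manager API. A minimal sketch, assuming the /api/v1/fabric/compute-managers endpoint and basic authentication; “set_as_oidc_provider” is my assumption for the UI’s “Enable Trust” toggle, and the thumbprint placeholder must be replaced with the vCenter certificate thumbprint.

```
# Hedged sketch: register vc1 as a compute manager.
# "set_as_oidc_provider" is assumed to map to the UI's "Enable Trust" toggle.
curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/fabric/compute-managers \
  -H 'Content-Type: application/json' -d '{
    "display_name": "vc1",
    "server": "vc1.poc.corp",
    "origin_type": "vCenter",
    "set_as_oidc_provider": true,
    "credential": {
      "credential_type": "UsernamePasswordLoginCredential",
      "username": "administrator@vsphere.local",
      "password": "<vc-password>",
      "thumbprint": "<vc-sha256-thumbprint>"
    }
  }'
```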
Configure the IP pool
IP pools are used for the tunnel endpoints such as ESXi hosts and Edge nodes.
In NSX-T Manager, navigate to Networking → IP Address Pools → Add IP Address Pool.
Name: ip-pool-teps
Click on Subnets and enter the following information:
- CIDR: 172.16.14.0/24
- IP Ranges: 172.16.14.10-172.16.14.250
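If you prefer the declarative Policy API, the pool and its subnet can be created with two PATCH calls. A minimal sketch, assuming the /policy/api/v1/infra/ip-pools endpoints; “teps-subnet” is just an illustrative child ID.

```
# Hedged sketch: create the TEP pool and its allocation range (Policy API).
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/ip-pools/ip-pool-teps \
  -H 'Content-Type: application/json' -d '{"display_name": "ip-pool-teps"}'

curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/ip-pools/ip-pool-teps/ip-subnets/teps-subnet \
  -H 'Content-Type: application/json' -d '{
    "resource_type": "IpAddressPoolStaticSubnet",
    "cidr": "172.16.14.0/24",
    "allocation_ranges": [
      {"start": "172.16.14.10", "end": "172.16.14.250"}
    ]
  }'
```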
Create the Transport Zones
Transport zones dictate which hosts and which VMs can participate in the use of a particular network. A transport zone does this by limiting the hosts that can “see” a segment and, therefore, which VMs can be attached to the segment. A transport zone can span one or more host clusters, and a transport node can be associated to multiple transport zones.
The overlay transport zone is used by both host transport nodes and NSX Edge nodes. The VLAN transport zone is used by NSX Edge nodes and host transport nodes for their VLAN uplinks.
In NSX-T Manager, select System → Fabric → Transport Zones → Add Zone.
Create the Overlay transport zone as follows:
Create the VLAN transport zone as follows:
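For reference, both zones can also be created via the older Manager API. A minimal sketch, assuming POST /api/v1/transport-zones is still accepted in 3.2.2 (it is deprecated there in favor of the Policy API); the display names match what the transport node profile expects later.

```
# Hedged sketch: create the overlay and VLAN transport zones (Manager API).
curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "Overlay-Transport-Zone", "transport_type": "OVERLAY"}'

curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/transport-zones \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "VLAN-Transport-Zone", "transport_type": "VLAN"}'
```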
Create Host and Edge Uplink Profiles
An uplink is a link from the NSX Edge nodes to the ToR switches or NSX-T logical switches, that is, a link from a physical network interface on an NSX Edge node to a switch. An uplink profile defines how such a link is established: its teaming policy, transport VLAN, and MTU.
In NSX-T Manager, select System → Fabric → Profiles → Uplink Profiles → Add Profile.
Create the host uplink profile as follows:
- Name: Host-Uplink-Profile
- Teamings
  - Default Teaming
    - Teaming Policy: Load Balance Source
    - Active Uplinks: uplink1, uplink2
- Transport VLAN: 1614
Create the Edge uplink profile as follows:
- Name: Edge-Uplink-Profile
- Teamings
  - Default Teaming
    - Teaming Policy: Load Balance Source
    - Active Uplinks: uplink1, uplink2
- Transport VLAN: 1614
- MTU: 9000
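Both profiles can likewise be scripted. A minimal sketch of the Edge profile, assuming the Manager API’s UplinkHostSwitchProfile resource type, where “Load Balance Source” corresponds to the LOADBALANCE_SRCID policy; the host profile is identical apart from its name and the absent MTU override.

```
# Hedged sketch: create the Edge uplink profile (Manager API).
curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/host-switch-profiles \
  -H 'Content-Type: application/json' -d '{
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "Edge-Uplink-Profile",
    "teaming": {
      "policy": "LOADBALANCE_SRCID",
      "active_list": [
        {"uplink_name": "uplink1", "uplink_type": "PNIC"},
        {"uplink_name": "uplink2", "uplink_type": "PNIC"}
      ]
    },
    "transport_vlan": 1614,
    "mtu": 9000
  }'
```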
The defined profiles look as depicted below:
Create Host Transport Node Profile
A transport node profile is a template that defines the NSX networking configuration to be applied to all hosts in a cluster.
In NSX-T Manager, select System → Fabric → Hosts. On the Hosts page, select Transport Node Profile → Add Transport Node Profile.
Configure it as follows:
- Name: Host-TransportNode-Profile
In the Host Switch field, select Set.
Configure the Host Switch as follows.
- vCenter: vc1
- VDS: vds1
- Mode: Standard
- Transport Zones: Overlay-Transport-Zone, VLAN-Transport-Zone
- Uplink Profile: Host-Uplink-Profile
- IP Assignment: Use IP Pool
- IP Pool: ip-pool-teps
- Teaming Policy Uplink Mapping
- Uplink “uplink1” → VDS Uplink “Uplink1”
- Uplink “uplink2” → VDS Uplink “Uplink2”
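The equivalent API object looks roughly as follows. A minimal sketch, assuming POST /api/v1/transport-node-profiles and placeholder UUIDs for the VDS, the uplink profile, the TEP pool, and the two transport zones.

```
# Hedged sketch: transport node profile binding vds1 to NSX (Manager API).
curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/transport-node-profiles \
  -H 'Content-Type: application/json' -d '{
    "display_name": "Host-TransportNode-Profile",
    "host_switch_spec": {
      "resource_type": "StandardHostSwitchSpec",
      "host_switches": [{
        "host_switch_type": "VDS",
        "host_switch_mode": "STANDARD",
        "host_switch_id": "<vds1-uuid>",
        "uplinks": [
          {"uplink_name": "uplink1", "vds_uplink_name": "Uplink1"},
          {"uplink_name": "uplink2", "vds_uplink_name": "Uplink2"}
        ],
        "host_switch_profile_ids": [
          {"key": "UplinkHostSwitchProfile", "value": "<host-uplink-profile-uuid>"}
        ],
        "ip_assignment_spec": {
          "resource_type": "StaticIpPoolSpec",
          "ip_pool_id": "<ip-pool-teps-uuid>"
        },
        "transport_zone_endpoints": [
          {"transport_zone_id": "<overlay-tz-uuid>"},
          {"transport_zone_id": "<vlan-tz-uuid>"}
        ]
      }]
    }
  }'
```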
Configure NSX on the vSphere Cluster
We now apply the transport node profile to the ESXi cluster to automatically prepare all hosts as NSX-T transport nodes.
In NSX-T Manager, select System → Fabric → Hosts → Clusters. Select the cluster and click “Configure NSX”.
We apply the created Transport Node Profile “Host-TransportNode-Profile”.
The NSX installation kicks off; this will take a few minutes.
Once the NSX installation has finished, the cluster status should look as follows:
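A quick data-plane sanity check at this point is to ping between host TEPs with jumbo-sized packets. A minimal sketch, assuming NSX created the TEP VMkernel interface as vmk10 on the “vxlan” netstack (the usual default) and that another host’s TEP answers at 172.16.14.11:

```
# Hedged sketch: verify TEP-to-TEP connectivity and MTU from an ESXi shell.
# -d sets "don't fragment"; -s 8800 forces a jumbo payload through VLAN 1614.
esxcli network ip interface list --netstack=vxlan   # list the TEP vmkernel NICs
vmkping ++netstack=vxlan -I vmk10 -d -s 8800 172.16.14.11
```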
Create Edge Uplink Trunk and Transit Segments
We now create the Edge Uplink Trunk segment and the Edge Transit segment.
Configure the Edge Uplink Trunk segment as follows:
- Name: Edge-Uplink-Segment-Trunk1
- Transport Zone: VLAN-Transport-Zone
- VLAN: 1610, 1614
Configure the Edge Transit segment as follows:
- Name: Edge-Transit
- Transport Zone: VLAN-Transport-Zone
- VLAN: 1610
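Both VLAN segments can also be defined declaratively. A minimal sketch for the trunk segment, assuming the /policy/api/v1/infra/segments endpoint and a placeholder path for the VLAN transport zone; the transit segment differs only in name and in carrying the single VLAN ID 1610.

```
# Hedged sketch: create the Edge uplink trunk segment (Policy API).
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/segments/Edge-Uplink-Segment-Trunk1 \
  -H 'Content-Type: application/json' -d '{
    "display_name": "Edge-Uplink-Segment-Trunk1",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<vlan-tz-uuid>",
    "vlan_ids": ["1610", "1614"]
  }'
```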
The two created segments should look as follows:
Add Edge Node 1
An NSX Edge node is a transport node that runs the local control plane daemons and forwarding engines implementing the NSX-T data plane. It runs an instance of the NSX-T virtual switch called the NSX Virtual Distributed Switch, or N-VDS. Edge nodes are service appliances dedicated to running centralized network services that cannot be distributed to the hypervisors.
In NSX-T Manager, select System → Fabric → Nodes → Edge Transport Nodes → Add Edge Node.
Provide name and description.
Provide credentials.
Configure the deployment.
Configure Node settings.
Configure NSX.
Add Edge Node 2
In NSX-T Manager, select System → Fabric → Nodes → Edge Transport Nodes → Add Edge Node.
Provide name and description.
Provide credentials.
Configure the deployment.
Configure Node settings.
Configure NSX.
Both Edges have been deployed:
Create the Edge Cluster
Having a multi-node cluster of NSX Edges helps ensure that at least one NSX Edge is always available.
In NSX-T Manager, select System → Fabric → Nodes → Edge Clusters → Add Edge Cluster.
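Scripted, the cluster is a single POST with the two Edge transport node IDs as members. A minimal sketch, assuming the Manager API’s /api/v1/edge-clusters endpoint and placeholder node UUIDs:

```
# Hedged sketch: create the Edge cluster from the two Edge nodes.
curl -k -u 'admin:<password>' -X POST \
  https://nsx1a.poc.corp/api/v1/edge-clusters \
  -H 'Content-Type: application/json' -d '{
    "display_name": "nsx-edge-cluster-1",
    "members": [
      {"transport_node_id": "<edge-node-1-uuid>"},
      {"transport_node_id": "<edge-node-2-uuid>"}
    ]
  }'
```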
The Edge Cluster has been created:
Create the Tier-0 Gateway
A Tier-0 Gateway provides north-south connectivity and connects to the physical routers. In our lab it will be configured as an active-standby cluster.
In NSX-T Manager, select Networking → Tier-0 Gateways. Click “Add Tier-0 Gateway”.
Configure the Tier-0 Gateway as follows:
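In Policy API terms, this amounts to the gateway object plus a locale-services child that pins it to the Edge cluster. A minimal sketch, assuming active-standby HA and a placeholder Edge cluster path:

```
# Hedged sketch: create the active-standby Tier-0 gateway (Policy API).
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1 \
  -H 'Content-Type: application/json' \
  -d '{"display_name": "Tier0-GW-1", "ha_mode": "ACTIVE_STANDBY"}'

# Bind the gateway to the Edge cluster via its locale services.
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1/locale-services/default \
  -H 'Content-Type: application/json' \
  -d '{"edge_cluster_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-uuid>"}'
```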
Configure Static Routing
We configure the default static route on the Tier-0 gateway to external networks.
Set the next hop to the VIP of the ToR switches in the Transit network as follows:
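As a sketch, the same default route via the ToR VIP 172.16.10.1 (the address we ping at the end of this post) looks like this in the Policy API; “default-route” is just an illustrative route ID:

```
# Hedged sketch: default route on the Tier-0 pointing at the ToR VRRP VIP.
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1/static-routes/default-route \
  -H 'Content-Type: application/json' -d '{
    "network": "0.0.0.0/0",
    "next_hops": [{"ip_address": "172.16.10.1", "admin_distance": 1}]
  }'
```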
Disable BGP
As we want to use static routing, we will disable BGP.
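The corresponding API call flips the enabled flag on the BGP routing config under the Tier-0 locale services. A minimal sketch:

```
# Hedged sketch: disable BGP on the Tier-0 gateway (Policy API).
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1/locale-services/default/bgp \
  -H 'Content-Type: application/json' -d '{"enabled": false}'
```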
Configure Tier-0 Uplink Interfaces
We configure one interface for each of the two Edge nodes.
Configure the interface of the first Edge node as follows.
Configure the interface of the second Edge node as follows.
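Per Edge node, the uplink is an external interface on the transit segment. A minimal sketch for the first node, assuming 172.16.10.2/28 as its interface address (a lab-specific value not shown above) and a placeholder edge node path; the second interface is identical apart from its ID, address, and edge path.

```
# Hedged sketch: external uplink interface for Edge node 1 (Policy API).
# 172.16.10.2/28 is an assumed lab address on the transit network.
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1/locale-services/default/interfaces/Edge1-Interface-1 \
  -H 'Content-Type: application/json' -d '{
    "type": "EXTERNAL",
    "segment_path": "/infra/segments/Edge-Transit",
    "edge_path": "/infra/sites/default/enforcement-points/default/edge-clusters/<edge-cluster-uuid>/edge-nodes/<edge-node-1-uuid>",
    "subnets": [{"ip_addresses": ["172.16.10.2"], "prefix_len": 28}]
  }'
```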
The interfaces have been configured as follows:
Configure the HA VIP
With HA VIP (high availability virtual IP) configured, a Tier-0 logical router is operational even if one uplink is down. The physical router (in our case the ToR switches) interacts with the HA VIP only.
In NSX-T Manager, select Networking → Tier-0 Gateways. Click the Tier-0 gateway name “Tier0-GW-1”, then open the HA VIP configuration and click Add.
Configure the HA VIP as follows:
- IP Address/Mask: 172.16.10.12/28
- Enabled: yes
- Interface: Edge1-Interface-1, Edge2-Interface-2
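As a sketch, the HA VIP is an ha_vip_configs entry on the Tier-0 locale services, referencing both uplink interfaces by path:

```
# Hedged sketch: HA VIP shared by the two Tier-0 uplink interfaces.
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-0s/Tier0-GW-1/locale-services/default \
  -H 'Content-Type: application/json' -d '{
    "ha_vip_configs": [{
      "enabled": true,
      "vip_subnets": [{"ip_addresses": ["172.16.10.12"], "prefix_len": 28}],
      "external_interface_paths": [
        "/infra/tier-0s/Tier0-GW-1/locale-services/default/interfaces/Edge1-Interface-1",
        "/infra/tier-0s/Tier0-GW-1/locale-services/default/interfaces/Edge2-Interface-2"
      ]
    }]
  }'
```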
Create the Tier-1 Gateway
A Tier-1 Gateway connects to one Tier-0 Gateway for northbound connectivity to the subnetworks attached to it. It connects to one or more overlay networks for southbound connectivity to its subnetworks. In our lab the Tier-1 Gateway will be configured as an active-standby cluster.
We configure the Tier-1 Gateway as follows:
- Name: T1-vRA-Tenant
- Linked Tier-0 Gateway: Tier0-GW-1
- Edge Cluster: nsx-edge-cluster-1
- Failover: Non-Preemptive
- Edges: Auto allocated
Configure the route advertisement for the Tier-1 to advertise all connected segments and service ports.
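Declaratively, the Tier-1 gateway, its Tier-0 link, the failover mode, and the route advertisement fit into one PATCH. A minimal sketch, where TIER1_CONNECTED is my assumption for the UI’s “All Connected Segments & Service Ports” toggle; the Edge cluster is bound via /locale-services/default, exactly as on the Tier-0.

```
# Hedged sketch: Tier-1 gateway linked to the Tier-0 (Policy API).
# TIER1_CONNECTED is assumed to match "All Connected Segments & Service Ports".
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/tier-1s/T1-vRA-Tenant \
  -H 'Content-Type: application/json' -d '{
    "display_name": "T1-vRA-Tenant",
    "tier0_path": "/infra/tier-0s/Tier0-GW-1",
    "failover_mode": "NON_PREEMPTIVE",
    "route_advertisement_types": ["TIER1_CONNECTED"]
  }'
```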
Create a Segment
We create an NSX segment to which VMs can be connected:
- Name: vRA-Tenant
- Connected Gateway: T1-vRA-Tenant
- Transport Zone: Overlay-Transport-Zone
- Subnets: 172.16.17.1/24
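The overlay segment with its gateway address can be sketched the same way, with a placeholder overlay transport zone path:

```
# Hedged sketch: overlay segment attached to the Tier-1 gateway (Policy API).
curl -k -u 'admin:<password>' -X PATCH \
  https://nsx1a.poc.corp/policy/api/v1/infra/segments/vRA-Tenant \
  -H 'Content-Type: application/json' -d '{
    "display_name": "vRA-Tenant",
    "connectivity_path": "/infra/tier-1s/T1-vRA-Tenant",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-uuid>",
    "subnets": [{"gateway_address": "172.16.17.1/24"}]
  }'
```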
The segments have been configured as follows:
Test connectivity to the ToR default gateway
To verify the connectivity between the Tier-0 and the uplink router (here the ToR switch), we log in to the NSX Edge CLI of one of the Edge Nodes.
On the NSX Edge, run the get logical-routers command to find the VRF number of the tier-0 service router.
Run the get route command to list all routes.
Finally, ping the ToR VIP 172.16.10.1.
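Put together, the verification session looks like this (commands only; the hostname and the assumption that the Tier-0 service router landed in VRF 1 are illustrative, and get logical-routers will show the actual number):

```
# Hedged sketch: verification commands on the NSX Edge CLI.
nsx-edge-1> get logical-routers          # note the VRF of the Tier-0 SERVICE_ROUTER
nsx-edge-1> vrf 1                        # enter that VRF (1 is an assumption)
nsx-edge-1(tier0_sr)> get route          # the static default 0.0.0.0/0 should appear
nsx-edge-1(tier0_sr)> ping 172.16.10.1   # the ToR VRRP VIP must answer
```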