VMware home lab NSX-T setup

In the previous article of the VMware home lab series, I configured the core vSphere services. This time, I’m going to deploy and configure NSX-T.
The setup is a typical topology with two NSX Edges that peer with the ToR routers (VyOS appliances) via BGP. I’m currently using NSX-T 3.1.2 in the lab environment.

The overall topology can be seen in the following diagram.

The Edge Node VM design in the lab is driven by the following goals:

  • 2 pNICs available
  • A single N-VDS per edge node carrying both overlay and external traffic
  • Load balancing of overlay traffic with multi-TEP configuration
  • Deterministic North-South traffic pattern

Deploy and configure NSX-T Manager

The NSX Manager provides a web-based user interface to manage the NSX-T environment. In production environments, VMware recommends deploying a cluster of three NSX Manager nodes for high availability. In the lab environment, the NSX-T Manager is deployed as a small-sized standalone appliance.

The NSX-T Manager has been configured as follows:

  • Hostname – nsx1a.lab.local
  • Rolename – NSX Manager (default)
  • Default IPv4 Gateway – 172.16.11.253
  • Management Network IPv4 Address – 172.16.11.206
  • Management Network Netmask – 255.255.255.0
  • DNS Server – 172.16.11.4
  • Domain Search List – lab.local
  • NTP server – 172.16.11.4

When the installation is complete, power on the NSX Manager VM. After a few minutes, you can log in to the NSX-T Manager web UI at https://nsx1a.lab.local.
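Before going further, a quick health check from the NSX Manager CLI (SSH as admin) doesn’t hurt. This is just a minimal sketch; the exact output differs slightly between NSX-T versions:

nsx1a> get cluster status
nsx1a> get interface eth0

Even on a single-node deployment, the overall cluster status should report as stable, and the interface output should match the management IP configured above.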

I’ve added the vCenter Server vc2.lab.local as a Compute Manager:

  • Name: vc2
  • Type: vCenter (default)
  • FQDN or IP Address: vc2.lab.local
  • HTTPS Port: 443 (default)
  • Username: administrator@vsphere.local
  • Enable Trust: Yes (required for Tanzu)

The vCenter server appears in the Compute Managers list. After a few seconds, its registration status changes to Registered, and the Connection Status is Up.

Adding the vCenter as a compute manager makes its ESXi hosts visible in NSX-T so they can later be prepared as host transport nodes. We’ll come back to this later.
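If you prefer to check the registration outside the web UI, the compute manager status is also available through the NSX-T REST API. A minimal sketch with curl, assuming the admin account and skipping certificate verification in the lab with -k (the compute manager ID placeholder has to be taken from the first call):

curl -k -u admin https://nsx1a.lab.local/api/v1/fabric/compute-managers
curl -k -u admin https://nsx1a.lab.local/api/v1/fabric/compute-managers/<compute-manager-id>/status

The status call should return a registration_status of REGISTERED and a connection_status of UP, matching what the UI shows.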

Configure NSX-T

Create the transport zones

Transport zones dictate which hosts and VMs can participate in a particular network. The Overlay Transport zone is used for internal NSX-T Data Center tunneling between transport nodes. VLAN transport zones are used by NSX Edges and host transport nodes for uplinks external to NSX-T Data Center.

In the lab environment, I’m using two transport zones:

  • Lab-Overlay-TZ (Traffic type: Overlay)
  • Lab-VLAN-TZ (Traffic type: VLAN)
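Both transport zones are created in the NSX-T Manager web UI under System > Fabric > Transport Zones. They can also be double-checked via the API; a quick sanity check, again assuming admin credentials:

curl -k -u admin https://nsx1a.lab.local/api/v1/transport-zones

The response should contain one entry with transport type OVERLAY and one with transport type VLAN.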

Create the uplink profiles

Uplink profiles define policies for the links from hypervisor hosts to NSX-T Data Center logical switches or from NSX Edge nodes to ToR switches.

This deployment example uses the following uplink profiles.

  • Lab-Edge-Uplink-Profile
    • Teamings
      • Default Teaming
        • Teaming Policy: Load Balance Source
        • Active Uplinks: uplink-1, uplink-2
        • Standby Uplinks: –
    • Transport VLAN: 1614
    • MTU: 9000
  • Lab-Host-Uplink-Profile
    • Teamings
      • Default Teaming
        • Teaming Policy: Load Balance Source
        • Active Uplinks: uplink-1, uplink-2
        • Standby Uplinks: –
    • Transport VLAN: 1614
    • MTU: 1600

To create the uplink profiles, go to System > Fabric > Profiles > Uplink Profiles in the NSX-T Manager UI and add the new Uplink Profiles as defined.

Create and apply the teaming policies

The following named teaming policies are created in the lab environment by editing the uplink profiles defined above:

  • Lab-Edge-Uplink-Profile Teamings
    • uplink-1
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-1
      • Standby Uplinks: –
    • uplink-2
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-2
      • Standby Uplinks: –
  • Lab-Host-Uplink-Profile Teamings
    • uplink-1
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-1
      • Standby Uplinks: –
    • uplink-2
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-2
      • Standby Uplinks: –
    • uplink-1-active-uplink-2-standby
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-1
      • Standby Uplinks: uplink-2
    • uplink-2-active-uplink-1-standby
      • Teaming Policy: Failover Order
      • Active Uplinks: uplink-2
      • Standby Uplinks: uplink-1

Create the segments for uplink networks

To create the uplink segments for the NSX Edges and the Tier-0 gateway, go to Networking > Segments in the NSX-T Manager web UI. Create the following segments:

  • Edge-Uplink1
    • Connected Gateway: None
    • Transport Zone: Lab-VLAN-TZ
    • VLAN: 1614, 2711, 2712
    • Uplink Teaming Policy: uplink-1-active-uplink-2-standby
  • Edge-Uplink2
    • Connected Gateway: None
    • Transport Zone: Lab-VLAN-TZ
    • VLAN: 1614, 2711, 2712
    • Uplink Teaming Policy: uplink-2-active-uplink-1-standby
  • Tier0-Router-Uplink1
    • Connected Gateway: None
    • Transport Zone: Lab-VLAN-TZ
    • VLAN: 2711
    • Uplink Teaming Policy: uplink-1
  • Tier0-Router-Uplink2
    • Connected Gateway: None
    • Transport Zone: Lab-VLAN-TZ
    • VLAN: 2712
    • Uplink Teaming Policy: uplink-2

Create the transport node profile for the ESXi servers

A transport node profile is a template to define a configuration that is applied to all transport nodes in a vCenter cluster.

We create the transport node profile in the NSX-T Manager web UI under System > Fabric > Profiles > Transport Node Profiles as follows:

  • Name: Lab-ESXi-TN-Profile
  • New Node Switch
    • Type: VDS
    • Mode: Standard (All hosts)
    • Name: vc2.lab.local, SA-DSwitch1 (i.e., the distributed switch in use)
    • Transport Zone: Lab-VLAN-TZ, Lab-Overlay-TZ
    • Uplink Profile: Lab-Host-Uplink-Profile
    • IP Assignment: Use IP Pool
    • IP Pool: Lab-Overlay-Host-TEP-IP
    • Teaming Policy Uplink Mapping (Uplink to VDS Uplink)
      • uplink-1: Uplink 1
      • uplink-2: Uplink 2

Configure ESXi servers as host transport nodes

To configure the ESXi hosts as transport nodes for NSX-T, go to System > Fabric > Nodes > Host Transport Nodes in the NSX-T Manager web UI.

  • Next to Managed by, select the vCenter vc2.lab.local from the drop-down menu.
  • Check the box next to the vSphere cluster (in our example SA-Compute-1) and click CONFIGURE NSX.
  • In the NSX Installation dialog box, select Lab-ESXi-TN-Profile from the drop-down menu and click APPLY.

The NSX-T transport node configuration starts and will take several minutes to complete. When completed, the NSX Configuration column shows Success and the Node Status shows Up for each host in the cluster.
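A useful low-level check at this point is TEP-to-TEP connectivity at the overlay MTU. The host TEP vmkernel interfaces live in the vxlan netstack, so from an ESXi shell something like the following should work (the remote TEP address is a placeholder for an address another host pulled from the Lab-Overlay-Host-TEP-IP pool):

esxcli network ip interface ipv4 get --netstack=vxlan
vmkping ++netstack=vxlan -d -s 1572 <remote-TEP-IP>

With the 1600-byte MTU from Lab-Host-Uplink-Profile, a 1572-byte payload with the don’t-fragment bit set is the largest ICMP ping that should still get through (1600 minus 20 bytes IP and 8 bytes ICMP header).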

Deploy the NSX-T Edge appliances

The NSX Edge provides routing services and connectivity to networks that are external to the NSX-T Data Center deployment. VMware recommends deploying a cluster of two NSX Edge nodes for high availability.

In the NSX Manager web UI, go to System > Fabric > Nodes > Edge Transport Nodes and click +ADD EDGE VM.

Configure the first edge as follows:

  • Name and Description
    • Name: nsx-en1
    • Host name/FQDN: nsx-en1.lab.local
    • Form Factor: Medium
  • Credentials as needed (enable SSH)
  • Configure Deployment
    • Compute Manager: vc2.lab.local
    • Cluster: SA-Compute-1
    • Datastore: Datastore-1
  • Configure Node Settings
    • IP Assignment: Static
    • Management IP: 172.16.11.69/24
    • Default Gateway: 172.16.11.253
    • Management Network: select the “Management Network” port group on the distributed switch
    • Search Domain Names: lab.local
    • DNS Servers: 172.16.11.4
    • NTP Servers: 172.16.11.4
  • Configure NSX
    • New Node Switch
      • Edge Switch Name: nsxHostSwitch
      • Transport Zone: Lab-Overlay-TZ, Lab-VLAN-TZ
      • Uplink Profile: Lab-Edge-Uplink-Profile
      • IP Assignment (TEP): Use IP Pool
      • IP Pool: Lab-Overlay-Edge-TEP-IP
      • Teaming Policy Uplink Mapping (Uplink to DPDK Fastpath Interface)
        • uplink-1: Edge-Uplink1 (VLAN Segment/Logical Switch)
        • uplink-2: Edge-Uplink2 (VLAN Segment/Logical Switch)

After clicking FINISH in the wizard, the NSX-T Edge VM deployment starts. Repeat the above deployment for the second Edge (nsx-en2).

When the deployment of both edges is complete, the following Edge Transport Nodes are shown in the NSX-T Manager web UI.
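A couple of quick checks from the console or an SSH session on each edge confirm that the node talks to the NSX Manager and that overlay tunnel ports exist (NSX CLI commands as of 3.x; output omitted here):

nsx-en1> get managers
nsx-en1> get tunnel-ports

The tunnel ports should show the TEP addresses the edge pulled from the Lab-Overlay-Edge-TEP-IP pool.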

Create an NSX-T Edge cluster

An NSX Edge cluster is required to use Tier-0 and Tier-1 gateways, and it also ensures that at least one NSX Edge is always available.

In the lab, I’ve created an Edge cluster named nsx-ec1 and selected both NSX Edge Nodes as Transport Nodes (leaving the default cluster profile).

It is best practice to create VM/Host (anti-affinity) rules for the NSX-T Edge VMs to keep them on separate ESXi hosts.

Create and configure the Tier-0 gateway

The Tier-0 gateway is the interface to the physical network. It runs BGP and peers with lab-router1 and lab-router2.

I’ve configured the Tier-0 gateway as follows in the NSX-T Manager web UI, under Networking > Tier-0 Gateways:

  • Tier-0 Gateway Name: Tier0-GW-1
  • HA Mode: Active Standby
  • Edge Cluster: nsx-ec1

Then I’ve configured the uplink interfaces for the Tier-0 gateway:

  • Name: Edge1-IF-2711
    • Type: External
    • IP Address / Mask: 172.27.11.2/24
    • Connected To(Segment): Tier0-Router-Uplink1
    • Edge Node: nsx-en1
    • MTU: 9000
  • Name: Edge1-IF-2712
    • Type: External
    • IP Address / Mask: 172.27.12.2/24
    • Connected To(Segment): Tier0-Router-Uplink2
    • Edge Node: nsx-en1
    • MTU: 9000
  • Name: Edge2-IF-2711
    • Type: External
    • IP Address / Mask: 172.27.11.3/24
    • Connected To(Segment): Tier0-Router-Uplink1
    • Edge Node: nsx-en2
    • MTU: 9000
  • Name: Edge2-IF-2712
    • Type: External
    • IP Address / Mask: 172.27.12.3/24
    • Connected To(Segment): Tier0-Router-Uplink2
    • Edge Node: nsx-en2
    • MTU: 9000
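Before touching BGP, it’s worth confirming plain IP reachability on both uplink VLANs from the ToR side, for example from VyOS operational mode:

lab-router1:
ping 172.27.11.2
ping 172.27.11.3

lab-router2:
ping 172.27.12.2
ping 172.27.12.3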

Configure BGP on the lab routers

Add the following configuration on lab-router1:

set protocols bgp local-as 65001
set protocols bgp parameters router-id 172.27.11.1
set protocols bgp neighbor 172.27.11.2 update-source eth5
set protocols bgp neighbor 172.27.11.2 remote-as 65003
set protocols bgp neighbor 172.27.11.3 remote-as 65003
set protocols bgp neighbor 172.27.11.2 password VMware1!
set protocols bgp neighbor 172.27.11.3 password VMware1!

set protocols bgp address-family ipv4-unicast network 172.16.11.0/24
set protocols bgp address-family ipv4-unicast network 172.16.12.0/24

Add the following configuration on lab-router2:

set protocols bgp local-as 65001
set protocols bgp parameters router-id 172.27.12.1
set protocols bgp neighbor 172.27.12.2 update-source eth1
set protocols bgp neighbor 172.27.12.2 remote-as 65003
set protocols bgp neighbor 172.27.12.3 remote-as 65003
set protocols bgp neighbor 172.27.12.2 password VMware1!
set protocols bgp neighbor 172.27.12.3 password VMware1!
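Once the Tier-0 side of the peering is configured in the next section, the sessions can be checked on both VyOS routers from operational mode:

show ip bgp summary

Both neighbors (172.27.11.2 and 172.27.11.3 on lab-router1, the 172.27.12.x pair on lab-router2) should show a received prefix count in the State/PfxRcd column instead of Idle or Active.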

Configure BGP on the Tier-0 gateway

In the NSX-T Manager web UI, under Networking > Tier-0 Gateways edit the gateway Tier0-GW-1 and expand BGP:

  • Local AS: 65003
  • BGP: On
  • Inter SR iBGP: off
  • ECMP: off
  • Multipath Relax: off
  • Graceful Restart: Helper Only
  • Graceful Restart Timer: 180
  • Graceful Restart Stale Timer: 600

Next to BGP Neighbors click Set, then click ADD BGP NEIGHBOR. I’ve added both lab routers as neighbors:

  • IP Address: 172.27.11.1
  • BFD: Disabled
  • Remote AS number: 65001
  • Source addresses: 172.27.11.2, 172.27.11.3
  • Password: VMware1!
  • IP Address: 172.27.12.1
  • BFD: Disabled
  • Remote AS number: 65001
  • Source addresses: 172.27.12.2, 172.27.12.3
  • Password: VMware1!
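The same check from the NSX side is done on the edge node, inside the Tier-0 service router VRF (finding the VRF number with get logical-routers is shown in the verification section further down):

nsx-en1(tier0_sr)> get bgp neighbor summary

Both 172.27.11.1 and 172.27.12.1 should eventually show an established session state.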

Configure Route redistribution on the Tier-0 gateway

In the NSX-T Manager web UI, under Networking > Tier-0 Gateways, edit the gateway Tier0-GW-1, expand ROUTE RE-DISTRIBUTION and click Set. Then add the following route redistribution entry (a quick check from the lab router side follows the list):

  • Name: Lab-Default-RR
  • Tier-0 Subnets: select all
  • Advertised Tier-1 Subnets: select all
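With redistribution in place, the ToR routers should start learning the NSX prefixes. On lab-router1, for example:

show ip route bgp

Once the Tier-1 gateway and the tenant segments from the following sections exist, their subnets (10.10.10.0/24, 10.10.20.0/24 and 192.168.20.0/24) should show up here with a next hop of 172.27.11.2 or 172.27.11.3, depending on which edge currently hosts the active Tier-0 SR.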

Create a Tier-1 gateway

A Tier-1 gateway has uplink ports to connect to a Tier-0 gateway for external network connectivity, and downlink ports to connect to NSX-T segments.

In the lab, I’ve created the following Tier-1 gateway in the NSX-T Manager web UI under Networking > Tier-1 Gateways:

  • Tier-1 Gateway Name: Tier1-GW-1
  • Linked Tier-0 Gateway: Tier0-GW-1
  • Edge Cluster: nsx-ec1
  • Edges: nsx-en1, nsx-en2

After saving the settings, expand Route Advertisement for the Tier-1 gateway and enable all sliders.

Create tenant segments

To simulate a 2-Tier application, I’ve created two segments named App-1 and Web-1 as follows:

  • Segment Name: App-1
  • Connected Gateway: Tier1-GW-1
  • Transport Zone: Lab-Overlay-TZ
  • Subnets: 10.10.20.1/24
  • Segment Name: Web-1
  • Connected Gateway: Tier1-GW-1
  • Transport Zone: Lab-Overlay-TZ
  • Subnets: 10.10.10.1/24

For enabling vSphere with Tanzu later on, I’ve created a segment dedicated for Kubernetes:

  • Segment Name: Tanzu-1
  • Connected Gateway: Tier1-GW-1
  • Transport Zone: Lab-Overlay-TZ
  • Subnets: 192.168.20.1/24
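The segments can also be read back through the Policy API, which is a quick way to confirm the subnets and the connected gateway before deploying any VMs (again assuming the admin account):

curl -k -u admin https://nsx1a.lab.local/policy/api/v1/infra/segments

Each tenant segment should show its gateway address under subnets and a connectivity_path pointing at Tier1-GW-1.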

Verify the Tier-0 logical router and ToR connection

For routing to work on the uplink from the tier-0 router, connectivity with the top-of-rack device must be in place.

To verify the connectivity, log into an Edge node CLI and execute the following commands to find the Tier-0 Service Router VRF:

nsx-en1> get logical-routers
Mon Jan 24 2022 UTC 21:37:25.664
Logical Router
UUID                                   VRF    LR-ID  Name                              Type                        Ports   Neighbors
736a80e3-23f6-5a2d-81d6-bbefb2786666   0      0                                        TUNNEL                      4       10/5000
8d423f97-3f34-496c-a2c4-964f851efd39   1      1048   SR-domain-c1010:3686082f-a4d0-4   SERVICE_ROUTER_TIER1        5       2/50000
2a44bc35-8e90-4ed3-8096-422b58219710   2      1053   DR-t1-domain-c1010:3686082f-a4d   DISTRIBUTED_ROUTER_TIER1    5       0/50000
8c4abfee-7761-4a50-8302-20bda7faf0b5   3      1045   SR-Tier0-GW-1                     SERVICE_ROUTER_TIER0        6       1/50000
0c838316-f8d9-414e-a963-77489eef5af4   5      1050   SR-t1-domain-c1010:3686082f-a4d   SERVICE_ROUTER_TIER1        5       2/50000
37d023e7-cd5a-4f44-b284-49d8f929f928   6      1046   DR-Tier1-GW-1                     DISTRIBUTED_ROUTER_TIER1    7       3/50000
84a14cdc-4640-4537-b67f-535f0b7aa9d2   7      1044   DR-Tier0-GW-1                     DISTRIBUTED_ROUTER_TIER0    9       10/50000
d7fa83ad-a539-4917-b5a5-bce57cdb3603   8      1047   DR-domain-c1010:3686082f-a4d0-4   DISTRIBUTED_ROUTER_TIER1    4       0/50000
68dc9a18-4b01-4c34-a27b-21883ebe5469   9      2050   SR-acme-egw-01                    SERVICE_ROUTER_TIER1        5       2/50000
d70d859c-0ad0-4e5f-80d1-fb97f4368a07   10     1054   SR-t1-domain-c1010:3686082f-a4d   SERVICE_ROUTER_TIER1        5       2/50000
d0c83f8e-9018-4829-a36b-0bd5b98f6181   11     1049   DR-t1-domain-c1010:3686082f-a4d   DISTRIBUTED_ROUTER_TIER1    5       2/50000

nsx-en1> vrf 3

Then examine the routing table:

nsx-en1(tier0_sr)> get route

Flags: t0c - Tier0-Connected, t0s - Tier0-Static, b - BGP, o - OSPF
t0n - Tier0-NAT, t1s - Tier1-Static, t1c - Tier1-Connected,
t1n: Tier1-NAT, t1l: Tier1-LB VIP, t1ls: Tier1-LB SNAT,
t1d: Tier1-DNS FORWARDER, t1ipsec: Tier1-IPSec, isr: Inter-SR,
> - selected route, * - FIB route

Total number of routes: 36

t0s> * 0.0.0.0/0 [1/0] via 172.27.11.1, uplink-318, 10:51:05
t1c> * 10.10.10.0/24 [3/0] via 100.64.240.1, linked-308, 10:51:04
t1c> * 10.10.20.0/24 [3/0] via 100.64.240.1, linked-308, 10:51:04
t0c> * 100.64.240.0/31 is directly connected, linked-308, 10:51:05
t0c> * 100.64.240.2/31 is directly connected, downlink-336, 10:51:05
t0c> * 100.64.240.4/31 is directly connected, downlink-332, 10:51:05
t0c> * 100.64.240.6/31 is directly connected, downlink-328, 10:51:05
t0c> * 100.64.240.8/31 is directly connected, downlink-302, 10:51:05
t1c> * 100.100.0.0/28 [3/0] via 100.64.240.3, downlink-336, 10:51:04
t1c> * 100.100.0.16/28 [3/0] via 100.64.240.5, downlink-332, 10:51:04
t1c> * 100.100.0.32/28 [3/0] via 100.64.240.7, downlink-328, 10:51:04
t1c> * 100.100.0.48/28 [3/0] via 100.64.240.5, downlink-332, 10:51:04
t1c> * 100.100.0.64/28 [3/0] via 100.64.240.7, downlink-328, 10:51:04
t0c> * 169.254.0.0/24 is directly connected, downlink-300, 10:51:05
b  > * 172.16.11.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
b  > * 172.16.12.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
b  > * 172.16.13.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
b  > * 172.16.14.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
t0c> * 172.27.11.0/24 is directly connected, uplink-318, 10:51:05
t0c> * 172.27.12.0/24 is directly connected, uplink-304, 10:51:05
b  > * 172.27.13.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
t1c> * 192.168.20.0/24 [3/0] via 100.64.240.1, linked-308, 10:51:04
t1l> * 192.168.21.1/32 [3/0] via 100.64.240.3, downlink-336, 10:51:04
t1l> * 192.168.21.2/32 [3/0] via 100.64.240.7, downlink-328, 10:51:04
t1l> * 192.168.21.3/32 [3/0] via 100.64.240.5, downlink-332, 10:51:04
t1l> * 192.168.21.4/32 [3/0] via 100.64.240.5, downlink-332, 10:51:04
t1n> * 192.168.22.1/32 [3/0] via 100.64.240.3, downlink-336, 10:51:04
t1n> * 192.168.22.2/32 [3/0] via 100.64.240.5, downlink-332, 10:51:04
t1n> * 192.168.22.3/32 [3/0] via 100.64.240.7, downlink-328, 10:51:04
b  > * 192.168.123.0/24 [110/20] via 172.27.11.1, uplink-318, 01:41:33
t0c> * fc37:f210:9a29:a800::/64 is directly connected, linked-308, 10:51:05
t0c> * fc37:f210:9a29:a801::/64 is directly connected, downlink-336, 10:51:05
t0c> * fc37:f210:9a29:a802::/64 is directly connected, downlink-332, 10:51:05
t0c> * fc37:f210:9a29:a803::/64 is directly connected, downlink-328, 10:51:05
t0c> * fc37:f210:9a29:a804::/64 is directly connected, downlink-302, 10:51:05
t0c> * fe80::/64 is directly connected, downlink-300, 10:51:05
Mon Jan 24 2022 UTC 21:38:02.446

Finally, you can test connectivity to the home network router (outside the SDN):

nsx-en1(tier0_sr)> ping 192.168.123.1
PING 192.168.123.1 (192.168.123.1): 56 data bytes
64 bytes from 192.168.123.1: icmp_seq=0 ttl=63 time=2.925 ms
64 bytes from 192.168.123.1: icmp_seq=1 ttl=63 time=3.040 ms
64 bytes from 192.168.123.1: icmp_seq=2 ttl=63 time=2.886 ms
64 bytes from 192.168.123.1: icmp_seq=3 ttl=63 time=2.535 ms
^C
--- 192.168.123.1 ping statistics ---
5 packets transmitted, 4 packets received, 20.0% packet loss
round-trip min/avg/max/stddev = 2.535/2.846/3.040/0.189 ms

Works 🙂

Deploy and create tenant VMs

To validate connectivity end to end, I’ve created two VMs in the cluster SA-Compute-1 in the vCenter Server vc2.

  • VM Name: App-VM-1
  • Network Card Portgroup: App-1
  • IP address: 10.10.20.100/24
  • Gateway: 10.10.20.1
  • DNS server: 172.16.11.4
  • VM Name: Web-VM-1
  • Network Card Portgroup: Web-1
  • IP address: 10.10.10.100/24
  • Gateway: 10.10.10.1
  • DNS server: 172.16.11.4
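From inside the guests (assuming Linux VMs with the usual tooling), a few pings from Web-VM-1 confirm east-west routing over the Tier-1 gateway and north-south routing via the Tier-0 gateway and the lab routers:

ping 10.10.10.1        # default gateway on the Web-1 segment
ping 10.10.20.100      # App-VM-1, routed through Tier1-GW-1
ping 172.16.11.4       # DNS/NTP server behind the lab routers
ping 192.168.123.1     # home network router outside the SDN

If the north-south pings fail while the Tier-0 checks above succeed, the usual suspect is a missing return route for the NSX prefixes somewhere along the path.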


4 Comments

  1. Jim Dandy

    This configuration won’t work as described (perhaps it worked in earlier versions of VyOS?): the static route won’t be advertised properly, and you will not have connectivity from the tenant VMs to the outside world.
    The VyOS documentation states that by default it won’t advertise the static route even if it’s in the routing table – you must add a specific command to force this to happen. At a minimum you will need to set “redistribute static” as part of your BGP configuration. This will correct the issue with VyOS.

  2. AZ

    BGP for home lab )))))

  3. David Vincent

    I noticed that you glossed over where you created the TEP IP pools, both Lab-Overlay-Edge-TEP-IP and Lab-Overlay-Host-TEP-IP. Any chance you could document or comment on the ranges you used?

    • Jim Dandy

      He appears to have made a decision fairly late in the process to use a single TEP for both the hosts and the edges and then let DHCP serve them.
