We have to shut down the management components of the VMware homelab in a specific order, so that the necessary infrastructure, networking, and management services remain operational until each dependent component has been shut down.
The order is as follows:
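The ordering constraint can be expressed as a dependency graph: a component must be shut down before anything it depends on. As a minimal sketch, the following Python snippet derives a safe shutdown order from a set of dependencies; the component names and their dependency edges are invented for illustration and do not necessarily match this lab's exact inventory.

```python
from graphlib import TopologicalSorter

# Hypothetical homelab components, mapped to what each one depends on.
# A component must be shut down before the components it depends on.
depends_on = {
    "vrealize-suite": ["vcenter"],
    "cloud-director": ["vcenter", "nsx-manager"],
    "nsx-edges": ["nsx-manager"],
    "nsx-manager": ["vcenter"],
    "vcenter": [],
}

# static_order() yields dependencies first, i.e. a valid startup order;
# reversing it gives a shutdown order that never pulls a dependency away
# from a still-running component.
startup_order = list(TopologicalSorter(depends_on).static_order())
shutdown_order = list(reversed(startup_order))
print(shutdown_order)
```

Reversing a topological startup order is a convenient way to keep the two sequences consistent: maintaining one graph gives you both.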
VMware vRealize Suite is a purpose-built management solution for the heterogeneous data center and the hybrid cloud. It delivers and manages infrastructure and applications to increase business agility while maintaining IT control, and it provides a comprehensive management stack for private and public clouds, multiple hypervisors, and physical infrastructure.
It consists of the following solutions:
To automate installation, configuration, upgrades, patching, configuration management, drift remediation, and health monitoring from within a single pane of glass, we will use vRealize Suite Lifecycle Manager.
The diagram below shows the technological capabilities and organizational constructs.
In the lab environment we’ll install all solutions as single-node instances with the following sizing:
| Name | Purpose | Size | vCPU | Memory (GB) | Disk (GB) |
|---|---|---|---|---|---|
| vrslcm1 | Lifecycle Manager | – | 2 | 6 | 78 |
| wsa1a | vIDM | Medium | 8 | 16 | 60 |
| vra1a | vRealize Automation | Medium | 12 | 42 | 236 |
| vrops1a | vRealize Operations | Extra small | 2 | 8 | 274 |
| vrli1a | vRealize Log Insight | Small | 4 | 8 | 530 |
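To see what the host has to provide when all five appliances run at once, the sizing table can be totaled up. The figures below are taken directly from the table above; only the summing is added.

```python
# Sizing figures from the table above: name -> (vCPU, memory GB, disk GB).
sizings = {
    "vrslcm1": (2, 6, 78),
    "wsa1a": (8, 16, 60),
    "vra1a": (12, 42, 236),
    "vrops1a": (2, 8, 274),
    "vrli1a": (4, 8, 530),
}

# Aggregate capacity needed to run every single-node instance concurrently.
total_vcpu = sum(s[0] for s in sizings.values())
total_mem_gb = sum(s[1] for s in sizings.values())
total_disk_gb = sum(s[2] for s in sizings.values())
print(total_vcpu, total_mem_gb, total_disk_gb)  # 28 80 1178
```

So the full vRealize stack in this lab needs roughly 28 vCPUs, 80 GB of memory, and about 1.2 TB of disk before any workload VMs are counted.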
After deploying these solutions, we’ll integrate them with each other.
With VMware Cloud Director you can build secure, multi-tenant clouds by pooling virtual infrastructure resources into virtual data centers and exposing them to users through Web-based portals and programmatic interfaces as a fully automated, catalog-based service.
In the lab environment, we’ll set up a simple single-cell installation and add our workload vCenter Server vc2.lab.local and the NSX-T Manager nsx1.lab.local as infrastructure resources.
From these infrastructure resources we’ll create cloud resources such as a provider VDC, a Geneve network pool, and an external network.
Then we’ll create a tenant organization and assign resources from the provider VDC to it as an organization VDC. We’ll also create an Edge Gateway so the tenant can reach the outside world from within its cloud.
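The pooling model behind this can be sketched in a few lines: a provider VDC exposes aggregate capacity, and each organization VDC carves an allocation out of it. This is a simplified illustration of the concept, not Cloud Director's API; all class names and capacity figures are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ProviderVDC:
    """Hypothetical model of a provider VDC's pooled capacity."""
    cpu_ghz: float
    memory_gb: float
    org_vdcs: list = field(default_factory=list)

    def free_cpu(self) -> float:
        return self.cpu_ghz - sum(o["cpu_ghz"] for o in self.org_vdcs)

    def free_memory(self) -> float:
        return self.memory_gb - sum(o["memory_gb"] for o in self.org_vdcs)

    def create_org_vdc(self, name: str, cpu_ghz: float, memory_gb: float) -> dict:
        # An org VDC can only be backed by capacity the provider VDC still has.
        if cpu_ghz > self.free_cpu() or memory_gb > self.free_memory():
            raise ValueError(f"provider VDC cannot back org VDC {name!r}")
        org_vdc = {"name": name, "cpu_ghz": cpu_ghz, "memory_gb": memory_gb}
        self.org_vdcs.append(org_vdc)
        return org_vdc

# Invented capacity: a 40 GHz / 128 GB provider VDC backing one tenant.
pvdc = ProviderVDC(cpu_ghz=40.0, memory_gb=128.0)
tenant_vdc = pvdc.create_org_vdc("tenant1-ovdc", cpu_ghz=10.0, memory_gb=32.0)
print(pvdc.free_cpu(), pvdc.free_memory())  # 30.0 96.0
```

The point of the model is the indirection: tenants consume their organization VDC without ever seeing the underlying vCenter clusters the provider VDC maps to.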
In this lab session, I want to transform my workload cluster into a “native Kubernetes platform” by using vSphere with Tanzu.
VMware Tanzu is a portfolio of products and solutions that allows customers to build, run, and manage Kubernetes-controlled, container-based applications.
In the Operations (or Run) catalog depicted above, VMware has different implementations for Tanzu Kubernetes Grid, all of which provision and manage the lifecycle of Tanzu Kubernetes clusters on multiple platforms. It consists of the following options:
In this session, we’ll cover vSphere with Tanzu.
In the previous article of the VMware homelab series, I’ve configured the core vSphere services. This time, I’m going to deploy and configure NSX-T.
The setup is a typical topology with two NSX edges to route to the ToR routers (VyOS appliances) via BGP. I’m currently using NSX-T 3.1.2 in the lab environment.
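For orientation, the ToR side of one such BGP peering could look roughly like the following VyOS configuration fragment. This is a sketch only: the ASNs, router ID, and neighbor addresses are invented for illustration (here the ToR is assumed to be in AS 65000 and the two NSX edge uplinks peer from AS 65001), and the exact command tree varies between VyOS releases.

```
set protocols bgp 65000 parameters router-id '192.168.255.1'
set protocols bgp 65000 neighbor 192.168.10.2 remote-as '65001'
set protocols bgp 65000 neighbor 192.168.10.2 address-family ipv4-unicast
set protocols bgp 65000 neighbor 192.168.10.3 remote-as '65001'
set protocols bgp 65000 neighbor 192.168.10.3 address-family ipv4-unicast
```

With both edge uplinks configured as neighbors, the ToR learns the overlay segments from either edge and keeps routing if one edge node fails.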
The overall topology can be seen in the following diagram.
The Edge Node VM design in the lab is driven by the following goals: