VMware Cloud Management: What's New in Aria – Recap from Explore Barcelona 2023

The VMware Explore 2023 event in Barcelona took place from Nov. 6-9. In this blog post, I summarize the recent developments and announcements from Explore Barcelona in the multi-cloud management area, with a focus on the Aria portfolio.

Some significant changes have taken place in the Aria and Tanzu portfolios over the last months: VMware moved its AIOps, FinOps, and IT automation products from the Aria portfolio into Tanzu, rebranding four Aria products as Tanzu Intelligence Services and Tanzu Hub, which are now positioned alongside the existing Tanzu Application Platform (TAP).

In a nutshell: VMware Tanzu will be the multi-cloud application brand that accelerates application delivery, with key capabilities to develop, operate, and optimize (the D-O-O framework) applications on any cloud. VMware Aria will continue to provide cloud management capabilities and specifically be a critical solution in helping VMware customers transform their physical computing resources into a true IaaS layer, but it will no longer be part of the D-O-O narrative.

The first part of this post explains the current multi-cloud application strategy and how the Aria and Tanzu portfolios fit into it.

The second part summarizes the recent developments and announcements within the Aria portfolio.

Replacing the Aria Automation default SSL certificate when Aria Suite Lifecycle fails with error LCMVRAVACONFIG90039

Recently, I wanted to replace the self-signed certificate of Aria Automation using Aria Suite Lifecycle (formerly known as vRealize Suite Lifecycle Manager). The customer's CA signed my CSR (created via Aria Suite Lifecycle) with a chain whose intermediate certificates use the ECDSA (Elliptic Curve Digital Signature Algorithm) signature algorithm.

Importing the signed certificate ultimately fails with error LCMVRAVACONFIG90039, due to somewhat arbitrary restrictions on the supported signature algorithms in the Aria Automation backend.
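
To check whether a chain is affected before importing it, you can inspect the signature algorithm of every certificate in the PEM bundle. Below is a minimal sketch using Python's cryptography package; the file name chain.pem is a placeholder for the bundle returned by the CA.

```python
# Sketch: list the signature algorithm of every certificate in a PEM bundle.
# Requires the "cryptography" package (pip install cryptography).
# "chain.pem" is a placeholder for the CA-signed bundle returned for the CSR.
from cryptography import x509

with open("chain.pem", "rb") as f:
    pem_data = f.read()

for cert in x509.load_pem_x509_certificates(pem_data):
    print(cert.subject.rfc4514_string())
    # An OID named e.g. "ecdsa-with-SHA256" marks an ECDSA-signed certificate
    # in the chain, which is what trips LCMVRAVACONFIG90039 here.
    print("  signature algorithm:", cert.signature_algorithm_oid)
```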

Orchestrator workflow fails when invoked from Aria Automation Service Broker

Lately, I was developing an Extensibility Subscription workflow in Orchestrator that queries the Aria Automation CMX REST API.
While it ran perfectly when executed manually within Orchestrator, it failed when invoked from Aria Automation Service Broker as part of an Extensibility subscription (here: Kubernetes Supervisor Namespace Post Provision).
The resulting error message was:

Catalog Item Deployment NS Test failed for Supervisor Namespace: Extensibility error for topic kubernetes.sv.namespace.provision.post: [10040] SubscriberID: vro-gateway-elsAsEMn7yjjbAGz, RunnableID: 587ed41a-a51b-4cdc-a10d-7c705a57db39 and SubscriptionID: sub_1695305241572 failed with the following error: Workflow run [fd626a0a-0386-4778-b2ad-8e7ffd5f5e9f] completed with error [Error in worker: HTTP error 500 - {"timestamp":"2023-08-20T16:26:45.991+0000","path":"/cmx/api/resources/supervisor-namespaces","status":500,"error":"Internal Server Error","message":"No orgId in token for vro-gateway-elsAsEMn7yjjbAGz","requestId":"f7763022-202212","@type":"java.lang.IllegalStateException"} (Dynamic Script Module name : executeRestCall#11) (Workflow:Kubernetes Supervisor Namespace Post Provision / Control WF (item4)#5)]
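
The key part of the message is "No orgId in token": the service token that the vro-gateway subscriber presents to the CMX API carries no organization ID, so the backend rejects the call. Hitting the same endpoint with an org-scoped user token succeeds, which is how I isolated the problem. Below is a minimal sketch of that test in Python (the Orchestrator workflow does the equivalent in JavaScript), using the two-step login flow documented for on-prem Aria Automation; vra.example.com and the credentials are placeholders.

```python
# Sketch: call the CMX API with an org-scoped bearer token.
# vra.example.com and the credentials are placeholders; verify=False is
# for a lab with a self-signed certificate only.
import requests

VRA = "https://vra.example.com"

# Step 1: exchange credentials for a refresh token.
r = requests.post(f"{VRA}/csp/gateway/am/api/login?access_token",
                  json={"username": "configadmin", "password": "changeme"},
                  verify=False)
refresh_token = r.json()["refresh_token"]

# Step 2: exchange the refresh token for an org-scoped bearer token.
r = requests.post(f"{VRA}/iaas/api/login",
                  json={"refreshToken": refresh_token}, verify=False)
bearer = r.json()["token"]

# The same endpoint the subscription workflow queries; with an orgId in
# the token this returns 200 instead of the HTTP 500 shown above.
r = requests.get(f"{VRA}/cmx/api/resources/supervisor-namespaces",
                 headers={"Authorization": f"Bearer {bearer}"},
                 verify=False)
print(r.status_code)
```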

Deploy a Tanzu supervisor namespace in Cloud Assembler

This post describes how to add Tanzu supervisor clusters to Aria Automation Cloud Assembler for use in deployments, and how to create namespaces in a supervisor cluster using a Cloud Template.

Supervisor clusters are customized Kubernetes clusters associated with vSphere. They expose Kubernetes APIs to end users, and they use ESXi rather than Linux as the platform for worker nodes. Supervisor namespaces facilitate access control to Kubernetes resources, because it is typically easier to apply policies to namespaces than to individual virtual machines. We can create multiple namespaces per supervisor cluster.
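
For context on what Cloud Assembler automates here: a supervisor namespace can also be created directly against the vCenter namespaces REST API (vSphere 7+). The sketch below illustrates that raw call; vcsa.example.com, the cluster MoID domain-c8, the namespace name, and the credentials are all placeholders, and the post itself achieves the same result declaratively from a Cloud Template.

```python
# Sketch: create a supervisor namespace directly via the vCenter REST API
# (vSphere 7+). Host, cluster MoID "domain-c8", namespace name, and
# credentials are placeholders; verify=False is for a lab only.
import requests

VC = "https://vcsa.example.com"

# Authenticate: POST /api/session with basic auth returns a session ID.
session_id = requests.post(f"{VC}/api/session",
                           auth=("administrator@vsphere.local", "changeme"),
                           verify=False).json()

# Create the namespace on the supervisor cluster.
r = requests.post(f"{VC}/api/vcenter/namespaces/instances",
                  headers={"vmware-api-session-id": session_id},
                  json={"cluster": "domain-c8", "namespace": "ns-test"},
                  verify=False)
print(r.status_code)  # 204 on success
```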

NSX-T setup with Edge single NIC uplink profile and static routing

In last year’s VMware homelab NSX series, I showed how to set up NSX with BGP and later with OSPF. This time, I’m going to deploy and configure NSX-T with a static routing setup using single Edge uplinks. NSX-T 3.2.2 is used in the lab environment.

In this lab, we have two ToR switches, configured with VRRP. The ESXi server is physically connected with one uplink “Uplink1” to ToR-1 and with another uplink “Uplink2” to ToR-2.

The Edge Node VM design in the environment is driven by the following goals:

  • One virtual uplink (redundancy is provided by the ESXi pNICs)
  • A single N-VDS per Edge node carrying both overlay and external traffic

The Tier-0 gateway is configured with an HA VIP and sets its default route to the ToRs’ VRRP virtual IP address. The ToRs route all traffic destined for our overlay segment to the Tier-0 HA VIP.
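
The static default route on the Tier-0 can be configured through the UI or the NSX-T Policy API. Here is a minimal sketch of the API variant, assuming a Tier-0 with the ID tier0-gw and 192.168.1.1 as the ToR VRRP virtual IP; nsx.example.com and all of these values are placeholders for the lab setup.

```python
# Sketch: set the Tier-0 default route towards the ToR VRRP VIP via the
# NSX-T Policy API. Host, Tier-0 ID "tier0-gw", route ID, next hop, and
# credentials are placeholders; verify=False is for a lab only.
import requests

NSX = "https://nsx.example.com"

route = {
    "network": "0.0.0.0/0",            # default route
    "next_hops": [{
        "ip_address": "192.168.1.1",   # ToR VRRP virtual IP
        "admin_distance": 1,
    }],
}

r = requests.patch(
    f"{NSX}/policy/api/v1/infra/tier-0s/tier0-gw/static-routes/default-route",
    auth=("admin", "changeme"),
    json=route,
    verify=False,
)
print(r.status_code)  # 200 on success
```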

The overall topology can be seen in the following diagram.
