Aria Automation supports integration with Ansible Open Source configuration management as well as with Ansible Automation Platform, formerly Ansible Tower. After configuring an integration, we can add Ansible components to new or existing deployments from the cloud template editor.

This post demonstrates how to set up an Ansible Open Source integration and how to use it in a cloud template.

When we integrate Ansible with Automation Assembler, we can configure it to run one or more Ansible playbooks in a given order when a new machine is provisioned to automate configuration management. We specify the desired playbooks in the cloud template for a deployment.

In our lab setup, we use a freshly installed Ubuntu 22.04 LTS system as the Ansible control machine (the defined hostname is iac.corp.local).

The setup procedure is as follows:

  1. Prepare the Ansible control machine
  2. Configure Ansible integration in Aria Automation
  3. Create the cloud template

Prepare the Ansible control machine

On the iac server, log in as a user with administrative permissions (i.e. one that is allowed to sudo to root), and install the OpenSSH server and Ansible:

sudo apt-get install openssh-server ansible -y

Create a dedicated user called svc_aaiac for Aria Automation, with a home directory and a login shell, which the following steps rely on:

sudo useradd -m -s /bin/bash svc_aaiac

Ensure that the following is set in the sudoers configuration, e.g. in /etc/sudoers.d/aria-auto-iac:

Defaults:svc_aaiac !requiretty
svc_aaiac ALL=(ALL) NOPASSWD: ALL
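
A syntax error in a sudoers drop-in can break sudo entirely, so it is worth validating the file before moving on (assuming the file path used above):

```shell
# check the drop-in file for syntax errors without activating it
sudo visudo -cf /etc/sudoers.d/aria-auto-iac
```

If the file is valid, visudo reports that it parsed OK.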

Switch to the newly created service user svc_aaiac:

su - svc_aaiac

Create the directory for the Ansible files and an empty Ansible user configuration in the service user's home directory (~/.ansible.cfg is the default per-user config location):

mkdir $HOME/.ansible
touch $HOME/.ansible.cfg

Open $HOME/.ansible.cfg in an editor and insert the following lines:

[defaults]
host_key_checking=False
vault_password_file=~/.ansible/vault_password.txt
log_path=~/.ansible/debug.log

[ssh_connection]
ssh_args=-o UserKnownHostsFile=/dev/null
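
A quick way to verify that Ansible actually picks up this configuration is to print the version information as the svc_aaiac user:

```shell
# the output includes a "config file = ..." line,
# which should point to /home/svc_aaiac/.ansible.cfg
ansible --version
```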

Create the vault password file and restrict its permissions, as it contains the password in clear text:

echo 'VMware123!' > $HOME/.ansible/vault_password.txt
chmod 600 $HOME/.ansible/vault_password.txt

Create a simple Ansible playbook in $HOME/.ansible/ubuntu-install-webserver.yml to use in our cloud template later on:

---
- name: Ansible playbook to install a web server
  hosts: all
  tasks:
    - name: Update the apt cache and install nginx in the latest version
      ansible.builtin.apt:
        name: nginx
        state: latest
        update_cache: true
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
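
Before wiring the playbook into a cloud template, it is a good idea to syntax-check it locally on the control machine:

```shell
# parses the playbook without executing any tasks
ansible-playbook --syntax-check $HOME/.ansible/ubuntu-install-webserver.yml
```

Any YAML or indentation error is reported with a line number, which is much easier to debug here than in a failed deployment.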

Configure Ansible integration in Aria Automation

To set up the Ansible integration, go to Infrastructure > Integrations and click the “ADD INTEGRATION” button. Then click “Ansible”:

When setting up an Ansible integration, we must specify the hostname of the Ansible control machine as well as the path to the inventory file that defines information for managing resources. In addition, we must provide the username and password of the account we created on the Ansible control machine above (svc_aaiac), so that Aria Automation can access it via SSH.

Create the cloud template

After the integration has been added, we create a cloud template to deploy an Ubuntu VM in a vSphere network, which will then be configured with an Nginx web server using the Ansible integration. We will use the following simple cloud template as a basis (if you want to learn more about using the cloudConfig property in a cloud template, read my blog post about customizing the guest OS during firstboot with cloudConfig):

formatVersion: 1
inputs:
  rootPassword:
    type: string
    title: Root Password
    description: |
      Choose a password for the root account.<br>
      Must be 8 characters long at minimum.<br>
      Allowed characters: a-z0-9A-Z@#$
    minLength: 8
    maxLength: 64
    pattern: '[a-z0-9A-Z@#$]+'
    encrypted: true
resources:
  Cloud_VM_1:
    type: Cloud.vSphere.Machine
    properties:
      image: Ubuntu22
      cpuCount: 1
      totalMemoryMB: 2048
      folderName: vRA deployed VMs
      storage:
        constraints:
          - tag: storage:bronze
      networks:
        - network: ${resource.Cloud_Net_1.id}
          assignment: static
      attachedDisks: []
      constraints:
        - tag: cz:vsphere
      customizeGuestOs: false
      cloudConfig: |
        #cloud-config
        write_files:
          - path: /etc/netplan/99-installer-config.yaml
            content: |
              network:
                version: 2
                renderer: networkd
                ethernets:
                  ens160:
                    addresses:
                      - ${self.networks[0].address}/${resource.Cloud_Net_1.prefixLength}
                    gateway4: ${resource.Cloud_Net_1.gateway}
                    nameservers:
                      search: ${resource.Cloud_Net_1.dnsSearchDomains}
                      addresses: ${resource.Cloud_Net_1.dns}
        ssh_pwauth: true
        disable_root: false
        chpasswd:
          list: |
            root:${input.rootPassword}
            ubuntu:${input.rootPassword}
          expire: false
        runcmd:
          - netplan apply
          - hostnamectl set-hostname --static ${self.resourceName}
          - sed -i '/PermitRootLogin/d' /etc/ssh/sshd_config
          - echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
          - systemctl restart sshd.service
          - eject /dev/cdrom
          - touch /etc/cloud/cloud-init.disabled
  Cloud_Net_1:
    type: Cloud.Network
    properties:
      networkType: existing
      constraints:
        - tag: net:vsphere-mgmt


Now, drag the Ansible resource from the Configuration Management section onto the canvas and connect it with the Cloud.vSphere.Machine object. Adjust the YAML block for the Cloud.Ansible object as follows:

  Cloud_Ansible_1:
    type: Cloud.Ansible
    properties:
      host: ${resource.Cloud_VM_1.*}
      osType: linux
      account: iac.corp.local
      username: root
      password: ${input.rootPassword}
      playbooks:
        provision:
          - /home/svc_aaiac/.ansible/ubuntu-install-webserver.yml

The account property must point to the Ansible control machine; we enter the hostname iac.corp.local here.

The username and password properties define the user that Ansible uses to connect to the target machine (the deployed VM). Here, we’re using the root user for simplicity (we also made sure that root is allowed to log in via SSH in the cloudConfig section). In our cloud template, the input form of the deployment lets the user specify the password for both the root and the ubuntu accounts.

As the provision playbook, we specify the path to the playbook we created on the Ansible control machine.
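
If a deployment later fails at the Ansible step, running the same playbook by hand from the control machine against the deployed VM helps to isolate the problem. A sketch, assuming the VM got the address 192.168.11.50 (an example; the trailing comma turns the address into an ad-hoc inventory):

```shell
# run the playbook directly against the VM as root, prompting for the SSH password
ansible-playbook -i '192.168.11.50,' -u root --ask-pass \
  $HOME/.ansible/ubuntu-install-webserver.yml
```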

You can grab the complete cloud template from my GitHub repository.

Deploy time

Finally, we can deploy an instance by clicking on the DEPLOY button at the bottom of the cloud template editor.

After the VM has been successfully deployed, we log in to it via SSH using the ubuntu account and check whether nginx has been installed and is up and running:

sudo systemctl status nginx
● nginx.service - A high performance web server and a reverse proxy server
     Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset:>
     Active: active (running) since Wed 2024-07-10 19:03:07 UTC; 11min ago
       Docs: man:nginx(8)
   Main PID: 3052 (nginx)
      Tasks: 2 (limit: 2237)
     Memory: 3.5M
        CPU: 24ms
     CGroup: /system.slice/nginx.service
             ├─3052 "nginx: master process /usr/sbin/nginx -g daemon on; master>
             └─3056 "nginx: worker process" "" "" "" "" "" "" "" "" "" "" "" "">

Jul 10 19:03:07 mcm-0011 systemd[1]: Starting A high performance web server and>
Jul 10 19:03:07 mcm-0011 systemd[1]: Started A high performance web server and
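
As an additional check, nginx should answer HTTP requests on the VM:

```shell
# fetch only the response headers and print the status line
curl -sI http://localhost | head -n 1
```

The default welcome page should return an HTTP 200 status line.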

That’s all for now… 🙂

Note: I got the following error when first trying to delete a deployment with an Ansible integration:

Unable to parse inventory to obtain existing groups JSON for host mcm-0001 in inventory /home/svc_aaiac/.ansible/hosts. Ensure inventory is valid and host exists.. Refer to logs at var/tmp/vmware/provider/user_defined_script/bf21c42d-7bcb-4aa8-85aa-a00edc9f07b9 on Ansible Control Machine for more details.

This happened because we are using an Ubuntu 22.04 system as the Ansible control machine: python3 is installed, but the python command fails because there is no symlink pointing to it.
The fix in my setup was to create a symlink for the python command, i.e.:

sudo ln -s /usr/bin/python3 /usr/bin/python
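
Alternatively, Ubuntu ships a small compatibility package that creates the same symlink via the package manager:

```shell
# installs /usr/bin/python as a symlink to python3
sudo apt-get install python-is-python3 -y
```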