Ansible Tower in Google Cloud Platform

Sebastian Baszcyj - 08.09.2021

This blog will guide you through installing Ansible Tower in Google Cloud Platform (GCP). If you are not familiar with Ansible yet, you may want to start with my previous blog, Ansible – My New Found Friend, for a bit of a primer.

Some requirements and recommendations before jumping into the process:

  1. Take a backup of your on-premises Ansible Tower using the latest minor release of the setup bundle for your Tower version. For example, if you are running 3.7.1 and 3.7.4 is available, download 3.7.4, move your inventory file into that bundle and run the backup from there
  2. Before taking the backup and migrating, I recommend upgrading the on-premises Ansible Tower to the latest version. This way, the target environment will not require an additional upgrade after the restore

If the backup file is large, ensure you have enough space on the file system from which you are running the restore, as well as on the /var file system. This applies to all systems in the cluster, as the restore job copies the recovery file to every node.
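A quick way to sanity-check free space before running a backup or restore is df; a minimal sketch, assuming /ansiblebkp is the directory holding the archive (adjust the paths to your environment):

# df -h /var /ansiblebkp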

Installing Ansible in Google Cloud Platform

Once you are ready, here is the install procedure for GCP:

1. Log into console.cloud.google.com

2. Create a new Project if it has not been created yet: in the UI, click on Project → New Project. Specify the Name: Ansible Tower. Specify the Location and click Create

3. Navigate to VM Instances and enable Compute Engine API if it is not enabled

4. Click Create New Instance and create three (3) instances with the following settings (an equivalent gcloud command is sketched after the table):

 

Name: specify the hostname following your naming convention, e.g. ansible01
Region: australia-southeast1 (Sydney)
Zone: australia-southeast1-b
Machine type: e2-standard-4 (4 vCPU, 16GB memory)
Boot disk: public image Red Hat Enterprise Linux, version Red Hat Enterprise Linux 8; boot disk type: Balanced persistent disk; size: 60GB
Firewall: allow HTTP traffic; allow HTTPS traffic
Networking: network tags: ansible-tower; network interfaces: specify the network for this project
Disks: boot disk deletion rule: disable "Delete boot disk when instance is deleted"; device name: based on instance name
Additional disks (primary instance only): add one new disk – name: ansible01-backup; size: 128GB; type: Balanced persistent disk; snapshot schedule: no schedule; source type: Blank disk; mode: Read/Write; deletion rule: Keep disk
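If you prefer the gcloud CLI over the console, the primary instance can be created along the following lines. This is an illustrative sketch rather than part of the original console procedure; the image family, network tags and disk names are assumptions, and ansible02/ansible03 would be created the same way without the extra backup disk:

gcloud compute instances create ansible01 \
  --zone=australia-southeast1-b \
  --machine-type=e2-standard-4 \
  --image-family=rhel-8 --image-project=rhel-cloud \
  --boot-disk-size=60GB --boot-disk-type=pd-balanced \
  --tags=ansible-tower,http-server,https-server \
  --create-disk=name=ansible01-backup,size=128GB,type=pd-balanced,auto-delete=no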

5. Log into the first machine 

6. Elevate to root using sudo -i 

7. Configure password-less ssh authentication 

8. Run the following command on all nodes 

 

ssh-keygen -t rsa -b 2048

9. Copy /root/.ssh/id_rsa.pub from the first node to /root/.ssh/authorized_keys on all other servers
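One way to distribute the key from the first node is ssh-copy-id; a minimal sketch, assuming the other nodes are named ansible02 and ansible03 and that root password logins are temporarily allowed (see the next step):

# for host in ansible02 ansible03; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}; done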

10. Ensure the option PermitRootLogin is set to yes in /etc/ssh/sshd_config on all servers (set it back to no once the installation is finished)
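If you prefer not to edit the file by hand, something along these lines achieves the same on each node (it keeps a .bak copy of the original; remember to revert the setting after the installation):

# sed -i.bak 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config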

11. Restart the sshd service:

 

# systemctl restart sshd

12. Validate that passwordless ssh works from the first node to all other nodes (this should be disabled once the installation is finished)
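A quick check from the first node (the hostnames are examples); each command should print the remote hostname without prompting for a password:

# for host in ansible02 ansible03; do ssh -o BatchMode=yes root@${host} hostname; done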

13. On the first node install wget:

# dnf install wget -y

14. On the first node, change directory to /root and run the following command to download the Ansible Tower setup bundle:

# wget https://releases.ansible.com/ansible-tower/setup-bundle/ansible-tower-setup-bundle-latest.el8.tar.gz

15. Untar the package:

# tar xzvpf ansible-tower-setup-bundle-latest.el8.tar.gz

Create a GCP Cloud SQL PostgreSQL Instance

1. In GCP Console, navigate to SQL and select Create Instance

2. Select PostgreSQL

Instance ID: ansibletowerdb
Password: generate or enter the password for the administrative user
Database version: PostgreSQL 10
Region: australia-southeast1 (Sydney)
Zonal availability: Single zone (given this is a backup Ansible Tower, a single zone should be sufficient)
Machine type: custom – 4 vCPU, 8GB memory
Storage: 100GB
Enable automatic storage increases: enabled
Public IP: disabled
Private IP: enabled
Network: specify the network and set up the private connection
Backups: leave default

3. Click Create and wait for the PostgreSQL instance to be created

4. In the SQL section, click Overview and note down the private IP address. This IP will be used later when configuring the inventory file

5. Navigate to the Databases section in SQL and click Create Database

6. Specify the database name awx and click Create

7. Navigate to the Users section in SQL and click Add User

8. Specify basic authentication with username awx and a password, then click Add
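For reference, the same instance, database and user can be created with the gcloud CLI. This is a hedged sketch rather than part of the console procedure above; the project and VPC network names are placeholders and the passwords are examples:

gcloud sql instances create ansibletowerdb \
  --database-version=POSTGRES_10 \
  --region=australia-southeast1 \
  --cpu=4 --memory=8GB \
  --storage-size=100GB --storage-auto-increase \
  --no-assign-ip \
  --network=projects/YOUR_PROJECT/global/networks/YOUR_VPC

gcloud sql databases create awx --instance=ansibletowerdb

gcloud sql users create awx --instance=ansibletowerdb --password='Your_Postgres_AWX_Password'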

Configure Inventory and Install Ansible Tower

  1. Log in to the primary node and elevate to root
  2. Change directory to the extracted ansible-tower-setup-bundle-<version> directory
  3. Using vim or any other editor, edit the inventory file as shown below, replacing ansible01, ansible02 and ansible03 with the hostnames of the servers created for Ansible Tower.

[tower] – the server names of the Ansible Tower nodes

[automationhub] – leave blank

[database] – leave blank (used only if the database is co-located on the VM)

[all:vars] – set the following variables:
admin_password – the password used by the Tower admin user
pg_host – the private IP address of the Cloud SQL instance
pg_port – '5432'
pg_database – 'awx'
pg_username – 'awx'
pg_password – 'Your_Postgres_AWX_Password'

[tower]
ansible01 ansible_connection=local
ansible02
ansible03

[automationhub]

[database]

[all:vars]
admin_password='Your_Tower_Password'

pg_host='Private IP Address'
pg_port='5432'

pg_database='awx'
pg_username='awx'
pg_password='Your_Postgresql_awx_password'
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL

4. Save the changes and exit

5. On the primary node, create a file system to host the backup/restore files, using the additional disk allocated for this node. The following steps outline the process for disk /dev/sdb; confirm (for example with lsblk or blkid) that this is the disk allocated for backup/restore before formatting it.

# mkfs.xfs /dev/sdb
# cd /
# mkdir ansiblebkp

Add the following entry to /etc/fstab (find the UUID with the blkid command):

UUID="9252f3ed-8283-4ce0-9f0d-00d308ceeaa3" /ansiblebkp xfs defaults 0 0

# mount -a

# df -h | grep ansible
/dev/sdb 128G 946M 128G 1% /ansiblebkp

6. Run setup.sh located in the bundle’s directory

7. Once the process finishes, confirm all the nodes are visible:

awx-manage list_instances

8. Open the browser and navigate to https://ansible_server_name

9. Enter the License Key

Ansible Tower Migration to GCP

This section describes the process required to migrate (restore) the on-premises Ansible Tower to the GCP Tower cluster built in the previous section.

1. Estimate the source Ansible Tower backup disk requirements, using the following commands:

a) Log into the existing Ansible Tower node.

b) Connect to the existing production PostgreSQL node using the following command:

psql "host=postgres_hostname port=5432 dbname=postgres user=awx password=yourpass"

c) Verify the size of the database using the SQL statement below (the size shown is for demonstration purposes only):

postgres=> SELECT pg_size_pretty( pg_database_size('awx') );
 pg_size_pretty
----------------
 20 MB
(1 row)

2. Select a file system with enough space available and, using the LATEST ansible-tower-setup-bundle, execute the backup:

# ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b

3. Copy the backup file to the primary node in the GCP Ansible Tower cluster, for example:
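A minimal sketch using scp; the source path, archive name and target host are placeholders that match the restore command in the next step, and any secure copy method will do (ensure the target directory exists first):

# scp /path/to/backup_dir/ansible-tower-backup.tar.gz root@ansible01:/ansiblebkp/tower-restore/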

4. Change to the directory where the Ansible Tower setup bundle is located and execute the restore:

# ./setup.sh -e 'restore_backup_file=/ansiblebkp/tower-restore/ansible-tower-backup.tar.gz' -r

5. Observe the progress. The restore should finish with no errors reported.

6. Execute the following command to verify that the restoration worked:

# awx-manage list_instances

7. The output should list only GCP nodes

8. Log in to the Ansible Tower UI to confirm the configuration is correct. Remember to use the admin password from the SOURCE Ansible Tower

9. Validate LDAP login

10. Once the restoration process has been confirmed, run a test Ansible job to confirm the functionality

11. Execute the backup of the new configuration on the primary node:

./setup.sh -e 'backup_dest=/ansiblebkp/' -b

12. Execute the restore of the new configuration on the primary node. This step is required to ensure all of the credentials and the entire configuration works as expected.

./setup.sh -r
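If setup.sh run with -r alone cannot find a backup archive in its default location, you can point it at the archive produced in step 11 explicitly, using the same syntax as the earlier restore (the path and file name below are placeholders):

# ./setup.sh -e 'restore_backup_file=/ansiblebkp/<backup_archive>.tar.gz' -r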

13. This concludes the migration process

 

For information on running Ansible AWX with Isolated Nodes, check out this blog.
