- Deploying NSX-T Using Ansible – Part 1: Setting Up The Environment
- Deploying NSX-T Using Ansible – Part 2: Setting Up The Playbook
- Deploying NSX-T Using Ansible – Part 3: Running The Playbook
In my previous post, I covered how to prepare your Ansible environment and install the VMware NSX-T modules. I also provided the details on how to install my Ansible playbooks for deploying NSX-T in your environments.
In this post, I am going to detail how to configure these playbooks to meet your environment and requirements. I have chosen to break my variables out into multiple files. This gives me the flexibility to assign values specific to a group of hosts, to inherit values from a parent group, and to store usernames, passwords and license information more securely in their own Ansible Vault encrypted files.
The deployment examples that I will demonstrate include two sites, each of which includes the following:
- A management environment at each site. This includes a vCenter Server instance with a single management cluster.
- A compute resource (CMP) environment at each site. This includes a vCenter Server instance with a single resource cluster.
I will deploy an NSX-T instance at each management cluster. These NSX-T instances will be used to provide SDN capabilities to the compute resource clusters (when I get time I’ll create a diagram!).
An overview of the playbook tree:
```
├── ansible.cfg
├── nsxt_create_environment.yml
├── nsxt_example_add_compute_manager.yml
├── nsxt_example_apply_license.yml
├── nsxt_example_create_ip_pools.yml
├── nsxt_example_create_transport_profiles.yml
├── nsxt_example_create_transport_zones.yml
├── nsxt_example_create_uplink_profiles.yml
├── nsxt_example_deploy_ova.yml
├── group_vars
│   ├── all
│   ├── nsxt_managers_controllers
│   ├── site_a
│   ├── site_a_cmp_nsxt
│   ├── site_b
│   └── site_b_cmp_nsxt
├── inventory
│   └── hosts
├── roles
│   ├── nsxt_add_compute_managers
│   ├── nsxt_apply_license
│   ├── nsxt_check_manager_status
│   ├── nsxt_configure_transport_clusters
│   ├── nsxt_create_ip_pools
│   ├── nsxt_create_transport_profiles
│   ├── nsxt_create_transport_zones
│   ├── nsxt_create_uplink_profiles
│   └── nsxt_deploy_ova
└── ssh_config
```
Configure Inventory
The inventory is configured in the ‘inventory/hosts’ file. I have used a single file for the inventory to try and keep things simple. You are not limited to this approach and can do whatever fits your environment best; all that matters is that the hosts are defined and placed in their respective groups.
So let’s take a look at my ‘inventory/hosts‘ file for a single site.
```ini
[local]
localhost

####################
# Host Definitions #
####################

[site_a_mgmt_vcenter]
sg1-vsa001 ansible_host=sg1-vsa001.sgroot.local

[site_a_cmp_vcenter]
sg1-vsa002 ansible_host=sg1-vsa002.sgroot.local

[site_a_cmp_nsxt]
sg1-nsx002 ansible_host=sg1-nsx002.sgroot.local

##########################
# Host Group Definitions #
##########################

# This group contains all the management hosts for Site A.
[site_a_mgmt:children]
site_a_mgmt_vcenter

# This group contains all the CMP hosts for Site A.
[site_a_cmp:children]
site_a_cmp_vcenter
site_a_cmp_nsxt

# This group contains all the hosts for Site A.
[site_a:children]
site_a_mgmt
site_a_cmp

# This group contains all the NSX-T Manager hosts (for all sites).
[nsxt_managers_controllers:children]
site_a_cmp_nsxt
```
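With the inventory in place, a quick way to confirm that the group hierarchy is what you expect is Ansible's built-in inventory tool:

```
ansible-inventory -i inventory/hosts --graph
```

This prints the group tree, so you can see at a glance which groups each host sits in and will inherit variables from.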
I have three hosts defined: the management vCenter Server, the CMP vCenter Server, and the NSX-T instance that will be used for the CMP stack. Each of these hosts is placed into its respective site-specific group: site_a_mgmt_vcenter, site_a_cmp_vcenter and site_a_cmp_nsxt.
I also have some additional groups:
| Group | Description | Members |
| --- | --- | --- |
| site_a_mgmt | All management hosts in Site A | site_a_mgmt_vcenter |
| site_a_cmp | All CMP hosts in Site A | site_a_cmp_vcenter, site_a_cmp_nsxt |
| site_a | All hosts in Site A | site_a_mgmt, site_a_cmp |
| nsxt_managers_controllers | All NSX-T Managers/Controllers | site_a_cmp_nsxt |
I like to use groups in this way because it allows me to take advantage of variable inheritance. This is useful, as you can re-use the same variable names in your playbooks while assigning a different value per host or group.
In summary, you only need to define your NSX-T hosts and you're good to go. I have included my vCenter Server hosts so that you get the idea, but they aren't required by this playbook.
Configure Site Variables
Site-specific variables are configured in the ‘group_vars/site_a/site_a_vars.yml’ file. These include things like DNS and NTP servers. I also include the management vCenter Server details here so that they are accessible by all hosts in the site; these variables are used when deploying NSX-T.
```yaml
---
# Site A Variables
site_name: SiteA

## NTP Servers
ntp_server_default: pool.ntp.org
ntp_server_1: 0.pool.ntp.org
ntp_server_2: 1.pool.ntp.org
ntp_server_3: 2.pool.ntp.org
ntp_server_4: 3.pool.ntp.org

# DNS Servers
dns_server_1: 10.1.10.10
dns_server_2: 10.1.10.11

# Site A Management vCenter Details
mgmt_vcenter_server: sg1-vsa001.sgroot.local
mgmt_vcenter_datacenter: "SG1"
mgmt_vcenter_cluster: "SG1-CLS-MGMT-01"
```
Make sure to set the vCenter Server variables to the vCenter Server that the NSX-T Manager will be deployed to and hosted on.
When deploying NSX-T, only ‘dns_server_1‘ will be configured (this is a limitation of the module at this time).
The NSX-T OVA module only supports the configuration of a single NTP server, so ‘ntp_server_default’ is used.
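To show where these values end up, here is a rough sketch of the kind of ‘nsxt_deploy_ova’ task the deploy role runs. Treat it as illustrative only: the parameter names follow the 2.4-era ansible-for-nsxt module and may differ in other versions, and it simply pulls together variables from the site, global and per-host files covered throughout this post rather than reproducing my role verbatim.

```yaml
# Illustrative sketch only. Parameter names follow the 2.4-era
# ansible-for-nsxt 'nsxt_deploy_ova' module and may vary by version.
- name: Deploy the NSX-T unified appliance OVA
  nsxt_deploy_ova:
    ovftool_path: "{{ nsxt_ovftool_path }}"
    path_to_ova: "{{ nsxt_ova_path }}"
    ova_file: "{{ nsxt_ova_filename }}"
    vcenter: "{{ mgmt_vcenter_server }}"
    vcenter_user: "{{ mgmt_vcenter_admin_username }}"
    vcenter_passwd: "{{ mgmt_vcenter_admin_password }}"
    datacenter: "{{ mgmt_vcenter_datacenter }}"
    cluster: "{{ mgmt_vcenter_cluster }}"
    datastore: "{{ nsxt_datastore }}"
    portgroup: "{{ nsxt_portgroup }}"
    vmname: "{{ inventory_hostname }}"
    hostname: "{{ inventory_hostname }}.{{ dns_default_domain }}"
    ip_address: "{{ nsxt_network_ip_address }}"
    netmask: "{{ nsxt_network_netmask }}"
    gateway: "{{ nsxt_network_gateway }}"
    dns_server: "{{ dns_server_1 }}"        # the module takes a single DNS server
    dns_domain: "{{ dns_default_domain }}"
    ntp_server: "{{ ntp_server_default }}"  # the module takes a single NTP server
    admin_password: "{{ nsxt_admin_password }}"
    cli_password: "{{ nsxt_cli_password }}"
    deployment_size: "{{ nsxt_default_deployment_size }}"
    role: "{{ nsxt_default_role }}"
```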
Configure DNS Default Domain
You will need to configure the default DNS suffix/domain for your environment. This variable is appended to your NSX-T host definition to ensure that it resolves when making changes via the modules.
The default DNS domain is configured as a global variable in the YAML file under ‘group_vars/all/dns_vars.yml‘.
```yaml
---
# DNS Specific Variables
dns_default_domain: sgroot.local
```
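As an illustration of how this is used (the variable on the left is hypothetical, not one my roles actually define), the fully qualified name that the modules connect to can be built from the inventory short name:

```yaml
# Illustrative only: 'nsxt_node_fqdn' is a hypothetical variable name.
# 'inventory_hostname' is the short name from inventory/hosts and
# 'dns_default_domain' comes from group_vars/all/dns_vars.yml.
nsxt_node_fqdn: "{{ inventory_hostname }}.{{ dns_default_domain }}"
```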
Configure NSX-T Global Variables
I have several variables that are used globally across all NSX-T hosts, regardless of environment or site. The ‘nsxt_managers_controllers’ group allows me to target all NSX-T hosts with these variables. In the Ansible way of doing things, I have a YAML file under ‘group_vars/nsxt_managers_controllers/nsxt_manager_global_vars.yml’.
```yaml
---
# NSX-T OVF/OVA Variables
nsxt_ovftool_path: "/usr/bin"
nsxt_ova_path: "/mnt/iso/VMware/NSX/NSX-T/2.4"
nsxt_ova_filename: "nsx-unified-appliance-2.4.0.0.0.12456291.ova"

# NSX-T Default Deployment Variables
nsxt_default_deployment_size: "small"
nsxt_default_role: "nsx-manager nsx-controller"
nsxt_default_origin_type: "vCenter"

# NSX-T Misc Variables
nsxt_ssh_enabled: True
nsxt_validate_certs: False
nsxt_status_check_delay: 50
nsxt_default_mtu: 1600
```
These variables should be pretty self-explanatory and can easily be overridden by defining the same variable names against other Ansible groups, or even against a host directly. This allows you to specify values that need to differ between particular hosts or sites, as in the sketch below.
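For example, if Site B needed a larger appliance, a hypothetical override in the more specific group would win over the global default, because Ansible gives child group variables precedence over parent group variables:

```yaml
# group_vars/site_b_cmp_nsxt/site_b_cmp_nsxt_vars.yml (hypothetical override)
# Only the Site B CMP NSX-T host gets a medium-sized appliance; every other
# NSX-T host keeps the "small" value from nsxt_manager_global_vars.yml.
nsxt_default_deployment_size: "medium"
```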
Configure NSX-T Local Variables
I configure all variables for the NSX-T instance in ‘group_vars/site_a_cmp_nsxt/site_a_cmp_nsxt_vars.yml‘. Although this is a group, it only contains a single NSX-T host used for the compute resource (CMP) clusters.
Populate these values to match your environment.
```yaml
---
# Site A CMP NSX-T Deployment Details
nsxt_datastore: "vsanDatastore"
nsxt_portgroup: "Management"
nsxt_network_ip_address: "10.1.10.16"
nsxt_network_netmask: "255.255.255.0"
nsxt_network_gateway: "10.1.10.254"

# Site A CMP vCenter Server and Cluster Details
nsxt_compute_manager_name: "{{ site_name }} CMP vCenter Server"
nsxt_compute_manager_host: sg1-vsa002.sgroot.local
nsxt_transport_clusters:
  - SITEA-CLS-CLOUD-01

# Transport Zone and Profile Details
nsxt_transport_zone_name: "{{ site_name }}-CMP-Transport"
nsxt_transport_zone_desc: "Transport Zone for {{ site_name }} Cloud Resource Cluster."
nsxt_transport_node_profile_name: "{{ nsxt_transport_zone_name }}-Profile"

# Transport Switch vmknic IP Address Pool Details
nsxt_transport_switch_ip_pool_name: "{{ nsxt_transport_zone_name }}-Pool"
nsxt_transport_switch_ip_pool_start: "10.1.111.10"
nsxt_transport_switch_ip_pool_end: "10.1.111.19"
nsxt_transport_switch_ip_pool_cidr: "10.1.111.0/24"

# Transport Switch Details
nsxt_transport_switch_name: "{{ nsxt_transport_zone_name }}-Switch"
nsxt_transport_switch_uplink_profile_name: dualUplinkProfile
nsxt_transport_switch_uplink_profile_policy: FAILOVER_ORDER
nsxt_transport_switch_uplink_profile_vlan: 0
nsxt_transport_switch_profile_desc: "Transport Node Profile for {{ site_name }} Cloud Resource Cluster."
nsxt_transport_switch_pnic_1: vmnic2
nsxt_transport_switch_pnic_2: vmnic3
nsxt_transport_switch_uplink_1: uplink-1
nsxt_transport_switch_uplink_2: uplink-2
```
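To give an idea of how the pool variables above get consumed, here is a rough sketch of an ‘nsxt_ip_pools’ task. Again, this is illustrative: the subnets structure follows the 2.4-era ansible-for-nsxt module, and my actual role is not reproduced verbatim.

```yaml
# Illustrative sketch only: creating the TEP address pool from the variables above.
- name: Create the transport switch vmknic IP pool
  nsxt_ip_pools:
    hostname: "{{ inventory_hostname }}.{{ dns_default_domain }}"
    username: "{{ nsxt_admin_username }}"
    password: "{{ nsxt_admin_password }}"
    validate_certs: "{{ nsxt_validate_certs }}"
    display_name: "{{ nsxt_transport_switch_ip_pool_name }}"
    subnets:
      - allocation_ranges:
          - start: "{{ nsxt_transport_switch_ip_pool_start }}"
            end: "{{ nsxt_transport_switch_ip_pool_end }}"
        cidr: "{{ nsxt_transport_switch_ip_pool_cidr }}"
    state: present
```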
Configure Credentials
I configure all credentials in their own files, which I then encrypt with Ansible Vault. For this deployment example, two credential files need to be updated. Note that I have not encrypted these files, as they are just examples. Do not store production passwords in plain text; at the very least, encrypt them with Ansible Vault.
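For reference, encrypting a populated file in place, and then supplying the vault password at run time, looks like this (‘nsxt_create_environment.yml’ is the top-level playbook from the tree above, and this assumes ‘ansible.cfg’ points at the inventory):

```
ansible-vault encrypt group_vars/site_a/site_a_vcenter_creds.yml
ansible-playbook nsxt_create_environment.yml --ask-vault-pass
```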
The first is a site-specific credentials file, ‘group_vars/site_a/site_a_vcenter_creds.yml’. Here I like to add the administrator accounts for all vCenter Servers, which allows me to use them across multiple plays where admin access to vCenter is required. In this case, they provide the administrator details used to deploy the NSX-T OVA.
```yaml
---
## Management vCenter Admin Credentials
mgmt_vcenter_admin_username: administrator@vsphere.local
mgmt_vcenter_admin_password: VMwar3!!

## CMP vCenter Admin Credentials
cmp_vcenter_admin_username: administrator@vsphere.local
cmp_vcenter_admin_password: VMwar3!!
```
The second is an NSX-T specific credentials file, ‘group_vars/site_a_cmp_nsxt/site_a_cmp_nsxt_creds.yml’. In this file, I add the NSX-T Manager ‘admin’ and ‘cli’ accounts, plus the service account that will be used to connect to vCenter Server.
```yaml
---
# CMP NSX-T User Accounts
nsxt_admin_username: "admin"
nsxt_admin_password: "Admin!23Admin"
nsxt_cli_password: "Admin!23Admin"

## Compute Manager User Permissions.
# This account is used to connect NSX-T to the compute manager.
nsxt_cm_username: svc_nsx
nsxt_cm_password: VMwar3!!
```
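For context, this service account feeds into the compute manager registration. A rough sketch of an ‘nsxt_fabric_compute_managers’ task follows; the credential structure matches the 2.4-era ansible-for-nsxt module, so check it against your module version.

```yaml
# Illustrative sketch only: registering the CMP vCenter Server as a compute manager.
- name: Register the CMP vCenter Server with NSX-T
  nsxt_fabric_compute_managers:
    hostname: "{{ inventory_hostname }}.{{ dns_default_domain }}"
    username: "{{ nsxt_admin_username }}"
    password: "{{ nsxt_admin_password }}"
    validate_certs: "{{ nsxt_validate_certs }}"
    display_name: "{{ nsxt_compute_manager_name }}"
    server: "{{ nsxt_compute_manager_host }}"
    origin_type: "{{ nsxt_default_origin_type }}"
    credential:
      credential_type: UsernamePasswordLoginCredential
      username: "{{ nsxt_cm_username }}"
      password: "{{ nsxt_cm_password }}"
    state: present
```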
Configure License
The last thing to configure is the license that will be assigned to the NSX-T instance, which is specified in the file ‘group_vars/site_a_cmp_nsxt/site_a_cmp_nsxt_lic.yml‘.
```yaml
---
# NSX-T License
nsxt_license: XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
```
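Applying it is then a single module call. A minimal sketch, assuming the 2.4-era ‘nsxt_licenses’ module (parameter names may vary between versions):

```yaml
# Illustrative sketch only: applying the license key from the vars file above.
- name: Apply the NSX-T license
  nsxt_licenses:
    hostname: "{{ inventory_hostname }}.{{ dns_default_domain }}"
    username: "{{ nsxt_admin_username }}"
    password: "{{ nsxt_admin_password }}"
    validate_certs: "{{ nsxt_validate_certs }}"
    license_key: "{{ nsxt_license }}"
    state: present
```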
That should be everything that is needed to complete the deployment of NSX-T. The examples above cover the deployment of NSX-T at one site; my downloadable examples will include details for two sites. If you require more, it should be easy to expand on the existing playbooks.
In my next post, I will go through the actual run of the playbook and deploy NSX-T.
I hope this has been helpful. If you discover any bugs or require some help, then please drop me a message via the Drift app.