We will perform a few tasks to get some grip on Ansible. These are run on the CITC (Cumulus in the Cloud) platform, blank-slate option, offered by Cumulus Linux.

The lab has a two-tier Clos topology, with two spines (spine01, spine02) and four leaves (leaf01-leaf04). Access to all the devices is through the central out-of-band (OOB) management server.

Let's log in to that server first and modify the /etc/ansible/hosts file, where we keep the inventory of all the devices that need to be managed.

cumulus@oob-mgmt-server:~$ cat /etc/ansible/hosts
[spines]
spine01
spine02

[leaves]
leaf01
leaf02
leaf03
leaf04

Here, [spines] and [leaves] are the group names under which the respective device names are listed. We have used hostnames instead of IPs because DNS entries for them are already available.
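As an aside, Ansible's INI inventory also supports numeric host ranges, so the same file could be written more compactly (a sketch of an equivalent inventory):

```
[spines]
spine[01:02]

[leaves]
leaf[01:04]
```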

Let's run an Ansible ad-hoc command to get the 'OS Version' details from the spine nodes

cumulus@oob-mgmt-server:~$ ansible spines -m command -a "uname -v"
spine02 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

spine01 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

In the command above, -m specifies the module. We are using the 'command' module, which could be omitted here since it is the default. -a specifies the arguments passed to the module; in this case that is 'uname -v', the Linux command to print the kernel version details.
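Since the command module just runs its argument verbatim on each host and returns stdout, the uname flags can be tried locally on the management server first (a quick sketch; the exact output will differ per kernel):

```shell
# uname -v prints the kernel build string (what the ad-hoc command
# above returned on each spine); -r prints the kernel release,
# and -s the kernel name.
uname -v
uname -r
uname -s
```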

Let's try the same command, this time on the leaf nodes and without the -m flag:

cumulus@oob-mgmt-server:~$ ansible leaves -a "uname -v"
leaf02 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

leaf04 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

leaf03 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

leaf01 | SUCCESS | rc=0 >>
#1 SMP Debian 4.1.33-1+cl3u14 (2018-07-05)

There is also a 'shell' module, which is like the command module but with extra functionality such as piping output through tools like grep. We'll use -m shell this time on all the nodes, by specifying 'all'. The following command would not work with -m command, because the command module does not process shell operators such as the pipe.

cumulus@oob-mgmt-server:~$ ansible all -m shell -a "ip addr | grep 192.168"
leaf02 | SUCCESS | rc=0 >>
inet 192.168.0.12/16 brd 192.168.255.255 scope global eth0

spine01 | SUCCESS | rc=0 >>
inet 192.168.0.21/16 brd 192.168.255.255 scope global eth0

leaf01 | SUCCESS | rc=0 >>
inet 192.168.0.11/16 brd 192.168.255.255 scope global eth0

leaf04 | SUCCESS | rc=0 >>
inet 192.168.0.14/16 brd 192.168.255.255 scope global eth0

leaf03 | SUCCESS | rc=0 >>
inet 192.168.0.13/16 brd 192.168.255.255 scope global eth0

spine02 | SUCCESS | rc=0 >>
inet 192.168.0.22/16 brd 192.168.255.255 scope global eth0

We will now start using a playbook instead of executing commands in ad-hoc fashion. Let's write a playbook in YAML format, which begins with three hyphens (---):

cumulus@oob-mgmt-server:~$ cat enableOspf.yaml
---
- hosts: all
  tasks:
  - name: Enable OSPF
    become: yes
    command: "sed -i 's/ospfd=no/ospfd=yes/' /etc/frr/daemons"

This playbook would be executed against all hosts, as it says hosts: all.

We can have multiple tasks, but the only task we are doing here is enabling OSPF, which is indicated by - name. This name field is optional and is only there for the user's convenience. become: yes implies the command requires sudo/root privileges on the remote system, and command: is the same 'command' module that we invoked with -m command in ad-hoc mode.

As for what the sed command does: ospfd (the OSPF daemon) is disabled by default, and to enable it we need to change the line ospfd=no to ospfd=yes in the /etc/frr/daemons file.
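To see what that substitution does, here is a sketch run against a local sample file (/tmp/daemons.sample is a made-up path for illustration; the real file is /etc/frr/daemons on each switch):

```shell
# Build a two-line sample resembling the daemons file.
printf 'zebra=yes\nospfd=no\n' > /tmp/daemons.sample

# The -i flag makes sed edit the file in place instead of
# just printing the result to stdout.
sed -i 's/ospfd=no/ospfd=yes/' /tmp/daemons.sample

grep ospfd /tmp/daemons.sample
# -> ospfd=yes
```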

Let's run this playbook:

cumulus@oob-mgmt-server:~$ ansible-playbook enableOspf.yaml

PLAY [all] *********************************************

TASK [Gathering Facts] *********************************

TASK [Enable OSPF] *************************************
changed: [spine02]
changed: [spine01]
changed: [leaf01]
changed: [leaf02]
changed: [leaf04]
changed: [leaf03]

PLAY RECAP *********************************************************************
leaf01 : ok=2 changed=1 unreachable=0 failed=0
leaf02 : ok=2 changed=1 unreachable=0 failed=0
leaf03 : ok=2 changed=1 unreachable=0 failed=0
leaf04 : ok=2 changed=1 unreachable=0 failed=0
spine01 : ok=2 changed=1 unreachable=0 failed=0
spine02 : ok=2 changed=1 unreachable=0 failed=0

Next, let's enable all the switch ports on all the nodes; they are disabled by default. The interface information can be gathered for all nodes at once with the 'netq show interface' command on the mgmt server, or per node with 'net show interface all'.

The playbook for enabling the interfaces is as follows:

cumulus@oob-mgmt-server:~$ cat enableInterfaces.yaml
---
- hosts: spines
  tasks:
  - name: Enable switch ports on spines
    become: yes
    command: "ip link set {{ item }} up"
    with_items: [swp1, swp2, swp3, swp4, swp31, swp32]

- hosts: leaves
  tasks:
  - name: Enable switch ports on leaves
    become: yes
    command: "ip link set {{ item }} up"
    with_items: [swp1, swp2, swp44, swp45, swp46, swp47, swp48, swp49, swp50, swp51, swp52]

We have two sets of hosts and one task for each set. {{ item }} is a variable that loops through each of the values in the list specified by with_items; the interface lists on spines and leaves are different in this setup, hence the arrays look different.
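As a sketch, with_items simply runs the task once per list element, substituting each value in turn; the spine task expands to something like this shell loop (echoed here for illustration rather than executed):

```shell
# Each iteration substitutes one port name, just as Ansible
# substitutes {{ item }} on each pass through with_items.
for port in swp1 swp2 swp3 swp4 swp31 swp32; do
  echo "ip link set $port up"
done
```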

cumulus@oob-mgmt-server:~$ ansible-playbook enableInterfaces.yaml

PLAY [spines] ******************************************************************

TASK [Gathering Facts] *********************************************************
ok: [spine01]
ok: [spine02]

TASK [Enable switch ports on spines] *******************************************
changed: [spine01] => (item=swp1)
changed: [spine02] => (item=swp2)
changed: [spine02] => (item=swp3)
changed: [spine01] => (item=swp3)
changed: [spine02] => (item=swp4)
changed: [spine01] => (item=swp4)
changed: [spine02] => (item=swp31)
changed: [spine01] => (item=swp31)
changed: [spine02] => (item=swp32)
changed: [spine01] => (item=swp32)

PLAY [leaves] ******************************************************************

TASK [Gathering Facts] *********************************
ok: [leaf02]
ok: [leaf03]
ok: [leaf04]
ok: [leaf01]

TASK [Enable switch ports on leaves] *******************************************
changed: [leaf01] => (item=swp1)
changed: [leaf02] => (item=swp1)

/* truncated output */

We'll keep a separate post to try the nclu module (-m nclu), which is Cumulus-specific.

Thank you

--end-of-post--