Ansible, Cumulus, Dell & Nutanix: One big happy family :) – Part1

Wow; that’s a lot of vendors in one sentence…

This post basically describes the demo I put together for a Nutanix/Cumulus webinar and presented at the SF Network Automation meetup last Wednesday (2/25/15). It got a good reaction, and I had fun building and presenting it. So why not share it more broadly? :)

Now, a bit of housekeeping, so I don’t cause anyone heartburn.

  • As full disclosure, I work for one of those vendors: Cumulus.
  • I am a Customer Solutions Engineer, so that makes me pretty close to a ‘pesky sales guy’.
  • At this stage, all of this is purely demo-ware.  So don’t extrapolate any conclusions about support from any of the vendors.

As I said, I’m in presales/architecture/solutions design, so it’s not often I get to sink my teeth into a project like this these days. I had help from Leslie Carr, but my point is that if a loaf like me can do this, it ain’t that hard; you could do it too. Oh, and we put this whole thing together in under a week, part-time… more-or-less from scratch.

The Demo Itself

Let’s start by showing you what I put together, then tear it apart and show you how it was done.

Note: we recorded my screen via a BlueJeans conference and forgot to mute one participant. I stripped off the audio, but the person talking is up in the top corner.

Obviously through some movie-magic a few sections are sped up significantly. :)


I had a few goals for the demo:

  1. Show some ‘out there’ examples of what’s possible on a network switch running Cumulus Linux.
  2. Show integration of Nutanix software with the switching fabric (end-to-end “SDDC”).
  3. Practice/learn automation.
  4. Don’t do too much ‘dumb shit’; this might be used elsewhere and/or made repeatable and available in the Cumulus Workbench at some later stage.

There are a few moving parts to this, so let’s start with the components, then move on to what I chose to do with them.

  • Nutanix Foundation VM – V2.0
  • Cumulus® Linux® – 2.5.0
  • Open Networking switch – Dell® S6000-ON x2
  • Nutanix Converged Infrastructure node(s) – N1450 block x1 (4 nodes)
  • Jumphost (wbench VM) – Ubuntu
  • VMware vCenter Server appliance (VCSA) – 5.5u2
  • Dell PowerEdge R620 running Proxmox
  • Firewall – Ubiquiti EdgeRouter Pro
  • Bunch-o-cables – the most interesting being a QSFP > 4x SFP+ DAC breakout.

Wired up, it looks like this.

Picture of the rack... Guest appearance by #RocketTurtle


Cabling of the environment.


After some thought and talking with my counterpart at Nutanix, we decided the demo should have a few different phases and we’d reuse code from some of our existing automated demos where possible.

Where a lot of things take shape… the whiteboard.


  • Pre-stage Environment
  • Stage1: Fabric Deploy
  • Stage2: NTNX Foundation
  • Stage3: Deploy VM (VCSA)
  • Stage4: Show component interaction

Pre-stage the Environment

Supporting Infrastructure

Since I was using our ad-hoc lab, we had to first build some of the supporting infrastructure, which would otherwise be available in the CW. Being lazy (or is that efficient?), I chose to just clone a JumpVM from the CW to use for this little project. This would have tmux (a multi-window terminal multiplexer), Ansible, DHCP, DNS, etc. all good to go.

Leslie helped install a Proxmox server to host the JumpVM (aka wbench). We also added an NFS server to the hypervisor, which would be mounted on Leaf2 (more on that later), deployed the wbench VM, and configured some port forwarding on the firewall to reach the wbench VM remotely.

Nutanix Foundation VM + KVM on a switch

This part is a bit out there, and mostly done ‘coz we can’, but has potential practical applications. For example, several vendors and VARs are offering integrated whole-rack or half-rack packages. In these packages the top-of-rack (ToR) switches or Out-of-band (OoB) switches provide a convenient place to run discovery/deployment agents or software. Previously missing was a sufficiently modern + open operating system; enter Cumulus Linux :)

That’s what we’re showing here: Nutanix’s discovery + imaging package “Foundation”, running on the ToR as a KVM VM, bridged out to the IPMI + OoB ethernet network.

For the demo we chose the Dell S6000-ON, as it had an x86 CPU on the control plane, which is needed to run KVM in this case.
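If you want to try this on other hardware, it's worth first verifying that the control plane actually exposes an x86 CPU with virtualization extensions. A quick sanity check (generic Linux commands, nothing Cumulus-specific):

```shell
# Confirm the control-plane CPU architecture (expect x86_64 on the S6000-ON)
uname -m

# A non-zero count means VT-x/AMD-V is exposed to the OS
grep -c -E 'vmx|svm' /proc/cpuinfo || true
```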

To get started, we needed to install a few packages and mount an NFS volume presented from our Proxmox server.
This is largely a hack; in practice, a switch with more local storage, a USB drive, etc. would be a better solution. But for the demo, it would have to do. This is on the list to clean up when making it available to everyone else.

cumulus@leaf2$ sudo apt-get install nfs-common qemu-kvm
cumulus@leaf2$ sudo vi /etc/fstab


Add this line:

/mnt/nfs/nutanix/ nfs noacl,vers=3 0 0
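For reference, a complete fstab entry also needs the NFS source (server:export) in the first field. With a hypothetical server address, the full line would look something like this (`vers=3` pins NFSv3; `noacl` disables ACL processing):

```shell
# /etc/fstab entry -- 192.0.2.10:/export/nutanix is a hypothetical NFS export
192.0.2.10:/export/nutanix  /mnt/nfs/nutanix/  nfs  noacl,vers=3  0 0
```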


Then remount.

cumulus@leaf2$ sudo mount -a


To start the KVM image, we created a little script in the fvm-runbook/ directory:

# stop any stale v2p-conn instance, then relaunch it against eth0
killall v2p-conn
killall -9 v2p-conn
./v2p-conn 1047 1050 eth0 /tmp/ &
sleep 2
# stop any stale kvm instance
killall kvm
killall -9 kvm
# boot the Foundation VM image from NFS, daemonized, with the console on VNC :0
#kvm -m 5120 -vnc :0 -hda foundation_vm-2.0.qcow2 -netdev socket,udp=,localaddr=,id=dev0 -device virtio-net-pci,mac=00:02:00:00:00:06,netdev=dev0
kvm -m 5120 -vnc :0 -drive file=/mnt/nfs/nutanix/foundation_vm-2.0.qcow2,if=virtio,boot=on -netdev socket,udp=,localaddr=,id=dev0 -device virtio-net-pci,mac=00:02:00:00:00:06,netdev=dev0 -daemonize


So run the script, then check for the process.

cumulus@leaf2$ sudo ./run
Entering promiscuous mode
kvm: no process found
kvm: no process found
qemu-kvm: WARNING: boot=on|off is deprecated and does not exist upstream. Please update your scripts.
cumulus@leaf2$ ps -elf | grep kvm*
1 S root 583 2 0 60 -20 - 0 ? Feb24 ? 00:00:00 [kvm-irqfd-clean]
3 S root 11163 1 77 80 0 - 1536227 ? 07:15 ? 00:00:24 kvm -m 5120 -vnc :0 -drive file=/mnt/nfs/nutanix/foundation_vm-2.0.qcow2,if=virtio,boot=on -netdev socket,udp=,localaddr=,id=dev0 -device virtio-net-pci,mac=00:02:00:00:00:06,netdev=dev0 -daemonize
1 S root 11166 2 0 60 -20 - 0 ? 07:15 ? 00:00:00 [kvm-pit-wq]
0 S cumulus 11268 8512 0 80 0 - 1578 - 07:15 pts/0 00:00:00 grep kvm*


Then look for the DHCP request on the wbench VM:

cumulus@wbench:~$ tail -f /var/log/syslog
Mar 1 22:45:51 localhost dhcpd: DHCPREQUEST for from 44:38:39:00:68:00 via eth1
Mar 1 22:45:51 localhost dhcpd: DHCPACK on to 44:38:39:00:68:00 via eth1
Mar 1 23:00:03 localhost dhcpd: DHCPREQUEST for from 00:02:00:00:00:06 via eth1
Mar 1 23:00:03 localhost dhcpd: DHCPACK on to 00:02:00:00:00:06 via eth1


Voilà! KVM hosting the Nutanix Foundation VM, running on a 40G switch. Very nice! High five!
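Since the kvm command publishes the console on VNC display :0 (TCP port 5900) and the switch probably isn't directly reachable from your desk, an SSH tunnel is the easy way in. A sketch, using the hostnames from this setup:

```shell
# Forward the switch's VNC console (display :0 = TCP 5900) to your workstation
ssh -L 5900:localhost:5900 cumulus@leaf2
# ...then point a VNC client at localhost:5900
```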

Stage 1a: Deploying the fabric w/ Ansible

Modifying an Existing Ansible Demo to suit

Once on the wbench VM, I grabbed the cldemo-wbench-ospfunnum-2s2lt22s-ansible package from our cldemo collection and started modifying it to suit my topology. The biggest changes were to the /etc/network/interfaces template and its parameters, to match the different topology.
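For context, pushing such a role to the two leaves looks roughly like this. The inventory and playbook names here are assumptions for illustration, not the actual cldemo file names:

```shell
# hosts.ini -- hypothetical inventory for the two S6000-ON leaves
# [leafs]
# leaf1 ansible_ssh_host=...
# leaf2 ansible_ssh_host=...

# sanity-check reachability, then push the config
ansible -i hosts.ini leafs -m ping
ansible-playbook -i hosts.ini site.yml --limit leafs
```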

roles/ntnxbasic/templates/interfaces.j2:

# administered by ansible
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5), ifup(8)
# Please see /usr/share/doc/python-ifupdown2/examples/ for examples
{% set intvars = interfaces[ansible_hostname] -%}
{% set loopback_ip = intvars.loopback -%}
{% set swbridges = intvars.bridges -%}
{% set svi2 = intvars.svi -%}
#{% set svi_list = intvars.svi -%}

# The loopback network interface
auto lo
iface lo inet loopback
    address {{loopback_ip}}

# The primary network interface
auto eth0
iface eth0 inet dhcp

#Nutanix host ports
auto swp32s0
iface swp32s0
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes

auto swp32s1
iface swp32s1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes

auto swp32s2
iface swp32s2
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes

auto swp32s3
iface swp32s3
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes

#Bonded Inter-Switch-Link (ISL)
auto peerlink
iface peerlink
    bond-slaves swp17 swp18
    bond-mode 802.3ad
    bond-miimon 100
    bond-use-carrier 1
    bond-lacp-rate 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

#Define the vlan-aware bridge
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports glob swp32s0-3 peerlink
    bridge-stp on
    bridge-vids 100-500

#VLAN interfaces (SVIs)
{% if svi2 is defined -%}
{% for svi_name in svi2 -%}
auto {{svi_name}}
iface {{svi_name}}
    address {{svi2[svi_name].ip_and_mask}}
    address-virtual {{svi2[svi_name].vrr_mac}} {{svi2[svi_name].vrr_ip}}
{% endfor -%}
{% endif -%}

roles/ntnxbasic/vars/main-ntnx.yml:

interfaces:
  leaf1:
    loopback: ""
    int_unnumbered: ["swp1s0","swp1s1","swp1s2","swp1s3"]
    members: ["swp32s0","swp32s1","swp32s2","swp32s3","peerlink"]
    pvids: "100-500"
    svi:
      bridge.100:
        ip_and_mask: ""
        vrr_ip: ""
        vrr_mac: "44:38:39:ff:00:01"
  leaf2:
    loopback: ""
    int_unnumbered: ["swp1s0","swp1s1","swp1s2","swp1s3"]
    members: ["swp32s0","swp32s1","swp32s2","swp32s3","peerlink"]
    pvids: ["100-500"]
    svi:
      bridge.100:
        ip_and_mask: ""
        vrr_ip: ""
        vrr_mac: "44:38:39:ff:00:01"
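With hypothetical values substituted for the blanked-out addresses in the vars file (the 10.0.100.x addresses below are made up), the SVI loop renders into an /etc/network/interfaces stanza like this; the address-virtual line is Cumulus VRR, a shared virtual MAC + gateway IP presented by both leaves:

```shell
# rendered output for bridge.100 (addresses are illustrative only)
auto bridge.100
iface bridge.100
    address 10.0.100.2/24
    address-virtual 44:38:39:ff:00:01 10.0.100.1
```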

Hold onto your hats, this is now a Series!

So. It’s at this point, I’m at 1250 words and I’m realizing that this probably needs to be broken into bite-size chunks. In part 2 I’ll cover the deployment from foundation and tear apart the rest of the demo, warts and all :)


The opinions expressed on this site are my own and not necessarily those of my employer.

All code, documentation etc is my own work and is licensed under Creative Commons and you are free to use it, at your own risk.

I assume no liability for code posted here, use it at your own risk and always sanity-check it in your environment.