New Home Lab: CniLab 1.0 (Part 1)

Well, since I’m moving on from my current workplace, which has a fantastic lab environment, I thought it was about time to build myself a test lab at home. I’ve done a fair bit of research into what others have done, as well as looking at a variety of SMB sites. Ultimately, though, I want to create something that meets my own needs, not anyone else’s.

Why CniLab?
After seeing Simon Gallagher’s vTardis, I thought “damn, that’s a cool name for a VM lab”, not to mention a very nice setup in general. So, being a complete geek, I set about thinking up a name for mine.

I settled on CniLab for a few reasons:

  1. Cnidus (pronounced “Snide-us”) has been my alias for some time, and my naming scheme has CNI in everything.
  2. It’s a nice homonym of Skylab… which the Yanks managed to crash into my hometown of Perth, Western Australia. In fact, a friend of mine’s father actually recovered a piece of it in Esperance.

Design Goals
I had a few ideas of what I wanted when setting out to build this lab.

  • Any components that are on 24/7 need to be useful beyond the lab and, ultimately, low power.
  • The design should have a large amount of storage capacity, both for VMs and for hosting my home storage needs (videos, photos, music etc.). I estimate somewhere around 10-15TB of capacity is required.
  • All existing shared storage should be moved into this environment.
  • I had the opportunity to pick up a Sun Microsystems rack gratis, so everything should be rackmounted, or at least fit in the rack.

The design should allow me to deploy and test at least the following: vCloud Director, VMware View, Citrix XenDesktop, SRM, Hyper-V, Xen-based hypervisors and some of the new open-source cloud directors (Deltacloud, for example).

Overview of the lab setup.

Design
The approach I’ve taken is essentially to run two environments: a single low-power ESXi box that hosts my home file storage and VM storage, and a separate cluster of full-on Sun x4200 servers. The split comes down to one thing: power use. I’ve got a few workloads that need constant access, but for the most part the lab only needs to be live when I’m doing some investigation or testing on it, so I can power the majority of it off when not in use and power it back on with DPM when required.

Equipment
Largely thanks to my ability to cheaply procure an array of decommissioned servers, this setup resembles a production environment more than a typical home setup. It comprises the following:

Item | Qty | Use | Cost
Sun Ultra 20 | 2 | VSA / vCenter, plus a spare | Free
Sun x4200 | 3 | Test-environment ESX servers | $75 each
Acer 3900Pro | 5 | Broken PSUs; parts used to upgrade the Ultra 20s and build an HTPC for my old man | Free
Dell 1950 | 1 | Not sure yet… | $100
Dell 3348 switch | 1 | Rack switch, currently used for all VM/management switching; uplinks to a gigabit link | $50 off eBay
Dell 5224 switch | 1 | Rack switch, currently used for all storage traffic; uplinks to a gigabit link | Free
Cisco 2900XL | 1 | Another rack switch, bought ages ago when I was doing Cisco certs | ~$50
Cisco 3750G | 1 | Faulty NVRAM; hoping I can fix it and replace the Dells with it | Free
Belkin rackmount console | 1 | Used for configuration, not much else | $20
D-Link KVM | 2 | Used for configuration, not much else | Free
APC UPS | 1 | Had dead batteries; found enough good cells from a few dead units to make one good one | Free
Sun 3500 FC SAN | 1 | Not sure yet… | Free

Storage Appliance / vCenter (CNI-ESX-001)
To meet the objectives I set out (and partially ’cos I can), I decided that the core of this test environment should be a modified Sun Ultra 20 workstation. I was allowed to purchase this beast after it was retired from my desk a couple of years ago. It has seen about a year of service as my primary NAS box, but it was noisy and not particularly efficient… so an upgrade was in order.
Detailed view of my Virtual Storage Appliance.

I was able to pick up a few Acer 3900Pro desktops with broken PSUs… this fits my need perfectly: a modern dual-core system with lots of SATA ports and support for up to 8GB of RAM. So that hardware was retrofitted into the Ultra 20 case. After some thought, I also decided to get out the tools, put on my case-modding hat and take to the case with a vengeance… the purpose: to add 10 hot-swap drive bays for some serious storage capacity.

I had to remove the existing drive cage and front-panel ports, and cut a slot in the motherboard tray to pass SATA connectors through. I was then able to mount a 10-bay hot-swap caddy which I cut out of an ancient Sun cube case.

This server is running ESXi 4.1 and hosts two VMs:

  • CNI-VSA-001: A NexentaStor CE storage appliance that shares my storage over iSCSI and NFS (see the console sketch just after this list).
  • CNI-WIN-001: A Windows Server 2008 R2 instance that runs my PS3 Media Server, vCenter and Windows file shares.
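As an aside, hooking ESXi up to the VSA’s exports is straightforward from the Tech Support Mode console (the same can be done through the vSphere Client). A minimal sketch, assuming a made-up storage IP and share path rather than my actual layout:

    # Enable the software iSCSI initiator (the VSA's iSCSI targets are then
    # added via dynamic discovery in the vSphere Client).
    esxcfg-swiscsi -e

    # Mount the VSA's NFS export as a datastore (address and share path are placeholders).
    esxcfg-nas -a -o 10.0.1.20 -s /volumes/cnilab/nfs01 CNI-NFS-01

    # Confirm the NFS datastore is mounted.
    esxcfg-nas -l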

I chose to install my vCenter instance on this server so I keep visibility of the environment and can remotely power the rest of it on and off using DPM. Normally I’d run vCenter inside the cluster so it’s protected by HA, but in this case I chose a different route to meet the design requirements.

I grabbed an HP Smart Array E200 off eBay. This card is used in a JBOD config for all my storage disks, plus RAID 1 for a pair of 250GB drives.

The 250GB array is used for the ESXi install and the OS VMDKs of the VMs; I plan to protect this volume with VCB. I opted to present the rest of the disks as RDMs to the VSA so the drives are formatted with ZFS natively: if I ever decide to install Nexenta on the hardware directly, at least the drives shouldn’t need to be reformatted.
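Creating the RDM mapping files themselves is a one-liner per disk with vmkfstools from the console. A rough sketch (the device ID and paths below are placeholders, not my actual values); I’ve used physical-compatibility mode (-z) so the VSA sees the raw disks, whereas -r would create virtual-compatibility mappings instead:

    # List the attached SCSI devices to find the disk identifiers.
    esxcfg-scsidevs -l

    # Create a physical-compatibility RDM pointer for one of the big SATA disks
    # on the local VMFS datastore (naa ID and datastore name are placeholders).
    vmkfstools -z /vmfs/devices/disks/naa.6001c230d8abfe000ff97abc \
        /vmfs/volumes/CNI-DS-001/CNI-VSA-001/storage-disk1-rdm.vmdk

The resulting .vmdk pointer then gets added to CNI-VSA-001 as an existing disk.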

I initially had problems creating the RDMs; it failed with the error “Failed to create virtual disk: The destination file system does not support large files (12)”. It turns out this was due to the default ESXi install formatting the drive with a 1MB block size; for a 1-1.5TB RDM this had to be increased to 8MB. Once that was changed (and the host rebooted), I could create the RDMs fine. Mário Simões did a great writeup on how to do this over at VMHelp.com.
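In case it helps anyone hitting the same error, this is roughly what the fix looks like from the console. Datastore and device names are placeholders, and note that recreating the VMFS volume wipes it, so anything on it needs to be moved off first:

    # Check the current file block size of the local datastore.
    vmkfstools -P -h /vmfs/volumes/CNI-DS-001

    # Recreate the VMFS3 volume with an 8MB block size, which supports files
    # up to ~2TB (this DESTROYS the existing datastore contents).
    vmkfstools -C vmfs3 -b 8m -S CNI-DS-001 \
        /vmfs/devices/disks/naa.6001c230d8abfe000ff97def:1

    # After that, the RDM creation above completes without the "large files" error.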

I’ve settled on NexentaStor Community Edition as my storage appliance for a few reasons. For a start, it’s OpenSolaris-based, with a lot of the development coming from the guys who designed ZFS for Sun back in the day; I’ve always liked Solaris and Sun kit, and ZFS seems perfect for my application. A lot of the features that make Nexenta stand out will be really handy in my lab: acceleration with SSDs, de-duplication (“in-line” and “in-flight”), ZFS itself, and the strong link to an enterprise-grade solution. I came across Nexenta again thanks to Simon Gallagher, in his post “building-a-fast-and-cheap-nas-for-your-vsphere-home-lab-with-nexentastor”. His blog is definitely worth adding to your subscribed feeds!
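NexentaStor drives all of this through its management console and web GUI, but underneath it’s plain ZFS. Purely to illustrate the features I mentioned, here’s a hedged sketch of the equivalent raw ZFS commands (pool name, disk IDs and layout are invented for the example, not my actual config):

    # Create a RAID-Z pool across four of the hot-swap disks.
    zpool create cnipool raidz c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Add an SSD as an L2ARC read cache (the "acceleration with SSDs" bit).
    zpool add cnipool cache c1t6d0

    # Carve out a filesystem for VM storage with dedup and compression enabled.
    zfs create cnipool/vmstore
    zfs set dedup=on cnipool/vmstore
    zfs set compression=on cnipool/vmstore

    # Share it over NFS for the ESX hosts.
    zfs set sharenfs=on cnipool/vmstore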

Since the box has three Intel NICs, I’m dedicating one to storage and the other two to VM / management traffic. This lets me reserve my gigabit ports for storage duties and use 100Mbit for everything else (which I have in plentiful supply, thanks to the 48-port Dell switch).

I wanted to keep this storage server relatively simple and robust, so I stuck with a configuration of two standard vSwitches: one for VM / management traffic and the other for storage.
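For the record, that two-vSwitch layout is simple enough to knock up from the console as well. A minimal sketch, assuming hypothetical vmnic numbering and addressing:

    # vSwitch0 already carries management; link the second NIC to it for VM traffic.
    esxcfg-vswitch -L vmnic1 vSwitch0

    # Create vSwitch1 dedicated to storage, on the gigabit uplink.
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "Storage" vSwitch1

    # VMkernel port for NFS/iSCSI traffic (IP and netmask are placeholders).
    esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 "Storage"

    # Sanity check the layout.
    esxcfg-vswitch -l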

All said and done, the server looks something like this:

Hypervisors (CNI-ESX-002 – CNI-ESX-004)
As with a lot of the other equipment, I was fortunate enough to be able to cheaply purchase some retired kit from a previous employer. These servers aren’t the latest and greatest, nor is their power use anything to write home about… so ultimately they may be sold and whiteboxes purchased instead. For the time being, though, they will do the job.

Initially, the three ESX servers will be Sun x4200s with the following specs:

  • 2x dual-core AMD Opteron 280s
  • 16GB of fully buffered DIMMs
  • 2x 146GB 15k SAS drives
  • 2x 73GB 15k SAS drives
  • 4x gigabit NICs
  • 2x QLogic HBAs

All three of these servers are installed with ESXi 4.1 and will be powered on and off with DPM, which I’ve yet to set up. They should do the job nicely… not to mention look purrrdy! 😛
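DPM itself gets configured per host in vCenter (it supports IPMI/iLO or Wake-on-LAN, and the x4200s’ service processors speak IPMI), but until that’s in place I can at least bounce the hosts remotely through their lights-out management ports. A rough sketch using ipmitool, with placeholder addresses and credentials:

    # Check the power state of CNI-ESX-002 via its service processor.
    ipmitool -I lanplus -H 10.0.0.32 -U root -P changeme chassis power status

    # Power the host on when the lab is needed...
    ipmitool -I lanplus -H 10.0.0.32 -U root -P changeme chassis power on

    # ...and request a soft shutdown afterwards (after evacuating and shutting
    # down the host cleanly from vCenter).
    ipmitool -I lanplus -H 10.0.0.32 -U root -P changeme chassis power soft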

Networking

Physically, the ESX servers all connect to a Dell 3348 10/100Mbit rackmounted switch for VM / management traffic, while storage (iSCSI) traffic is handled by a gigabit Dell 5224. The x4200s are connected with five cables each: 3x network, 1x iSCSI and 1x lights-out management (ILOM). CNI-ESX-001 is connected with three gigabit ports, as it is the hub of the environment. I may end up patching all servers to the 5224; we’ll see how I go.

I expect that as I use the lab the networking on the ESX hosts will change considerably; however, the core of the infrastructure (the management and iSCSI networks) should remain the same.

Wrapping up…
I think that will do for part one; I’ll add more info in a later post. I’ll leave you with some pics of how it all looks as of today (some gear isn’t yet installed).
