Recently, a customer asked me, “What are the limitations on vMotion across an L3 Clos fabric?” That question prompted me to revisit a discussion I’d had on Twitter. This post documents my thinking on why vMotion across the routing layer is a requirement in the modern data center.
So this is basically another piece of craziness born out of necessity.
I needed to do some testing with the latest release of the Zerto Virtual Replication suite. I didn’t want to do it in prod (obviously!), and our existing physical lab environment is a bit too secure to be useful, with no way to demo what I build in it to clients. So, what’s a cloud architect to do? Run it in the cloud, of course! (OK, that’s a bit wank, but sarcasm doesn’t translate well in text…)
So this post came about as a result of me fishing for some information from a fellow Engineer/Architect @ another cloud provider, Kyle Bader (@mmgaggle). Basically, I’d seen a video about DreamObjects’ Ceph implementation, picked up on a mention of Coraid, and was intrigued.
Kyle and I exchanged a few tweets, and he questioned why I would use Coraid behind an object-store platform… so I thought I’d put my thoughts together and get some feedback.
Ok, so it’s been a while, but I’ve been very busy building a few new products. Honestly, I don’t know how other bloggers manage to find time to blog if they’re actually doing work as well, but I digress…
The task laid upon me was pretty simple in its definition:
“Provide one or more ways for customers to replicate their on-premises VMs to a cloud provider in a scalable and secure manner.”
… simple, right? Not so much…
Well, since I’m moving on from my current workplace, which has a fantastic lab environment, I thought it was probably about time to build myself a testlab at home. I’ve done a fair bit of research into what others have done, as well as looking at a variety of SMB sites. Ultimately, though, I want to create something that meets my own needs, not anyone else’s.
After seeing Simon Gallagher’s vTardis, I thought, “damn, that’s a cool name for a VM lab”, not to mention a very nice setup in general. So, being a complete geek, I set about thinking up a name for mine.
Well, after three years, I’ve decided it’s time to throw in the towel @ UWA. This environment has provided me with a fantastic learning platform and helped to accelerate my growth as an IT professional. I had the opportunity to work with some really talented people and hope that we keep in touch. I learned a lot, but it’s time to move on.
I have been offered, and accepted, a new position as Senior Virtualization Engineer @ ZettaServe. I will be working on developing the next-generation hosting platform for their ZettaGrid project. I’m really excited to get started, if a little daunted by the challenge.
Moving from the ‘cruisy’ education sector to the commercial consulting world is sure to be a change, but I’m looking forward to it.
One of the design decisions I’m currently faced with is the network configuration for a new virtualization platform. This has led me to do some further reading on the vDS and its implications for design.
We’re in a similar position to a lot of other orgs: we’ve had a couple of iterations of virtualized resources and are now reaching a maturity point where we’re looking to improve the platform, raise its profile, and market it as services (PaaS, IaaS) to our internal clients. The next logical step is to improve our management processes, SLAs, etc., and start moving to a hybrid-cloud model, but we’re not quite there yet.
Well, today I had an interesting conundrum. I was doing some routine patching of an ESX cluster when suddenly alerts started going off about VMs being disconnected.
It turns out we had hit the default port limit of the vSwitch on the destination ESX host, which is 64 (or 56 usable, since the VMkernel reserves 8 ports for its own use).
To get services back online quickly, I simply migrated a few machines off the over-allocated host, then re-enabled the interfaces on the affected VMs. A better monitoring system would’ve been helpful here; or, if I were faster with PowerCLI, I could perhaps have found the disabled interfaces through that. In the end I went through every VM in the cluster to be sure I’d got them all.
The ultimate fix is to carefully juggle the VMs around so you don’t hit the limit again, then increase the port limit on each vSwitch in the affected cluster…
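The headroom arithmetic behind that juggling act is simple enough to sketch. Here's a minimal Python sketch, assuming the figures from this incident (64 configured ports, 56 usable, so 8 reserved); the helper names are my own for illustration, not any VMware API:

```python
# Sketch of standard-vSwitch port headroom. Assumption: the gap between the
# 64 configured and 56 usable ports mentioned above is an 8-port reservation.
RESERVED_PORTS = 8

def usable_ports(configured_ports=64):
    """Ports actually available for VM NICs on the vSwitch."""
    return configured_ports - RESERVED_PORTS

def has_headroom(connected_vm_nics, configured_ports=64):
    """Will this many connected adapters fit without hitting the limit?"""
    return connected_vm_nics <= usable_ports(configured_ports)

print(usable_ports())         # 56
print(has_headroom(60))       # False -- raise the port count before migrating more VMs
print(has_headroom(60, 128))  # True
```

Nothing profound, but it makes the failure mode obvious: the 57th connected vNIC on a default vSwitch simply has nowhere to plug in.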
Between this, a massive spanning-tree issue taking down half the campus, and an abandoned snapshot… I think I’ve had enough disaster for one day.
So I’ve been doing a fair bit of thinking lately on what I want my new virtual infrastructure to look like….
I’ve got multiple datacenters, with multiple clusters in each (differing hardware requires that), plus a dedicated VM testlab, and I was thinking… well, it’s probably best to have a vCenter in each.
My line of thinking was basically:
- vCenter in each DC (in linked mode?)
- Separate DBs
- Maybe template it?
- Well the DB should be a VM too
- Need to sort out the startup order…
- Hmm, what about a vApp?
Now, it seems like a reasonable leap to me, but (correct me if I’m wrong) all the vApp detail is stored in the VCDB. If vCenter is unavailable, will the startup order of the vCenter vApp work as expected in an HA event?
Time to test it in the testlab I think….
Well, I’ve been taking a bit of a break from XML-based PowerShell code, ’coz it was doing my head in. I was going through some older posts on Yellow Bricks and stumbled across a few related to resource pools, specifically how shares work.
Now, I was always under the impression that shares were already weighted to account for the number and size of the VMs in a pool… however, I was clearly mistaken. I did actually raise this question during my vSphere training and was assured that was the case… so it’s definitely something to be aware of. See: The Resource Pool Priority Pie Paradox.
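For anyone who hasn’t read that post, the paradox is easy to show with numbers. Under contention, shares carve resources up between pools first, and only then among the VMs inside each pool — so a ‘High’ pool crowded with VMs can leave each of its VMs with less than a lone VM sitting in a ‘Normal’ pool gets. A minimal Python sketch; the 12,000 MHz capacity and VM counts are made-up figures, and the 8000/4000 values are the default High/Normal CPU shares as I recall them:

```python
def per_vm_entitlement(capacity_mhz, pools):
    """pools: list of (pool_shares, vm_count) tuples.
    Shares split capacity between pools first, then evenly among each
    pool's VMs (ignoring reservations, limits, and per-VM share tweaks)."""
    total_shares = sum(shares for shares, _ in pools)
    return [(capacity_mhz * shares / total_shares) / vms
            for shares, vms in pools]

# 'High' pool (8000 shares) holding 8 VMs vs 'Normal' pool (4000 shares)
# holding 2 VMs, contending for 12,000 MHz of CPU.
high_vm, normal_vm = per_vm_entitlement(12000, [(8000, 8), (4000, 2)])
print(high_vm, normal_vm)  # 1000.0 2000.0 -- each 'Normal' VM gets double!
```

So the pool with the “higher priority” label ends up giving each of its VMs half the grunt of the supposedly lower-priority pool, purely because it holds more VMs. The takeaway: shares are per-pool, not per-VM, and you have to re-weight them yourself as VM counts change.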