Thoughts on Networking@Scale event

"it's not an open registration... please send mail to to get a code.

The invite.

When I received an invite to Networking@Scale (@netatscale on twitter) a couple of weeks ago, I remember my reaction well: “Fk yeh, no way! Wow, it’s cool to be right-here-right-now” (excuse the profanity). I must confess, as a techy from the middle-of-nowhere (Perth), it was hard not to be a bit star-struck… and the event did not disappoint. It happened to directly conflict w/ another cool event I was looking forward to: Network Field Day 9, which my employer (Cumulus Networks) was part of this time. That was a shame to miss, but luckily following the tweets from both events led to much hilarity. I’ll probably watch the videos later too.

Highlights (not exhaustive)

  • FaceBook’s big public unveiling of their chassis: 6-pack. Every linecard w/ its own CPU, all loosely coupled; just Clos-in-a-(blue)-box. Have been looking forward to this one for a while now :)
  • FaceBook’s vision for networking: Treat the network as a bunch of servers & Automate Everything! … Where have I seen this before? (@lesliegeek) 😛
  • Fellow-Aussie Michael Payne’s presentation on trading networks, particularly the ‘arms race’ on reducing latency to new-breed trading exchange points/financial-markets and the level of tuning it encourages. Also the tweet-storms it started :)
  • Artur’s (Fastly) very frank (+hilarious) presentation on how they built a highly efficient Load-Balancing stack on the back of L3 ECMP, some mac/arp trickery + SecretSauce(PatentPending). A rough sketch of the ECMP idea follows this list.
  • Netflix’s preso on L0+L1 efficiency. Required to build a global CDN in less than 18 months…
  • Meeting and catching up with a few of my customers/champions from Cumulus, fellow-technologists and having open collaborative discussions over beers.
  • Oh and… Dorking out at FaceBook HQ 😛
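On that Fastly point: what follows is not their implementation (that’s the SecretSauce), and the names are made up; it’s just a minimal Python sketch of the basic L3 ECMP idea their load-balancer rides on, hashing each flow’s 5-tuple so a connection consistently lands on the same equal-cost next-hop.

```python
# Toy sketch of ECMP-style flow hashing -- NOT Fastly's implementation,
# next-hop names are hypothetical. Each flow's 5-tuple is hashed so every
# packet of a connection picks the same equal-cost next-hop / LB instance.
import hashlib

NEXT_HOPS = ["lb-1", "lb-2", "lb-3", "lb-4"]

def pick_next_hop(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Deterministically map a flow's 5-tuple onto one of the next-hops."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return NEXT_HOPS[digest % len(NEXT_HOPS)]

# Same flow always hashes to the same instance; different flows spread out.
print(pick_next_hop("203.0.113.7", 51512, "198.51.100.10", 443))
```

The interesting bit of the talk was everything this toy version ignores: what happens to in-flight connections when instances come and go (a naive modulo re-shuffles them), which is presumably where the mac/arp trickery earns its keep.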

At the time of posting, the presentation videos weren’t up yet, so I nabbed some of the slides (posted at the bottom) and will update w/ links when the videos are published.

OCP to get a modular-switch (chassis) platform! 6-pack.

Plenty of press on this one, but pretty cool to be onsite for its first public unveiling :) From my perspective, 6-pack is where things in OCP’s network hardware portfolio start getting really interesting. Facebook committed to releasing via OCP in “months”, which is awesome! Combined with OCP’s open boot loader (ONIE) and delivered to market via multiple ODM vendors, this gives #OpenNetworking an important new form-factor. But more than that, hopefully it becomes an open platform for future projects. Put another way, Clos network fabrics built from fixed-form-factor switches work well in a lot of places, but in some environments the extra cabling complexity isn’t worth it; 6-pack gives another choice and expands the use-cases where #OpenNetworking (aka disaggregated HW+SW) can be used. A rough cable-count sketch follows the aside below.

I’d like to take a step back for a moment to really take in what’s just happened. There are loads of engineering challenges in building modular devices, which require a large up-front engineering investment. Not a lot of end-users have that level of resources, or the motivation to use them for this type of thing, and then, who gives the result away? Then there are the economy-of-scale requirements to make the end-product economically viable. TL;DR: projects like these are typically only viable for the big-name vendors, and they have a vested interest in keeping things just the way they are. I really admire the “be bold” mantra FaceBook’s applied here. I can imagine it going something like this (it’s completely a work of satire/fiction, in case you get any ideas):

“We’d like a switch like this, mr vendor” – FB
“That’s nice kid, but really, you want <WhatWeHaveOnTheTruck>, and if not, too bad” – Vendor(s)
“Fine, we’ll do it ourselves, screw you guys…” – FB
<SometimeLater>
“Hey, you know what, other people probably want something like this too… If we open-source the designs they get what they want, and if people use it, any extra units shipped means our unit price goes down… Plus it will royally piss off that vendor(s). #EverybodyWins… Ok, let’s do that” – FB
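Satire aside, here’s the cabling point from earlier in rough numbers. This is purely my own back-of-the-envelope sketch (the switch counts are made up, nothing FB presented): a 2-tier leaf/spine Clos built from fixed-form-factor boxes needs a discrete cable from every leaf to every spine, while a chassis like 6-pack folds those cross-connects onto its backplane.

```python
# Back-of-the-envelope: inter-switch cables in a 2-tier leaf/spine Clos
# built from fixed-form-factor boxes. Purely illustrative, made-up numbers.
def clos_cables(leaves: int, spines: int, links_per_pair: int = 1) -> int:
    """Every leaf connects to every spine: leaves x spines x links cables."""
    return leaves * spines * links_per_pair

# e.g. 16 leaves x 4 spines = 64 discrete cables to run, label and maintain;
# inside a chassis those same cross-connects live on the backplane.
print(clos_cables(leaves=16, spines=4))  # -> 64
```

64 cables is manageable; scale the same approach across a row, or add another tier, and the patching/labelling overhead gets very real, which is exactly the niche a modular chassis fills.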

Thinking ahead…

What excites me most is where it could potentially lead next… 6-pack already has a roadmap suited for large DCs (obviously, look who spent the effort building it); 100g and beyond, and that’s cool and all… But if the community really gets behind it, what other modules would benefit from being directly connected to 16x40g backplane lanes per slot, and what use-cases would those modules unlock? Are there hardware startups that would balk at designing/building/bringing to market a whole modular chassis, but be able to deliver a line-card? Projects like this could really open the floodgates (and I hope they do!) <CompletelyUnfoundedSpeculationAlert> Just off the top of my head…

  • Campus/access: A cost-effective copper 1/10g line-card (w/ PoE would be really nice). Perhaps a Trident+ fabric card, if the economies-of-scale make it worthwhile, otherwise… just stay with the T2 one (which FB didn’t directly say… but come on..)
  • Edge-router: Perhaps a few slots are used w/ a new module to deal w/ full-internet route tables. Or perhaps even a ‘divide+conquer’ approach, by distributing portions of the fairly-large InternetRouteTable to fairly-small TCAMs in each of the ‘leaf’ linecards. A toy sketch of that idea follows this list.
  • DPI or other such things: Pluribus has done something along these lines in fixed-form-factor; maybe 6-pack gives them (or someone like them) a convenient platform to build on/in.
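To make that ‘divide+conquer’ bullet concrete, here’s a toy Python sketch. To be clear, this is pure speculation on my part (nothing FB presented, and all names/numbers are made up): shard the full table by prefix so each leaf linecard’s small TCAM only holds its slice.

```python
# Toy illustration of sharding a large route table across several small
# linecard TCAMs -- my own speculation, not a real 6-pack feature.
import ipaddress

def shard_for_prefix(prefix: str, num_linecards: int) -> int:
    """Pick a linecard shard from the top bits of an IPv4 destination prefix."""
    net = ipaddress.ip_network(prefix)
    top_byte = int(net.network_address) >> 24  # first octet of the prefix
    return top_byte % num_linecards

def build_shards(routes, num_linecards):
    """Partition (prefix, next_hop) routes into per-linecard TCAM tables."""
    shards = {i: [] for i in range(num_linecards)}
    for prefix, next_hop in routes:
        shards[shard_for_prefix(prefix, num_linecards)].append((prefix, next_hop))
    return shards

routes = [("10.0.0.0/8", "peer-a"), ("172.16.0.0/12", "peer-b"),
          ("192.0.2.0/24", "peer-c"), ("198.51.100.0/24", "peer-d")]
print(build_shards(routes, num_linecards=4))
```

A real design would have to handle very short prefixes and longest-prefix-match across shards, which is exactly the hard part this toy ignores.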

Introducing 6-Pack (photo gallery)

Other random pics / thoughts from the day…

Google’s OpenConfig project looks cool on face value; I need to look at that one a bit closer… They were careful to claim “it’s not lowest-common-denominator”… not sure on that bit.

Michael Payne’s latency budget and discussion on the finer points of time in trading networks was fun to watch. Never been exposed to that side of town, so quite cool to be given a glimpse.

Also couldn’t resist writing a tweet, physically on “the wall”, to my wife, then later tweeting its existence. Partially in lieu of the Valentine’s Day card I will inevitably forget about 😛 #YesImATotalDork.

  • Kicking the day off w/ Google. #NiceStart
  • Facebook bandwidth growth
  • Michael Payne discussing the finer points of time…
  • End-End latency budgeting in trading networks
  • Fellow Aussie, Michael Payne
  • Google’s OpenConfig
  • OpenConfig, for operators, by operators.
  • Operational Challenges @ Google
  • FB’s Physical Spine Racks, planned for growth: Start small, but plan big. :)
  • Simplifying Clos cabling w/ smart placement
  • Out with the old…
  • Fabric + physical layout; go hand-in-hand
  • BGP fabric for steady-state, automation to inject in changes #KISSPrinciple
  • Want a simple state machine? stick w/ simple actions
  • THIS!!!
  • NetFlix: Simplifying L1 w/ custom MTP cassettes and pre-built looms. #Nice
  • Help this guy.
  • This guy… knows how to build a solid deck.
  • Meeting new people, stirring up AM’s back in the office ;)
  • Dorking out…

