In the early 2000s, a major shift was underway as a new world of “scale-out” distributed computing threatened the “scale-up” status quo. Enterprise infrastructure was moving away from the gigantic and expensive Sun SPARC servers that had ruled for so long to a new form factor. The movement didn’t have a name yet, but it had some critical technology building blocks — the Linux operating system, x86 architecture, cheaper hardware, hypervisors and more.
If you are old enough to have attended events like COMDEX, then the IT industry’s biggest trade show, you remember the early debates on what to call this nascent world of distributed computing. All sorts of impressive-sounding phrases emerged — Grid Computing, Utility Computing, Liquid Computing, On-Demand Computing and more — but none ultimately stuck. Still, if nothing else, it was a creative time for technology marketers at systems vendors.
Among this mishmash of hopeful terms, the movement got a name that stuck: Cloud. AWS and VMware became its first vendor poster children. And the rules of not only datacenter infrastructure but developer workflow would be completely rewritten as clusters of Linux boxes began running the world’s most popular services.
Another murky juncture emerges
It feels like we’re in a similar spot today, where there’s been a lot of churn around new cloud-native infrastructure pieces, but it’s tough to figure out where it’s all heading. It’s also missing a name, but clearly something big is brewing.
It has been nearly 10 years since the release of Docker and eight since the release of Kubernetes, and there are enough cloud-native graduated and incubating projects to make your head spin. But amid this shift in application design toward API-driven microservices and the rise of Kubernetes-based platform engineering, networking and security have struggled to keep up.
In Kubernetes adoption speak, we’ve shifted from “Day 1” challenges of getting started to “Day 2” challenges of making K8s infrastructure easier for platform teams to operate and scale.
Kubernetes breaks traditional networking and security. Platform teams have spent nearly a decade scrambling to piece together bespoke solutions to the explosion of east-west traffic; to new requirements for workload- and API-layer visibility to support zero-trust security and observability; and, not least, to the need to integrate legacy networks and workloads running outside Kubernetes. At bottom, it’s about services communicating with each other over distributed networks atop a Linux kernel that was never designed for cloud-native workloads in the first place.
This is really hard stuff for platform teams and very expensive for enterprises footing the bill for engineers to figure it all out.
In the absence of a single clean category descriptor, every cloud-native conference is peppered with different terms describing the same basic problem domain: Kubernetes Networking and Security, Service Mesh, Cloud Native Networking, Application Networking, Secure Service Connectivity and more.
“I think a key takeaway is that as applications shift toward being a collection of API-driven services, the security, reliability, observability and performance of all applications becomes fundamentally dependent on this new connectivity layer,” said Dan Wendlandt, CEO and co-founder of Isovalent. “So whatever we eventually call it, it’s going to be a critical layer in the new enterprise infrastructure stack.”
Teaching the Linux kernel new tricks
Wendlandt and his startup Isovalent — which just secured $40 million in Series B funding from lead investor Thomvest and strategic investor Microsoft, joining existing investors Google, Cisco and Andreessen Horowitz — are all-in on this new connectivity layer as the future of the cloud-native stack.
“We founded Isovalent five years ago because we believed that this new layer would emerge,” said Wendlandt. “Our core bet was that an (at the time) little-known Linux kernel technology called eBPF held the keys to building this new layer ‘the right way.’ eBPF is an incredibly powerful yet complex Linux kernel capability co-maintained by Isovalent and Meta. You can mostly think of eBPF as a way to ‘teach the Linux kernel new tricks,’ in a way that is fully compatible with whatever mainstream Linux distribution you already use.”
Because eBPF operates at lower Linux layers and isn’t tied to specific hardware or hypervisor technologies, it enables a new layer that is universally valuable to cloud-native use cases. eBPF co-creator Daniel Borkmann, who works at Isovalent, describes eBPF as “little helper minions.”
But eBPF is so low level that platform teams without the luxury of Linux kernel development experience need a friendlier interface.
Enter Cilium, created by Isovalent co-founder and CTO Thomas Graf. Cilium bundles eBPF-based networking, security and observability code with easier-to-use constructs, like YAML-based rules, JSON-based observability and more. All three major cloud providers have singled out Cilium as the new de facto standard for Kubernetes networking and security.
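To make this concrete, here is a minimal sketch of what those YAML-based rules look like in practice — a hypothetical CiliumNetworkPolicy (the pod labels, policy name and path are made up for illustration) that uses eBPF in the kernel to enforce an API-layer rule, something a traditional IP/port firewall cannot express:

```yaml
# Hypothetical example -- names and labels are illustrative, not from the article.
# Allows only HTTP GET requests to /public/* on port 80, from pods labeled
# app=frontend to pods labeled app=backend; enforced via eBPF in the kernel.
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: "GET"
                path: "/public/.*"
```

The point is the abstraction: the platform team writes declarative Kubernetes-style YAML, and Cilium translates it into eBPF programs attached in the kernel — no kernel development experience required.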
“eBPF and Cilium are critical technologies in a new infrastructure layer that is emerging,” said Martin Casado, General Partner at Isovalent investor Andreessen Horowitz and co-founder of Software-Defined Networking pioneer Nicira, acquired by VMware in 2012 for $1.26B. “With this new layer, connectivity, firewalling, load-balancing and network monitoring are handled within the Linux kernel itself, allowing for much richer context for both security and observability, and ensuring consistent visibility and control across all types of underlying cloud infrastructure. Isovalent is uniquely well-positioned to be the leading company for this critical new layer.”
If history plays out the same way again, this new category of cloud-native connectivity will eventually get a name, one or more vendors will make investors very rich, and enterprises will have a much easier time making sense of the cloud-native future in which they already find themselves.
Disclosure: I work for MongoDB but the views expressed herein are mine.