Cover image for Joining Otterize and the modern network stack
David G. Simmons

Posted on • Originally published at otterize.com

Joining Otterize and the modern network stack

Hi! 👋🏼 I'm new here.

I joined Otterize about 3 weeks ago as the Head of Developer Relations and I could not be more excited to be here.

Some of you who know me might be asking the question: "Wait, I thought you were a data nerd and an IoT junkie? What are you doing going into Kubernetes?"

Absolutely fair question! Let me explain a bit.

First, Some history

Way back at the dawn of time, when I first began my career in tech I worked for a US Government research lab. I wrote network intrusion detection software. It was called The NERD (Network Event Recording Device) and it was deployed site-wide in what was (and probably still is) one of the most complicated, and highly secure, computing environments in the world.

We wanted to monitor all traffic and report anomalies in real time. Most of the reporting traffic was over UDP (syslogd), but no traffic of any kind was allowed to traverse the network unencrypted. We used Kerberos with 2048-bit keys to encrypt all TCP traffic, but there was no such beast for UDP. Until I wrote it. I also wrote a bunch of extensions to syslogd to further secure it with ingress and egress filters.

Our group also maintained detailed network maps of all servers and services and what they were connected to, which servers talked to which servers, etc. These network maps were always hopelessly out of date and we had no real way to automatically generate them, so we just sort of hoped we were right most of the time. Invariably we were not.

Fun story: We were absolutely convinced, based on our network topology maps, that one of the most sensitive networks was air-gapped and therefore could not be penetrated from the outside. Given the nature of the data we had on this network, it was absolutely critical that it remain secure.

To test this theory, we asked Tsutomu Shimomura to try to gain access to it. We were certain that he would be unable to do so. Imagine our level of panic when, 5 minutes into the test, he had taken over a terminal we were using on this "air-gapped" network.

It was time to update the network maps! And to rethink our network access policies.

This was all back in the mid 1990s when everything was just migrating from mainframes to client-server architectures. Everything was either a client or a server on the network, and each one had its own firewall, security policies, etc. It was a nightmare to manage. We relied on individual groups to report to us in the NOC (Network Operations Center) when they added a computer, or a service, to the network. They rarely did, so we were always playing catch-up.

The Cloud has entered the chat

As client-server computing, and maintaining your own data center or server farm, became too costly, we saw the migration to the cloud. In the beginning, "The Cloud" really was just "someone else's computers." We trusted someone else to run the data center and the servers, and they sold us time on those servers to run our workloads.

In the early days of cloud computing, this was revolutionary. It was a huge cost savings for companies, and it allowed them to focus on their core business rather than on maintaining their own data centers. They could off-load the expensive part to cloud providers, but they essentially had to manage all of the services — and security for them — themselves. It was better, but still not ideal.

The Cloud has evolved

It turns out that just moving to "other people's computers" wasn't really enough to realize the true power of cloud computing. And for the massive scale that many modern applications require, it certainly wasn't enough.

It was time for another revolution in computing. This time, it was the revolution of containers and container orchestration. This is where Kubernetes comes in.

Containers had been around for a long time, but they exploded in popularity with the release of Docker. Docker made it easy to create, manage, and deploy applications in containers, but managing them all was still a lot of work: you had to manage the container images, the container registries, the container runtimes, the container networks, and more.

Kubernetes was the answer to this problem. (Well, mostly.) It was a way to manage all of the containers, all of the services, all of the networks, all of the storage, all of the security, and all of the everything else. It was a way to manage all of the things all at once. It was also the way to introduce scaling to the container architectures, allowing you to scale both up and down as demand required.
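To make that scaling story a bit more concrete, here's roughly what asking Kubernetes to run and scale a containerized service looks like. This is a purely illustrative sketch; the names and image are hypothetical:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas of this
# container running, rescheduling them if pods or nodes fail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          ports:
            - containerPort: 80
```

Scaling up or down is then a one-liner, e.g. `kubectl scale deployment web --replicas=10`, or fully automatic with a HorizontalPodAutoscaler.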

We have now moved from client-server to cloud architectures, and have virtualized the data centers and services using Kubernetes. Progress!

But here's the thing: The basics of it all have not really changed. The physical machines have been virtualized. The networking between them has been virtualized. The storage has been virtualized. The services have been virtualized. The security has been virtualized. But it's all still there, and it all still has to be managed. Somehow.

And guess what? All the problems with all of those things are still there as well. Managing the network access between servers and services. Managing the permissions and properties for databases, streaming data services (like Kafka), and everything else. Yes, it's easier now, and there are much better tools for doing it, but the problems are still there.

So why am I here?

Yes, we're finally getting to that. First and foremost, I'm here at Otterize because, after talking to the founders, I had such enormous respect and admiration for them that I really wanted to join them. Second, I had been wanting to work with Uri Sarid for a very long time. Finally, I was thoroughly convinced that what they were building was both technologically brilliant and absolutely necessary.

One of the things that absolutely sealed the deal for me was the origin story of the company name Otterize. The founders were brainstorming names and Uri suggested "Otterize" as a joke. But when they looked up the meaning of the word "otterize" they found that it actually means "to make something more delightful, more fun, or more charming." And that really resonated with me.

Remember, I started with some history about my experience in the networking and data services area. So while I don't understand Kubernetes well (yet), the underlying principles are much the same as in my previous experience, and almost all of the problems I identified in that history are addressed by Otterize and Intent Based Access Control (IBAC). In fact, it goes further and solves several additional problems that I hadn't covered before. It's a brilliant solution to a set of very real problems.
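To make IBAC a bit more concrete: with Otterize, a client declares which services it intends to call, and access policies are generated from those declarations. Here's a sketch of such a declaration, based on Otterize's ClientIntents resource (the exact apiVersion, service names, and fields here are illustrative assumptions and may differ between versions):

```yaml
# Hypothetical ClientIntents sketch: "the checkout service
# intends to call the orders service."
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: checkout
spec:
  service:
    name: checkout
  calls:
    - name: orders
```

From declarations like this, network policies (or Kafka ACLs, and so on) can be generated automatically, so the "network map" is derived from declared intents rather than from hoped-for documentation.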

So here I am, stepping out of my tech comfort zone yet again. But I'm stepping sort of back into my own history as well. I started my career in networking and security, so in a way Otterize is bringing me full circle. And I'm excited to learn from and work with Uri and the other brilliant folks here. I'm confident that together we can build something truly transformational that makes managing data services and security in the cloud era much easier and more secure.

What now?

For me, it's on to learning more about Kubernetes (since it's not been an area of expertise for me). I'll also be building more demos and tutorials (and we already have some great ones, so go check those out!). And I'll be back on the road, coming to a DevOps, Platform Engineering, or Kubernetes conference near you soon!

If you're looking for a speaker, I'd love to hear from you via email, Twitter, LinkedIn, or Mastodon.
