
Takeaways from KubeCon EU 2023 – part 1

Are you ready to be transported into the world of cloud-native computing? Look no further than KubeCon, the ultimate tech conference on Kubernetes and its ecosystem, organised each year in a different European city. Get the inside scoop on the excitement from day one of this year's event, held 18-21 April 2023 in Amsterdam, brought to you by our global Devoteam G Cloud CTO, Jason Quek.

✏️ Written by Jason Quek, Global CTO at Devoteam G Cloud

It’s that time of the year again, when Kubernetes enthusiasts in Europe meet up to learn about new Cloud Native Computing Foundation (CNCF) projects, about startups offering managed versions of those projects, and from companies that have used and battle-tested them and now share their experiences. It is quite amazing how KubeCon has evolved to feature more talks on the ecosystem around Kubernetes than on Kubernetes itself.

Solving similar challenges & learning from each other

That speaks to the maturity of the orchestrator, but it also points to a world where multiple CNCF projects aim to solve similar problems, creating an inflection point for CTOs and architects looking to incorporate the right technology into their tech stack. This conference is where they gather to understand what to try and test out, and to learn from each other instead of relearning the same mistakes.

For Devoteam G Clouders like myself, the goals are to:

  • Understand the roadmap for Kubernetes: what the community is aiming to build in 2023 and beyond, the slowdown in releases, and what is up for deprecation. This is important to properly advise customers who may have a hard time navigating the many different ways to isolate workloads, run a multi-tenant cluster, or decide when to transition to the Gateway API, for example.
  • Look at the CNCF incubating projects, understand their industry adoption and decide which ones to trial ourselves for recommendation in our Devoteam TechRadar. Even though Devoteam G Cloud has a preference for Google Cloud technologies, a hybrid, multi-cloud approach is becoming more common for some of our larger customers. Often that comes with an open-source strategy to manage that hybrid, multi-cloud environment, so that workloads can move seamlessly across different clouds. And open-source software is taken more seriously once it is accepted into the CNCF as an incubating project, as that is where an active community debates and stewards the project towards CNCF graduation. This makes it very interesting for us to stay on the cutting edge of technology.
  • Meet up with customers at our yearly KubeCon party, where we discuss their plans for the year to take their business to the next step, as well as any other interesting projects they might want to try out to meet a certain need.
  • Check out the managed options for the different CNCF projects and the startups and large enterprises with a booth to showcase their products. It was interesting, for example, to compare Datadog, a well-established name in the observability space through the Datadog agent, with Groundcover, an up-and-coming observability provider working through eBPF at the kernel level. Customers very often ask for recommendations on what to buy, as a managed version comes with support, and that is vital for any SLA commitments, to make sure the underlying services can carry an SLA as well.
  • Meet up with our Devoteam G Cloud colleagues, like Anis, our GCVE expert in France. This was his first KubeCon, and it was great to meet up and soak in the atmosphere.

It was great meeting and catching up with the rest of the team from Belgium and Sweden as well.

The battle of Service Meshes

With this in mind, the first theme of KubeCon for me was the battle of service meshes. With Istio coming from Google, now an official CNCF incubating project, and available as a managed version called Anthos Service Mesh, it seems like a no-brainer; it is also listed in our Devoteam TechRadar as a technology to adopt. If you would like a simpler service mesh without extra features such as circuit breaking, rate limiting, or built-in ingress and egress, you could go with Linkerd, which has far fewer custom resource definitions to worry about.
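
To make that feature difference concrete, here is a minimal sketch of how circuit breaking is typically expressed in Istio: a DestinationRule with connection-pool limits and outlier detection. The service name, namespace and thresholds below are invented for illustration; the manifest is built as a Python dict and printed as YAML.

```python
# Rough sketch of an Istio circuit-breaking configuration (DestinationRule).
# The host, namespace and numbers are hypothetical, chosen only to illustrate the shape.
import yaml  # PyYAML

destination_rule = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "DestinationRule",
    "metadata": {"name": "orders-circuit-breaker", "namespace": "shop"},
    "spec": {
        "host": "orders.shop.svc.cluster.local",
        "trafficPolicy": {
            # Cap the connections/requests the sidecars will allow towards the service.
            "connectionPool": {
                "tcp": {"maxConnections": 100},
                "http": {"http1MaxPendingRequests": 10},
            },
            # Eject endpoints that keep returning 5xx responses for a while.
            "outlierDetection": {
                "consecutive5xxErrors": 5,
                "interval": "30s",
                "baseEjectionTime": "1m",
            },
        },
    },
}

print(yaml.safe_dump(destination_rule, sort_keys=False))
```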

However, they both rely on the concept of sidecars. To bring everyone up to speed: a sidecar is a container attached to each pod at startup, which listens for all configuration made to Istio or Linkerd and reacts accordingly to implement it. This means 1 pod = 1 sidecar.
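
As a simplified sketch of what that looks like in practice, an injected pod carries the application container plus one proxy container; the names and images below are illustrative, and real injected pods also carry init containers and a lot of proxy configuration.

```python
# Simplified sketch of a pod after mesh sidecar injection: one application
# container plus one proxy container per pod (1 pod = 1 sidecar).
# Pod name, labels and images are hypothetical.
import yaml  # PyYAML

injected_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "orders-7d4b9", "labels": {"app": "orders"}},
    "spec": {
        "containers": [
            {"name": "orders", "image": "example.com/orders:1.0"},  # the application
            {"name": "istio-proxy", "image": "istio/proxyv2"},      # the injected Envoy sidecar
        ]
    },
}

print(yaml.safe_dump(injected_pod, sort_keys=False))
```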

Now, sidecars take resources, and for complicated configurations they can also impact the speed of inter-service communication.

In comes the idea of sidecar-less service meshes, which Cilium Mesh was supposed to deliver by leveraging eBPF to provide service discovery and routing at the kernel level. Liz Rice gave the high-level presentation “Keeping It Simple: Cilium Networking for Multicloud Kubernetes” (see this PDF).

The idea is that Cilium, through eBPF, already knows where all the services are across all the clusters, so letting each service know about the routing is the logical next step, and policies can also be programmed into the eBPF layer to control which pods may talk to each other, just as a service mesh is typically provisioned to do.
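
A minimal sketch of the kind of policy Cilium enforces in eBPF rather than in a per-pod proxy might look like the following; the labels and namespace are hypothetical, but the resource shape is Cilium's own network policy CRD.

```python
# Sketch of a CiliumNetworkPolicy: only pods labelled app=frontend may reach
# pods labelled app=orders. Enforcement happens in the eBPF datapath, not in a sidecar.
# Namespace and label values are invented for illustration.
import yaml  # PyYAML

cilium_policy = {
    "apiVersion": "cilium.io/v2",
    "kind": "CiliumNetworkPolicy",
    "metadata": {"name": "orders-allow-frontend", "namespace": "shop"},
    "spec": {
        "endpointSelector": {"matchLabels": {"app": "orders"}},
        "ingress": [
            {"fromEndpoints": [{"matchLabels": {"app": "frontend"}}]}
        ],
    },
}

print(yaml.safe_dump(cilium_policy, sort_keys=False))
```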

What was interesting was the rebuttal from Buoyant’s Zahari Dichev, who presented “Life Without Sidecars – Is eBPF’s Promise Too Good to Be True?” (see more info here).

He raises a good point about identity-based policies, a concept which exists above the networking layer, as well as about the advantages of the sidecar: it limits the blast radius, maintenance is less risky because changes are confined to the sidecars rather than the kernel, the security boundary is very clear, and it is a single source of truth for traffic in and out of the container.
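
To illustrate the identity point in sidecar-mesh terms, here is a minimal sketch of an Istio AuthorizationPolicy that grants access based on the caller's workload identity (its service account, carried in the mTLS certificate) rather than on an IP address; all names are hypothetical.

```python
# Sketch of an identity-based policy: traffic to the "orders" workload is allowed
# only from callers presenting the mTLS identity of the "frontend" service account.
# Namespace, service account and workload names are invented for illustration.
import yaml  # PyYAML

authz_policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "orders-allow-frontend-identity", "namespace": "shop"},
    "spec": {
        "selector": {"matchLabels": {"app": "orders"}},
        "action": "ALLOW",
        "rules": [
            {
                "from": [
                    {
                        "source": {
                            # The caller's identity, derived from its service account via mTLS.
                            "principals": ["cluster.local/ns/shop/sa/frontend"]
                        }
                    }
                ]
            }
        ],
    },
}

print(yaml.safe_dump(authz_policy, sort_keys=False))
```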

This was furthered by a debate between IBM, Solo.io, Google and Microsoft in a session named “Future of Service Mesh – Sidecar or Sidecarless or Proxyless”.

Ground concept in parallel

For me, the discussions were great, but a few points were very clear. Coming from a software engineering background with experience in distributed systems, I find it important to build a ground concept and run that ground concept in parallel. That ground concept should be as repeatable and parallelisable as possible, and able to understand instructions on its own from a golden source of truth. That ground concept is the sidecar, with its ability to pull configuration and interpret it so that it behaves in a deterministic manner. It also gives me the ability to debug into it and find issues at a higher abstraction level rather than at the kernel level. Even today, upskilling developers to debug Envoy filters and understand what is missing from a service mesh configuration is a steep learning curve coming from vanilla Kubernetes.

Istio is also coming out with the concept of the ambient mesh, which uses node-level agents for communication rather than sidecars. This could eventually offer a path to migrate over to a sidecar-less service mesh, but not anytime soon.
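
As a minimal sketch under current Istio conventions, opting a namespace into ambient mode is done with a namespace label instead of sidecar injection, after which traffic is handled by the per-node agents; the namespace name below is hypothetical, and the label reflects Istio's ambient dataplane switch at the time of writing.

```python
# Sketch of a namespace opted into Istio ambient mode: no sidecar injection,
# traffic is handled by node-level agents instead. Namespace name is hypothetical.
import yaml  # PyYAML

ambient_namespace = {
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "name": "shop",
        "labels": {"istio.io/dataplane-mode": "ambient"},
    },
}

print(yaml.safe_dump(ambient_namespace, sort_keys=False))
```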

Stay tuned for the next part of this blog series, where I will write about my next takeaway from KubeCon EU 2023: Backstage, one of the most exciting projects at the moment for organising a Cloud Center of Excellence.