This article is written by Jason Quek, Global CTO at Devoteam G Cloud
Kubernetes autoscaling
You have heard of HPA (Horizontal Pod Autoscaler) and VPA (Vertical Pod Autoscaler), right? The next part is what got me excited: expanders! This, honestly, is the reason I go to KubeCon: to hear about things like this, which are genuinely hard to follow given the crazy pace of advancements in open source and on Google Cloud that is extremely hard to keep up with all the time.
Expanders are an addition to the Cluster Autoscaler, so that when your Kubernetes cluster scales up, it can also choose, based on code and KPIs, what to scale up on. Not just CPU, memory, or instance size, but cost! This brings FinOps to a whole new level. Before the introduction of expanders, the only way to really do this was to use taints and tolerations pointing to nodes labelled in a certain way, to choose where to deploy. This concept allows you to codify your own user-specified expansion behavior. Of course this requires Golang, but there’s a whole “Should DevOps code?” topic that is a separate discussion. Imagine having a checklist like the following:
- First, choose a node closest to my user spike location to scale up (for latency optimization, of course)
- Next, choose a node which has a committed use discount on it (to optimize on my Committed Use Discounts on Google Cloud)
- Then choose a Spot VM, because my workload is fault tolerant.
- Finally, choose a VM in a location with the lowest CO2 emissions.
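The checklist above can be sketched as a tiered scoring function. To be clear, this is a hypothetical, self-contained illustration of the selection logic only: a real custom expander would implement the Cluster Autoscaler’s gRPC Expander service and derive these fields from node group labels, and every name here (`candidate`, `score`, `pickBest`) is an assumption of mine, not an actual API.

```go
package main

import (
	"fmt"
	"sort"
)

// candidate is a hypothetical view of one expansion option; in a real
// gRPC expander these fields would come from node group labels in the
// request the Cluster Autoscaler sends.
type candidate struct {
	Name        string
	LatencyMs   int     // estimated latency to the user spike location
	HasCUD      bool    // covered by a Committed Use Discount
	IsSpot      bool    // Spot VM, fine for fault-tolerant workloads
	CO2GramsKWh float64 // grid carbon intensity of the region
}

// score ranks a candidate against the checklist: latency dominates,
// then CUD coverage, then Spot, then carbon intensity. Lower is better.
func score(c candidate) float64 {
	s := float64(c.LatencyMs) * 1000
	if !c.HasCUD {
		s += 500
	}
	if !c.IsSpot {
		s += 100
	}
	s += c.CO2GramsKWh
	return s
}

// pickBest returns the candidate with the lowest (best) score.
func pickBest(cands []candidate) candidate {
	sort.Slice(cands, func(i, j int) bool {
		return score(cands[i]) < score(cands[j])
	})
	return cands[0]
}

func main() {
	best := pickBest([]candidate{
		{Name: "europe-west1-spot", LatencyMs: 20, IsSpot: true, CO2GramsKWh: 110},
		{Name: "europe-west1-cud", LatencyMs: 20, HasCUD: true, CO2GramsKWh: 110},
		{Name: "us-central1-cud", LatencyMs: 90, HasCUD: true, CO2GramsKWh: 390},
	})
	fmt.Println(best.Name) // → europe-west1-cud
}
```

Note how the weights encode the ordering of the checklist: a close-by CUD-covered node beats a close-by Spot node, and both beat a distant one, no matter how green or cheap it is.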
With custom gRPC expanders this is now possible, and DevOps engineers are able to optimize running workloads for user experience, FinOps and sustainability. I cannot wait for this release, to start building custom expanders for our customers over at Devoteam G Cloud and make use of this feature.
Cilium and eBPF
What can I say about this that everyone doesn’t already know? What if you want observability of your services, but don’t need all the features of a service mesh, or all the training that comes with one? With Cilium, eBPF and Hubble, you can get that straight from the kernel.
Of course there are more features, such as Cilium Service Mesh (sidecar-free is possible!) and Cilium + Istio, which were covered in the talk by the awesome Thomas Graf (founder of Cilium and co-founder of Isovalent) and Liz Rice (Chief Open Source Officer at Isovalent). By the way, a Chief Open Source Officer could eventually be needed at Devoteam G Cloud too, to build a good open-sourcing strategy.
With GKE Dataplane v2, Google is also contributing more to the Cilium project, moving faster than its competitors and seeing the future. We have customers that specifically chose Google Cloud due to the availability of GKE Dataplane v2, to get it as a managed service. We at Devoteam G Cloud have also been experimenting and developing a process that uses Cilium to migrate on-prem Kubernetes to GKE in a seamless, no-downtime manner. More on that from our accelerators coming in the future.
Swag and KubeCon booths
My final stop was the KubeCon booths, to understand the vast and growing ecosystem of SaaS services being offered to Kubernetes operators. Understanding what is best in the market is essential to advising our customers appropriately. It has been amazing to see Kasten by Veeam becoming the de facto standard for Kubernetes backup and recovery, as well as other cool partners such as CockroachDB, InfluxDB, Aqua and Snyk really growing and covering many interesting use cases. As CTO of Devoteam G Cloud, it is important for me to identify which companies to partner with to make our customers as efficient, secure and cloud native as possible.
Do you want to talk to me or one of our experts?