Cilium on AKS Too?!? And the MVP CNI of the Year is…

There’s significant momentum around expanding cloud-native networking, and the CNI opens up new ways to enhance the networking stack. Solo has been working with Cilium as one of the CNI options built into the Gloo Platform for our users.

Using the CNI layer also helps our customers strengthen their security footprint across multi-cloud environments. For those multi-cloud customers, we’re especially excited to see CNI adoption grow with the recent launch of Cilium CNI support on Azure Kubernetes Service!

So what does that mean for you? 🤔 YOU GET TO BUZZ AROUND WITH eBPF IN KUBERNETES 🎉🎉🎉

The Cilium CNI is built on top of eBPF and provides enhanced, optimized networking through eBPF-based programs running in the kernel.

Gloo Network is the platform for Multi-cloud

Gloo Network provides a powerful Cilium CNI (based on eBPF technology) for Kubernetes clusters. This enables companies to leverage powerful network filtering and observability, either at the Kubernetes layer or as part of a broader service mesh and application networking architecture.

Gloo Network is part of the Gloo Platform, a networking stack covering everything from CNI to service mesh and API gateways! You can find out more about Gloo Network here!

The benefit of using the Cilium CNI with AKS

One of the largest benefits of using the Cilium CNI is gaining the eBPF magic in cloud Kubernetes. More specifically, packet processing is improved and latency is reduced significantly, allowing for more performant workloads.

Directly from the AKS Documentation, the specific benefits they call out are:

  • Functionality equivalent to existing Azure CNI and Azure CNI Overlay plugins
  • Faster service routing
  • More efficient network policy enforcement
  • Better observability of cluster traffic
  • Support for larger clusters (more nodes, pods, and services)
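
To make the network-policy point above concrete: a Cilium-backed cluster enforces standard Kubernetes NetworkPolicy objects in the eBPF datapath rather than through iptables rules. Here’s a minimal sketch of a default-deny ingress policy — the demo namespace and policy name are hypothetical, and this assumes kubectl is pointed at a Cilium-enabled cluster:

```shell
# Hypothetical example: a default-deny ingress policy for the "demo"
# namespace. On a Cilium-enabled cluster this is enforced by eBPF
# programs rather than iptables chains.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF
```

Cilium also ships its own CiliumNetworkPolicy CRD for richer, identity-aware rules, but plain NetworkPolicy objects like the one above work unchanged.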

Getting started with Cilium on Kubernetes in the Cloud

AKS

Before getting into it, here’s a visual representation of an AKS Node with the Cilium CNI.

[Image: visual representation of an AKS Node with the Cilium CNI]
Since we’re mainly here for AKS, let’s start there. Be aware that this feature is in preview, which means it’s not meant for production use; you can read more about that at the beginning of this Azure document on AKS. The process is a bit involved: because this is a preview feature, it requires updating your Azure CLI, and it only works for new AKS clusters running Linux worker nodes, which you can read more about here.

# Check your Azure CLI version, which must be 2.41.0 or later
az version

# Add the aks-preview extension and update it
az extension add --name aks-preview
az extension update --name aks-preview

# Register the CiliumDataplanePreview feature and verify its state
az feature register --namespace "Microsoft.ContainerService" --name "CiliumDataplanePreview"
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/CiliumDataplanePreview')].{Name:name,State:properties.state}"

# Once registered, refresh the provider registration
az provider register --namespace Microsoft.ContainerService
# Create a resource group and pick a location
az group create --name ciliumresourcegroup --location eastus2

# Create a VNET in that resource group and location, then carve the
# node and pod subnets out of the VNET's address space
az network vnet create -g ciliumresourcegroup --location eastus2 --name ciliumvnet --address-prefixes 10.0.0.0/8 -o none
az network vnet subnet create -g ciliumresourcegroup --vnet-name ciliumvnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
az network vnet subnet create -g ciliumresourcegroup --vnet-name ciliumvnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
# Create the AKS cluster; be sure to specify the --enable-cilium-dataplane
# flag, and substitute your own subscription ID
az aks create -n ciliumsolo01 -g ciliumresourcegroup -l eastus2 \
  --max-pods 250 \
  --node-count 3 \
  --network-plugin azure \
  --enable-cilium-dataplane \
  --vnet-subnet-id "/subscriptions/YOURSUBID/resourceGroups/ciliumresourcegroup/providers/Microsoft.Network/virtualNetworks/ciliumvnet/subnets/nodesubnet" \
  --pod-subnet-id "/subscriptions/YOURSUBID/resourceGroups/ciliumresourcegroup/providers/Microsoft.Network/virtualNetworks/ciliumvnet/subnets/podsubnet"

After deploying the AKS cluster, you’ll see a massive JSON output detailing the cluster setup. If you look at it closely, you’ll notice that the eBPF dataplane is set to Cilium.
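
Before running any kubectl commands against the new cluster, you’ll want to pull its credentials into your kubeconfig (using the cluster and resource group names from the example above):

```shell
# Merge the new cluster's credentials into ~/.kube/config so that
# kubectl targets the Cilium-enabled AKS cluster
az aks get-credentials -n ciliumsolo01 -g ciliumresourcegroup
```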


However, we can further verify this by running kubectl get pods -A, which will show all of the Cilium pods running in the kube-system namespace, indicating that Cilium is the CNI running in this cluster!


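If you’d like a more targeted check than scanning the full pod list, something like the following works — the label selector is the one the Cilium agent DaemonSet conventionally uses, and the second command assumes you have the separately installed Cilium CLI available:

```shell
# List just the Cilium agent pods by their conventional label
kubectl -n kube-system get pods -l k8s-app=cilium

# Assumes the Cilium CLI (a separate install from the Cilium project)
# is present; waits until the agent reports healthy on every node
cilium status --wait
```
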
Civo

With Civo Cloud, a very Kubernetes- and compute-focused cloud platform, you can easily spin up Kubernetes clusters in under 90 seconds! To get one up and running with Cilium, run the following command:

# Use Civo CLI v1.0.41
civo kubernetes create ciliumcivo -p cilium
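
Once the cluster is up, you can pull its kubeconfig and confirm the Cilium agents are running — a sketch using the Civo CLI’s config subcommand and the cluster name from the example above:

```shell
# Merge the new cluster's kubeconfig into ~/.kube/config
civo kubernetes config ciliumcivo --save

# Confirm the Cilium agent pods are up in kube-system
kubectl -n kube-system get pods -l k8s-app=cilium
```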

GKE

GKE has had Cilium CNI capabilities (as GKE Dataplane V2) for some time now, and running the following command will have it up in your cluster in no time.

gcloud container clusters create ciliumk8s \
    --enable-dataplane-v2 \
    --enable-ip-alias \
    --release-channel rapid \
    {--region us-central1 | --zone us-central1-a}
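
One wrinkle worth noting: GKE Dataplane V2 runs Cilium as the anetd DaemonSet rather than as pods named cilium-*, so verification looks a little different. A sketch, assuming your gcloud credentials are configured and you created the cluster in the zone shown above:

```shell
# Fetch credentials for the new cluster (zone/region must match creation)
gcloud container clusters get-credentials ciliumk8s --zone us-central1-a

# Dataplane V2 packages Cilium as the "anetd" DaemonSet in kube-system
kubectl -n kube-system get pods | grep anetd
```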

Take our Cilium Workshop!

Interested in learning more about how the Cilium CNI works? Check out our Cilium workshop over at Solo Academy.

Final Thoughts

While it might be easy enough to manually deploy each cloud Kubernetes cluster with Cilium, each method means managing these clusters and their respective networks individually, which may not scale well for production usage. Additionally, the versions and feature sets are inconsistent across cloud providers! For example, you can use Hubble with Civo Kubernetes but not with AKS.

So how would we solve this problem? Gloo Network!

As mentioned previously, Gloo Network provides a powerful Cilium CNI (based on eBPF technology) for Kubernetes clusters. It delivers a consistent experience and makes it easier to manage, consume, control, and lifecycle Cilium across your multi-cloud Kubernetes strategy.

Want to learn more? Come chat with us!