What is Cilium?

Cilium is an open source project that enables networking, security, and observability for Kubernetes clusters and other containerized environments.

Cilium is built on a technology called eBPF, which makes it possible to insert network control logic, security controls, and observability features directly into the Linux kernel. Cilium uses eBPF to provide high-performance networking, multi-cluster and multi-cloud capabilities, encryption, load balancing, and network security features.

What Does Cilium Provide in a Kubernetes Cluster?

Most Cilium capabilities are provided by the Cilium container network interface (CNI) plugin, which is fully compatible with existing kube-proxy models. The plugin provides an identity-based implementation of the Kubernetes NetworkPolicy resource, which isolates pods at Layers 3 and 4 of the network based on workload identity rather than IP addresses.

The Cilium plugin extends the Kubernetes NetworkPolicy resource with:

  • A CustomResourceDefinition (CiliumNetworkPolicy) that enables policy control, including enforcement of Layer 7 policies on ingress and egress for application protocols such as HTTP and Kafka (see the sketch after this list).
  • Egress support for classless inter-domain routing (CIDR) to enable secure access to external services.
  • Policy enforcement for external headless services to limit the set of Kubernetes endpoints configured for the service.
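
For example, a Layer 7 policy expressed through that CustomResourceDefinition might look like the following sketch, which allows only GET requests to /public on port 80 from frontend pods. The policy name, labels, and path are illustrative, not taken from a real deployment:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-http-get
spec:
  endpointSelector:
    matchLabels:
      app: myservice       # pods the policy applies to (hypothetical label)
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend      # only traffic from these pods is considered
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:              # Layer 7 rule: restrict to GET /public
        - method: GET
          path: /public
EOF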

In addition, the Cilium plugin provides a ClusterIP implementation that enables distributed load balancing of traffic between pods.

Enabling communication with pods

In Kubernetes, containers are deployed in units called pods. A pod contains one or more containers that can be accessed through a single IP address. In Cilium, each pod gets its IP address from the node prefix of the Linux node the pod is running on. Cilium lets you define a network security policy to ensure that pods can only communicate with pods they require access to.
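
As a minimal sketch, the following identity-based policy lets only pods labeled app=frontend connect to pods labeled app=backend (the names and labels are illustrative). Because Cilium matches on workload identity rather than IP addresses, the rule keeps working as pods are rescheduled and their IPs change:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend         # pods this policy protects (hypothetical label)
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend      # the only identity allowed to connect
EOF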

Managing traffic to services

Kubernetes provides the service abstraction, which allows users to load-balance network traffic between pods. This abstraction allows pods to access other pods through a single virtual IP address that represents a service, without having to know every individual pod.
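
For reference, a minimal ClusterIP service that load-balances across pods labeled app=backend could look like this (the names and ports are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP          # single virtual IP representing the service
  selector:
    app: backend           # pods that back the service
  ports:
  - port: 80               # port clients use on the virtual IP
    targetPort: 8080       # port the pods actually listen on
EOF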

Without Cilium, kube-proxy is installed on each node to watch the Kubernetes API server for the addition and removal of services and endpoints, and it manages iptables rules to apply the necessary forwarding. Traffic to and from pods is routed to a node and port where a pod backing the service is running.

When implementing ClusterIP, Cilium behaves much like kube-proxy, watching for service additions and removals, but instead of programming iptables it updates eBPF map entries on each node. This is more efficient, because eBPF map lookups are constant-time while iptables rule chains are evaluated linearly, and it enables production-grade security.
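
You can inspect those entries directly. Assuming Cilium runs as the usual cilium DaemonSet in the kube-system namespace, the agent's debug CLI can dump the load-balancing map (in recent releases the in-pod binary is named cilium-dbg rather than cilium):

# List the eBPF service and backend entries Cilium programs instead of iptables rules
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list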


Cilium on Kubernetes Example

Setting up Kubernetes and Cilium

To set up Kubernetes and install Cilium CLI:

1. Use the following commands to create a Kubernetes cluster with Google Kubernetes Engine:

export CLUSTER_NAME="$(whoami)-$RANDOM"
# Taint the nodes so pods are not scheduled before the Cilium agent is ready.
gcloud container clusters create "${CLUSTER_NAME}" \
    --node-taints node.cilium.io/agent-not-ready=true:NoExecute \
    --zone us-west2-a
# Fetch kubeconfig credentials for the new cluster.
gcloud container clusters get-credentials "${CLUSTER_NAME}" --zone us-west2-a

2. Use the following commands to install the Cilium CLI:

# Look up the latest stable Cilium CLI version and detect the CPU architecture.
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
MACHINE_PROCESSOR=amd64
if [ "$(uname -m)" = "aarch64" ]; then MACHINE_PROCESSOR=arm64; fi
# Download the release tarball plus its checksum, verify it, install the binary
# to /usr/local/bin, and clean up the downloaded files.
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${MACHINE_PROCESSOR}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${MACHINE_PROCESSOR}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${MACHINE_PROCESSOR}.tar.gz /usr/local/bin
rm cilium-linux-${MACHINE_PROCESSOR}.tar.gz{,.sha256sum}

The Cilium CLI helps users install Cilium, check the installation’s status, and enable or disable its different features, such as ClusterMesh.
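
For instance, assuming a working installation, ClusterMesh is toggled through CLI subcommands like these (exact flags can vary between CLI releases):

# Enable multi-cluster connectivity for this cluster and check its state
cilium clustermesh enable
cilium clustermesh status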

Install Cilium on Kubernetes

The Cilium CLI installer inspects the cluster and automatically chooses the best configuration options for it.
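
You can also override the defaults it picks; for example, pinning a specific Cilium version (the version number here is illustrative, and flag behavior can vary between CLI releases):

cilium install --version v1.14.5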

To install Cilium on the Kubernetes cluster:

1. Use the following command to install Cilium on the cluster pointed to by the current kubectl context, then check the status of the deployment:

cilium install
cilium status

2. Use the following command to wait for the deployment to finish and verify that Cilium is installed properly:

cilium status --wait

3. Use the following command to run an end-to-end test that validates network connectivity in the cluster:

cilium connectivity test

Cilium and Gloo Network

Gloo Network enables enterprise-grade Kubernetes networking via a modular CNI (Container Network Interface) architecture. By enabling advanced routing and observability, Gloo Network extends robust application networking to Platform Engineering and DevOps teams.

To deliver this level of security, Solo has integrated the capabilities of the open source Cilium project, Linux kernel-level eBPF security, and the Kubernetes CNI layer. With Gloo Network, these features give platform teams advanced management and monitoring capabilities for their networking stack as a turnkey operation.

As a result, Gloo Network customers gain deeper control over connection handling, load balancing, and redirection even before traffic reaches the workloads. This enables more efficient enforcement of policies in accordance with your organization’s security profiles.

Using Gloo Network with an Istio service mesh managed by Gloo Mesh creates a multi-layer defense that helps protect cloud-native applications from compromise. This includes auto-detection of clusters and cluster services, as well as multi-tenancy across the service mesh.

Gloo Network also makes it easier to scale application network policy and observability across multi-cluster deployments. The combination not only enables networking, security, and observability, but also inherits all of the advanced functionality that Gloo Mesh delivers above and beyond standard Istio.

Learn more about Gloo Network for Cilium today
