What is Cilium?

Cilium provides eBPF-based networking, observability, and security for container workloads. It enables you to secure the network connectivity between application services deployed on Linux container management platforms such as Kubernetes.

At the core of Cilium is eBPF, which enables the dynamic insertion of control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration.

Cilium architecture and components

A Hubble and Cilium deployment includes the following components running in the cluster:

Cilium

Here is a high-level description of Cilium components: 

The Cilium agent

The cilium-agent component runs on every node in the cluster. It accepts configuration via Kubernetes or its APIs, describing requirements for networking, network policies, load balancing, visibility, and monitoring.

The agent waits for events from the orchestration system (e.g., Kubernetes) indicating when workloads or containers start or stop. It manages the eBPF programs that the Linux kernel uses to control network access into and out of the containers.
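
For example, with the default installation in the kube-system namespace, you can see the agent running as a DaemonSet with one pod per node:

kubectl -n kube-system get daemonset cilium
kubectl -n kube-system get pods -l k8s-app=cilium -o wide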

The Cilium CLI client

The CLI client is a command-line tool installed alongside the Cilium agent on the same node, interacting with the agent’s REST API. The CLI enables the inspection of the local agent’s state and status. It also offers tools to access and validate the state of eBPF maps directly.
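
For example, you can run the client inside one of the agent pods to inspect the local agent and its eBPF state:

kubectl -n kube-system exec ds/cilium -- cilium status
kubectl -n kube-system exec ds/cilium -- cilium endpoint list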

The Cilium operator

The operator handles tasks that should be handled once for the whole cluster instead of once for every node. The Cilium operator is not critical for making network policy decisions or forwarding traffic – clusters can generally continue functioning if the operator becomes temporarily unavailable.
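
Unlike the per-node agent, the operator typically runs as a regular Deployment, which you can check with:

kubectl -n kube-system get deployment cilium-operator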

The CNI plugin 

Kubernetes invokes the cilium-cni plugin when it schedules or terminates a pod on the node. The plugin interacts with the node’s Cilium API to trigger the right datapath configurations for the pod’s networking, policy, and load balancing needs. 
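
When the agent starts, it installs a CNI configuration file on each node so that Kubernetes knows to invoke cilium-cni. The exact filename varies by version, but you can typically locate it like this:

ls /etc/cni/net.d/
cat /etc/cni/net.d/05-cilium.conflist   # filename is version-dependent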

Hubble

The Hubble server runs on every node and retrieves eBPF-based visibility data from Cilium. It is built into the Cilium agent to keep overhead low and performance high, and it exposes a gRPC service for retrieving flows, along with Prometheus metrics. Hubble components include:

Hubble relay

This component is a standalone relay that maintains awareness of every running Hubble server. It provides visibility throughout the cluster by connecting to each server’s gRPC API and creating an API representing all the servers in the cluster.
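
If you installed Cilium with the Cilium CLI, you can enable Hubble with the relay and forward the relay's gRPC endpoint to your workstation:

cilium hubble enable
cilium hubble port-forward &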

The Hubble CLI

This command-line tool can connect to the Hubble relay’s gRPC API or a local server to retrieve flows.
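
With the relay reachable (for example, via the port-forward above), you can check connectivity and query recent flows across the cluster:

hubble status
hubble observe --namespace default --last 20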

The graphical user interface 

This UI component uses the visibility provided by the Hubble relay to render graphical service dependency and connectivity maps.
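
The Cilium CLI can enable the UI alongside Hubble and open it locally:

cilium hubble enable --ui
cilium hubble ui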

What does Cilium provide in a Kubernetes cluster?

Cilium offers capabilities based on its CNI (Container Network Interface) plugin. The plugin is compatible with existing kube-proxy models and provides an identity-based implementation of the Kubernetes NetworkPolicy resource, which isolates pods at Layer 3 and Layer 4.

The Cilium CNI plugin uses the following to extend the Kubernetes network policy resource:

  • Custom resource definition (CRD) – extends policy control, enforcing Layer 7 policies on ingress and egress for application protocols such as HTTP and Kafka (see the policy sketch after this list).
  • Classless inter-domain routing (CIDR) – provides egress support and enables secure access to services outside Kubernetes.
  • Policy enforcement – enforces policies for external services by limiting traffic to the set of Kubernetes endpoints configured for the service.
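
As a sketch of the CRD-based Layer 7 control mentioned above (all names and labels here are hypothetical), the following CiliumNetworkPolicy allows pods labeled app=frontend to send only GET /public requests to pods labeled app=backend on port 80:

kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-http    # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: backend             # hypothetical label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend      # hypothetical label
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/public"
EOF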

The Cilium plugin also provides a ClusterIP implementation to enable the distributed load balancing of inter-pod traffic.

Allowing communication between pods

Kubernetes containers run in units known as pods – each pod contains one or more containers and is reachable via a single IP address. In Cilium, pods get their IP addresses from the Linux node’s prefix. Cilium allows you to define network security policies to restrict pod communication and ensure pods can only talk to the pods they need to access.
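
For example, assuming hypothetical app=db and app=api labels, a standard Kubernetes NetworkPolicy enforced by Cilium could restrict a database pod to accept traffic only from the API pods:

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only      # hypothetical policy name
spec:
  podSelector:
    matchLabels:
      app: db                  # hypothetical label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api         # hypothetical label
EOF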

Managing service traffic 

Kubernetes provides the Service abstraction to load-balance traffic between the pods in a network. This abstraction lets pods reach other pods via a virtual IP address representing each service, without needing to track every individual pod.

Without Cilium, kube-proxy runs on every node, watching the Kubernetes API server for the addition and removal of services and endpoints. It manages the iptables rules that route service traffic, so all inbound and outbound pod traffic is routed to a node and port of a pod that provides the service.

If you implement ClusterIP, Cilium behaves the same way as kube-proxy – it watches the addition and removal of services, but instead of managing iptables rules, it updates each node’s eBPF maps. This approach is more efficient and secure.
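
Assuming the default kube-system installation, you can view these service translations from the agent's perspective, or dump the underlying eBPF load-balancing maps directly:

kubectl -n kube-system exec ds/cilium -- cilium service list
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list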

Cilium 1.13 - New features

Cilium announced the general availability of Cilium 1.13 in February 2023. The release includes the following new features:

  • Gateway API and Ingress improvements – This release brings a fully conformant Gateway API implementation to Cilium. Gateway API supports north-south load balancing and traffic routing and is the long-term successor to the Ingress API in Cilium. L7 load balancing for Kubernetes services is also available, and Cilium Ingress can now be deployed in shared load balancer mode. Cilium 1.13 also introduces mTLS support at the datapath level.
  • Networking enhancements – Organizations dealing with larger traffic loads can now use BIG TCP on their clusters and benefit from enhanced performance (see the install sketch after this list). IPv4/IPv6 dual-stack support in Cilium also benefits from improvements to NAT46/64, and Cilium 1.13 adds support for the SCTP transport layer protocol.
  • Observability – Cilium 1.13 updates Hubble’s Layer 7 HTTP visibility feature, linking Hubble metrics to the application’s traceID. The Hubble Datasource Plugin enables Grafana users to get detailed insights into network traffic from Cilium, and it integrates with Prometheus, Tempo, and Hubble Timescape.
  • Cilium Tetragon – With Cilium 1.13, Tetragon introduces File Integrity Monitoring (FIM), a feature that monitors and detects file changes that could indicate malicious activity. L3/L4 network observability is improved with the Process Socket Statistics feature, which collects information about socket statistics. Finally, Tetragon introduces interface metrics that allow users to observe all network interfaces on a particular node.
  • Security – Starting with Cilium 1.13, all Cilium and Tetragon container images are signed using cosign, and each ships with a Software Bill of Materials (SBOM), a generated list of all its software dependencies. Support for Server Name Indication (SNI) in network policies allows operators to restrict the allowed TLS SNIs in their network, providing a more secure environment.

Other features – A great deal of work has also gone into improving resilience and refactoring the Cilium CI for running tests.
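
As a minimal sketch, enabling one of these features, such as BIG TCP, is typically a single Helm value at install time. The value name used here (enableIPv6BIGTCP) is an assumption; verify it against the documentation for your chart version:

cilium install --version 1.13.0 --helm-set enableIPv6BIGTCP=true   # flag name is an assumption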

Getting started with Cilium using Istio

The Cilium Istio integration enables Cilium to enforce Layer 7 HTTP network policies for traffic protected with mTLS in Istio sidecar proxies. You can also deploy Istio without integrating Cilium if you run a standard istioctl version. In such cases, Cilium enforces Layer 7 policies outside the Istio sidecar proxy, although this only works if you don’t use mTLS.

The first step to enabling Cilium with Istio is to install the Cilium CLI (use the latest stable version). You can use the Cilium CLI to install Cilium, check the state of the Cilium installation, and enable or disable features such as Hubble and ClusterMesh. To install the CLI:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Cilium can be installed on any Kubernetes cluster – the following command installs it into the cluster that your current kubectl context points to:

cilium install
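
Before moving on, verify the installation status:

cilium status --wait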

Once Cilium is running in the cluster, download the Cilium-enhanced istioctl version (1.10.6-1):

curl -L https://github.com/cilium/istio/releases/download/1.10.6-1/cilium-istioctl-1.10.6-1-linux-amd64.tar.gz | tar xz

Next, deploy Istio’s default configuration profile in Kubernetes:

./cilium-istioctl install -y

Now you can add a namespace label that instructs Istio to inject an Envoy sidecar proxy automatically during your application’s deployment:

kubectl label namespace default istio-injection=enabled

Istio and Cilium are now deployed.
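
To confirm, check that the Istio control plane pods are running and the injection label is in place:

kubectl get pods -n istio-system
kubectl get namespace default --show-labels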

Cilium networking in Gloo Mesh and Gloo Network

The Cilium add-on module for Gloo Mesh brings together Istio and Cilium for a more cohesive, secure, and performant Layer 2 to Layer 7 application networking architecture. This paves the way for a smoother, simplified enterprise cloud journey.

Integrated application networking throughout the entire stack

Companies using Kubernetes have two choices for OSI (Open Systems Interconnection) model Layer 3-4 networking through the CNI (Container Network Interface): iptables-based solutions and eBPF-based solutions. While iptables-based solutions are well established in the market, eBPF-based solutions bring new innovations and require a new level of expertise to take advantage of their performance, security, and observability capabilities.

Next generation of cloud-native application networking

Istio, Envoy Proxy, eBPF, Kubernetes, and containers will provide the foundation for the next generation of cloud native application networking by enabling new innovations that will improve performance and simplify the management of networking for cloud native applications.

Pluggable CNI (Container Network Interface) architecture

Gloo Mesh supports Cilium and third-party CNI implementations in a batteries-included yet pluggable manner. This approach gives our customers the flexibility they need on their cloud journey.

Learn more about Gloo Network for Cilium
