What is Cilium?
Cilium provides eBPF-based networking, observability, and security for container workloads. It enables you to secure the network connectivity between application services deployed on Linux container management platforms such as Kubernetes.
At the core of Cilium is eBPF, which enables the dynamic insertion of control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration.
Cilium architecture and components
A Hubble and Cilium deployment includes the following components running in the cluster:
Cilium
Here is a high-level description of Cilium components:
The Cilium agent
The cilium-agent component runs on every node in the cluster. It accepts configuration through Kubernetes or its APIs, describing requirements for networking, network policies, load balancing, visibility, and monitoring.
The agent waits for events from the orchestration system (for example, Kubernetes) that indicate when workloads or containers start or stop. It manages the eBPF programs that the Linux kernel uses to control network access in and out of those containers.
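For example, in a standard Kubernetes installation the agent's cluster-wide settings typically live in a ConfigMap; the names below are the common defaults, and key contents vary by Cilium version and installation method:
# Inspect the configuration the Cilium agents read at startup
# (the ConfigMap is usually named cilium-config in kube-system).
kubectl -n kube-system get configmap cilium-config -o yaml
# Confirm the agent DaemonSet is running on every node.
kubectl -n kube-system get daemonset cilium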
The Cilium CLI client
The CLI client is a command-line tool installed alongside the Cilium agent on the same node, interacting with the agent’s REST API. The CLI enables the inspection of the local agent’s state and status. It also offers tools to access and validate the state of eBPF maps directly.
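For example, you can invoke the on-node CLI inside the agent pod to inspect local state (a hedged sketch; subcommands and output vary by version):
# Show the local agent's health, datapath mode, and controller status.
kubectl -n kube-system exec ds/cilium -- cilium status
# List the endpoints (pods) managed by this agent.
kubectl -n kube-system exec ds/cilium -- cilium endpoint list
# Inspect the eBPF load-balancing maps directly.
kubectl -n kube-system exec ds/cilium -- cilium bpf lb list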
The Cilium operator
The operator handles tasks that should be performed once for the entire cluster rather than once per node. The Cilium operator is not critical for making network policy decisions or for forwarding traffic; clusters can generally keep functioning if the operator becomes temporarily unavailable.
The CNI plugin
Kubernetes invokes the cilium-cni plugin when it schedules or terminates a pod on the node. The plugin interacts with the node’s Cilium API to trigger the right datapath configurations for the pod’s networking, policy, and load balancing needs.
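As a rough illustration (the paths are the standard CNI defaults and exact filenames vary by Cilium version), you can see the plugin and its configuration on a node:
# The CNI configuration Cilium writes for the node's container runtime.
ls /etc/cni/net.d/
# The cilium-cni binary installed alongside the other CNI plugins.
ls /opt/cni/bin/ | grep cilium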
Hubble
The Hubble server runs on every node and retrieves eBPF-based visibility data from Cilium. The server is embedded in the Cilium agent for low overhead and high performance, and it exposes a gRPC service for retrieving flows and Prometheus metrics. Hubble components include:
Hubble relay
This component is a standalone relay that maintains awareness of every running Hubble server. It provides visibility throughout the cluster by connecting to each server’s gRPC API and creating an API representing all the servers in the cluster.
The Hubble CLI
This command-line tool can connect to the Hubble relay’s gRPC API or a local server to retrieve flows.
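A minimal sketch of retrieving flows with the Hubble CLI, assuming Hubble and the relay have been enabled (for example via the Cilium CLI) and the hubble binary is installed locally:
# Enable Hubble with the relay if it is not already enabled.
cilium hubble enable
# Forward the relay's gRPC endpoint to localhost (port 4245 by default).
cilium hubble port-forward &
# Stream flows observed across the cluster.
hubble observe --follow
# Show only dropped flows in the default namespace.
hubble observe --namespace default --verdict DROPPED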
The graphical user interface
This UI component uses the visibility from the Hubble relay to provide graphical service dependencies and map connectivity.
What does Cilium provide in a Kubernetes cluster?
Cilium offers capabilities based on its CNI (Container Network Interface) plugin. The plugin is compatible with all existing kube-proxy modes and provides an identity-based implementation of the Kubernetes NetworkPolicy resource, which controls connectivity between pods at Layers 3 and 4 while decoupling policy from pod IP addresses.
The Cilium CNI plugin extends the Kubernetes NetworkPolicy resource with the following:
- Custom resource definition (CRD) – enables policy control and enforces Layer 7 ingress and egress policies for application protocols such as HTTP and Kafka (a policy sketch follows this list).
- Classless inter-domain routing (CIDR) – provides egress support and enables secure access to services outside Kubernetes.
- Policy enforcement – limits the number of Kubernetes endpoints set up for the service to enforce policies for external services.
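Below is a hedged sketch of such a policy: a CiliumNetworkPolicy that combines a Layer 7 HTTP ingress rule with a CIDR egress rule (all names, labels, and addresses are illustrative):
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: frontend-policy            # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: frontend                # pods this policy applies to
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: client                # only pods labeled app=client may connect
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/api/.*"          # Layer 7 rule: only GET requests to /api/...
  egress:
  - toCIDR:
    - 203.0.113.0/24               # illustrative external CIDR; note that adding
                                   # an egress rule makes other egress default-deny
EOF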
The Cilium plugin also provides a ClusterIP implementation to enable the distributed load balancing of inter-pod traffic.
Allowing communication between pods
Kubernetes containers run in units known as pods; each pod has one or more containers reachable via a single IP address. In Cilium, pods get their IP addresses from the Linux node's prefix. Cilium allows you to define network security policies that restrict pod communication and ensure pods can only talk to the pods they actually need to reach.
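For instance, a standard Kubernetes NetworkPolicy, which Cilium enforces, could restrict a backend pod to accept traffic only from frontend pods (the labels and port below are illustrative):
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend     # illustrative name
spec:
  podSelector:
    matchLabels:
      app: backend                 # pods the policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend            # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
EOF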
Managing service traffic
Kubernetes uses the Service abstraction to load balance traffic between pods in the network. This abstraction lets pods reach other pods through a virtual IP address representing each service, without needing to know every individual pod.
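For example, a ClusterIP Service provides a stable virtual IP for whatever pods match its selector (an illustrative manifest):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: backend                    # illustrative name
spec:
  type: ClusterIP
  selector:
    app: backend                   # the pods backing this service
  ports:
  - port: 80                       # port exposed on the virtual IP
    targetPort: 8080               # container port on the pods
EOF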
If you don't use Cilium, kube-proxy runs on every node and watches the Kubernetes API server for services and endpoints being added or removed. It manages iptables rules to enforce network traffic policies and routes all inbound and outbound pod traffic to a node and port running a pod that provides the service.
For ClusterIP services, Cilium behaves much like kube-proxy: it watches for services and endpoints being added or removed, but it updates each node's eBPF maps instead of managing iptables rules. This approach is more secure and efficient.
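You can compare the two views on a running cluster; for example (a hedged sketch, output format depends on the Cilium version):
# The ClusterIPs Kubernetes has assigned to services.
kubectl get services --all-namespaces
# The service-to-backend translations Cilium has programmed into eBPF on a node.
kubectl -n kube-system exec ds/cilium -- cilium service list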
Cilium 1.12 - New features
Cilium announced the general availability of Cilium 1.12 in July 2022. The release includes the following new features:
- Ingress controller – directly embedded in Cilium and fully conformant with the Kubernetes Ingress API. The controller is based on Envoy and uses eBPF for improved security and observability (see the sample Ingress after this list).
- ClusterMesh enhancements – ClusterMesh, Cilium's multi-cluster connectivity feature, can now combine services from multiple clusters into a single global service, with service affinity: services can be configured to prefer endpoints in the local or a remote cluster.
- Egress Gateway – previously a beta feature, Egress Gateway is now ready for production use. It lets you forward connections to external workloads through specific Gateway nodes. Cilium provides predictable IP addresses, enabling integration with firewalls that require static IP addresses.
- Cilium Tetragon – enables eBPF-based security, observability, and runtime policy enforcement. This new component detects and reacts to security events, such as anomalous process execution, system call activity, and I/O activity with network and file access.
- Other features – the release also provides improved network visibility controls, NAT support for IPv4 and IPv6, the ability to run as non-privileged containers, dynamic allocation of CIDRs, and prefix delegation for AWS ENI.
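As referenced above, here is a hedged sketch of using the embedded Ingress controller; it assumes Cilium 1.12 or later was installed with ingress support enabled, and the host and backend Service are illustrative:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress               # illustrative name
spec:
  ingressClassName: cilium         # selects Cilium's embedded Ingress controller
  rules:
  - host: demo.example.com         # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend          # illustrative backend Service
            port:
              number: 80
EOF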
Getting started with Cilium using Istio
The Cilium Istio integration enables Cilium to enforce Layer 7 HTTP network policies for traffic protected with mTLS in Istio sidecar proxies. You can also deploy Istio without integrating Cilium if you run a standard istioctl version. In such cases, Cilium enforces Layer 7 policies outside the Istio sidecar proxy, although this only works if you don’t use mTLS.
The first step to enabling Cilium with Istio is to install the Cilium CLI (it should be the current version). You can use the Cilium CLI to install Cilium, check the state of the Cilium installation, and enable or disable features such as Hubble and ClusterMesh. To install the CLI:
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
Cilium is installable on any cluster – you install Cilium in the Kubernetes cluster that the current kubectl context points to:
cilium install
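You can then wait for the installation to report a ready state, for example:
# Block until the Cilium agent and operator are ready.
cilium status --wait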
Ensure that Cilium is running in the cluster before continuing to download the Cilium-enhanced istioctl version (1.10.6-1):
curl -L https://github.com/cilium/istio/releases/download/1.10.6-1/cilium-istioctl-1.10.6-1-linux-amd64.tar.gz | tar xz
Next, deploy Istio’s default configuration profile in Kubernetes:
./cilium-istioctl install -y
Now you can add a namespace label that instructs Istio to inject an Envoy sidecar proxy automatically during your application’s deployment:
kubectl label namespace default istio-injection=enabled
Istio and Cilium are now deployed.
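As a quick sanity check, you can confirm both control planes are healthy (commands are illustrative; pod names vary):
# Cilium agent and operator status.
cilium status
# Istio control-plane pods installed by cilium-istioctl.
kubectl get pods -n istio-system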
Cilium networking in Gloo Mesh and Gloo Network
The Cilium add-on module for Gloo Mesh brings together Istio and Cilium for a more cohesive, secure and performant Layer 2 – Layer 7 application networking architecture. This paves the way for a smoother, simplified enterprise cloud journey.
Integrated application networking throughout the entire stack
Companies using Kubernetes have two choices for OSI (Open Systems Interconnection) model Layer 3-4 networking through the CNI (Container Network Interface): iptables-based solutions and eBPF-based solutions. While iptables-based solutions are well established in the market, eBPF-based solutions bring new innovations and require a new level of expertise to take advantage of their performance, security, and observability capabilities.
Next generation of cloud-native application networking
Istio, Envoy Proxy, eBPF, Kubernetes, and containers will provide the foundation for the next generation of cloud-native application networking, enabling innovations that improve performance and simplify network management for cloud-native applications.
Pluggable CNI (Container Network Interface) architecture
Gloo Mesh supports Cilium and third-party CNI implementations in a batteries-included yet pluggable manner. This approach gives our customers the flexibility they need on their cloud journey.
