Cilium and eBPF

Powering Observability for Cloud Native

Series: Cilium

What is Cilium?

Cilium is a popular open source project that provides observability, networking, and security in cloud native environments. Its most common use is for container orchestration platforms like Kubernetes.

The extended Berkeley Packet Filter (eBPF) is the core of Cilium. This emerging technology lets teams dynamically load logic for visibility, networking, and security controls into the Linux kernel. eBPF helps provide multi-cloud and multi-cluster capabilities, high-performance networking, dynamic load balancing, network security, transparent observability, and encryption.

How Cilium works with eBPF

Cloud native applications are built for cloud environments, typically supported by technologies like a microservices architecture, container orchestration platforms, autoscaling features, and many APIs and services. This ecosystem is highly dynamic, complex, and distributed, and its architecture revolves around services and workloads rather than individual machines.

Cilium provides a high-level abstraction on top of eBPF to address the networking, visibility, and security requirements of container workloads. As a low-level technology, eBPF has traditionally been usable only by Linux kernel developers; Cilium exposes its capabilities so you can leverage them for cloud native applications.

Here are the main benefits of eBPF:

  • Programmability – eBPF enables you to quickly adapt to evolving cloud native requirements and scale with ease. User-space networking frameworks offer similar programmability, but they stay transparent to applications only by routing traffic through the Linux kernel’s socket layer; eBPF avoids that detour because it runs inside the kernel itself.
  • Generic – eBPF is not networking-specific or tied to a single domain, which attracts a larger community of innovators and avoids premature assumptions about the building blocks needed to solve future problems.
  • Safety – eBPF programs must pass a verification process before loading, making them safer than loading a kernel module. A just-in-time (JIT) compiler translates eBPF bytecode to native machine code, keeping eBPF programs efficient.

Cilium features and technologies

Here are some of the main features and technologies of Cilium. 

Cilium agent

This component—cilium-agent—runs on every node in a cluster. At a high level, the Cilium agent takes configurations from Kubernetes or an API that describe visibility, networking, network policy, service load-balancing, and monitoring requirements.

The agent can listen for events from an orchestration system like Kubernetes to determine when workloads or containers start and stop. It manages eBPF programs used by the Linux kernel to control network access inside and outside the containers.
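As an illustration, the kind of intent the agent translates into kernel-level eBPF enforcement might look like the following CiliumNetworkPolicy (a sketch using Cilium's CRD; the policy name, labels, and port here are hypothetical):

```yaml
# Illustrative CiliumNetworkPolicy: the name, labels, and port are
# hypothetical. The cilium-agent on each node watches for objects like
# this and compiles the intent into eBPF programs enforced in the kernel.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  endpointSelector:
    matchLabels:
      app: backend        # policy applies to backend pods
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend  # only frontend pods may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```

Because the selector is identity-based rather than IP-based, the policy keeps working as pods are rescheduled and their addresses change.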

Tetragon

Tetragon is a new component in Cilium that helps provide real-time security observability and runtime enforcement based on eBPF.

Thanks to the collector’s built-in aggregation logic and intelligent in-kernel filtering, Tetragon achieves granular visibility without application changes, so the eBPF-based collector delivers high visibility with low overhead. A built-in runtime enforcement layer can control access at the system-call and other execution levels.
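A minimal sketch of such an enforcement policy, modeled on Tetragon’s TracingPolicy resource (the policy name and watched file are hypothetical), hooks a kernel function and kills any process that opens a sensitive file:

```yaml
# Illustrative Tetragon TracingPolicy: the name and target path are
# hypothetical. It hooks the fd_install kernel function, which runs
# whenever a file descriptor is created, and enforces in-kernel.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: deny-shadow-access
spec:
  kprobes:
    - call: "fd_install"
      syscall: false
      args:
        - index: 0
          type: "int"
        - index: 1
          type: "file"
      selectors:
        - matchArgs:
            - index: 1
              operator: "Equal"
              values:
                - "/etc/shadow"   # watched file (example)
          matchActions:
            - action: Sigkill     # kill the offending process
```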

Hubble

Hubble is a distributed networking and security observability platform. Built on eBPF and Cilium, it provides deep, transparent visibility into how services and the network infrastructure communicate and operate.

Building on Cilium allows Hubble to leverage eBPF for observability. eBPF makes visibility fully programmable, enabling a dynamic management approach with minimal overhead and deep, detailed visibility based on user needs. Hubble’s design is specifically suited to take full advantage of these eBPF capabilities.

Example: Detecting a container escape with Cilium and eBPF

A container escape enables threat actors to break the isolation boundary between the container and its host, escaping into a worker node or a Kubernetes control plane. The threat actors can then perform the following malicious activities: 

  • See containers running on the host and collect their secrets.
  • Attack the kubelet and escalate privileges.
  • Read or write data on the host file system.
  • Exploit a Kubernetes bug and deploy an invisible pod to persist in the environment.

You can implement security best practices within your Kubernetes environment to limit these attacks, but you also need to achieve observability to truly handle a container escape.

A data-driven approach to observability

A data-driven approach lets you continuously make informed decisions to protect your Kubernetes environment. It involves collecting data from Kubernetes workloads and hosts, observing the feedback, and making continuous security decisions based on that data.

eBPF enables you to get visibility directly into your Kubernetes workloads, including pods. Since all pods on a node share a single kernel, the processes inside each pod are visible to an eBPF program, which therefore has full visibility into every process running on the node, including both long-running and short-lived processes.

Observability with eBPF and Cilium

Cilium employs eBPF to monitor network and process behavior both inside Kubernetes workloads and externally on the host. This involves deploying Cilium as a DaemonSet into the Kubernetes environment.

A Cilium agent runs on each Kubernetes node and communicates with the Kubernetes API server to learn about network policies, Kubernetes pod identities, and services. Based on the workload’s identity, Cilium installs eBPF programs to provide connectivity, observability, and security.
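A typical deployment follows Cilium’s documented Helm-based install; the release name and namespace below are the common defaults, and any version or feature flags are omitted here:

```shell
# Add the Cilium Helm repository and install Cilium; the chart deploys
# the cilium-agent DaemonSet into the kube-system namespace.
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --namespace kube-system

# Verify that a cilium-agent pod is running on each node.
kubectl -n kube-system get pods -l k8s-app=cilium
```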

Cilium can observe and enforce behavior inside a Linux system, collecting and filtering security observability data directly within the kernel. It can export the data as JSON events or write it to a log file using the hubble-enterprise DaemonSet. These JSON events include:

  • Kubernetes identity-aware information such as services, namespaces, labels, containers, and pods.
  • OS-level process visibility data such as process binaries, UIDs, PIDs, and parent binaries, with the entire process ancestry tree.
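A minimal sketch of consuming such an exported event, assuming a hypothetical JSON shape that combines both categories (the field names are illustrative, not an exact Cilium or Tetragon schema):

```python
import json

# A hypothetical exported event combining Kubernetes identity-aware
# information with OS-level process visibility. The structure and field
# names are illustrative, not an exact Cilium/Tetragon schema.
raw_event = """
{
  "kubernetes": {
    "namespace": "default",
    "pod": "backend-7d4b9c",
    "labels": {"app": "backend"}
  },
  "process": {
    "binary": "/usr/bin/curl",
    "pid": 4172,
    "uid": 0,
    "parent_binary": "/bin/sh"
  }
}
"""

def summarize(event_json: str) -> str:
    """Flatten one event into a single log line for a SIEM pipeline."""
    event = json.loads(event_json)
    k8s, proc = event["kubernetes"], event["process"]
    return (f'{k8s["namespace"]}/{k8s["pod"]} '
            f'uid={proc["uid"]} pid={proc["pid"]} '
            f'exec={proc["binary"]} parent={proc["parent_binary"]}')

print(summarize(raw_event))
# → default/backend-7d4b9c uid=0 pid=4172 exec=/usr/bin/curl parent=/bin/sh
```

Flattening events this way is one common pattern before shipping them to an external system, since most SIEMs index single-line records more cheaply than nested JSON.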

You can export this information in various formats and send it to external systems like a security information and event management (SIEM) solution. This real-time data from the kernel lets you see all processes executed in your Kubernetes environment, ensuring you can continuously identify and remediate security issues, including container escapes.

What is the Cilium Fundamentals certification?

The Cilium software provides, secures, and monitors network connectivity for container workloads. It is cloud native and powered by eBPF, an innovative Linux kernel technology. The Cilium Fundamentals certification, provided by Solo.io and Credly, confirms that you have the basic skills for deploying the Cilium CNI on test Kubernetes clusters, collecting metrics, and enforcing network policies.

Cilium and Gloo Network

Gloo Network enables enterprise Kubernetes networking via a modular CNI (Container Network Interface) architecture. By enabling advanced routing and observability, Gloo Network extends robust application networking to platform engineering and DevOps teams.

To deliver this level of security, Solo has integrated the capabilities of the open source Cilium project, Linux kernel-level eBPF security, and the Kubernetes CNI layer. These features enable platform teams to gain advanced management and monitoring capabilities for their networking stack as a turn-key operation with Gloo Network.

As a result, Gloo Network customers gain deeper control over connection handling, load balancing, and redirects even before traffic reaches the workloads. This enables more efficient enforcement of policies in accordance with your organization’s security profiles.

Using Gloo Network with an Istio service mesh managed by Gloo Mesh creates a multi-layer defense that helps protect cloud native applications from compromise. This includes auto-detection of clusters and cluster services, as well as multi-tenancy across the service mesh.

Gloo Network also enhances the scaling of application network policy and observability across multi-cluster deployments. This combination not only enables networking, security, and observability, but also inherits all of the advanced functionality that Gloo Mesh delivers above and beyond standard Istio.

Learn more about Gloo Network today.
