What is Cilium?
Cilium is an open source networking and security solution for containers and microservices in cloud native environments. It allows users to enforce network policies at the application layer using Linux kernel primitives, and provides visibility into network traffic using a combination of eBPF (extended Berkeley Packet Filter) and XDP (eXpress Data Path).
Cilium uses eBPF to monitor and manipulate network traffic at the kernel level, and XDP to perform high-speed packet processing in the Linux kernel. It is designed to work with container orchestration platforms such as Kubernetes, and can be used to secure and monitor communication between containers, services, and networks.
What are the benefits of Cilium Service Mesh?
Cilium offers a service mesh solution that provides a way to manage and secure communication between microservices in a distributed application. A service mesh is a layer of infrastructure that sits between the application services and the underlying network and handles the communication between these services. It provides features such as load balancing, service discovery, and security to the application services, and abstracts the complexity of the underlying network.
There are several benefits to using the Cilium service mesh:
- Network visibility and security: Cilium provides visibility into network traffic and allows users to enforce network policies at the application layer, helping to secure communication between microservices.
- Cloud native: Cilium is designed to work with container orchestration platforms such as Kubernetes and can be used in cloud native environments.
- Ecosystem support: Cilium integrates with a wide range of tools and platforms, including container orchestration platforms, load balancers, and monitoring tools.
- Extensibility: Cilium provides a set of APIs and tools for managing and securing communication between microservices, which allows users to customize and extend the service mesh to meet their specific needs.
How Cilium Service Mesh handles network layer 7
Cilium can handle network layer 7, also known as the application layer, by providing visibility into the traffic flowing through the service mesh and allowing users to enforce network policies at this layer. Cilium understands a range of application-layer protocols, such as HTTP, gRPC, Kafka, and DNS.
Cilium CNI (Container Network Interface) allows users to use Cilium as a network plugin for container runtime environments. It is easy to install the Cilium CNI plugin, but it’s important to ensure it is the latest stable version.
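As a sketch of one common install path (the version pin below is illustrative, not a recommendation), the Cilium CNI plugin can be installed with Helm and verified with the cilium CLI:

```shell
# Add the Cilium Helm repository and install the CNI plugin into kube-system.
# Pin a specific stable version rather than tracking "latest".
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --version 1.12.0 --namespace kube-system

# Wait for the agent and operator to report healthy (requires the cilium CLI).
cilium status --wait
```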
To create a Cilium L7 policy:
Once you've installed Cilium CNI, you can create a Cilium L7 policy with HTTP rules:
- Create a CiliumNetworkPolicy resource. This can be done using the kubectl command-line tool or by creating a YAML file that defines the resource.
- Add the L7 policy to the CiliumNetworkPolicy resource. To create an L7 policy, add a spec.ingress field to the CiliumNetworkPolicy resource and specify the L7 rules that you want to apply.
- Add HTTP rules to the L7 policy. Use the http field under the rules section of a toPorts entry in spec.ingress, and specify the rules you want to apply. For example, you can use the method field to specify the HTTP method that should be allowed, or the path field to specify the path that should be matched.
- Apply the CiliumNetworkPolicy resource. Once you have created and configured the resource, apply it to your cluster using the kubectl apply or kubectl create command. This creates the L7 policy and applies the HTTP rules you have specified.
Here is an example of a YAML configuration with an L7 policy, which restricts HTTP traffic to GET requests on the path /mypath:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: service-account
spec:
  endpointSelector:
    matchLabels:
      io.cilium.k8s.policy.serviceaccount: server
  ingress:
  - fromEndpoints:
    - matchLabels:
        io.cilium.k8s.policy.serviceaccount: client
    toPorts:
    - ports:
      - port: "3555"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/mypath"
```
How Cilium Service Mesh handles identity
Cilium provides a range of features to help you manage and secure the identity of the components in your containerized environment.
One key feature of Cilium is identity-based authorization, which allows you to define and enforce policies that control access to resources based on the identity of the requesting entity. In Cilium, identities can be based on a variety of factors, including labels attached to pods or services, Kubernetes namespaces, and IP addresses.
Cilium also provides support for identity-based encryption, which allows you to secure communication between components by encrypting traffic based on the identity of the sender and receiver. This can help to prevent unauthorized access to sensitive data and ensure that communication within the cluster is secure.
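Transparent encryption is typically enabled at installation or upgrade time. As an illustrative sketch using Helm values (WireGuard is shown here as one supported backend; IPsec is another):

```shell
# Enable transparent encryption of traffic between nodes at upgrade time.
# encryption.type selects the backend; "wireguard" and "ipsec" are supported.
helm upgrade cilium cilium/cilium --namespace kube-system \
  --set encryption.enabled=true \
  --set encryption.type=wireguard
```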
Here is an example of a CiliumNetworkPolicy resource that restricts access based on identity:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "restrict-access"
spec:
  # A CiliumNetworkPolicy requires an endpoint selector; an empty
  # selector applies the policy to all endpoints in the namespace.
  endpointSelector: {}
  # Allow ingress traffic only from pods with the "frontend" label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: "frontend"
  # Allow egress traffic only to pods with the "backend" label
  egress:
  - toEndpoints:
    - matchLabels:
        app: "backend"
```
This policy allows ingress traffic (traffic arriving at the selected pods) only from pods with the app: frontend label, and allows egress traffic (traffic leaving those pods) only to pods with the app: backend label.
To apply this policy, you would need to attach the appropriate labels to the pods in your cluster. For example, if you wanted to apply this policy to a group of pods running a frontend service, you would label those pods with app: frontend.
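As a quick sketch (the pod and file names here are hypothetical), labeling a pod and applying the policy with kubectl looks like this:

```shell
# Label a pod so the policy's fromEndpoints selector matches it.
# "frontend-pod-1" is a hypothetical pod name.
kubectl label pod frontend-pod-1 app=frontend

# Apply the policy, assuming it is saved as restrict-access.yaml.
kubectl apply -f restrict-access.yaml
```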
CiliumNetworkPolicy resources use Kubernetes label selectors to specify which pods and services the policy should apply to. You can use a variety of different match criteria in the label selectors, including labels, namespaces, and IP addresses, to create more complex and fine-grained policies.
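As a sketch of such a fine-grained selector (the names here are illustrative), a policy can combine pod labels with the namespace label that Cilium derives for each endpoint:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-prod-frontend"
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
  - fromEndpoints:
    # Only frontend pods running in the "prod" namespace may connect.
    # The k8s: prefix refers to labels Cilium derives from Kubernetes.
    - matchLabels:
        app: frontend
        k8s:io.kubernetes.pod.namespace: prod
```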
Cilium Service Mesh extends eBPF for cloud native
eBPF is a Linux kernel feature that allows users to attach custom programs to various kernel functions, such as packet filtering and tracing, to perform a wide range of tasks. Cilium extends eBPF by providing a set of APIs and tools for managing and securing communication between microservices in a distributed application.
By running in the kernel with eBPF, Cilium avoids the overhead of the sidecar proxying typically found in service meshes. Cilium 1.12 provides the following service mesh enhancements:
- Cilium Tetragon: An eBPF-based security observability and runtime enforcement component. It lets users detect, and react to, security-significant events such as process execution and network activity directly in the kernel, and can be used to enforce security policies.
- Support for external workloads: This allows Cilium to be used to secure communication between external services and workloads running in a cluster. Egress gateways provide a way to control and secure communication between the cluster and external networks.
- Cluster mesh enhancements: These improve the performance and scalability of Cilium's cluster mesh, which connects services across multiple Kubernetes clusters.
- Kubernetes Ingress support: Cilium 1.12 includes a built-in controller for the Kubernetes Ingress resource, which provides a way to control access to services in a cluster. This allows users to use Cilium to secure and monitor communication between containers, services, and networks in a Kubernetes cluster.
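As a minimal sketch (the service name and port are illustrative), a standard Ingress resource is handed to Cilium's controller by setting its ingress class:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  # Hand this Ingress to Cilium's built-in controller.
  ingressClassName: cilium
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            # Illustrative backend service and port.
            name: demo-service
            port:
              number: 80
```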
Cilium Service Mesh with Istio
Cilium and Istio are both service mesh solutions that provide a way to manage and secure communication between microservices in a distributed application. They can be used together to provide a robust and flexible service mesh for cloud native environments. This is possible because Cilium can serve as the CNI networking layer underneath Istio.
Istio can enrich the functionality of the Cilium service mesh by providing additional features and capabilities:
- Istio Auth allows users to secure communication between microservices using mutual TLS. It provides an easy way to establish trust between microservices and can be used to enforce security policies at the application layer.
- Istio telemetry export allows users to collect and export telemetry data from the Istio service mesh. This data can be used to monitor the health and performance of the application, and can be exported to a variety of monitoring and observability tools.
- Istio allows users to enforce policies and perform actions based on the traffic flowing through the service mesh. It provides a way to enforce policies at runtime, and can be used to perform tasks such as rate limiting and request tracing.
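For example, Istio's mutual TLS can be enforced mesh-wide with a PeerAuthentication resource. A minimal sketch (placing it in the istio-system root namespace makes it apply mesh-wide):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  # Require mutual TLS for all workloads in the mesh.
  mtls:
    mode: STRICT
```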
Cilium with Istio: Architecture
In a setup where Cilium is used with Istio, userspace proxies such as Envoy can be used as the sidecar proxy to intercept and monitor communication between microservices. The Envoy proxy is a high-performance, scalable proxy that can be used to route, balance, and secure communication between microservices.
The golang orchestration agent is a component of Cilium that is written in Go and is responsible for managing the eBPF (extended Berkeley Packet Filter) programs that are used to monitor and manipulate network traffic. It communicates with the BPF datapath component to apply policies and monitor network traffic.
The BPF datapath component is a kernel-level component of Cilium that uses eBPF and XDP (eXpress Data Path) to monitor and manipulate network traffic in the Linux kernel. It is responsible for enforcing policies and providing visibility into network traffic.
The diagram below demonstrates how Kubernetes and Istio can use the Cilium datapath simultaneously. Both Kubernetes and Istio can be used for collaborative orchestration.
Image Source: Cilium
The graph below provides latency measurements (in microseconds) for different high-performance proxies, ranked by percentile. The setup is straightforward: two containers running in separate pods communicate with each other via a proxy. There are no policy rules, routing rules, or iptables rules.
Image Source: Cilium
It is important to note that these latency measurements should be used as a guide rather than a definitive benchmark. Given the large differences between approaches, however, they are useful for deciding where datapath operations should be performed, and for judging whether a particular approach is worth pursuing.
Resolving network problems
There are different options for approaching networking problems. For example, the Linux kernel’s socket redirect feature allows users to redirect network traffic from one socket to another. It has the potential to be a useful tool for managing and securing network communication in cloud native environments.
With socket redirect, two processes can communicate directly, much like UNIX domain socket communication, but without the need to create TCP packets. The cost of traversing the TCP stack differs depending on whether communication runs in "no proxy" or "Cilium in-kernel" mode.
Cilium and Gloo Network
Gloo Network enables enterprise Kubernetes networking via a modular CNI (Container Network Interface) architecture. By enabling advanced routing and observability, Gloo Network extends robust application networking to platform engineering and DevOps teams.
To deliver this level of security, Solo has integrated the capabilities of the open source Cilium project, Linux kernel-level eBPF security, and the Kubernetes CNI layer. These features enable platform teams to gain advanced management and monitoring capabilities for their networking stack as a turn-key operation with Gloo Network. As a result, Gloo Network customers gain deeper control over connection handling, load balancing, and redirection even before traffic reaches the workloads. This enables more efficient enforcement of policies in accordance with your organization's security profiles.
Using Gloo Network with an Istio service mesh managed by Gloo Mesh creates a multi-layer defense mechanism that helps protect cloud native applications from compromise. This includes auto-detection of clusters and cluster services, as well as multi-tenancy across the service mesh. Gloo Network also enhances scaling of application network policy and observability across multi-cluster deployments. This combination not only enables networking, security, and observability, but also inherits all of the advanced functionality that Gloo Mesh delivers above and beyond standard Istio.