Cilium Week Recap: Preparing for Cilium Certification

CNCF announced that the Cilium Certified Associate (CCA) certification will be released in 2024. CCA is an entry-level certification for platform and cloud engineers interested in networking, security, and observability. The exam will consist of multiple-choice questions and will be 90 minutes long.

To prepare the community for the certification and give them a set of resources to use, we created three YouTube live streams, collectively called Cilium Week, that cover the majority of topics from the CCA certification curriculum.

Cilium Architecture and Installation

In the first session, we covered the topics of architecture and installation. We started the stream by explaining how Kubernetes networking works – the four networking problems Kubernetes needs to solve, and how kube-proxy and CNI plugins solve them.

  1. Container-to-container communication
    Container-to-container communication is solved by pods through the use of Linux network namespaces. A network namespace gives a pod its own set of network interfaces and routing tables, separate from the host system.
  2. Pod-to-pod communication
    Each node in a Kubernetes cluster is assigned a different CIDR range from which it allocates pod IP addresses, which guarantees a unique IP address for every pod in the cluster. Each pod is connected to the node's network namespace through a virtual Ethernet (veth) pair; together with a virtual bridge on the host, this lets two pods on the same node communicate (see the first sketch after this list).
  3. Pod-to-service communication
    Services in Kubernetes give us a stable virtual IP address, as opposed to the ephemeral IP addresses of pods. Each virtual IP is backed by one or more pods, and kube-proxy, using iptables or IPVS (IP Virtual Server) rules, load balances traffic sent to the Service across those backing pods.
  4. External communication (ingress/egress)
    External communication answers the question of how we route traffic leaving the cluster (for example, from a pod or Service to the internet) and traffic entering the cluster (for example, from the internet to a Kubernetes Service). Egress is handled by a gateway that performs network address translation (NAT), replacing a pod's internal IP address with the node's public IP address so that responses can find their way back; on the return path, the same translation happens in reverse. For traffic entering the cluster we need a public or external IP address: a Kubernetes LoadBalancer Service gives us an IP address where we can send requests. As traffic reaches the LoadBalancer it gets routed to one of the nodes in the cluster, and with the help of iptables rules and NAT, the packets are directed to one of the pods backing the Service (see the second sketch after this list).

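Below is a minimal sketch of the pod-to-pod plumbing as seen from a node, assuming shell access to a node and a generic veth-plus-bridge CNI setup; interface names and the container PID are illustrative:

    # Host side of each pod's veth pair
    ip link show type veth
    # Per-node PodCIDR routes installed for pod traffic
    ip route
    # View a pod's own interfaces from inside its network namespace
    # (substitute the PID of a container running in the pod)
    nsenter -t <pid> -n ip addr
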
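And a minimal sketch of a LoadBalancer Service, the piece that admits traffic into the cluster; the name, labels, and ports are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      type: LoadBalancer   # requests an external IP from the environment
      selector:
        app: web           # traffic is load balanced across pods with this label
      ports:
        - port: 80         # the Service's virtual/external port
          targetPort: 8080 # the port the backing pods listen on
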
We explained how eBPF allows us to run sandboxed programs in the kernel, which enables Cilium to provide, secure, and observe network connectivity between workloads.
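
The programs Cilium attaches are ordinary eBPF objects, so standard kernel tooling can list them; a small sketch, assuming shell access to a node running Cilium:

    # List eBPF programs currently loaded in the kernel (output abbreviated)
    bpftool prog show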

The session concluded with a high-level overview of Cilium’s features and a demo showing how to install Cilium on your Kubernetes cluster.
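
A minimal install sketch using the Cilium CLI, assuming a running cluster and a current kubeconfig context; the stream may have used different flags:

    # Install Cilium into the cluster behind the current kubectl context
    cilium install
    # Wait for all Cilium components to report ready
    cilium status --wait
    # Optionally validate the datapath with the built-in test suite
    cilium connectivity test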

Cilium Network Policy

In the second session, we focused on network policy and network observability. Cilium decouples security from network addressing; security in Cilium is based on the identity of the pod, which is derived from the pod's labels.
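
Since identities are stored in the cluster as resources, you can inspect the label-to-identity mapping directly; the identity ID below is illustrative:

    # List Cilium security identities in the cluster
    kubectl get ciliumidentities
    # Show the labels that a particular identity was derived from
    kubectl get ciliumidentity 12345 -o yaml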

The NetworkPolicy resource is part of the standard Kubernetes API; however, Kubernetes itself ships no implementation for it – enforcement is left to the CNI plugin, and Cilium provides it. NetworkPolicy allows us to control traffic at L3 and L4, but it doesn't support L7.
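
For example, a standard NetworkPolicy that allows ingress to backend pods from frontend pods on a single TCP port; the namespace, labels, and port are illustrative:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: backend          # the pods this policy applies to
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend # only traffic from these pods is allowed
          ports:
            - protocol: TCP
              port: 8080        # L4 is as deep as NetworkPolicy can go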

In addition to implementing NetworkPolicy, Cilium brings in two new CRDs, CiliumNetworkPolicy and CiliumClusterwideNetworkPolicy. These resources add L7 support through an Envoy proxy, covering the HTTP, gRPC, and Kafka protocols, and include enhanced ingress and egress policies unavailable in the traditional NetworkPolicy resource.
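
A hedged sketch of the L7 counterpart as a CiliumNetworkPolicy: the same L3/L4 rule, narrowed so the frontend may only issue GET requests under /api/v1/; labels and paths are illustrative:

    apiVersion: cilium.io/v2
    kind: CiliumNetworkPolicy
    metadata:
      name: allow-frontend-get
    spec:
      endpointSelector:
        matchLabels:
          app: backend
      ingress:
        - fromEndpoints:
            - matchLabels:
                app: frontend
          toPorts:
            - ports:
                - port: "8080"
                  protocol: TCP
              rules:
                http:            # L7 rules are enforced by Envoy
                  - method: GET
                    path: "/api/v1/.*"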

We wrapped up this session with a demo of network policies and observability using the Hubble CLI and UI.
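
A few representative Hubble CLI invocations, assuming Hubble has been enabled and the relay is port-forwarded; the pod name is illustrative:

    # Expose the Hubble relay locally
    cilium hubble port-forward &
    # Show recent flows that were dropped by policy
    hubble observe --verdict DROPPED --last 20
    # Follow flows to or from a specific pod
    hubble observe --pod default/frontend --follow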

BGP, Service Mesh, and Cluster Mesh

In this final session, we discussed advanced features that Cilium offers, specifically around service mesh, cluster mesh, and external networking. There are situations where pods need to communicate with pods in other clusters or with external, non-containerized workloads, and sometimes that traffic requires some form of encryption.

In the first part of the session, we reviewed external networking features, such as how BGP can be used to advertise PodCIDR and ServiceCIDR ranges (and associated IPAM pools) to upstream/northbound BGP routers, and saw this in action with a demo.
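
A hedged sketch of such a peering configuration using the CiliumBGPPeeringPolicy resource; the ASNs, peer address, and node selector are illustrative:

    apiVersion: cilium.io/v2alpha1
    kind: CiliumBGPPeeringPolicy
    metadata:
      name: rack0
    spec:
      nodeSelector:
        matchLabels:
          rack: rack0          # which nodes run this virtual router
      virtualRouters:
        - localASN: 64512
          exportPodCIDR: true  # advertise this node's PodCIDR upstream
          neighbors:
            - peerAddress: "10.0.0.1/32"
              peerASN: 64512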

We discussed onboarding external workloads into the Kubernetes network by running a Cilium agent on the workload itself, which establishes VXLAN connectivity back to the cluster and puts the workload on the same network. You can also integrate a VTEP-capable switch with Cilium so that it becomes part of the Kubernetes network. Finally, there is egress networking, which offers NAT-like functionality for traffic leaving the cluster.
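
Registering such a workload happens through a dedicated resource; a hedged sketch, with the name, labels, and allocation CIDR illustrative:

    apiVersion: cilium.io/v2
    kind: CiliumExternalWorkload
    metadata:
      name: legacy-vm        # must match the name the external agent registers with
      labels:
        app: legacy          # labels feed into the workload's security identity
    spec:
      ipv4-alloc-cidr: 10.192.1.0/30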

In the second part, we discussed Cilium service mesh and reviewed service mesh use cases:

  • Resiliency
  • L7 traffic management
  • Identity-based security
  • Observability & tracing
  • Transparency

And key capabilities:

  • Kubernetes Ingress (see the sketch after this list)
  • Gateway API
  • Mutual authentication and encryption
  • L7-aware traffic management
  • Sidecar-less

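The Kubernetes Ingress capability works with the standard Ingress resource; a minimal sketch, assuming Cilium's ingress controller is enabled, with the host, service name, and port illustrative:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
    spec:
      ingressClassName: cilium   # hand this Ingress to Cilium's controller
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: demo-service
                    port:
                      number: 80
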
Finally, we reviewed cluster mesh, a technology that connects multiple Kubernetes clusters together and allows full pod-to-pod routing across all clusters in the mesh.
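
Connecting two clusters is driven from the Cilium CLI; a hedged sketch, with the kubeconfig context names illustrative (the clusters must use non-overlapping PodCIDRs):

    # Enable cluster mesh in each cluster
    cilium clustermesh enable --context cluster-1
    cilium clustermesh enable --context cluster-2
    # Connect the two clusters (the connection is bidirectional)
    cilium clustermesh connect --context cluster-1 --destination-context cluster-2
    # Verify
    cilium clustermesh status --context cluster-1 --wait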

Further Preparation

Want to delve into Cilium even further? Take our free Introduction to Cilium course, and learn more about Gloo Network for Cilium.