Exploring Cilium Layer 7 Capabilities Compared to Istio

My mental model has always been that CNI focuses on Layer 3 and 4, so I was surprised to learn that Cilium CNI supports Layer 7 (L7) policies. I decided to dig in to learn more about its L7 policy support and how it compares with Istio’s L7 policies. Below are the top four things I learned:

Cilium’s Layer 7 policy is simple to use with its own Envoy filter

Installing Cilium CNI is very straightforward, and I love the `cilium status` command! I installed the latest stable version, which is v1.12. I found it pretty easy to create Cilium’s L7 policy, since I can simply add HTTP rules towards the end of my existing L4 CiliumNetworkPolicy resource:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "service-account"
spec:
  endpointSelector:
    matchLabels:
      io.cilium.k8s.policy.serviceaccount: helloworld
  ingress:
  - fromEndpoints:
    - matchLabels:
        io.cilium.k8s.policy.serviceaccount: sleep
    toPorts:
    - ports:
      - port: "5000"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/hello"

It worked nicely when I tested it in my local environment: my client pod (sleep) can no longer call the server pod (helloworld) on any path other than `/hello` on port `5000`.

kubectl exec -it $(kubectl get po -lapp=sleep -ojsonpath='{.items[0].metadata.name}') -- curl helloworld:5000/hello
kubectl exec -it $(kubectl get po -lapp=sleep -ojsonpath='{.items[0].metadata.name}') -- curl helloworld:5000/hello3

Hello version: v1, instance: helloworld-v1-cross-node-55446d46d8-d8qm5
Access denied

So, how does this work? Cilium is installed as a DaemonSet that runs on each Kubernetes worker node. Inside the Cilium pod, an Envoy proxy runs to mediate any traffic into pods (on the same node as the Cilium pod) that have L7 policies. In the example above, when the sleep pod calls the helloworld pod, the Envoy proxy inside the Cilium pod on the node where the helloworld pod runs intercepts the traffic and checks whether it is allowed by any L7 policy applied to the helloworld pod.
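As a quick sanity check (a hedged aside; the exact output columns vary by Cilium version), you can ask the Cilium agent on the node where helloworld runs which endpoints it manages and whether ingress policy enforcement is now enabled for the helloworld endpoint:

# Run against the Cilium agent pod on the helloworld node (finding that pod is shown just below).
kubectl -n kube-system exec -it <cilium-agent-pod> -- cilium endpoint list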

Digging into the Envoy configuration on the Cilium pod:

node=kind-worker
pod=$(kubectl -n kube-system get pods -l k8s-app=cilium -o json | jq -r ".items[] | select(.spec.nodeName==\"${node}\") | .metadata.name" | tail -1)
kubectl exec -n kube-system -it $pod -- curl -s --unix-socket /var/run/cilium/envoy-admin.sock http://localhost/config_dump

You’ll find Cilium’s own extension to Envoy (the `cilium.l7policy` HTTP filter, typed `cilium.L7Policy`) inserted in the `cilium-HTTP-ingress:11055` listener, right before the router filter.

           "http_filters": [
            {
             "name": "cilium.l7policy",
             "typed_config": {
              "@type": "type.googleapis.com/cilium.L7Policy",
              "access_log_path": "/var/run/cilium/access_log.sock"
             }
            },
            {
             "name": "envoy.filters.http.router"
            }
           ],

In this case, the Envoy proxy uses xDS to obtain its normal configuration, along with Cilium’s L7 policies, from its xDS control plane (the Cilium agent). Cilium ships its own custom L7 Envoy filter in its Envoy distribution, which evaluates those policies against traffic to determine whether it should be allowed. The xDS response from the control plane contains the network policy for endpoint ID 69 (the helloworld-v1-cross-node-55446d46d8-d8qm5 pod), with an ingress policy that only allows GET requests to the `/hello` path for pod IP 10.244.2.232:

{
  "versionInfo": "23",
  "resources": [
…
{"@type":"type.googleapis.com/cilium.NetworkPolicy","conntrackMapName":"global","egressPerPortPolicies":[{}],"endpointId":"69","ingressPerPortPolicies":[{"port":5000,"rules":[{"httpRules":{"httpRules":[{"headers":[{"name":":method","safeRegexMatch":{"googleRe2":{},"regex":"GET"}},{"name":":path","safeRegexMatch":{"googleRe2":{},"regex":"/hello"}}]}]}}]}],"name":"10.244.2.232"}
  ],
  "typeUrl": "type.googleapis.com/cilium.NetworkPolicy",
  "nonce": "23"
}

Formatting it for an easy read, you can see it contains the HTTP rule from the L7 policy applied earlier:

{
  "@type": "type.googleapis.com/cilium.NetworkPolicy",
  "conntrackMapName": "global",
  "egressPerPortPolicies": [
    {}
  ],
  "endpointId": "69",
  "ingressPerPortPolicies": [
    {
      "port": 5000,
      "rules": [
        {
          "httpRules": {
            "httpRules": [
              {
                "headers": [
                  {
                    "name": ":method",
                    "safeRegexMatch": {
                      "googleRe2": {},
                      "regex": "GET"
                    }
                  },
                  {
                    "name": ":path",
                    "safeRegexMatch": {
                      "googleRe2": {},
                      "regex": "/hello"
                    }
                  }
                ]
              }
            ]
          }
        }
      ]
    }
  ],
  "name": "10.244.2.232"
}
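You can also view the same policy from the agent’s point of view rather than Envoy’s. The agent exposes its computed policy repository via `cilium policy get` (an optional check, reusing the same $pod variable as above):

# Dump the policy repository held by the Cilium agent; the HTTP rule for the helloworld endpoint shows up here too.
kubectl -n kube-system exec -it $pod -- cilium policy get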

Display the endpoint ID 69 details:

kubectl get ciliumendpoint
NAME                                    	ENDPOINT ID   IDENTITY ID   INGRESS ENFORCEMENT   EGRESS ENFORCEMENT   VISIBILITY POLICY   ENDPOINT STATE   IPV4       	IPV6
helloworld-v1-cross-node-55446d46d8-d8qm5   69        	32225                                                                    	ready        	10.244.2.232

kubectl get ciliumendpoint helloworld-v1-cross-node-55446d46d8-d8qm5 -o yaml
apiVersion: cilium.io/v2
kind: CiliumEndpoint
metadata:
  creationTimestamp: "2022-07-20T15:11:43Z"
  generation: 1
  labels:
    app: helloworld
    pod-template-hash: 55446d46d8
    version: v1
  name: helloworld-v1-cross-node-55446d46d8-d8qm5
  namespace: default
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: helloworld-v1-cross-node-55446d46d8-d8qm5
    uid: 684420c7-db2b-4a6e-ab5d-7fe0917fadbc
  resourceVersion: "65306"
  uid: 09caf26b-403c-4e8d-9427-77d2d0cb58e6
status:
  encryption: {}
  external-identifiers:
    container-id: dce3321c28b6a67b6509cccc49d64e241ae928ea96790be574cc963d209af605
    k8s-namespace: default
    k8s-pod-name: helloworld-v1-cross-node-55446d46d8-d8qm5
    pod-name: default/helloworld-v1-cross-node-55446d46d8-d8qm5
  id: 69
  identity:
    id: 32225
    labels:
    - k8s:app=helloworld
    - k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
    - k8s:io.cilium.k8s.policy.cluster=default
    - k8s:io.cilium.k8s.policy.serviceaccount=helloworld
    - k8s:io.kubernetes.pod.namespace=default
    - k8s:version=v1
  networking:
    addressing:
    - ipv4: 10.244.2.232
    node: 172.18.0.3
  state: ready

The above approach, using the cilium.L7Policy filter for L7 access control, is quite different from how Istio enforces L7 policy. For example, Istio uses the RBAC filter from upstream Envoy to authorize actions by identified clients.
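For comparison, a rough Istio equivalent of the Cilium policy above would be an AuthorizationPolicy like the sketch below. It assumes the same helloworld and sleep workloads in the default namespace (the policy name is mine), and it keys off the sleep service account’s SPIFFE identity rather than labels:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: helloworld-allow-get-hello   # illustrative name
  namespace: default
spec:
  selector:
    matchLabels:
      app: helloworld
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/sleep"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/hello"]
        ports: ["5000"]

Because this is an ALLOW policy attached to the helloworld workload, any request that does not match one of its rules is denied, mirroring the allow-list behavior of the Cilium L7 rule.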

Cilium vs Istio: How are identities generated?

Given that Cilium supports L7 policies, if I am already using Cilium as a CNI for L3/L4 policies, can I use Cilium’s L7 policies to achieve a zero trust network? Identity is central here: it is critical to prove that the source and target pods really are who they claim to be. Let us dive into how identity is derived in Cilium compared with Istio.

From the generated CiliumEndpoint custom resource for my helloworld-v1-cross-node pod, the identity of the pod is 32225. Use the command below to display the identity 32225 details:

kubectl get ciliumidentity 32225 -o yaml
apiVersion: cilium.io/v2
kind: CiliumIdentity
metadata:
  creationTimestamp: "2022-07-20T21:19:37Z"
  generation: 1
  labels:
    app: helloworld
    io.cilium.k8s.policy.cluster: default
    io.cilium.k8s.policy.serviceaccount: helloworld
    io.kubernetes.pod.namespace: default
    version: v1
  name: "32225"
  resourceVersion: "4130"
  uid: b942b33c-741f-48a8-b294-e0028501043c
security-labels:
  k8s:app: helloworld
  k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name: default
  k8s:io.cilium.k8s.policy.cluster: default
  k8s:io.cilium.k8s.policy.serviceaccount: helloworld
  k8s:io.kubernetes.pod.namespace: default
  k8s:version: v1

By default in Cilium (without IPsec or WireGuard), Kubernetes pod information is stored in eBPF maps, and L3/L4 policy enforcement is executed in eBPF. The map correlates pod IPs to their identities, which are essentially integers (as you saw above) generated from pod labels and pod properties such as the namespace. When Cilium receives an incoming connection, it looks up the pod IP to find the corresponding identity in the eBPF map, then uses that identity to check whether the connection is allowed by the relevant network policies. The source of identity is not a cryptographic primitive; it is a network identity, i.e. the IP of the pod, which is subject to eventual consistency and offers weaker guarantees. If you are worried about someone in the cluster spoofing pod IP addresses, or your pods churn a lot and pod IPs get reused (which is typical in Kubernetes), you may be concerned about an identity derived from the network.
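You can see this IP-to-identity mapping for yourself on any Cilium agent pod; a hedged example using the same $pod from earlier (the output format varies by version):

# Show the datapath's IP-to-identity cache: each pod IP maps to a numeric identity.
kubectl -n kube-system exec -it $pod -- cilium bpf ipcache list

# Show the label-derived identities the agent knows about (32225 for helloworld here).
kubectl -n kube-system exec -it $pod -- cilium identity list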

Let us walk through how Istio creates service identity for pods in the Istio service mesh. A service account token is provisioned by Kubernetes and mounted into the pod, and the Istio agent exchanges that service account token for a client certificate by sending a Certificate Signing Request (CSR) to the Istio CA (or an external CA).
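You can confirm that the sidecar actually received a certificate from this CSR flow with istioctl (an optional check; the pod lookup mirrors the commands used earlier):

# Show the workload certificate and root certificate that the sleep sidecar received over SDS.
istioctl proxy-config secret $(kubectl get po -lapp=sleep -ojsonpath='{.items[0].metadata.name}')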

When the client connects to the server, the client asks the server to present its certificates, and the server also requests the client’s certificates. For example:

kubectl exec -it $(kubectl get po -lapp=sleep -ojsonpath='{.items[0].metadata.name}') -c istio-proxy -- openssl s_client -connect helloworld:5000 -showcerts
CONNECTED(00000003)
Can't use SSL_get_servername
depth=1 O = cluster.local
verify error:num=19:self signed certificate in certificate chain
verify return:1
depth=1 O = cluster.local
verify return:1
depth=0
verify return:1
---
Certificate chain
 0 s:
   i:O = cluster.local
-----BEGIN CERTIFICATE-----
MIIDQzCCAiugAwIBAgIQPox+VZtC7n3i+B9PhHsqnzANBgkqhkiG9w0BAQsFADAY
MRYwFAYDVQQKEw1jbHVzdGVyLmxvY2FsMB4XDTIyMDcxNDE5MTMzMVoXDTIyMDcx
NTE5MTUzMVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJTTlVl7
rtZOvyy6CVoiK1lxu9vDI19a5jT6AMMwx5SHsgWLzM/PI7nbt8d3F75kMzYlk3Wi
to0El0HD/LGkwZmjf5dmzmySZYS2FUVa+BxgSA6n6bj6wubAQotJYi6rBIML+2zr
DPi/7Z9HdiUphOeLCfkxE9IlStR3/6+LfpOL51jH+Ibnz5nR7fOkA1iyg+6YA3eh
l1oesFosltHaUawPn4qKgZiyN3Lrjw3UgcJ+xGgL8GSZWV09ffcRJzquazRPPy3G
LDo6isXaqNtlJoQa/W3aiuGnNmeUP4G3aPJGz8adWjC2GPxQYh3vlRAbADf3W1mR
CqB6bu7S1DFJj20CAwEAAaOBoDCBnTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYw
FAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAU
TE+MDVW7GkIT/RaO8b4xmAPXCpwwPQYDVR0RAQH/BDMwMYYvc3BpZmZlOi8vY2x1
c3Rlci5sb2NhbC9ucy9kZWZhdWx0L3NhL2hlbGxvd29ybGQwDQYJKoZIhvcNAQEL
BQADggEBAFlUuLMjtEKUB/VbyBSPJPfLwLmVEDb/lOVrM3Ny4kN2dxXFn3xmb71c
WGwlzX6dk6cF663ClXnxEpySG2qoRRDV4flF4poRgMczrhtv6BE+60bfod0rvRxT
yiiQRSb8oT5xGoAWx6O6vJELdHLhdFXMxW1OrfHyFisZlysxavPTwG9+0ifmS+yJ
HHgl1etQZ16xuWTbpSxwuqbFBg4et7qSFi7y/onJxNps1PYOpsOh1k6DWPX+r+/C
nCNLd/3mONR5yHegHYXtHA3FFJyOo7wEJSOFT+qd7JpniWSGh2smSHmITEjM3bnC
9TD+Q2tAf0cMQfHcauSs8ixxeMGdmvE=
-----END CERTIFICATE-----
 1 s:O = cluster.local
   i:O = cluster.local
-----BEGIN CERTIFICATE-----
MIIC/TCCAeWgAwIBAgIRAJhHLsuxTx3IFWtB8GpXH2MwDQYJKoZIhvcNAQELBQAw
GDEWMBQGA1UEChMNY2x1c3Rlci5sb2NhbDAeFw0yMjA3MTQxODQxNTJaFw0zMjA3
MTExODQxNTJaMBgxFjAUBgNVBAoTDWNsdXN0ZXIubG9jYWwwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQCwvjRMrYcQM0yDisCissbwsr/U72NFMWeMwM5Y
l4UuGvwqopbihcX9dujchga/FXZVlZxcSbj0VHK/QziklA7cSsffalS9tr7ZZxBv
uBcyN6Uyw/w0UI7g+lpLfL5FehnXpDXzVGZzJAqOcOLHOCE7K7z+uLyIbpZlT88J
ROI6ealK0uair9yk3Y38WfPIUl3KXGioBzNub/OAFjLqjEheNJbVPvyxtWXK3fIp
tK/g2MGqO/QvlgnuiW2ZTrY5zSX/xDs+LWY02KzJq0PKy+0j76K8rIbeo6hJVsVZ
sAxic8/Y5brRwkAzE5uxd/L5IEMB9PD1NcX9CoAFyVsh6PH5AgMBAAGjQjBAMA4G
A1UdDwEB/wQEAwICBDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRMT4wNVbsa
QhP9Fo7xvjGYA9cKnDANBgkqhkiG9w0BAQsFAAOCAQEAf1JofCaG/S0v1l/0RlqK
3qXbm68QFJTv1blZ98f8LWRfcgTw7kxR0LLNq9L0TCeRhmfQJXqxsz8v4bqFdWqH
fTdIHJLe3uABpu00L23JV9P/Xtz1edQ+m/gS047P7D6zaiV1R5oyyTVgm1hrWWYX
G4TPEBqyqQ53DpeIH9fvRj0sfqULkN7ZuF9Gmoc995+Qc15qbiIjBOXSI0jaO0X+
ESHRRiVvZBuq5ePObHReAY0wcdfmXhIDRi4P0kmq3CkcLcItDRgHL/605ltl8rTE
AZ3J6CczzDtt/CDMhiVNqMg8MIdU8PwYj0s3sPHjKQBeZ5WPnnDYiTYprq5kez2G
bg==
-----END CERTIFICATE-----
---
Server certificate
subject=

issuer=O = cluster.local

---
Acceptable client certificate CA names
O = cluster.local
Requested Signature Algorithms: ECDSA+SHA256:RSA-PSS+SHA256:RSA+SHA256:ECDSA+SHA384:RSA-PSS+SHA384:RSA+SHA384:RSA-PSS+SHA512:RSA+SHA512:RSA+SHA1
Shared Requested Signature Algorithms: ECDSA+SHA256:RSA-PSS+SHA256:RSA+SHA256:ECDSA+SHA384:RSA-PSS+SHA384:RSA+SHA384:RSA-PSS+SHA512:RSA+SHA512
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2168 bytes and written 393 bytes
Verification error: self signed certificate in certificate chain
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 19 (self signed certificate in certificate chain)
---
140555461989696:error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required:../ssl/record/rec_layer_s3.c:1543:SSL alert number 116
command terminated with exit code 1

If you step through the first certificate, you can see the helloworld workload’s X.509 certificate contains the issuer, the SAN with the SPIFFE ID, and the validity period (it expires in 24 hours!):

echo "-----BEGIN CERTIFICATE-----
MIIDQzCCAiugAwIBAgIQPox+VZtC7n3i+B9PhHsqnzANBgkqhkiG9w0BAQsFADAY
MRYwFAYDVQQKEw1jbHVzdGVyLmxvY2FsMB4XDTIyMDcxNDE5MTMzMVoXDTIyMDcx
NTE5MTUzMVowADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAJTTlVl7
rtZOvyy6CVoiK1lxu9vDI19a5jT6AMMwx5SHsgWLzM/PI7nbt8d3F75kMzYlk3Wi
to0El0HD/LGkwZmjf5dmzmySZYS2FUVa+BxgSA6n6bj6wubAQotJYi6rBIML+2zr
DPi/7Z9HdiUphOeLCfkxE9IlStR3/6+LfpOL51jH+Ibnz5nR7fOkA1iyg+6YA3eh
l1oesFosltHaUawPn4qKgZiyN3Lrjw3UgcJ+xGgL8GSZWV09ffcRJzquazRPPy3G
LDo6isXaqNtlJoQa/W3aiuGnNmeUP4G3aPJGz8adWjC2GPxQYh3vlRAbADf3W1mR
CqB6bu7S1DFJj20CAwEAAaOBoDCBnTAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYw
FAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHwYDVR0jBBgwFoAU
TE+MDVW7GkIT/RaO8b4xmAPXCpwwPQYDVR0RAQH/BDMwMYYvc3BpZmZlOi8vY2x1
c3Rlci5sb2NhbC9ucy9kZWZhdWx0L3NhL2hlbGxvd29ybGQwDQYJKoZIhvcNAQEL
BQADggEBAFlUuLMjtEKUB/VbyBSPJPfLwLmVEDb/lOVrM3Ny4kN2dxXFn3xmb71c
WGwlzX6dk6cF663ClXnxEpySG2qoRRDV4flF4poRgMczrhtv6BE+60bfod0rvRxT
yiiQRSb8oT5xGoAWx6O6vJELdHLhdFXMxW1OrfHyFisZlysxavPTwG9+0ifmS+yJ
HHgl1etQZ16xuWTbpSxwuqbFBg4et7qSFi7y/onJxNps1PYOpsOh1k6DWPX+r+/C
nCNLd/3mONR5yHegHYXtHA3FFJyOo7wEJSOFT+qd7JpniWSGh2smSHmITEjM3bnC
9TD+Q2tAf0cMQfHcauSs8ixxeMGdmvE=
-----END CERTIFICATE-----" | step certificate inspect -
Certificate:
	Data:
    	Version: 3 (0x2)
    	Serial Number: 83141619664914625682796259868677974687 (0x3e8c7e559b42ee7de2f81f4f847b2a9f)
	Signature Algorithm: SHA256-RSA
    	Issuer: O=cluster.local
    	Validity
        	Not Before: Jul 14 19:13:31 2022 UTC
        	Not After : Jul 15 19:15:31 2022 UTC
    	Subject:
    	Subject Public Key Info:
        	Public Key Algorithm: RSA
            	Public-Key: (2048 bit)
            	Modulus:
                	94:d3:95:59:7b:ae:d6:4e:bf:2c:ba:09:5a:22:2b:
                	59:71:bb:db:c3:23:5f:5a:e6:34:fa:00:c3:30:c7:
                	94:87:b2:05:8b:cc:cf:cf:23:b9:db:b7:c7:77:17:
                	be:64:33:36:25:93:75:a2:b6:8d:04:97:41:c3:fc:
                	b1:a4:c1:99:a3:7f:97:66:ce:6c:92:65:84:b6:15:
                	45:5a:f8:1c:60:48:0e:a7:e9:b8:fa:c2:e6:c0:42:
                	8b:49:62:2e:ab:04:83:0b:fb:6c:eb:0c:f8:bf:ed:
                	9f:47:76:25:29:84:e7:8b:09:f9:31:13:d2:25:4a:
                	d4:77:ff:af:8b:7e:93:8b:e7:58:c7:f8:86:e7:cf:
                	99:d1:ed:f3:a4:03:58:b2:83:ee:98:03:77:a1:97:
                	5a:1e:b0:5a:2c:96:d1:da:51:ac:0f:9f:8a:8a:81:
                	98:b2:37:72:eb:8f:0d:d4:81:c2:7e:c4:68:0b:f0:
                	64:99:59:5d:3d:7d:f7:11:27:3a:ae:6b:34:4f:3f:
                	2d:c6:2c:3a:3a:8a:c5:da:a8:db:65:26:84:1a:fd:
                	6d:da:8a:e1:a7:36:67:94:3f:81:b7:68:f2:46:cf:
                	c6:9d:5a:30:b6:18:fc:50:62:1d:ef:95:10:1b:00:
                	37:f7:5b:59:91:0a:a0:7a:6e:ee:d2:d4:31:49:8f:
                	6d
            	Exponent: 65537 (0x10001)
    	X509v3 extensions:
        	X509v3 Key Usage: critical
            	Digital Signature, Key Encipherment
        	X509v3 Extended Key Usage:
            	Server Authentication, Client Authentication
        	X509v3 Basic Constraints: critical
            	CA:FALSE
        	X509v3 Authority Key Identifier:
            	keyid:4C:4F:8C:0D:55:BB:1A:42:13:FD:16:8E:F1:BE:31:98:03:D7:0A:9C
        	X509v3 Subject Alternative Name: critical
            	URI:spiffe://cluster.local/ns/default/sa/helloworld
	Signature Algorithm: SHA256-RSA
     	59:54:b8:b3:23:b4:42:94:07:f5:5b:c8:14:8f:24:f7:cb:c0:
     	b9:95:10:36:ff:94:e5:6b:33:73:72:e2:43:76:77:15:c5:9f:
     	7c:66:6f:bd:5c:58:6c:25:cd:7e:9d:93:a7:05:eb:ad:c2:95:
     	79:f1:12:9c:92:1b:6a:a8:45:10:d5:e1:f9:45:e2:9a:11:80:
     	c7:33:ae:1b:6f:e8:11:3e:eb:46:df:a1:dd:2b:bd:1c:53:ca:
     	28:90:45:26:fc:a1:3e:71:1a:80:16:c7:a3:ba:bc:91:0b:74:
     	72:e1:74:55:cc:c5:6d:4e:ad:f1:f2:16:2b:19:97:2b:31:6a:
     	f3:d3:c0:6f:7e:d2:27:e6:4b:ec:89:1c:78:25:d5:eb:50:67:
     	5e:b1:b9:64:db:a5:2c:70:ba:a6:c5:06:0e:1e:b7:ba:92:16:
     	2e:f2:fe:89:c9:c4:da:6c:d4:f6:0e:a6:c3:a1:d6:4e:83:58:
     	f5:fe:af:ef:c2:9c:23:4b:77:fd:e6:38:d4:79:c8:77:a0:1d:
     	85:ed:1c:0d:c5:14:9c:8e:a3:bc:04:25:23:85:4f:ea:9d:ec:
     	9a:67:89:64:86:87:6b:26:48:79:88:4c:48:cc:dd:b9:c2:f5:
     	30:fe:43:6b:40:7f:47:0c:41:f1:dc:6a:e4:ac:f2:2c:71:78:
     	c1:9d:9a:f1

Even if the pod IP changes as the pod goes up and down, you cannot mistake the identity for anything else because it is embedded in the connection itself; the connection simply would not be established with the wrong identity. And it is not just a certificate: a pod MUST present a valid service account token to Istio, which is exchanged for a valid certificate via the CSR flow. Pods never send private keys over the network. Further, the CSR process continues throughout the pod lifecycle, as the certificate is renewed every 12 hours in Istio.

Cilium vs Istio: How is traffic encrypted?

By default, there is no encryption between nodes in Cilium. Optionally, you can enable pod-to-pod and/or node-to-node encryption via IPsec or WireGuard. I didn’t try either of them because IPsec node-to-node encryption was beta in v1.11 (it graduated out of beta in v1.12 a few days ago, but I haven’t had a chance to play with it!), and WireGuard encryption doesn’t support L7 policy enforcement. Per the Cilium team, pod-to-pod encryption is the recommended solution for avoiding IP address spoofing and is widely used in large-scale production deployments of Cilium.
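If you do want to experiment with it, transparent encryption is driven by Helm values; a minimal sketch for WireGuard, assuming a Helm-based install of Cilium 1.12, looks like this:

# Enable WireGuard pod-to-pod encryption (note: L7 policy enforcement is not supported in this mode).
helm upgrade cilium cilium/cilium --namespace kube-system \
  --reuse-values \
  --set encryption.enabled=true \
  --set encryption.type=wireguard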

Istio automatically encrypts traffic using mutual TLS whenever possible. Mutual TLS alone is not always enough to fully secure traffic, as it provides only authentication, not authorization: anyone with a valid certificate can still access a service. To fully lock down traffic, it is recommended to configure authorization policies, which let you create fine-grained rules to allow or deny traffic.
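For example, a minimal sketch to enforce strict mTLS mesh-wide is a single PeerAuthentication resource in the root namespace, combined with AuthorizationPolicy resources like the earlier sketch for the actual authorization decisions:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the mesh root namespace in a default install
spec:
  mtls:
    mode: STRICT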

Multi-tenancy of Envoy for Layer 7

With Cilium, the L7 policy is evaluated by the Envoy proxy on every node; that per-node Envoy handles L7 processing for multiple pods running on the same node. With Istio, the L7 policy is evaluated at every pod, so you need an Envoy proxy per pod, which may incur higher running costs compared with running one Envoy per node as Cilium does. But with Cilium, the Envoy on the node is doing L7 processing for multiple identities. If you look at the Envoy CVEs, you’ll see that most of them are L7-related, and the probability of hitting a security issue is higher when one Envoy processes L7 policies for multiple pods than when one Envoy processes L7 policies only for its own pod. WebAssembly (Wasm) is a great way to provide custom Envoy extensions based on requirements from individual teams, but a Wasm filter could have a bug (for example, an infinite loop) that makes the Envoy proxy hang, impacting every other team with pods running on the same node. You have to be extremely careful with L7 processing in a per-node Envoy to minimize the impact on other pods on the same node.
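With per-pod proxies in Istio, at least the blast radius of a risky extension can be scoped. The sketch below (the plugin name and OCI image URL are made up for illustration) attaches a Wasm filter only to the helloworld workload, so a misbehaving filter cannot take down other teams’ traffic on the node:

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: helloworld-custom-filter    # hypothetical name
  namespace: default
spec:
  selector:
    matchLabels:
      app: helloworld               # only helloworld's sidecars load this filter
  url: oci://example.com/team-a/custom-filter:v1   # hypothetical image
  phase: AUTHN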

There is quite a bit of information on why multi-tenancy for Envoy (or other proxies) at Layer 7 has a huge catch. For example, this tweet summarizes the problems with a multi-tenant L7 proxy in terms of outages, noisy neighbors, budgeting, and cost attribution, and notes that the Envoy team evaluated how hard it would be to implement multi-tenancy in Envoy and concluded that the complexity isn’t worth the effort.

Wrapping up

Mutual TLS (mTLS) is used everywhere, and the cryptographic modules used by mTLS can be FIPS 140-2 compliant, which is desired by many of our enterprise and government customers. Network cache-based identity may fail when a pod dies and a new pod is created that gets the old pod’s IP but has a different identity; because new pod information propagates slowly to the Cilium agents, the new pod could end up with a mistaken identity. (To read more on mistaken identity, refer to this blog for details.) Enabling multi-tenant proxies for L7 policies is complicated and can cause outage, noisy neighbor, budgeting, and cost attribution concerns that spill over from one tenant to other tenants on the same node.

To achieve defense in depth, you should consider L3/L4 network policies in addition to L7 security policies from a service mesh that provides cryptographic identity. Combining the two is highly recommended as part of the Istio security best practices. In the event of a compromised pod or a security vulnerability in the cluster, defense in depth will limit or stop an attacker’s progress, whether that is a man-in-the-middle (MITM) attack or IP address spoofing.
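As a concrete illustration of that layering, a plain Kubernetes NetworkPolicy (enforced by the CNI at L3/L4) can sit underneath the mesh’s L7 authorization policy sketched earlier; this one restricts helloworld’s ingress to the sleep pods on port 5000:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: helloworld-from-sleep   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: helloworld
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: sleep
    ports:
    - protocol: TCP
      port: 5000

The network policy bounds who can even open a connection, while the mesh policy decides what an authenticated identity is allowed to do once connected.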