Network Security 4 Ways

With the rise of Cloud Native Networking, there comes a point in time where you must leverage Network Security capabilities in your network stack to thwart malicious attacks and defend your microservices.

There are a number of ways to achieve this level of network security. More specifically, we’re going to look at this through the lens of a few open source technologies that can provide it:

  • Kubernetes, a container orchestration system
  • Cilium, a Container Network Interface (CNI) that acts as a Layer 3/4, and partially Layer 7, networking switch
  • Istio, a system for traffic engineering, observability, and security/encryption operating at Layer 7
  • Gloo Mesh, a multi-layer networking abstraction for Cilium, Istio, and Envoy


Kubernetes Network Policy 

Kubernetes by default has a resource called NetworkPolicy. This resource typically acts as a Layer 3/4 firewall and matches policy against labels assigned to specific resources: if a pod/deployment has a label, and that same label is defined in the Network Policy’s selector, the policy’s rules apply to that workload. Let’s briefly look at an example policy taken from the Kubernetes docs site.

In the Network Policy definition below, we see the following details:

  • A name for the policy and the namespace it’s deployed to
  • The PodSelector which looks for pods with the label “role: db”
  • The policy types which are Ingress and Egress
  • The specific rules in place that allow traffic from certain source IPs and ports, and towards certain CIDRs

Using the approach of Kubernetes-native network policy allows for granular traffic control based on CIDR and port information; however, this is limited to Layer 3/4. Using the Network Policy resource still requires a CNI that supports this capability. Virtually all CNIs support the Network Policy resource, with the exception of Flannel.

As it currently stands, Kubernetes 1.24’s Network Policy resource doesn’t allow you to do anything with TLS or log network events, and it comes with an additional list of limitations which can be viewed here.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
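
To try this out, you can apply and then inspect the policy with kubectl. A minimal sketch, assuming the manifest above is saved as test-network-policy.yaml:

kubectl apply -f test-network-policy.yaml
kubectl describe networkpolicy test-network-policy -n default

The describe output echoes back the pod selector and the ingress/egress rules, which is a quick way to confirm the policy landed where you expected.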


Cilium CNI and Network Policy

What if we want something a little different? It’s time to let Cilium and the CiliumNetworkPolicy shine!!!

The Cilium CNI is an eBPF-based container network plugin for Kubernetes that fundamentally shifts packet processing away from iptables and over to eBPF in the kernel. This allows for deeper inspection of various processes and of Layer 7 header information.

The CiliumNetworkPolicy CRD is a resource that addresses some of the gaps that the native Network Policy has. More specifically, the CiliumNetworkPolicy can define multiple layers of policies in a single YAML configuration. 

Let’s look at a Layer 3 focused example first:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "cidr-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  - toCIDR:
    - 20.1.1.1/32
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.96.0.0/12

The YAML configuration simply states that we are allowing traffic to the following CIDRs: 20.1.1.1/32 (which is a host route 😉), and 10.0.0.0/8 with the exception of 10.96.0.0/12.

10.96.0.0/12 is actually an internally used subnet within the KinD Kubernetes tool, known as the “service-CIDR”.
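
If you want to confirm the service CIDR of your own cluster, one hedged approach (it assumes the API server flags show up in the dump output) is:

kubectl cluster-info dump | grep -m 1 service-cluster-ip-range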

This basically looks like a Layer 3 firewall rule-set that matches on resources with the label “app: myService”.

This is great! And if you inspect it carefully, it looks very similar to the Kubernetes Network Policy.

Next up we have a Layer 4 specific rule which targets egress traffic towards a destination port and protocol.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l4-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
    - toPorts:
      - ports:
        - port: "80"
          protocol: TCP


What’s important here is that we are going beyond simple hostnames or CIDR information and filtering (allowing or denying) based on TCP port and protocol information. This allows for more granular traffic filtering, and it can be combined with the CIDR rules above, for example allowing only TCP port 80 towards a given subnet, as sketched below.
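
To illustrate, here’s a minimal sketch (not taken from the Cilium docs) that combines both layers in a single egress rule, reusing the CIDRs and labels from the examples above: only TCP port 80 is allowed, and only towards 10.0.0.0/8 minus the KinD service CIDR.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l3-l4-combined-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  # A single egress rule can pair CIDR constraints with port constraints
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.96.0.0/12
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP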

Both IP/CIDR and TCP policy types are demonstrated here, giving you a classic firewall-style rule-set: traffic matching a rule is accepted (packet allowed), and everything else is rejected.

Moving along, Layer 7 network policies are another function of the Cilium CNI, but, interestingly enough, it’s the Envoy proxy that processes these Layer 7 rules or policies. The Envoy proxy is embedded into the Cilium agent pod, which gets deployed via a DaemonSet. When a Layer 7 policy is deployed, this kicks off the cilium-envoy process alongside the cilium-agent and cilium-health-responder processes, and Envoy then processes the L7 policy.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "l7-rule"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  ingress:
  - toPorts:
    - ports:
      - port: '80'
        protocol: TCP
      rules:
        http:
        - method: GET
          path: "/path1$"
        - method: PUT
          path: "/path2$"
          headers:
          - 'X-My-Header: true'

The policy above calls out an Ingress policy towards TCP port 80 on the app: myService application, going towards one of two back-end paths. Depending on the path, only certain HTTP operations are allowed; for example, the GET method only works with “/path1$” as defined.
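
To sanity-check the policy, you could issue a few test requests from a client pod. A hedged sketch with hypothetical pod and service names (Cilium’s path rules are regular expressions, so “/path1$” matches a request to /path1, and requests rejected at Layer 7 typically come back as HTTP 403 “Access denied”):

kubectl exec -it client-pod -- curl -X GET http://myservice/path1                          # allowed
kubectl exec -it client-pod -- curl -X PUT -H 'X-My-Header: true' http://myservice/path2   # allowed
kubectl exec -it client-pod -- curl -X PUT http://myservice/path2                          # denied: HTTP 403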

Side thought: Because this is deployed as a DaemonSet (one proxy per node in the cluster), it could potentially become a single point of failure for processing Layer 7 traffic. That could be a case for running Envoy as a side-car, as seen with Istio. The flip side is that, depending on the needs of your workloads, the leaner approach with less overhead and fewer Layer 7 policies might work for you.

Istio Authentication and Authorization Policies

Istio offers the Authentication Policy resource, which defines whether or not traffic streams are encrypted with mutual TLS (mTLS for short), and whether plain-text communication is possible.

To briefly summarize, there are three modes:

  • PERMISSIVE: This mode allows workloads to communicate via plain text or with mTLS. This is used for migrations.
  • STRICT: This mode allows workloads with the side-car to communicate only using mTLS.
  • DISABLE: While not recommended unless testing experimental features, this simply disables mTLS encryption.

The STRICT option allows for alignment with your security posture, as it acts as a defense mechanism against unencrypted (or untrusted) workloads communicating.
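
For reference, here’s a minimal sketch of a PeerAuthentication resource enforcing STRICT mode, scoped to the default namespace used throughout this post:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT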

Read more about Istio Authentication policies here: https://istio.io/latest/docs/concepts/security/#authentication-policies 

Another security capability that Istio takes advantage of is the Secure Production Identity Framework For Everyone, or SPIFFE for short. SPIFFE is a framework for providing AuthN and AuthZ through secure identity, in the form of a unique and specific type of x.509 certificate, to every workload in production. SPIFFE can also be used to authenticate to databases and platforms without passwords or API tokens. Istio implements SPIFFE using the SPIFFE Runtime Environment (SPIRE), an extensible system that implements the SPIFFE standard. SPIRE manages platform and workload attestation, serves an API for controlling attestation policies, and coordinates certificate issuance and rotation. Istio leverages SPIRE to simplify how services authenticate and attest each other.
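
For context, that identity is embedded in each workload’s x.509 certificate as a SPIFFE ID of the form spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>. For example (with a hypothetical service account name):

spiffe://cluster.local/ns/default/sa/myservice-sa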

Let’s also discuss Istio Authorization Policies, primarily because they provide a mechanism to return an appropriate response to a request flow between the workloads or services of your application. The side-car that’s deployed to a pod and sits alongside the main container is what enforces the intended policy at Layer 7.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-myService
  namespace: default
spec:
  selector:
    matchLabels:
      app: myService
  action: ALLOW
  rules:
  - to:
    - operation:
         methods: ["GET"]
         ports: ["80"]
         paths: ["/path1$"]
  - to:
    - operation:
         methods: ["PUT"]
         ports: ["80"]
         paths: ["/path2$"]
    when:
    - key: request.headers[X-My-Header]
      values: ["true"]

The result of this policy is that any HTTP Requests directed towards the app: myService service will be assessed for the following:

  • If the destination is /path1$, the destination port is TCP 80, and the method is GET, the request will be processed, or
  • If the destination is /path2$, the destination port is TCP 80, and the method is PUT, along with the X-My-Header request header, the request will be processed
  • If anything else, the request is not processed.
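
As with the Cilium example, you could verify this from a client pod. A hedged sketch with hypothetical names; note that unlike Cilium’s regex paths, Istio matches the path strings literally as written in the policy (so the “$” is part of the path here), and denied requests typically return HTTP 403 “RBAC: access denied”:

kubectl exec -it client-pod -- curl -X GET 'http://myservice/path1$'                          # allowed
kubectl exec -it client-pod -- curl -X PUT -H 'X-My-Header: true' 'http://myservice/path2$'   # allowed
kubectl exec -it client-pod -- curl -X PUT 'http://myservice/path2$'                          # denied: HTTP 403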

This achieves the same result as the Cilium Layer 7 network policy; however, here the side-car is what enforces the policy.


So that brings up the questions: 

  • Can I do it all with Cilium? 
  • Can I use just Istio?
  • Can I use both without overlapping or competing policies?

You cannot do it all with one or the other, as you would not gain the benefits of granular defense-in-depth. It really comes down to the extensibility each individual technology gives you, along with their current sets of capabilities. But what if we wanted to use both simultaneously without conflict? The answer is Gloo Mesh Security!

Gloo Mesh Access Policies

How is Gloo Mesh solving the conundrum of multiple layers of network security? Well, for starters, with defense-in-depth AND by streamlining it into a single set of security resources to address network policy. With fewer CRDs to worry about, Gloo Networking takes care of all your application security needs, from blocking packets to authenticating and authorizing services.

The architecture of Gloo Networking is slightly out of scope for this post, but there are agents that get deployed to your clusters and streamline the creation of Istio VirtualServices, DestinationRules, ServiceEntries, and AuthorizationPolicies, along with the CiliumNetworkPolicy CRD.

When you create a Gloo Mesh Access Policy resource, the agents translate this configuration into Cilium L3/L4 network policies, Istio PeerAuthentication policies, and Istio Authorization policies. One single resource creates it all.

The added effect is the application of this policy to all resources across multiple clusters where your workload runs!
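
Once an Access Policy is applied, one hedged way to see the translated outputs on a workload cluster (assuming the Cilium and Istio CRDs are installed there) is to list the generated resources directly:

kubectl get ciliumnetworkpolicies,peerauthentications,authorizationpolicies -A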

Here is a sample policy for reference, which aims to replicate what the Cilium L7 NetPol and Istio L7 AuthZPol above do, while additionally creating an L4 Cilium NetPol that limits traffic to TCP port 80.

apiVersion: security.policy.gloo.solo.io/v2
kind: AccessPolicy
metadata:
  name: allow-myService
  namespace: default
spec:
  applyToDestinations:
  - port:
      number: 80
    selector:
      labels:
        app: myService
  config:
    authn:
      tlsMode: STRICT
    authz:
      allowedClients:
      - serviceAccountSelector:
          labels:
            app: myService
      # NOTE: allowedPaths and allowedMethods are flat lists, so this sketch
      # permits both methods on both paths; pairing GET strictly to /path1$
      # and PUT to /path2$ (as described below) may require two separate
      # Access Policies.
      allowedPaths:
      - /path1$
      - /path2$
      allowedMethods:
      - GET
      - PUT
      request:
        headers: X-My-Header

The result of this policy is that any HTTP Requests directed towards the app: myService service will be assessed for the following:

  • If the destination is /path1$, the destination port is TCP 80, and the method is GET, the request will be processed, or
  • If the destination is /path2$, the destination port is TCP 80, and the method is PUT, along with the X-My-Header request header, the request will be processed
  • If anything else, the request is not processed.

This is exactly the same result as above! So what’s different? Gloo Mesh is handling the management of Istio’s and Cilium’s resources. With this architecture, we allow for the co-existence of Cilium, leveraging eBPF to process Layer 3/4 traffic, and Istio, leveraging Envoy for Layer 7 processing. A match made in Cloud Native Heaven.

Conclusion

There are many ways to achieve network security within your container and microservices environment. You might want to reconsider doing this in the firewall of yester-decade, because the firewall of TODAY lives inside your cluster. In this article we reviewed four ways to achieve network security:

  • Using Kubernetes Network Policy, which still requires a CNI that supports it
  • Using Cilium L3, L4, L7 policies
  • Using Istio Layer 7 AuthZ policies
  • Using Gloo Mesh Access Policies


To augment your research and investigation into Cilium and Istio security, check out the blog posts and videos by Lin Sun!


If you are looking for the co-existence of Cilium and Istio for full stack network security, Gloo Mesh is where it’s at!