
Pi in the Sky: Onboarding Edge Workloads Into the Service Mesh with Istio Ambient

February 22, 2024
Nina Polshakova

Istio supports connecting workloads outside of a Kubernetes cluster to the mesh, providing the benefits of a service mesh to workloads running anywhere – from legacy applications running on Amazon EC2 instances to a tiny Raspberry Pi. Connecting workloads with the new Istio ambient model simplifies adding edge devices to the mesh without any messy sidecars.

A Raspberry Pi is useful for prototyping edge computing use cases, and since it runs Raspbian, it can be onboarded into the service mesh. Istio in ambient mode can support edge compute environments without any changes to the underlying applications, and provides unified L3/L4 network policies, security, and observability.

Onboarding a workload not running on a Kubernetes cluster, such as a Virtual Machine (or in this case, a Raspberry Pi!), into the mesh is a pretty complex topic. Solo provides a simplified way to onboard external workloads into the mesh, but for this demo, we used Istio’s open-source Virtual Machine guide.

Before setting up the demo environment, we need a version of ztunnel that can run on the Raspberry Pi. The ztunnel is the “zero-trust tunnel” that provides L4 policy enforcement in the ambient mesh. The demo repository includes an arm64 ztunnel build that runs on Raspbian Bookworm, but you can also build ztunnel directly on the Raspberry Pi.
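
If you prefer to build ztunnel yourself, a rough sketch looks like this (assuming a Rust toolchain and the usual build dependencies are already installed on the Pi; exact steps may vary by ztunnel version):


# Build ztunnel from source on the Pi (sketch)
git clone https://github.com/istio/ztunnel.git
cd ztunnel
cargo build --release
# The resulting binary is at ./target/release/ztunnel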

In this demo setup, we create a flat network following these steps. This makes the pods and services on the Kind cluster reachable from the host running the Docker containers, and lets applications running on the Raspberry Pi reach pods and services in the Kind cluster via the machine running the cluster.
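
As a rough illustration (the CIDRs and addresses below are examples only, not part of the demo scripts), the Raspberry Pi ends up with routes to the Kind pod and service CIDRs via the machine hosting the Kind cluster:


# Example only: route Kind's default pod and service CIDRs via the host running Kind
sudo ip route add 10.244.0.0/16 via 192.168.1.50   # pod CIDR -> host running the Kind cluster
sudo ip route add 10.96.0.0/16 via 192.168.1.50    # service CIDR -> host running the Kind cluster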

Once the Kind cluster and Docker networking are set up, we can install Istio with the ambient profile:


# Create the IstioOperator configuration
cat <<EOF > pi-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
        ISTIO_META_DNS_PROXY_ADDR: "127.0.0.1:15053"
  profile: ambient
  values:
    ztunnel:
      meshConfig:
        defaultConfig:
          proxyMetadata:
            ISTIO_META_DNS_CAPTURE: "true"
            ISTIO_META_DNS_AUTO_ALLOCATE: "true"
            ISTIO_META_DNS_PROXY_ADDR: "127.0.0.1:15053"
EOF

# Install Istio 
istioctl install -f pi-cluster.yaml --set values.pilot.env.ISTIOD_SAN="istiod.istio-system.svc"
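
Once the install completes, it is worth checking that the ambient data plane components came up. This sanity check is not part of the original scripts:


# Expect istiod plus the ztunnel and istio-cni-node DaemonSet pods (exact names vary by Istio version)
kubectl get pods -n istio-system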

Then we need to create an East-West gateway and expose the control plane through the gateway. This is how the Raspberry Pi will receive xDS updates from istiod.


# Create East-West Gateway
multicluster/gen-eastwest-gateway.sh --single-cluster | istioctl install -y -f -

# Expose istiod
kubectl apply -f $COMMON_SCRIPTS/multicluster/expose-istiod.yaml
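
Later, the Raspberry Pi needs to reach istiod through this gateway, so capture the gateway's external address to use as ISTIO_EW_ADDRESS (a sketch; the service name and whether it exposes an IP or a hostname depend on your environment):


# Sketch: capture the east-west gateway address for use on the Pi later
export ISTIO_EW_ADDRESS=$(kubectl get svc istio-eastwestgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "${ISTIO_EW_ADDRESS}"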

Next, we can create some example applications, such as helloworld and sleep, on the cluster. These applications can be marked for ambient traffic capture with the istio.io/dataplane-mode=ambient label, or labeled for sidecar injection with istio-injection=enabled.
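
One way to deploy them is sketched below, assuming a local copy of the Istio release samples; sleep goes into the default namespace to match the service account used in the AuthorizationPolicy later:


# Deploy sleep into the (ambient-labeled) default namespace
kubectl label namespace default istio.io/dataplane-mode=ambient
kubectl apply -f samples/sleep/sleep.yaml

# Deploy helloworld into its own ambient-labeled namespace
kubectl create namespace helloworld
kubectl label namespace helloworld istio.io/dataplane-mode=ambient
kubectl apply -n helloworld -f samples/helloworld/helloworld.yaml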

Now we need a way to represent the Raspberry Pi in the mesh. Istio uses a WorkloadEntry to represent a single instance of an external workload; you can think of it as similar to a Pod in Kubernetes. Similarly, a WorkloadGroup represents a group of external workloads that share common properties (labels, ports, service accounts, etc.), similar to a Deployment in Kubernetes.


# Create Pi Namespace 
kubectl create namespace "$PI_NAMESPACE"
 
# Label Pi namespace for ambient mode
kubectl label namespace "$PI_NAMESPACE" istio.io/dataplane-mode=ambient

# Create the WorkloadGroup; a WorkloadEntry will be generated from this WorkloadGroup
cat <<EOF > workloadgroup.yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: "${PI_APP}"
  namespace: "${PI_NAMESPACE}"
spec:
  metadata:
    labels:
      app: "${PI_APP}"
  template:
    serviceAccount: "${SERVICE_ACCOUNT}"
    network: "${CLUSTER_NETWORK}"
EOF
kubectl --namespace "${PI_NAMESPACE}" apply -f workloadgroup.yaml

# Run istioctl to generate the Pi onboarding files, using a long-lived token for demo purposes
istioctl x workload entry configure -f workloadgroup.yaml -o "${WORK_DIR}" --clusterID "${CLUSTER}" --tokenDuration=86400
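# ${WORK_DIR} should now contain the generated onboarding files
# (typically cluster.env, mesh.yaml, root-cert.pem, istio-token, and hosts; exact contents may vary by Istio version)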
# Manually create the WorkloadEntry to ensure the ambient label is present
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadEntry
metadata:
  labels:
    app: "${PI_APP}"
  name: "${PI_APP}"
  namespace: "${PI_NAMESPACE}"
spec:
  address: "${PI_ADDRESS}"
  labels:
    app: "${PI_APP}"
    ambient.istio.io/redirection: enabled
  serviceAccount: "${SERVICE_ACCOUNT}"
EOF

# Copy over the onboarding configuration to the Raspberry Pi
scp -r pi-files $PI_USERNAME@$PI_ADDRESS:~

The final step to onboard the Raspberry Pi into the mesh requires setting up the Istio configuration on the Raspberry Pi side.


# Create the istio-proxy user and group if they do not exist yet
sudo groupadd --system istio-proxy
sudo useradd --system --gid istio-proxy --home-dir /var/lib/istio istio-proxy

# install the prebuilt ztunnel 
sudo dpkg -i ztunnel_0.0.0-1_arm64.deb

# Set up the Pi file layout
sudo mkdir -p ./var/run/secrets/tokens ./var/run/secrets/istio ./var/lib/istio/ztunnel
sudo mkdir -p /etc/certs

# Copy provisioned resources to the correct location
sudo cp $PI_FILE_PATH/root-cert.pem /etc/certs/root-cert.pem
sudo cp $PI_FILE_PATH/root-cert.pem ./var/run/secrets/istio/root-cert.pem
sudo cp $PI_FILE_PATH/istio-token ./var/run/secrets/tokens/istio-token

# Config setup for running ztunnel
sudo mkdir ./etc/istio/config
sudo cp $PI_FILE_PATH/cluster.env ./var/lib/istio/ztunnel/cluster.env
sudo cp $PI_FILE_PATH/mesh.yaml ./etc/istio/config/mesh

# Add address to /etc/hosts to reach istiod for onboarding PI and xDS updates
echo "${ISTIO_EW_ADDRESS} istiod.istio-system.svc" | sudo tee -a /etc/hosts

# Set up istio-proxy ownership
sudo mkdir -p ./etc/istio/proxy
sudo chown -R istio-proxy ./var/lib/istio /etc/certs ./etc/istio/proxy ./var/run/secrets /etc/certs/root-cert.pem ./var/run/secrets/istio/root-cert.pem ./etc/istio/config/ ./etc/istio/config/mesh

# Run the ztunnel 
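# PROXY_MODE=dedicated runs ztunnel for this single external workload rather than as a shared node proxy;
# XDS_ADDRESS and CA_ADDRESS point at istiod, reachable via the /etc/hosts entry added above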
sudo -u istio-proxy PROXY_MODE=dedicated CA_ADDRESS="istiod.istio-system.svc:15012" XDS_ADDRESS="istiod.istio-system.svc:15012" CLUSTER_ID=Kubernetes RUST_LOG=debug ISTIO_META_ENABLE_HBONE=true ISTIO_META_DNS_CAPTURE=true ISTIO_META_DNS_AUTO_ALLOCATE=true ISTIO_META_DNS_PROXY_ADDR="127.0.0.1:15053" ztunnel

Now that the Raspberry Pi is onboarded, we should be able to send traffic from the Raspberry Pi to applications in the mesh on the Kind cluster using the hostname instead of the IP address, and get a 200 OK response:


~ $ curl helloworld.helloworld:5000/hello
Hello version: v1, instance: helloworld-v1-867747c89-c85x6

~ $ curl helloworld.helloworld:5000/hello -v
* Host helloworld.helloworld:5000 was resolved.
* IPv6: (none)
* IPv4: 10.96.94.223
*   Trying 10.96.94.223:5000...
* Connected to helloworld.helloworld (10.96.94.223) port 5000
> GET /hello HTTP/1.1
> Host: helloworld.helloworld:5000
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: istio-envoy
< Date: Tue, 20 Feb 2024 19:17:54 GMT
< Connection: keep-alive
< Content-Type: text/html; charset=utf-8
< Content-Length: 60
< x-envoy-upstream-service-time: 139
< x-envoy-decorator-operation: helloworld.helloworld.svc.cluster.local:5000/*
<
Hello version: v2, instance: helloworld-v2-7f46498c69-8cx68
* Connection #0 to host helloworld.helloworld left intact

Going in the other direction (application in the cluster to the Raspberry Pi) requires a little more configuration. When you create the WorkloadEntry resource, Istio does not provision or run anything. The WorkloadEntry just serves as a reference that Istio uses to configure the mesh.

For clients to reliably call your workload, Istio recommends associating a Service with the WorkloadEntry. That is what allows clients to reach a stable hostname (e.g. led-pi.pi-namespace.svc.cluster.local) instead of an ephemeral IP address. Creating the Service also allows you to use Istio’s advanced routing capabilities via the VirtualService and DestinationRule APIs.

Create a Kubernetes Service that selects the WorkloadEntry created earlier:


kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: led-pi
  namespace: pi-namespace
  labels:
    app: led-pi
spec:
  ports:
  - port: 8080
    name: http-pi
    targetPort: 8080
  selector:
    app: ${PI_APP}
EOF
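
To sanity-check that the Service actually selects the Pi's WorkloadEntry (a quick verification, not part of the original demo), compare the Service selector with the WorkloadEntry labels:


# The WorkloadEntry's app label should match the Service's selector
kubectl get workloadentry -n pi-namespace --show-labels
kubectl get svc led-pi -n pi-namespace -o jsonpath='{.spec.selector}'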

One of the fun things about building a demo with a Raspberry Pi is the hardware you get to play with! As part of our demo, we use NeoPixels with a WS2812B LED strip. You can find a great open-source wiring guide on Adafruit.

If you don’t have access to an LED strip, the same demo can be run with a simple Python HTTP server listening on port 8080 on the Raspberry Pi:


❯ sudo python3 -m http.server 8080
Serving HTTP on :: port 8080 (http://[::]:8080/) ...

If using the LED strip, the demo wraps the WS2812B Python library with a simple Flask web server. To run this server:


❯ sudo python3 ./pi_led_server/led_strip_rainbow.py

This will run on port 8080 and will be reachable via:


http://<PI-IP-ADDRESS>:8080/switch

Since we have Istio running on the Pi, we can curl the led-pi.pi-namespace hostname from the sleep pod:


❯ curl led-pi.pi-namespace:8080/switch

* Host led-pi.pi-namespace:8080 was resolved.
* IPv6: (none)
* IPv4: 10.96.190.204
*   Trying 10.96.190.204:8080...
* Connected to led-pi.pi-namespace (10.96.190.204) port 8080
> GET /switch HTTP/1.1
> Host: led-pi.pi-namespace:8080
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: envoy
< Date: Tue, 20 Feb 2024 19:27:11 GMT
< Connection: keep-alive
< Content-Type: text/html; charset=utf-8
< Content-Length: 155
< x-envoy-upstream-service-time: 1825
<

Now let’s apply an Istio AuthorizationPolicy. AuthorizationPolicies are enforced on the server side, so this L4 policy will be enforced by the ztunnel running on the Raspberry Pi:


kubectl apply -f - <<EOF
# Access policy applied to the ztunnel pod
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: hello-l4-pi
  namespace: pi-namespace
spec:
  selector:
    matchLabels:
      app: hello-pi-192.168.1.178
  action: DENY
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/sleep"]
EOF

Now let’s try sending the same curl as before from the sleep pod. We expect to be denied because of the L4 policy we just applied:


❯ curl led-pi.pi-namespace:8080/switch -v 
* Host led-pi.pi-namespace:8080 was resolved.
* IPv6: (none)
* IPv4: 10.96.190.204
*   Trying 10.96.190.204:8080...
* Connected to led-pi.pi-namespace (10.96.190.204) port 8080
> GET /switch HTTP/1.1
> Host: led-pi.pi-namespace:8080
> User-Agent: curl/8.6.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer

Let’s try running a curl from a different pod that does not use the sleep service account:


kubectl run netshoot --image=nicolaka/netshoot -i --tty --rm

Sending traffic from the netshoot pod should work, even though netshoot is in the same namespace as sleep, because netshoot uses a different ServiceAccount that is not blocked by the L4 AuthorizationPolicy.


❯ curl led-pi.pi-namespace:8080/switch

* Host led-pi.pi-namespace:8080 was resolved.
* IPv6: (none)
* IPv4: 10.96.190.204
*   Trying 10.96.190.204:8080...
* Connected to led-pi.pi-namespace (10.96.190.204) port 8080
> GET /switch HTTP/1.1
> Host: led-pi.pi-namespace:8080
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: envoy
< Date: Tue, 20 Feb 2024 19:29:22 GMT
< Connection: keep-alive
< Content-Type: text/html; charset=utf-8
< Content-Length: 155
< x-envoy-upstream-service-time: 1825
<

Want to try it yourself? Check out the GitHub repo with all the setup code or see it in action in the Hoot episode I did with Peter.
