Gloo Mesh and Istio service mesh on Azure Kubernetes Service (AKS) with Global Virtual Network Peering
A service mesh on Azure Kubernetes Service (AKS) provides capabilities like resiliency, traffic management, strong identity, security, and observability to your workloads. Istio is the top recommended service mesh to use with Azure Kubernetes Service.
Gloo Mesh is a Kubernetes-native management plane that enables configuration and operational management of multiple heterogeneous service meshes across multiple clusters through a unified API that works with Istio. The Gloo Mesh API integrates with the leading service meshes and abstracts away differences between their disparate APIs, allowing users to configure a set of different service meshes through a single API. Gloo Mesh is engineered with a focus on its utility as an operational management tool, providing both graphical and command line UIs, observability features, and debugging tools.
Gloo Mesh can be run in its own cluster (or co-located with an existing mesh) and remotely operates and drives the configuration for specific service mesh control planes. This allows Gloo Mesh to discover meshes and workloads, establish federated identity, enable global traffic routing, load balancing, access control policies, centralized observability, and more.
In this blog we will show you how to create a multi-cluster Gloo Mesh and Istio service mesh setup on Azure Kubernetes Service (AKS) with Global Virtual Network Peering across two separate regions.
For this walkthrough we will assume you have an active Azure subscription with the appropriate permissions to create resources. If not, here’s how you can get started with a free trial.
Let’s install the tools we will need to get everything up and running. To install the Azure CLI, find the install instructions for your particular operating system. For our example, we are using macOS, so we will install with brew:
brew update && brew install azure-cli
Next, let’s install meshctl, which we will use to install and interact with Gloo Mesh:
curl -sL https://run.solo.io/meshctl/install | sh
export PATH=$HOME/.gloo-mesh/bin:$PATH
The last tool we will need to install will be istioctl:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.9.5 TARGET_ARCH=x86_64 sh -
cd istio-1.9.5
export PATH=$PWD/bin:$PATH
Now let’s log in to Azure, create our Virtual Networks, and enable global peering:
az login
az group create --name Lab --location eastus
az network vnet create -g Lab -n mgt-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name mgt-subnet --subnet-prefix 10.0.0.0/24 \
  --location eastus
az network vnet create -g Lab -n rmt-vnet \
  --address-prefix 11.0.0.0/16 \
  --subnet-name rmt-subnet --subnet-prefix 11.0.0.0/24 \
  --location centralus
az network vnet peering create -g Lab -n mgt-to-rmt --vnet-name mgt-vnet \
  --remote-vnet rmt-vnet --allow-vnet-access
az network vnet peering create -g Lab -n rmt-to-mgt --vnet-name rmt-vnet \
  --remote-vnet mgt-vnet --allow-vnet-access
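Before moving on, you can optionally confirm that both sides of the peering report a Connected state; cross-region traffic will not flow until they do. A quick sketch using the peering names created above:

```shell
# Both peerings should print "Connected" once the link is fully established
az network vnet peering show -g Lab -n mgt-to-rmt \
  --vnet-name mgt-vnet --query peeringState -o tsv
az network vnet peering show -g Lab -n rmt-to-mgt \
  --vnet-name rmt-vnet --query peeringState -o tsv
```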
With our Virtual Networks created and global peering configured, let’s create our two clusters:
az aks create \
  --resource-group Lab \
  --name mgt-cluster-1 \
  --node-count 3 \
  --location eastus \
  --generate-ssh-keys \
  --vm-set-type VirtualMachineScaleSets \
  --network-plugin azure \
  --enable-managed-identity \
  --docker-bridge-address 172.17.0.1/16 \
  --dns-service-ip 10.2.0.10 \
  --service-cidr 10.2.0.0/24 \
  --vnet-subnet-id $(az network vnet subnet list -g Lab --vnet-name mgt-vnet --query "[?name=='mgt-subnet'].id" --output tsv)
az aks create \
  --resource-group Lab \
  --name rmt-cluster-1 \
  --node-count 3 \
  --location centralus \
  --generate-ssh-keys \
  --vm-set-type VirtualMachineScaleSets \
  --network-plugin azure \
  --enable-managed-identity \
  --docker-bridge-address 172.17.0.1/16 \
  --dns-service-ip 11.2.0.10 \
  --service-cidr 11.2.0.0/24 \
  --vnet-subnet-id $(az network vnet subnet list -g Lab --vnet-name rmt-vnet --query "[?name=='rmt-subnet'].id" --output tsv)
Cluster creation can take anywhere between 5 and 20 minutes per cluster, depending on the region and other factors.
After the clusters are created, we can generate the contexts and set some environment variables to make the context switching a little easier during the cluster registration process.
az aks get-credentials --resource-group Lab --name mgt-cluster-1
az aks get-credentials --resource-group Lab --name rmt-cluster-1
MGMT_CONTEXT=mgt-cluster-1
REMOTE_CONTEXT=rmt-cluster-1
kubectl config use-context $MGMT_CONTEXT
With all the infrastructure and networking configured, let’s get our hands on Gloo Mesh.
meshctl install community
Installing Helm chart
Finished installing chart 'gloo-mesh' as release gloo-mesh:gloo-mesh
Let’s verify the install:
meshctl check
Gloo Mesh
------------
✅ Gloo Mesh pods are running
✅ Gloo Mesh agents are connected for each registered KubernetesCluster.

Management Configuration
---------------------------
✅ Gloo Mesh networking configuration resources are in a valid state
With Gloo Mesh installed and everything up and running, we can begin registering our clusters:
#Remote Cluster Registration
meshctl cluster register community remote-cluster \
  --remote-context $REMOTE_CONTEXT
Finished installing chart 'agent-crds' as release gloo-mesh:agent-crds
Finished installing chart 'cert-agent' as release gloo-mesh:cert-agent
Successfully registered cluster: remote-cluster

#Management Cluster Registration
meshctl cluster register community mgmt-cluster \
  --remote-context $MGMT_CONTEXT
Finished installing chart 'agent-crds' as release gloo-mesh:agent-crds
Finished installing chart 'cert-agent' as release gloo-mesh:cert-agent
Successfully registered cluster: mgmt-cluster
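You can optionally double-check the registration by listing the KubernetesCluster resources that Gloo Mesh creates in the management cluster; both registered clusters should appear:

```shell
# Each registered cluster is represented by a KubernetesCluster CR in the gloo-mesh namespace
kubectl get kubernetesclusters -n gloo-mesh --context $MGMT_CONTEXT
```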
With the clusters registered, let’s install Istio 1.9.5 using the operator:
cat << EOF | istioctl manifest install -y --context $MGMT_CONTEXT -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: example-istiooperator
  namespace: istio-system
spec:
  profile: minimal
  meshConfig:
    enableAutoMtls: true
    defaultConfig:
      proxyMetadata:
        # Enable Istio agent to handle DNS requests for known hosts
        # Unknown hosts will automatically be resolved using upstream dns servers in resolv.conf
        ISTIO_META_DNS_CAPTURE: "true"
  components:
    # Istio Gateway feature
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        env:
          - name: ISTIO_META_ROUTER_MODE
            value: "sni-dnat"
        service:
          type: NodePort
          ports:
            - port: 80
              targetPort: 8080
              name: http2
            - port: 443
              targetPort: 8443
              name: https
            - port: 15443
              targetPort: 15443
              name: tls
              nodePort: 32001
  values:
    global:
      pilotCertProvider: istiod
EOF
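The mesh discovery output later in this walkthrough shows Istio 1.9.5 running on both clusters, so the same operator manifest needs to be applied against the remote cluster as well. A sketch of that step plus a basic health check:

```shell
# Repeat the operator install above against the remote cluster, e.g.:
#   cat << EOF | istioctl manifest install -y --context $REMOTE_CONTEXT -f -
#   ... (same IstioOperator manifest as above) ...
#   EOF
# Then confirm istiod and the ingress gateway are running in both clusters:
kubectl get pods -n istio-system --context $MGMT_CONTEXT
kubectl get pods -n istio-system --context $REMOTE_CONTEXT
```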
Gloo Mesh can automatically discover service mesh installations on registered clusters using control plane and sidecar discovery, as well as workloads and services exposed through the service mesh. To demonstrate this, let’s first install the Bookinfo app in both our clusters.
kubectl config use-context $MGMT_CONTEXT
kubectl create ns bookinfo
kubectl label namespace bookinfo istio-injection=enabled
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
kubectl config use-context $REMOTE_CONTEXT
kubectl create ns bookinfo
kubectl label namespace bookinfo istio-injection=enabled
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version in (v3)'
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'service=reviews'
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account=reviews'
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app=ratings'
kubectl apply -n bookinfo -f https://raw.githubusercontent.com/istio/istio/release-1.8/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account=ratings'
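Before checking discovery, it is worth confirming that the Bookinfo pods are running with their sidecars injected; with injection enabled on the namespace, each pod should report 2/2 containers ready:

```shell
# Pods showing 2/2 READY indicates the app container plus its Envoy sidecar
kubectl get pods -n bookinfo --context $MGMT_CONTEXT
kubectl get pods -n bookinfo --context $REMOTE_CONTEXT
```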
Lastly, let’s see all the destinations that Gloo Mesh has discovered:
kubectl config use-context $MGMT_CONTEXT
kubectl -n gloo-mesh get destinations
NAME
details-bookinfo-mgmt-cluster
istio-ingressgateway-istio-system-mgmt-cluster
istio-ingressgateway-istio-system-remote-cluster
productpage-bookinfo-mgmt-cluster
ratings-bookinfo-mgmt-cluster
ratings-bookinfo-remote-cluster
reviews-bookinfo-mgmt-cluster
reviews-bookinfo-remote-cluster
Let’s check to see that Istio has been discovered on both the management and remote clusters:
meshctl describe mesh
+-------------------------+--------------+----------------------+
|        METADATA         | VIRTUAL MESH | VIRTUAL DESTINATIONS |
+-------------------------+--------------+----------------------+
| Namespace: istio-system |              |                      |
| Cluster: mgmt-cluster   |              |                      |
| Type: istio             |              |                      |
| Version: 1.9.5          |              |                      |
+-------------------------+--------------+----------------------+
| Namespace: istio-system |              |                      |
| Cluster: remote-cluster |              |                      |
| Type: istio             |              |                      |
| Version: 1.9.5          |              |                      |
+-------------------------+--------------+----------------------+
As a final step, let’s set up federated trust and identity. Gloo Mesh can help unify the root identity between multiple service mesh installations so any intermediates are signed by the same Root certificate authority (CA) and end-to-end mutual Transport Layer Security (mTLS) between clusters and services can be established correctly. Gloo Mesh will establish trust based on the trust model defined by the user.
Apply the following yaml to both your management plane and remote clusters.
kubectl apply --context $MGMT_CONTEXT -f - << EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
kubectl apply --context $REMOTE_CONTEXT -f - << EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
Then we can create our Virtual Mesh:
kubectl apply --context $MGMT_CONTEXT -f - << EOF
apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation: {}
  meshes:
  - name: istiod-istio-system-mgmt-cluster
    namespace: gloo-mesh
  - name: istiod-istio-system-remote-cluster
    namespace: gloo-mesh
EOF
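Identity federation happens asynchronously (and with autoRestartPods set, workload pods will be bounced to pick up the new certificates), so give it a moment and inspect the VirtualMesh before describing the mesh. A sketch; the exact status fields can differ between Gloo Mesh versions:

```shell
# Inspect the VirtualMesh and check its status stanza for an ACCEPTED state
kubectl get virtualmesh -n gloo-mesh virtual-mesh --context $MGMT_CONTEXT -o yaml
```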
When we create the VirtualMesh custom resource (CR) with the trust model set to shared and the Root CA parameters configured, Gloo Mesh will kick off the process to unify the identity to a shared root. First, Gloo Mesh will either create the Root CA specified (if generated is used) or use the supplied CA information.
Then Gloo Mesh will use a CR agent on each of the affected clusters to create a new key/cert pair that will form an intermediate CA used by the mesh on that cluster. It will then create a Certificate Request, represented by the CertificateRequest CR.
Gloo Mesh will sign the certificate with the Root CA specified in the VirtualMesh. At that point, we will want the mesh (Istio in this case) to pick up the new intermediate CA and start using that for its workloads.
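One way to sanity-check that both clusters now chain to the same root is to compare the root certificate that istiod distributes in the istio-ca-root-cert ConfigMap. The ConfigMap name is standard in Istio 1.9, but verify it exists in your clusters before relying on this check:

```shell
# If federation succeeded, both clusters should print an identical certificate hash
kubectl get configmap istio-ca-root-cert -n istio-system --context $MGMT_CONTEXT \
  -o jsonpath='{.data.root-cert\.pem}' | shasum
kubectl get configmap istio-ca-root-cert -n istio-system --context $REMOTE_CONTEXT \
  -o jsonpath='{.data.root-cert\.pem}' | shasum
```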
Once trust has been established, Gloo Mesh will start federating services so that they are accessible across clusters. Behind the scenes, Gloo Mesh will handle the networking – possibly through egress and ingress gateways, and possibly affected by user-defined traffic and access policies – and ensure requests to the service will resolve and be routed to the right destination. Users can fine-tune which services are federated (and where!) by editing the virtual mesh.
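To see federation in action, you can try calling a remote destination from a sidecar-injected pod. In the community edition, federated services are typically exposed under a hostname of the form <service>.<namespace>.svc.<cluster>.global; treat the exact hostname below as an assumption and confirm the real value from the discovered destinations on the management cluster:

```shell
# Hypothetical federated hostname; confirm the actual name via
# `kubectl -n gloo-mesh get destinations` on the management cluster
kubectl exec -n bookinfo --context $MGMT_CONTEXT deploy/ratings-v1 -c ratings -- \
  curl -s http://reviews.bookinfo.svc.remote-cluster.global:9080/reviews/1
```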
Our Virtual Mesh should now be registered:
meshctl describe mesh
+-------------------------+----------------------+----------------------+
|        METADATA         |     VIRTUAL MESH     | VIRTUAL DESTINATIONS |
+-------------------------+----------------------+----------------------+
| Namespace: istio-system | Name: virtual-mesh   |                      |
| Cluster: mgmt-cluster   | Namespace: gloo-mesh |                      |
| Type: istio             |                      |                      |
| Version: 1.9.5          |                      |                      |
+-------------------------+----------------------+----------------------+
| Namespace: istio-system | Name: virtual-mesh   |                      |
| Cluster: remote-cluster | Namespace: gloo-mesh |                      |
| Type: istio             |                      |                      |
| Version: 1.9.5          |                      |                      |
+-------------------------+----------------------+----------------------+
Once you are done, to clean up your environment and avoid any ongoing charges, run the following command:
az group delete --name Lab --yes --no-wait
As you can see, in just a short amount of time you can have a multi-cluster, multi-region Gloo Mesh with federated trust and identity running in Azure. I encourage you to spend some time in our docs and give Gloo Mesh a try yourself.
Please feel free to reach out to us on Slack anytime, our experts are here to help you be successful faster.
If AKS is interesting to you, you should also read: Gloo Edge on Azure Kubernetes Service (AKS) with Microsoft Windows node pools.