Multi-cluster global access control for Kubernetes and Service Mesh
In this blog series, we dig into specific challenge areas for multi-cluster Kubernetes and service mesh architectures, along with considerations and approaches for solving them.
In a previous blog post, we covered Identity Federation for Multi-Cluster Kubernetes and Service Mesh, which is the foundation for multi-cluster global access control.
We explained how to set up each Istio cluster with a different trust domain to make sure each service has a unique global identifier.
So each service has a different SPIFFE ID, with the format `spiffe://<trust domain>/ns/<namespace>/sa/<service account>`.
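For example, the `bookinfo-productpage` service account in the `default` namespace of the first cluster (trust domain `cluster1`) gets:

```
spiffe://cluster1/ns/default/sa/bookinfo-productpage
```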
Istio Authorization Policies
Each Envoy proxy runs an authorization engine that authorizes requests at runtime. When a request comes to the proxy, the authorization engine evaluates the request context against the current authorization policies, and returns the authorization result, either ALLOW or DENY.
You can find more information about how Authorization Policies can be implemented in the Istio documentation.
Notice, though, that Authorization Policies must be created and maintained separately on each Istio cluster.
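For illustration, here is a minimal sketch of such a policy, allowing a single service account to reach the `details` workload (the names below are examples, not taken from this setup):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: details-viewer   # example name
  namespace: default
spec:
  # Apply to the details workload only
  selector:
    matchLabels:
      app: details
  rules:
  # Allow requests whose mTLS identity matches this service account
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/bookinfo-productpage
```

To enforce the same rule mesh-wide, an equivalent policy has to be applied with `kubectl` on every cluster, which is exactly the duplication SMH removes.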
Global Access Control with Service Mesh Hub
Service Mesh Hub (SMH) allows you to define access control globally using Access Policies.
The Access Policies you create are then translated into Istio Authorization Policies on the corresponding Istio clusters.
Note that the goal of SMH is to support multiple service mesh technologies, not only Istio (Open Service Mesh and AWS App Mesh integrations have already started), and the Access Policies will be translated into the corresponding resources for each mesh.
To demonstrate how SMH Global Access Control works, we've deployed SMH in a cluster called `mgmt` and Istio in two clusters (`cluster1` and `cluster2`), and we've deployed the `bookinfo` demo application on both clusters.
When you create a Virtual Mesh, you can define if you want to enable Global Access Control:
```sh
cat << EOF | kubectl --context mgmt apply -f -
apiVersion: networking.smh.solo.io/v1alpha2
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: service-mesh-hub
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: null
  federation: {}
  globalAccessPolicy: ENABLED
  meshes:
  - name: istiod-istio-system-cluster1
    namespace: service-mesh-hub
  - name: istiod-istio-system-cluster2
    namespace: service-mesh-hub
EOF
```
In this example, we have created a Virtual Mesh with `cluster1` and `cluster2`, and we have enabled Global Access Control by setting the value of `globalAccessPolicy` to `ENABLED`.
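To confirm the Virtual Mesh was created, you can query it on the management cluster (assuming the standard lowercase resource name for the VirtualMesh CRD):

```sh
kubectl --context mgmt -n service-mesh-hub get virtualmesh virtual-mesh -o yaml
```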
SMH has automatically created the following Istio Authorization Policies on both clusters:
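The key one is a deny-all policy with an empty spec; here is a minimal sketch of what it looks like (the generated name is an assumption, and we show it in the namespace where bookinfo runs):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: global-access-control   # the actual generated name may differ
  namespace: default
spec: {}
```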
The `spec: {}` in the first Authorization Policy means that no communications are allowed.
So, currently, the only communications allowed are from the external world to the Istio Ingress Gateways on both clusters.
If we try to access the `productpage` service on the first cluster, we get an `RBAC: access denied` response.
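For example, going through the ingress gateway of the first cluster (`$INGRESS_HOST_CLUSTER1` is a placeholder for the address of your gateway):

```sh
curl http://$INGRESS_HOST_CLUSTER1/productpage
# RBAC: access denied
```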
We need to create an SMH Access Policy to allow the Istio Ingress Gateway to access the `productpage` service.
Here is the corresponding yaml:
```yaml
apiVersion: networking.smh.solo.io/v1alpha2
kind: AccessPolicy
metadata:
  namespace: service-mesh-hub
  name: istio-ingressgateway
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - name: istio-ingressgateway-service-account
        namespace: istio-system
        clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: productpage
      clusters:
      - cluster1
```
As you can see, we specify the cluster `cluster1` in the spec.
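As with the Virtual Mesh, we apply the policy on the management cluster (assuming the yaml above is saved as `access-policy-ingress.yaml`, a file name chosen for this example):

```sh
kubectl --context mgmt apply -f access-policy-ingress.yaml
```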
This SMH Access Policy is translated to the following Istio Authorization Policy on the first cluster:
```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  labels:
    cluster.multicluster.solo.io: cluster1
    owner.networking.smh.solo.io: service-mesh-hub
  name: productpage
  namespace: default
spec:
  rules:
  - from:
    - source:
        principals:
        - cluster1/ns/istio-system/sa/istio-ingressgateway-service-account
  selector:
    matchLabels:
      app: productpage
```
You can see that the SPIFFE ID is used to make sure the policy only allows the Istio Ingress Gateway of the first cluster to access the `productpage` service.
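You can check the generated policy directly on the first cluster:

```sh
kubectl --context cluster1 -n default get authorizationpolicy productpage -o yaml
```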
We can also use the Service Mesh Hub UI to see the policy we’ve just created:
You can see which Workloads are allowed to access which Traffic Targets.
In this case, we see that the Istio Ingress Gateway of the first cluster is allowed to access the `productpage` service of the same cluster.
And if we try to access the application in a web browser, we can see that the `productpage` service can't access the other services.
So next, we should create an SMH Access Policy to allow the `productpage` service to reach the `details` and `reviews` services.
Here is the corresponding yaml:
```yaml
apiVersion: networking.smh.solo.io/v1alpha2
kind: AccessPolicy
metadata:
  namespace: service-mesh-hub
  name: productpage
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - name: bookinfo-productpage
        namespace: default
        clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: details
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: reviews
```
Let’s have a look at the Workloads and Traffic Targets impacted by this policy.
We didn't include a `clusters` list in the `destinationSelector`, so the `details` and `reviews` services of both clusters are included in the Traffic Targets.
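If we wanted to limit the Traffic Targets to the first cluster only, we would add a `clusters` list to each matcher, as we did in the ingress gateway policy. For example, the `details` matcher would become:

```yaml
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: details
      clusters:
      - cluster1
```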
Finally, we should create an SMH Access Policy to allow the `reviews` service to communicate with the `ratings` service.
Here is the corresponding yaml:
```yaml
apiVersion: networking.smh.solo.io/v1alpha2
kind: AccessPolicy
metadata:
  namespace: service-mesh-hub
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - name: bookinfo-reviews
        namespace: default
        clusterName: cluster1
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
```
Let’s have a look at the Workloads and Traffic Targets impacted by this policy.
If we reload the web page, we can see that the application is now working well.
The policies we have created allow all the communications the application needs on the local cluster.
But if you have read the previous post about cross-cluster service communication with service mesh, you know that we want to allow the `productpage` service of the first cluster to communicate with the `reviews` service of the second cluster (this is already the case), and the `reviews` service of the second cluster to communicate with the `ratings` service of the same cluster.
Let's update the `reviews` policy to allow that.
Here is the corresponding yaml:
```yaml
apiVersion: networking.smh.solo.io/v1alpha2
kind: AccessPolicy
metadata:
  namespace: service-mesh-hub
  name: reviews
spec:
  sourceSelector:
  - kubeServiceAccountRefs:
      serviceAccounts:
      - name: bookinfo-reviews
        namespace: default
        clusterName: cluster1
      - name: bookinfo-reviews
        namespace: default
        clusterName: cluster2
  destinationSelector:
  - kubeServiceMatcher:
      namespaces:
      - default
      labels:
        service: ratings
```
Let’s have a look at the Workloads and Traffic Targets impacted by this policy.
You can see that the Workloads list now includes the `reviews` services of the second cluster.
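Extrapolating from the translation we saw earlier for `productpage`, the `spec` of the Istio Authorization Policy generated for `ratings` on the second cluster should now contain both principals (a sketch, not the literal output):

```yaml
spec:
  rules:
  - from:
    - source:
        principals:
        - cluster1/ns/default/sa/bookinfo-reviews
        - cluster2/ns/default/sa/bookinfo-reviews
  selector:
    matchLabels:
      app: ratings
```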
Here are all the policies we have at the end:
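To list them from the management cluster (assuming the standard lowercase plural resource name for the AccessPolicy CRD):

```sh
kubectl --context mgmt -n service-mesh-hub get accesspolicies
```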
As you can see, Service Mesh Hub allows you to create Access Policies programmatically while providing a nice user interface to understand all the communications allowed globally.
Get started
We invite you to check out the project and join the community. Solo.io also offers enterprise support for Istio service mesh for those looking to operationalize service mesh environments; request a meeting to learn more here.
- Learn more about Service Mesh Hub
- Read the docs and watch the demos
- Request a personalized demo
- Questions? Join the community