In the journey to cloud-native architectures, multi-cloud is often a natural step in the evolution of your application.
Cost savings, avoiding vendor lock-in, versatility… name your reason: this is a reality we see more and more while helping our customers in production.
In this blog post, we are excited to share with you how to make an AWS Lambda function available to your applications running in GCP, using two independent Istio meshes and securing the traffic over the wire.
1. Create the Lambda function
First, we are going to create a Lambda function and expose it with an internal ALB, available only inside the AWS VPC.
The code for the Lambda is really simple, and you can even deploy it directly from this repo:
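If you want a feel for what such a function looks like, here is a minimal sketch, not the repo’s actual code: a Python handler that returns an HTML page similar to the response we’ll see later. The function name, runtime, and role ARN are placeholders.
cat > lambda_function.py <<'PYEOF'
def lambda_handler(event, context):
    # ALB target groups expect this response shape from a Lambda
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html; charset=utf-8"},
        "body": "<html><body><p>Hello World from Lambda</p></body></html>",
    }
PYEOF
zip function.zip lambda_function.py
# Role ARN is a placeholder; it needs the basic Lambda execution policy
aws lambda create-function --function-name hello-lambda \
  --runtime python3.9 --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::123456789012:role/lambda-exec-role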
After the function is created, you’ll need to create an internal ALB and route all its traffic to the Lambda. There are many tutorials about this, but here are the basic steps (a CLI sketch follows the list):
Create an internal ALB
On the navigation pane, under LOAD BALANCING, choose Target Groups
Choose Create target group
For Target group name, type a name for the target group
For Target type, select Lambda function
After you create the target group, register the Lambda function that you deployed earlier
Add a listener (or two) to the Load Balancer, forwarding all traffic to the target group
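If you prefer the AWS CLI to the console, the equivalent steps look roughly like this; subnet IDs, security groups, and ARNs are placeholders:
# Internal ALB
aws elbv2 create-load-balancer --name lambda-alb --scheme internal \
  --subnets subnet-aaa subnet-bbb subnet-ccc --security-groups sg-xxx
# Target group of type lambda (no port/protocol needed)
aws elbv2 create-target-group --name lambda-tg --target-type lambda
# Allow the ALB to invoke the function, then register it as a target
aws lambda add-permission --function-name hello-lambda \
  --statement-id alb --principal elasticloadbalancing.amazonaws.com \
  --action lambda:InvokeFunction --source-arn <target-group-arn>
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=<lambda-function-arn>
# HTTPS listener using a certificate stored in ACM
aws elbv2 create-listener --load-balancer-arn <alb-arn> \
  --protocol HTTPS --port 443 --certificates CertificateArn=<acm-cert-arn> \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>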
In the picture, you can see an HTTP listener and an HTTPS listener, using a certificate stored in ACM:
As we said, it is not possible to call this function from outside the AWS VPC:
$ curl https://alb.jesus2.solo.io -v
* Trying 10.0.1.186:443...
* connect to 10.0.1.186 port 443 failed: Operation timed out
* Trying 10.0.2.117:443...
* connect to 10.0.2.117 port 443 failed: Operation timed out
* Trying 10.0.3.223:443...
* After 74907ms connect time, move on!
* connect to 10.0.3.223 port 443 failed: Operation timed out
* Failed to connect to alb.jesus2.solo.io port 443 after 225096 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to alb.jesus2.solo.io port 443 after 225096 ms: Couldn't connect to server
2. Deploy Istio in an EKS cluster
Next, we are going to use an EKS cluster deployed in the same VPC as the ALB, so the ALB is reachable from the workloads running in that cluster.
Let’s start by downloading the istioctl binary and deploying the Istio control plane with some common options:
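The exact options aren’t critical; a sketch that matches the rest of this walkthrough (revision 1-17, ingress gateway in the istio-ingress namespace; the cluster context name is a placeholder) could be:
export CLUSTER2=eks-lab2
# Istiod control plane, minimal profile
istioctl --context ${CLUSTER2} install -r 1-17 -y -f -<<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
spec:
  profile: minimal
EOF
# Istio ingress gateway, empty profile, in a non-default namespace
kubectl --context ${CLUSTER2} create namespace istio-ingress
istioctl --context ${CLUSTER2} install -r 1-17 -y -f -<<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateways
spec:
  profile: empty
  components:
    ingressGateways:
    - name: istio-ingressgateway
      namespace: istio-ingress
      enabled: true
EOF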
After a few seconds, the Istio Ingress Gateway is configured and ready:
istioctl --context ${CLUSTER2} proxy-status
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
istio-ingressgateway-57dc66d6ff-d2r75.istio-ingress Kubernetes SYNCED SYNCED NOT SENT NOT SENT NOT SENT istiod-1-17-6f5f489dd6-l6krg 1.17.2
3. Deploy Istio in a GCP cluster
Following a similar approach, we are now going to use a GKE cluster (not connected in any way to the EKS cluster) and deploy Istio there. Notice that the two Istio installations form independent meshes that do not share a common root of trust.
export CLUSTER1=gke-lab3
# Istiod control-plane, minimal profile
istioctl --context ${CLUSTER1} install -r 1-17 -y -f -<<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: control-plane
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        env:
        # Pilot will send only clusters that are referenced in gateway virtual services attached to the gateway
        - name: PILOT_FILTER_GATEWAY_CLUSTER_CONFIG
          value: "true"
        # Certificates received by the proxy will be verified against the OS CA certificate bundle
        - name: VERIFY_CERTIFICATE_AT_CLIENT
          value: "true"
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable basic DNS proxying
        ISTIO_META_DNS_CAPTURE: "true"
        # Enable automatic address allocation
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
    enablePrometheusMerge: true
EOF
# Istiod service (no revision in the selector)
kubectl --context ${CLUSTER1} apply -f -<<EOF
apiVersion: v1
kind: Service
metadata:
  labels:
    app: istiod
    istio: pilot
    release: istio
  name: istiod
  namespace: istio-system
spec:
  type: ClusterIP
  ports:
  - name: grpc-xds
    port: 15010
  - name: https-dns
    port: 15012
  - name: https-webhook
    port: 443
    targetPort: 15017
  - name: http-monitoring
    port: 15014
  selector:
    app: istiod
EOF
# Istio egress gateway, empty profile, in a non-default namespace
kubectl --context ${CLUSTER1} create namespace istio-egress
istioctl --context ${CLUSTER1} install -r 1-17 -y -f -<<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-gateways
spec:
  profile: empty
  components:
    egressGateways:
    - name: istio-egressgateway
      namespace: istio-egress
      enabled: true
EOF
Let’s also deploy a simple application in GKE, so we can use it to connect to the Lambda at the end of this exercise:
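A sketch using the stock sleep sample from the Istio release, with revision-based injection enabled on its namespace:
kubectl --context ${CLUSTER1} create namespace sleep
kubectl --context ${CLUSTER1} label namespace sleep istio.io/rev=1-17
kubectl --context ${CLUSTER1} apply -n sleep \
  -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/sleep/sleep.yaml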
After a few seconds, the Istio Egress Gateway and the sleep app are configured and ready:
istioctl --context ${CLUSTER1} proxy-status
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
istio-egressgateway-7c5f5dcfbb-vz5zh.istio-egress Kubernetes SYNCED SYNCED NOT SENT NOT SENT NOT SENT istiod-1-17-6f5f489dd6-vh9zs 1.17.2
sleep-87549b8d9-622db.sleep Kubernetes SYNCED SYNCED SYNCED SYNCED NOT SENT istiod-1-17-6f5f489dd6-vh9zs 1.17.2
This is how the system looks now:
4. Expose the AWS endpoint using Istio mTLS
The next step is to include the ALB endpoint in the Istio service registry, so that we can call this external service from inside the mesh:
kubectl apply --context ${CLUSTER2} -f -<<EOF
# The external service will be visible only from the Istio ingress gateway; no need to make it available to other workloads for now
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-alb-ext
  namespace: istio-ingress
spec:
  exportTo:
  - .
  hosts:
  - alb.jesus2.solo.io
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS
---
# As we chose to connect through the ALB HTTPS listener, make sure you tell Istio it must use TLS for this connection
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-alb-ext
  namespace: istio-ingress
spec:
  host: alb.jesus2.solo.io
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 443
      tls:
        mode: SIMPLE
EOF
After this, we can expose the external service using an mTLS-protected listener in the gateway:
kubectl apply --context ${CLUSTER2} -f -<<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: eastwestgateway
  namespace: istio-ingress
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    istio.io/rev: 1-17
  servers:
  - hosts:
    - lambda.external
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      credentialName: mtls-credential
      # Notice that since we are connecting two unrelated Istio meshes, ISTIO_MUTUAL mode is not possible and we must provide the certs on both sides, but this is fine.
      mode: MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-lambda
  namespace: istio-ingress
spec:
  exportTo:
  - .
  gateways:
  - eastwestgateway
  hosts:
  - lambda.external
  http:
  - match:
    - sourceLabels:
        app: istio-ingressgateway
        istio: ingressgateway
        istio.io/rev: 1-17
      uri:
        prefix: /
    route:
    # All traffic sent to the lambda.external host will be forwarded to the ALB; in more complex scenarios we could even create more elaborate routing
    - destination:
        host: alb.jesus2.solo.io
        port:
          number: 443
EOF
We can verify that a new route is available in the gateway:
istioctl --context ${CLUSTER2} proxy-config routes deploy/istio-ingressgateway -n istio-ingress
NAME DOMAINS MATCH VIRTUAL SERVICE
https.443.https.eastwestgateway.istio-ingress lambda.external /* vs-lambda.istio-ingress
* /healthz/ready*
* /stats/prometheus*
Remember that we protected the route in the Gateway CR using a referenced secret. Let’s create the mTLS secret pairs for both origin (Istio EgressGateway in GKE) and destination (Istio IngressGateway in EKS).
# create server certificates
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout server.key -out server.crt -subj "/C=NT/ST=Zamunda/O=Solo.io/OU=Solo.io/CN=*"
# create client certificates
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout client.key -out client.crt -subj "/C=NT/ST=Wakanda/O=Solo.io/OU=Solo.io/CN=*"
# make the certs available in origin gateway
kubectl --context ${CLUSTER1} create -n istio-egress secret generic mtls-credential \
--from-file=key=client.key \
--from-file=cert=client.crt \
--from-file=cacert=server.crt
# make the certs available in destination gateway
kubectl --context ${CLUSTER2} create -n istio-ingress secret generic mtls-credential \
--from-file=key=server.key \
--from-file=cert=server.crt \
--from-file=cacert=client.crt
Next, we’ll test it with curl, using the certificates directly (it won’t work without the certificates, as the gateway would reject the request):
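A sketch of the call; we reuse ${HOST_GW_CLUSTER2}, the EKS ingress gateway address that also appears in the next section, and skip server verification for brevity since our self-signed server cert carries no SAN:
curl -s -i -k https://lambda.external \
  --connect-to lambda.external:443:${HOST_GW_CLUSTER2}:443 \
  --cert client.crt --key client.key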
HTTP/2 200
server: istio-envoy
date: Wed, 03 May 2023 17:54:54 GMT
content-type: text/html; charset=utf-8
content-length: 258
x-envoy-upstream-service-time: 496
<html><head><title>Hello World!</title><style>
html, body {
margin: 0; padding: 0;
font-family: arial; font-weight: 700; font-size: 3em;
text-align: center;
}
</style></head><body><p>Hello World from Lambda</p></body></html>%
Half of the job is done! Now let’s work on the GKE side to make this mTLS transparent to applications.
5. Connect both clusters using Istio gateways and mTLS
Although it’s possible to connect the GKE workloads directly to the EKS ingress gateway, this is not a great approach, as we would be introducing too much complexity into the applications. Instead, we’ll isolate this complexity in an Istio egress gateway, which will take care of all the mTLS certificates and cross-cluster configuration.
First of all, define the EKS Istio ingress gateway as an external service, because from the GKE cluster’s point of view, it really is external to us:
kubectl apply --context ${CLUSTER1} -f -<<EOF
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-lambda-ext
  namespace: istio-egress
spec:
  endpoints:
  # This is the port used to send the traffic using mTLS
  - address: ${HOST_GW_CLUSTER2}
    ports:
      http: 443
  hosts:
  - lambda.external
  ports:
  # Inside the cluster, we want to expose a plain HTTP service, so apps won't be forced to talk TLS at all
  - name: http
    number: 80
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: dr-lambda-ext
  namespace: istio-egress
spec:
  host: lambda.external
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      # Traffic sent to port 80 will actually be using mTLS behind the scenes
      tls:
        credentialName: mtls-credential
        mode: MUTUAL
        sni: lambda.external
EOF
Now, expose the lambda.external service inside the GKE mesh:
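A sketch of that wiring, assuming the default egress gateway labels and service ports: a Gateway listening on plain HTTP port 80, plus a VirtualService that first hops mesh traffic for lambda.external to the egress gateway and then forwards it to the external host, where the DestinationRule above originates the mTLS connection:
kubectl apply --context ${CLUSTER1} -f -<<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egressgateway
  namespace: istio-egress
spec:
  selector:
    istio: egressgateway
  servers:
  - hosts:
    - lambda.external
    port:
      name: http
      number: 80
      protocol: HTTP
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-lambda-egress
  namespace: istio-egress
spec:
  gateways:
  - mesh
  - egressgateway
  hosts:
  - lambda.external
  http:
  # Sidecar traffic for lambda.external is redirected to the egress gateway
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-egress.svc.cluster.local
        port:
          number: 80
  # The egress gateway forwards it to the external host; the DestinationRule upgrades it to mTLS
  - match:
    - gateways:
      - egressgateway
      port: 80
    route:
    - destination:
        host: lambda.external
        port:
          number: 80
EOF
We can then verify the whole path from the sleep pod with a plain HTTP call:
kubectl --context ${CLUSTER1} exec deploy/sleep -n sleep -- curl -s http://lambda.external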
This is a picture of the request that we are making:
Note: After this setup, you can protect the data path even further using Istio AuthorizationPolicies and PeerAuthentication, or establish a common root of trust between the clusters to achieve end-to-end mTLS with the proxies in passthrough mode. I’ll leave that exercise to the curious reader.
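As a starting point for that exercise, a gateway-scoped AuthorizationPolicy on the EKS side could pin the ingress gateway to serving only this host (a sketch, not part of the walkthrough above):
kubectl apply --context ${CLUSTER2} -f -<<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-lambda-only
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  action: ALLOW
  rules:
  # With a single ALLOW rule, any request for another host is denied
  - to:
    - operation:
        hosts:
        - lambda.external
EOF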
Learn more
We hope you feel more confident about the multi-cloud experience after walking through this example, where we shared resources located in different networks with little friction.
To participate in our community and learn more about Istio, join our Slack channel.