[Tutorial] How to configure a basic failover using Gloo Edge
Introduction
Failover is the ability to seamlessly and automatically switch to a reliable backup system to make critical systems more fault-tolerant and avoid major business impacts.
The following tutorial demonstrates how to easily configure failover for your upstreams using Gloo Edge.
Gloo Edge
Gloo Edge is a feature-rich, Kubernetes-native ingress controller, and next-generation API gateway. Gloo Edge is exceptional in its function-level routing; its support for legacy apps, microservices and serverless; its discovery capabilities; its numerous features; and its tight integration with leading open-source projects. Gloo Edge is uniquely designed to support hybrid applications, in which multiple technologies, architectures, protocols, and clouds can coexist.
Prerequisites
For this tutorial, we will deploy Gloo Edge on a local Kubernetes cluster using Kind (see the kind installation guide):
1. Install local Kubernetes cluster
Run the following command to install a local Kubernetes cluster.
kind create cluster --name local
A Kubernetes cluster is now installed locally:
Creating cluster "local" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Writing configuration
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Set kubectl context to "kind-local"
You can now use your cluster with:

kubectl cluster-info --context kind-local

Thanks for using kind!
2. Install Gloo Edge
The next step is to install Gloo Edge. But first, let's install glooctl, the Gloo Edge CLI:
curl -sL https://run.solo.io/gloo/install | sh
export PATH=$HOME/.gloo/bin:$PATH
Then, running the following command will install Gloo Edge:
glooctl install gateway
Gloo Edge should now be installed in your cluster:
Creating namespace gloo-system... Done.
Starting Gloo Edge installation...

Gloo Edge was successfully installed!
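As an optional check, you can verify that the Gloo Edge pods are running; with a default gateway installation you should typically see the gloo, discovery, gateway, and gateway-proxy pods in the Running state:

kubectl get pods -n gloo-system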
Tutorial
Creating the demo services
For this tutorial, we will use two services, service-blue and service-green, to demonstrate the failover. The first step is to create the two services and set up basic routing to the blue service.
First, let’s install the blue service. Save the following content as service-blue.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bluegreen
    text: blue
  name: service-blue
  namespace: default
spec:
  ports:
  - name: color
    port: 10000
    protocol: TCP
    targetPort: 10000
  selector:
    app: bluegreen
    text: blue
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bluegreen
    text: blue
  name: echo-blue
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bluegreen
      text: blue
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bluegreen
        text: blue
    spec:
      containers:
      - args:
        - -text="blue-pod"
        image: hashicorp/http-echo@sha256:ba27d460cd1f22a1a4331bdf74f4fccbc025552357e8a3249c40ae216275de96
        imagePullPolicy: IfNotPresent
        name: echo
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --config-yaml
        - |2
          node:
            cluster: ingress
            id: "ingress~for-testing"
            metadata:
              role: "default~proxy"
          static_resources:
            listeners:
            - name: listener_0
              address:
                socket_address: { address: 0.0.0.0, port_value: 10000 }
              filter_chains:
              - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_http
                    codec_type: AUTO
                    route_config:
                      name: local_route
                      virtual_hosts:
                      - name: local_service
                        domains: ["*"]
                        routes:
                        - match: { prefix: "/" }
                          route: { cluster: some_service }
                    http_filters:
                    - name: envoy.filters.http.health_check
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                        pass_through_mode: true
                    - name: envoy.filters.http.router
            clusters:
            - name: some_service
              connect_timeout: 0.25s
              type: STATIC
              lb_policy: ROUND_ROBIN
              load_assignment:
                cluster_name: some_service
                endpoints:
                - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: 0.0.0.0
                          port_value: 5678
          admin:
            access_log_path: /dev/null
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 19000
        - --disable-hot-restart
        - --log-level
        - debug
        - --concurrency
        - "1"
        - --file-flush-interval-msec
        - "10"
        image: envoyproxy/envoy:v1.14.2
        imagePullPolicy: IfNotPresent
        name: envoy
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
Then apply the manifest to create the blue service:
kubectl apply -f service-blue.yaml
Now let’s repeat the same steps to create the green service. Save the following manifest as service-green.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bluegreen
  name: service-green
  namespace: default
spec:
  ports:
  - name: color
    port: 10000
    protocol: TCP
    targetPort: 10000
  selector:
    app: bluegreen
    text: green
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: bluegreen
    text: green
  name: echo-green
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: bluegreen
      text: green
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: bluegreen
        text: green
    spec:
      containers:
      - args:
        - -text="green-pod"
        image: hashicorp/http-echo@sha256:ba27d460cd1f22a1a4331bdf74f4fccbc025552357e8a3249c40ae216275de96
        imagePullPolicy: IfNotPresent
        name: echo
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - args:
        - --config-yaml
        - |2
          node:
            cluster: ingress
            id: "ingress~for-testing"
            metadata:
              role: "default~proxy"
          static_resources:
            listeners:
            - name: listener_0
              address:
                socket_address: { address: 0.0.0.0, port_value: 10000 }
              filter_chains:
              - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_http
                    codec_type: AUTO
                    route_config:
                      name: local_route
                      virtual_hosts:
                      - name: local_service
                        domains: ["*"]
                        routes:
                        - match: { prefix: "/" }
                          route: { cluster: some_service }
                    http_filters:
                    - name: envoy.filters.http.health_check
                      typed_config:
                        "@type": type.googleapis.com/envoy.extensions.filters.http.health_check.v3.HealthCheck
                        pass_through_mode: true
                    - name: envoy.filters.http.router
            clusters:
            - name: some_service
              connect_timeout: 0.25s
              type: STATIC
              lb_policy: ROUND_ROBIN
              load_assignment:
                cluster_name: some_service
                endpoints:
                - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: 0.0.0.0
                          port_value: 5678
          admin:
            access_log_path: /dev/null
            address:
              socket_address:
                address: 0.0.0.0
                port_value: 19000
        - --disable-hot-restart
        - --log-level
        - debug
        - --concurrency
        - "1"
        - --file-flush-interval-msec
        - "10"
        image: envoyproxy/envoy:v1.14.2
        imagePullPolicy: IfNotPresent
        name: envoy
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
Then run the following command to install the green service:
kubectl apply -f service-green.yaml
You should see that the two services have been created:
kubectl get svc # List k8s services

NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP     2d16h
service-blue    ClusterIP   10.97.187.38     <none>        10000/TCP   5s
service-green   ClusterIP   10.107.217.190   <none>        10000/TCP   5s
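You can also confirm that the pods are ready; since each pod runs two containers (the echo server and its Envoy sidecar), you should see 2/2 in the READY column:

kubectl get pods -l app=bluegreen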
Basic Routing
In the following section, we will set up basic routing to the blue service through the Gloo Edge gateway. First, let’s check that Gloo Edge discovered the blue and green services and created an upstream for each:
kubectl get upstreams -n gloo-system | grep service
Here is the expected result:
default-service-blue-10000    1m
default-service-green-10000   1m
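If you are curious about what discovery generated, you can inspect one of these upstreams directly; its spec should contain a kube block referencing the service-blue Kubernetes service on port 10000:

kubectl get upstream default-service-blue-10000 -n gloo-system -o yaml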
Now, let’s set up the routing. Running the following command will create a virtual service that routes all the traffic hitting the gateway to the default-service-blue-10000 upstream, which is the blue service:
glooctl add route --path-prefix / --dest-name default-service-blue-10000
This is the expected result:
{"level":"info","ts":"2021-01-25T10:55:57.792-0700","caller":"add/route.go:156","msg":"Created new default virtual service","virtualService":"virtual_host:{domains:\"*\" routes:{matchers:{prefix:\"/\"} route_action:{single:{upstream:{name:\"default-service-blue-10000\" namespace:\"gloo-system\"}}}}} status:{} metadata:{name:\"default\" namespace:\"gloo-system\" resource_version:\"36577\" generation:1}"} +-----------------+--------------+---------+------+---------+-----------------+----------------------------------------+ | VIRTUAL SERVICE | DISPLAY NAME | DOMAINS | SSL | STATUS | LISTENERPLUGINS | ROUTES | +-----------------+--------------+---------+------+---------+-----------------+----------------------------------------+ | default | | * | none | Pending | | / -> | | | | | | | | gloo-system.default-service-blue-10000 | | | | | | | | (upstream) | +-----------------+--------------+---------+------+---------+-----------------+----------------------------------------+
Then we can test it. First, port-forward the Gloo Edge API Gateway port:
kubectl port-forward -n gloo-system svc/gateway-proxy 8080:80 &
Then call the API Gateway:
curl localhost:8080
The expected result should show a response from the blue service:
Handling connection for 8080
"blue-pod"
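Before moving on to the failover, you can also verify that the green service responds on its own by port-forwarding it directly (the local port 10001 below is an arbitrary choice):

kubectl port-forward svc/service-green 10001:10000 &
curl localhost:10001

The expected response is "green-pod".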
Failover
In this final section, we will set up the failover to service-green and test it. First, let’s add the health check configuration to the service-blue upstream (default-service-blue-10000):
kubectl patch upstream -n gloo-system default-service-blue-10000 --type=merge -p "
spec:
  healthChecks:
  - timeout: 1s
    interval: 1s
    unhealthyThreshold: 1
    healthyThreshold: 1
    httpHealthCheck:
      path: /health
"
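To confirm that the patch was applied, you can read the health check section back from the upstream:

kubectl get upstream default-service-blue-10000 -n gloo-system -o jsonpath='{.spec.healthChecks}'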
Then, add the failover configuration to the same upstream service-blue (default-service-blue-10000):
kubectl patch upstream -n gloo-system default-service-blue-10000 --type=merge -p "
spec:
  failover:
    prioritizedLocalities:
    - localityEndpoints:
      - lbEndpoints:
        - address: service-green.default
          port: 10000
        locality:
          region: local
          zone: local
"
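With this configuration, once every endpoint of the primary upstream fails its health check, traffic shifts to the failover endpoints, here the service-green.default cluster DNS name on port 10000. You can dump the upstream spec to double-check that both patches were applied:

kubectl get upstream default-service-blue-10000 -n gloo-system -o yaml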
Now it’s time to test our failover configuration. First, let’s trigger a health check failure on the blue service, just to simulate an outage:
kubectl port-forward deploy/echo-blue 19000 &
curl -v -X POST http://localhost:19000/healthcheck/fail
Now, if we curl the gateway again:
curl localhost:8080
The response should show that the request has been routed to the green service this time, thanks to the failover:
"green-pod"
It failed over to service-green successfully!
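To revert the simulated failure, flip the Envoy admin health check back with the /healthcheck/ok endpoint, the counterpart of /healthcheck/fail, and traffic should return to the blue service:

curl -v -X POST http://localhost:19000/healthcheck/ok
curl localhost:8080

The response should be "blue-pod" again.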
One Step Further
In this tutorial, we explored failover to a different service in the same Kubernetes cluster, but in some cases you will need to fail over to services in a different cluster.
Gloo Edge Enterprise (EE) provides a feature called Gloo Edge Federation that makes multi-cluster failover easy to set up. To learn more about Gloo Edge Federation failover, check the documentation: https://docs.solo.io/gloo-edge/latest/guides/gloo_federation/service_failover