What’s New in Gloo Edge 1.8 (now GA!)
We are thrilled to talk about the Gloo Edge 1.8 release in more technical detail. Like all recent Gloo Edge releases, the focus of these enhancements has been on our customers, community, and partners. It is an exciting time for Gloo Edge as our customers move forward on their global production rollouts (with over ten thousand virtual services under management today!), and the features in this release are helping them achieve their goals. This is also our second technical release blog recently; be sure to see our earlier post on Gloo Portal v1.0.0 as well.
One of the coolest new features is the ability to see everything that was added to Gloo Edge between the release you are currently using and the release you want to upgrade to. To see this yourself, go to the documentation changelogs and choose “compare versions.” This enhancement gives you tailored release notes for your exact upgrade path.
In this blog we will highlight some of the key features you will find in the Gloo Edge v1.8.0 release notes, but first we want to share a bit about the momentum around Gloo Edge.
Gloo Product Momentum
Gloo product usage continues to soar. First, thanks to your support we have reached over 3,000 stars on the open source Gloo Edge GitHub repo. Second, we now have over 3,600 users in the general channel and 1,800 users in the Gloo Edge channel of our community Slack, plus hundreds of private channels supporting individual customers and teams. Furthermore, between Gloo Portal and Gloo Edge we have shipped over 550 releases to date! That is a lot of progress since December 2018.
Improved Flagger Integration with Envoy Proxy
Weaveworks introduced Flagger as a tool for GitOps-style progressive delivery, managing traffic routing for A/B testing, blue/green deployments with traffic mirroring, and safer canary releases to improve reliability. Over the past couple of releases, Solo has worked with Weaveworks to seamlessly integrate Flagger with open source Envoy Proxy and Gloo Edge. Since Gloo Edge 1.6, Flagger has been able to automate canary releases for the Gloo Edge ingress controller.
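As a sketch of what driving a canary through Gloo Edge looks like, the following Flagger `Canary` resource follows the shape documented by the Flagger project; the app name, namespace, port, and analysis values here are illustrative, not taken from this release:

```yaml
# Illustrative Flagger Canary using the Gloo provider.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: gloo                # tells Flagger to program Gloo Edge routes
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo               # the deployment to canary
  service:
    port: 9898
  analysis:
    interval: 30s               # how often traffic weights are adjusted
    threshold: 5                # failed checks before rollback
    maxWeight: 50               # cap canary traffic at 50%
    stepWeight: 5               # shift traffic in 5% increments
```

Flagger then gradually shifts traffic from the primary to the canary Upstream, rolling back automatically if the analysis metrics fail.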
In the most recent release, we added a couple of notable improvements to the Gloo/Flagger integration:
Non-Auto Discovered Upstreams
Previously, Flagger relied on Gloo’s discovery feature, which automatically populates the storage layer with Upstreams and functions to facilitate easy routing. Kubernetes Service-based Upstream discovery is one such method supported by Gloo. However, some users choose to disable the discovery service, and now, even with Gloo discovery disabled, you can rely on Flagger to automate a canary release for the Gloo Edge ingress controller.
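With discovery disabled, an Upstream for a Kubernetes Service can simply be defined by hand. This is a minimal sketch (the service name, namespace, and port are hypothetical) of the kind of resource that discovery would otherwise have generated:

```yaml
# Illustrative: a manually defined Upstream for a Kubernetes Service,
# for clusters where Gloo discovery has been disabled.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: podinfo-canary-9898
  namespace: gloo-system
spec:
  kube:
    serviceName: podinfo-canary      # the Kubernetes Service to route to
    serviceNamespace: test
    servicePort: 9898
```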
Enhanced Upstream Configuration
Users may require that Flagger-generated Upstreams contain non-standard configuration, such as TLS settings, circuit breakers, or HTTP connection options. With this newly supported configuration, Flagger Upstreams can even be set up for mTLS!
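To make this concrete, the sketch below shows an Upstream carrying TLS and circuit-breaker configuration using fields from the Gloo Upstream API; the names, secret, and limits are illustrative, and how Flagger propagates this configuration is covered in its own documentation:

```yaml
# Illustrative Upstream with non-standard configuration.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: podinfo-canary-9898
  namespace: gloo-system
spec:
  kube:
    serviceName: podinfo-canary
    serviceNamespace: test
    servicePort: 9898
  sslConfig:
    secretRef:                    # TLS cert/key stored in a Kubernetes secret
      name: upstream-tls
      namespace: gloo-system
  circuitBreakers:
    maxConnections: 1024          # limit concurrent connections to the upstream
    maxPendingRequests: 1024
```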
Our continued work with Flagger is just another demonstration of Solo’s continued support for and involvement in open source projects.
Support For SOAP/XSLT
One of the many benefits of a service proxy is enabling organizations to bridge applications built with different technologies. [Our post on tech debt and the value of API gateways and service meshes in application modernization](https://www.solo.io/blog/tech-debt-and-the-value-of-api-gateways-and-service-meshes-in-application-modernization/) explores how API gateways and service meshes help organizations move quickly while maintaining a responsible amount of technical debt. One example of this technical debt is a web service built on the Simple Object Access Protocol (SOAP).
SOAP web services are difficult to modernize. SOAP relies exclusively on XML to provide messaging services, which makes it challenging to integrate with RESTful services. As a result, such services often remain part of a monolithic application or run as an external service, requiring an additional network hop.
Gloo Edge solves this challenge by running an XSLT (Extensible Stylesheet Language Transformations) engine in-process with Envoy. The XSLT engine, responsible for transforming between XML and JSON, is embedded into the Envoy process itself, providing the performance benefits that we’ve come to expect from Envoy. We benchmarked this to make sure the overhead was minimal and that this enhancement would allow our customers to move their SOAP services to Gloo Edge. For benchmarking results and other details on how it works, please see the in-depth blog on this topic.
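As a rough sketch of how this is wired up, an XSLT transformation can be attached to a route on a VirtualService; the upstream name and stylesheet here are hypothetical, and the exact field names should be checked against the transformation documentation:

```yaml
# Illustrative: attaching an XSLT request transformation to a route.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: soap-service
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - '*'
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-soap-service-8080   # hypothetical SOAP backend
            namespace: gloo-system
      options:
        stagedTransformations:
          regular:
            requestTransforms:
            - requestTransformation:
                xsltTransformation:
                  setContentType: text/xml
                  xslt: |
                    <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                      <!-- stylesheet body that wraps the incoming
                           payload in a SOAP envelope -->
                    </xsl:stylesheet>
```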
Helm Usability Improvements
Over 20% of the customer enhancement requests we receive are about adding configuration to Helm. Helm, a package manager for Kubernetes, is commonly used to define, install, and upgrade Kubernetes applications. Gloo Edge exposes a robust API for configuring Kubernetes and Gloo resources via Helm. Our product engineers regularly receive requests for particular configurations that customers would like to expose in the Gloo API, and we actively encourage our developer community to tailor the API as they see fit for their exact environment. However, we decided that the Solo.io team shouldn’t become a bottleneck for implementing the customization you want to apply to your Gloo installation. Therefore, we now expose a generic API for each of the Kubernetes resources in our Gloo chart.
Let’s say you wanted to add a label to the following Gloo deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    gloo: gloo
  name: gloo
```
Previously, you would have to either submit a pull request to make this a default configuration of Gloo Edge, or ask the Solo product team to prioritize the issue; it could be days or weeks before the feature landed. Now you do not need to wait: you can use the `kubeResourceOverride` Helm value to add a label to the Deployment.
```yaml
gloo:
  deployment:
    kubeResourceOverride:
      metadata:
        labels:
          resource-owner: infrastructure-team
```
The resulting Deployment includes the new label, and the original one:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    gloo: gloo
    resource-owner: infrastructure-team
  name: gloo
```
You may notice that this functionality closely mirrors what [Kustomize](https://kustomize.io/) provides. It is true that both solutions enable users to patch resources generated by Helm. However, where Kustomize requires configuration to be split between bases and overlays, the Gloo Edge `kubeResourceOverride` can be maintained directly in the `values.yaml` file, letting you use this feature without any changes to your CI/CD pipeline. The feature is also baked into Gloo Edge, and customer feedback around Kustomize was that you didn’t want to download, learn, and rely on yet another tool.
Schema in Gloo Edge CRDs
Admission controllers are a Kubernetes-native feature that provides modular control over resources applied to a cluster. They work by intercepting requests to the Kubernetes API server prior to persisting the object. Gloo Edge already supports a validating admission webhook to prevent invalid configuration from being written to Kubernetes.
We extended these protections by supporting structural schemas on our Custom Resource Definitions. Using [OpenAPI v3.0 validation](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.0.md#schemaObject), the specified schema is validated during creation of and updates to resources.
Below is an outline of the steps a request takes before it is persisted:

1. The request is authenticated and authorized by the Kubernetes API server.
2. Mutating admission webhooks may modify the object.
3. The object is validated against the CRD’s structural schema.
4. Validating admission webhooks, including the Gloo Edge validation webhook, accept or reject the object.
5. The object is persisted to etcd.
Validation schemas allow us to narrow the responsibility of the webhooks: the schemas protect against syntactic errors, so the webhook can focus on semantic errors. Additionally, they raise errors for bad configuration that might otherwise have been silently ignored.
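For context on what such a schema looks like, here is an abbreviated, illustrative sketch of a CRD carrying an OpenAPI v3 structural schema; the actual Gloo Edge CRDs define far more fields than shown here:

```yaml
# Abbreviated sketch of a CRD with a structural schema.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: upstreams.gloo.solo.io
spec:
  group: gloo.solo.io
  names:
    kind: Upstream
    plural: upstreams
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              kube:
                type: object
                properties:
                  serviceName:
                    type: string
                  serviceNamespace:
                    type: string
                  servicePort:
                    type: integer
                  selector:               # label selector: string-to-string map
                    type: object
                    additionalProperties:
                      type: string
```

Because `kube` only declares known properties, an unknown field such as `serviceSelector` is rejected at admission time.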
Take for example a Gloo Upstream that represents a set of one or more addressable pods for a Kubernetes Service.
```yaml
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
  name: default-productpage-9080
  namespace: gloo-system
spec:
  kube:
    serviceSelector:
      app: productpage
    serviceName: productpage
    serviceNamespace: default
```
The `serviceSelector` field is meant to be `selector`. The selector field allows Gloo to select Pods based on their labels. Without CRD validation schemas, an unsuspecting user could have applied this Upstream and never noticed the selector was misconfigured. With the recent update to include schemas, however, creating the resource fails:
```
ValidationError(Upstream.spec.kube): unknown field "serviceSelector" in io.solo.gloo.v1.Upstream.spec.kube
```