Service Mesh Hub v0.9.1 – Expanded OSM support, AWS App Mesh progress, plus more config and troubleshooting features
Service Mesh Hub is a Kubernetes-native management plane that enables configuration and operational management of multiple clusters running the same service mesh, as well as clusters running heterogeneous service meshes, through a unified API. Since the 0.7.2 release in September, we’ve shipped more features and fixes leading to the latest release, version 0.9.1.
New Features in open source Service Mesh Hub include:
AWS App Mesh: We’re continuing to make progress on AWS App Mesh support in Service Mesh Hub: this release implements the mesh, workload, and traffic target discovery objects, with Traffic Policy translation in progress for a future release (#994).
Debugging Command: The new CLI command meshctl debug snapshot exposes the input and output snapshots generated by the Service Mesh Hub controllers at runtime, making it easier to identify discrepancies between the controllers’ expected state and the actual state of the resources in their clusters (#992).
Validation Error Checks: Added validation that checks a referenced configuration target exists before translating the network configuration (#962) and reports non-existent TrafficTargets in the TrafficPolicy status (#963); a missing target now surfaces a nonexistent mesh error message with an INVALID status.
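For illustration, a TrafficPolicy that references a missing target might surface a status along these lines (the exact status layout, field names, and message wording here are assumptions, not the literal schema):

```yaml
# Hypothetical status fragment on a TrafficPolicy whose referenced
# TrafficTarget does not exist; the resource name is made up for the example.
status:
  observedGeneration: 1
  state: INVALID
  errors:
    - "non-existent mesh error: TrafficTarget reviews-default-cluster-1 not found"
```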
Global Settings Configuration: A new Settings CRD specifies a default mTLS value for all destination rules, and the TrafficPolicy API has been extended to allow overriding that default for select services. The Settings object allows modification of the global default at runtime; the networking controller consumes the Settings resource to inform translation behavior, similarly to TrafficPolicies, VirtualMeshes, and other API objects (#996). Thanks to @antonioberben for contributing to and engaging on this feature.
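As a sketch of how this might look in practice, a Settings object could set a cluster-wide mTLS default that an individual TrafficPolicy then overrides for one service. The apiVersion, field names, and resource names below are assumptions for illustration and may differ from the actual schema in your release:

```yaml
# Hypothetical sketch: apiVersion and field names are assumptions.
apiVersion: settings.smh.solo.io/v1alpha2
kind: Settings
metadata:
  name: settings
  namespace: service-mesh-hub
spec:
  mtls:
    istio:
      tlsMode: ISTIO_MUTUAL   # global default applied to generated destination rules
---
# A TrafficPolicy overriding the global default for a selected service.
apiVersion: networking.smh.solo.io/v1alpha2
kind: TrafficPolicy
metadata:
  name: disable-mtls-for-legacy
  namespace: service-mesh-hub
spec:
  destinationSelector:
    - kubeServiceRefs:
        services:
          - name: legacy-app        # hypothetical service
            namespace: default
            clusterName: cluster-1
  mtls:
    istio:
      tlsMode: DISABLE            # overrides the Settings default for this service
```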
And More… We’ve also added a new troubleshooting section on reading and interpreting CRD statuses (#946), included selected workloads in the traffic policy status so you can see which workloads each traffic policy targets for configuration (#942), and addressed intermittent 503s from the FailoverService by exposing a field on the TrafficPolicy OutlierDetection message for controlling maxEjectionPercent, with the default set to 100%.
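A sketch of the new outlier detection knob follows; only maxEjectionPercent and its 100% default come from these release notes, while the apiVersion, surrounding fields, and values are illustrative assumptions:

```yaml
# Hypothetical sketch: apiVersion, selectors, and supporting fields are assumptions.
apiVersion: networking.smh.solo.io/v1alpha2
kind: TrafficPolicy
metadata:
  name: failover-outlier-detection
  namespace: service-mesh-hub
spec:
  destinationSelector:
    - kubeServiceRefs:
        services:
          - name: my-service            # hypothetical service
            namespace: default
            clusterName: cluster-1
  outlierDetection:
    consecutiveErrors: 5                # illustrative supporting fields
    interval: 10s
    baseEjectionTime: 30s
    maxEjectionPercent: 100             # new field; defaults to 100%
```

Lowering maxEjectionPercent below 100 limits how much of a destination’s endpoint pool can be ejected at once, which is the lever this release exposes for tuning FailoverService behavior.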
A few breaking changes were introduced in recent versions: CRDs were updated with validation schemas and TrafficPolicy.FaultInjection was restructured (#512), VirtualMeshes now require a non-null mtlsConfig.shared.rootCertificateAuthority.generated field (#1021), and TrafficPolicies can now specify subsets when referencing a FailoverService as a traffic shift destination (#953).
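For the VirtualMesh change, a sketch of the now-required field is below; only the mtlsConfig.shared.rootCertificateAuthority.generated path comes from the release notes, and the other fields and names are illustrative assumptions:

```yaml
# Hypothetical sketch: apiVersion, mesh reference, and names are assumptions.
apiVersion: networking.smh.solo.io/v1alpha2
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: service-mesh-hub
spec:
  meshes:
    - name: istiod-istio-system-cluster-1   # hypothetical discovered mesh reference
      namespace: service-mesh-hub
  mtlsConfig:
    shared:
      rootCertificateAuthority:
        generated: {}                       # must now be explicitly non-null
```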
We also fixed a number of bugs to address potential panic conditions, renamed objects, and improved error handling and validation, among other changes that improve Service Mesh Hub’s functionality and the developer and admin experience of working with it.