OpenShift vs. Kubernetes: 7 Key Differences

What is OpenShift?

Red Hat OpenShift is a cloud-based Kubernetes platform that helps developers build distributed applications. It provides automated installation, upgrade, and lifecycle management of the entire container stack (operating system, Kubernetes, cluster services, and applications). 

You can deploy OpenShift in any cloud or on-premises. It lets you easily set up large-scale development infrastructure with enterprise-grade security.

Self-hosted Kubernetes installations and managed services such as Amazon EKS, Azure Kubernetes Service, and Google Kubernetes Engine provide core Kubernetes functionality, allowing businesses to choose and implement the features that work best for them. OpenShift, by contrast, provides comprehensive multi-tenancy capabilities, advanced security and monitoring, unified storage, and CI/CD pipeline management out of the box.

What is Kubernetes?

Kubernetes is an open source platform for managing Linux containers in private, public, and hybrid cloud environments. You can also use Kubernetes to manage a microservices architecture. Containers and Kubernetes can be deployed on most cloud providers.

Application developers, IT system administrators, and DevOps engineers use Kubernetes to automatically deploy, scale, maintain, schedule, and operate multiple application containers on a cluster of nodes. 

Containers run on the host’s shared operating system (OS) but are isolated from each other unless the user chooses to connect them.

Product vs. Project

OpenShift is a commercial product while Kubernetes is an open source project. 

OpenShift subscriptions allow users to receive paid support. Subscriptions also include a service called CloudForms, which helps organizations manage their private, public, and virtual infrastructures. You will need to renew a subscription and might need to pay more as your cluster grows.

Kubernetes provides an open source support model. You can consult with the large Kubernetes community if you have issues. If you need more help, external experts are available to consult on Kubernetes infrastructure.

Security

OpenShift enforces stricter security policies than Kubernetes. Its default policies restrict which container images can run; many official images are blocked because they expect to run as the root user. OpenShift requires certain permissions to be granted explicitly to maintain a minimum level of security, so you need to be familiar with these policies to deploy applications.

In addition, OpenShift provides an integrated OAuth server to enforce authentication, and it controls the permissions pods can request through Security Context Constraints (SCCs).
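
For example, a cluster administrator can relax these restrictions for specific workloads by granting a different SCC to a service account. A minimal sketch, where the service account and project names are placeholders:

# List the SCCs defined in the cluster
oc get scc

# Grant the built-in "anyuid" SCC to a service account so its pods may run with any UID
oc adm policy add-scc-to-user anyuid -z my-app-sa -n my-project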

Kubernetes has relatively lenient security policies, which you can customize to harden your cluster. Kubernetes supports authorization through role-based access control (RBAC). Setting up and configuring Kubernetes authorization can be a lot of work.
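
As an illustration, a minimal RBAC setup pairs a Role with a RoleBinding. The names below (the dev namespace, pod-reader, and the user jane) are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
# Allow read-only access to pods in the dev namespace
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
# Bind the role to a specific user
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io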

User interface

OpenShift provides an intuitive web-based console with a one-click login page. The OpenShift console provides a simple form-based interface that allows users to easily modify, delete, and add resources. Users can also easily visualize cluster projects, servers, and roles.

Kubernetes provides a more complex web-based interface that is generally not recommended for beginners. To access it, users must first install the official Kubernetes dashboard and then run kubectl proxy to forward a local port to the cluster’s API server. The dashboard has no login page of its own; to authenticate and authorize users, operators must implement a process that lets users generate their own bearer tokens.
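
As a rough sketch, the process typically looks like this (the manifest version and the dashboard-admin service account are assumptions, and the token command requires Kubernetes 1.24 or later):

# Install the official dashboard (check the project for the current manifest version)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Forward the API server to localhost; the dashboard is then reachable through the proxy URL
kubectl proxy

# Generate a bearer token for an existing service account to log in with
kubectl -n kubernetes-dashboard create token dashboard-admin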

Managed Kubernetes services such as Amazon Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE), as well as management platforms like Rancher, provide a more convenient provisioning interface and additional management tools.

Deployment approach

OpenShift deployments are defined with DeploymentConfig objects. A DeploymentConfig is not implemented by the standard controller machinery; it relies on dedicated deployer pod logic, and it does not support multiple concurrent updates. However, DeploymentConfigs have other benefits, such as versioned rollouts and triggers that facilitate automated deployments.
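
A minimal DeploymentConfig might look like the following; the application name and image are placeholders:

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: hello-app
spec:
  replicas: 2
  # DeploymentConfig selectors are plain label maps rather than matchLabels
  selector:
    app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:latest
  # Redeploy automatically whenever the pod template configuration changes
  triggers:
  - type: ConfigChange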

Kubernetes deployments are defined with Deployment objects, which deploy and update pods. Deployments are implemented by a controller running inside the cluster, which provides greater flexibility, and a Deployment object can handle multiple concurrent updates.
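
The equivalent Kubernetes Deployment, again with placeholder names, is handled entirely by the Deployment controller:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  # Rolling updates are managed by the Deployment controller
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:latest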

CI/CD

Both OpenShift and Kubernetes can be used to build CI/CD pipelines. However, neither platform offers a complete CI/CD solution on its own. To build a complete pipeline, both platforms must integrate with other tools, such as CI servers and automated testing and monitoring tools.

OpenShift simplifies this process by providing certified Jenkins containers for use as the CI server.
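
For example, OpenShift’s Jenkins Pipeline build strategy (deprecated in newer releases in favor of Tekton-based OpenShift Pipelines) lets a BuildConfig embed a Jenkinsfile. The pipeline below is only a sketch with placeholder steps:

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: sample-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # The embedded Jenkinsfile is executed by the certified Jenkins container
      jenkinsfile: |-
        pipeline {
          agent any
          stages {
            stage('Build') {
              steps {
                sh 'echo building the application...'
              }
            }
          }
        }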

Kubernetes does not provide an official CI/CD integration solution. Building a CI/CD pipeline on Kubernetes requires integration with third-party tools.

Updates

OpenShift does not support multiple concurrent updates. Installing the latest version of OpenShift requires access to the Red Hat Enterprise Linux package management system.

Kubernetes allows you to run multiple, simultaneous upgrades. To upgrade Kubernetes, simply call the kubeadm upgrade command to download the latest version. Back up your existing installation files before upgrading Kubernetes.
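
A typical kubeadm-based upgrade looks roughly like this (the target version is only an example, and worker nodes are upgraded separately):

# Check available versions and verify the cluster can be upgraded
kubeadm upgrade plan

# Upgrade the control plane to a specific version
kubeadm upgrade apply v1.29.0

# On each worker node, upgrade the local kubelet configuration
kubeadm upgrade node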

Service catalog

OpenShift provides a service catalog with two default service brokers. As with Kubernetes, you can integrate additional service brokers for managed services. OpenShift’s service catalog makes it easy to deploy applications of your choice.

Kubernetes provides a service catalog as an optional component that must be installed separately. After installation, you need to connect the catalog to your service brokers. The service catalog provided by Kubernetes doesn’t have extensive support for services running within a cluster and is better suited to managed services.
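
With the upstream Service Catalog installed, a broker is registered through a ClusterServiceBroker resource. A minimal sketch with a placeholder URL (note that the upstream Service Catalog project is no longer actively maintained):

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: example-broker
spec:
  # Endpoint of an Open Service Broker API compatible broker
  url: https://broker.example.com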

Route vs. Ingress

OpenShift provides a routing mechanism to enable external access to cluster services. When a Route object is created in OpenShift, it is picked up by the built-in HAProxy load balancer to expose the requested service and make it externally available with the specified configuration. 

OpenShift ships with this built-in HAProxy-based load balancer but exposes a pluggable architecture, so administrators can replace it with an external load balancer such as NGINX (and NGINX Plus) or F5 BIG-IP.

OpenShift Routes support TLS re-encryption and TLS passthrough for increased security, weighted backends for traffic splitting, generated pattern-based hostnames, and wildcards.
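
A Route with edge TLS termination and a weighted canary backend might look like this; the hostname and service names are placeholders:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-app
spec:
  host: hello.apps.example.com
  # Primary backend receives most of the traffic
  to:
    kind: Service
    name: hello-app
    weight: 80
  # Additional weighted backend used for traffic splitting
  alternateBackends:
  - kind: Service
    name: hello-app-canary
    weight: 20
  tls:
    # edge, passthrough, or reencrypt
    termination: edge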

Kubernetes offers more basic traffic routing functionality. Pods and services have their own IP addresses, but these IP addresses are only accessible from within the Kubernetes cluster and not from external clients. 

To allow external access to pods and services, Kubernetes uses the Ingress object to specify which services should be exposed externally, at which URLs, and so on. You can specify customized configuration, including security settings such as TLS, via the Ingress object.
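
A comparable Kubernetes Ingress, with placeholder names and a TLS secret assumed to exist, might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app
spec:
  tls:
  - hosts:
    - hello.example.com
    # Secret containing the TLS certificate and key
    secretName: hello-tls
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-app
            port:
              number: 80

Unlike an OpenShift Route, an Ingress has no effect until an ingress controller (such as NGINX) is running in the cluster to act on it.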

Enabling Gloo Platform, Gloo Mesh and Gloo Gateway on OpenShift or Kubernetes

While OpenShift provides a superset of functionality beyond Kubernetes, none of that additional functionality is required by the Solo Gloo Platform products. Gloo Platform products, including Gloo Gateway, Gloo Mesh, and Gloo Network, interact with the Kubernetes APIs to deliver functionality.

Gloo Platform, including Gloo Gateway, Gloo Mesh, and Gloo Network, can provide robust API gateway, Istio-based service mesh, or eBPF-based CNI capabilities in either OpenShift or Kubernetes environments.

Solo customers use both OpenShift and Kubernetes to deliver secure, microservice-based environments for their applications and APIs, across on-premises and public cloud environments.

Learn more about Gloo Platform
