Deployment Topology


This page provides an overview of the deployment topology used when Kubernetes is deployed and configured by SESTEK.

Knovvu products can be deployed on either bare metal or virtualized servers (VMs). In this guide, the terms "server" and "VM" are used interchangeably. Some of the VMs in the topology run containerized applications on Kubernetes; such a VM is referred to as a "node". More information on Kubernetes nodes can be found at Kubernetes node components.

The number of VMs and the required CPU/memory configurations can be found in the Hardware Sizing Document for the Knovvu product to be installed.

[Diagram: Deployment topology for Kubernetes deployed by SESTEK]

| Server Type | Function | OS | Remarks |
| --- | --- | --- | --- |
| Terminal Server | The server that the SESTEK Application Support Engineer connects to in order to bootstrap the installation process. | Windows | |
| Origin Server | Used for running the installation scripts. After the installation process is started, a Kubernetes cluster is created and the cluster nodes are joined to it. During deployment, the Origin Server pulls the required container images from the Central Knovvu Container Registry. During runtime, it acts as the container registry for the other Kubernetes nodes by caching the container images that were pulled during deployment. Because the Kubernetes nodes may require these container images at runtime, this server must remain reachable by the cluster nodes and must not be shut down after the installation. | Linux | |
| Load Balancer Nodes | These VMs load balance the traffic between external clients and the Kubernetes cluster. | Linux | |
| Kubernetes Control Plane Nodes | These VMs form the Kubernetes master nodes and coordinate the cluster. Three VMs are required for high availability. | Linux | For more information, refer to Kubernetes Control Plane Components. |
| Kubernetes Worker Nodes | These VMs run the workloads, such as Knovvu Applications and Knovvu Platform Applications. | Linux | The number of worker nodes may be increased depending on the workload; three is the suggested minimum. |
| License Servers | These servers control the consumption of licenses. They run outside the Kubernetes cluster and have a separate installation process from the containerized Knovvu applications. Two VMs are required for high availability. | Windows | |
| NFS Server Node | The Network File System (NFS) server hosts the shared file system for the cluster. Containers running in the Kubernetes cluster use NFS as their persistent storage. | Linux | |
| Database | As relational databases, Knovvu applications work with either Microsoft SQL Server or PostgreSQL. Installation and management of relational databases are always expected to be handled by the customer. | | |
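As a rough illustration of how the Origin Server can serve cached images to the other nodes, the sketch below shows a containerd registry-mirror configuration of the kind a node might use. This is an assumption for illustration only: the actual container runtime, registry port, and hostname (`origin-server.example.local`) used by the SESTEK installation scripts may differ.

```toml
# Hypothetical /etc/containerd/certs.d/origin-server.example.local/hosts.toml
# on a cluster node. Tells containerd to resolve and pull images
# from the Origin Server's cached registry.
server = "https://origin-server.example.local"

[host."https://origin-server.example.local"]
  capabilities = ["pull", "resolve"]
```

Because the nodes depend on this endpoint for image pulls at runtime, the Origin Server must stay reachable, as noted in the table above.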
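To illustrate how containers in the cluster consume the NFS share as persistent storage, the following is a minimal sketch of a statically provisioned Kubernetes PersistentVolume and matching PersistentVolumeClaim. The server address (`10.0.0.20`), export path (`/exports/knovvu`), and capacity are hypothetical placeholders, not values from the Knovvu installation.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: knovvu-shared-pv
spec:
  capacity:
    storage: 50Gi              # hypothetical size
  accessModes:
    - ReadWriteMany            # NFS allows shared read/write across nodes
  nfs:
    server: 10.0.0.20          # hypothetical NFS Server Node address
    path: /exports/knovvu      # hypothetical export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: knovvu-shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""         # bind to the statically provisioned PV above
  resources:
    requests:
      storage: 50Gi
```

Pods then reference the claim (`knovvu-shared-pvc`) in their volume definitions; in the actual deployment, such objects are created by the installation scripts rather than by hand.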

Next topic: Deployment Procedure