Terminal Server
Preferred Option: Dedicated Server/VM
A dedicated Windows server or virtual machine is the preferred option for the terminal server that the SESTEK Application Support Engineer connects to in order to bootstrap the installation process.
Using a dedicated server/VM ensures better security and reliability, and avoids potential conflicts with other applications during the installation process.
Alternative Option: Employee Workstation
As an alternative option, customers can designate an employee's Windows workstation as the terminal server. This approach is commonly used by customers who prefer not to provision a dedicated server/VM.
When using an employee workstation:
- Ensure the workstation can be exclusively used during the installation period
- The workstation should meet all security and technical requirements
- Consider potential downtime for the employee during installation
Required Applications
Regardless of whether you choose a dedicated server/VM or an employee workstation, the following applications must be installed:
- OpenSSH Client - Required for secure remote connections
- Windows Terminal - Enhanced terminal experience for installation procedures
- Notepad++ - Text editor for configuration file editing
- Postman - API testing tool for verification procedures
- SQL Server Management Studio - Required when MS SQL Server is used as the database
- PgAdmin 4 - Required when PostgreSQL is used as the database
Additional Requirements
- Administrator privileges on the terminal server/workstation
- Stable network connection to reach the customer's infrastructure
- Sufficient disk space for temporary files during installation (minimum 2GB free space recommended)
- Windows Server 2016 or newer, or Windows 10/11 for workstations
The terminal server/workstation must have network access to:
- The Kubernetes cluster nodes
- Database servers
- Any external systems that will be configured during installation
- The proxy container registry that the customer provides
Git Repository
A Git repository is the single source of truth for the cluster state and must be in place before installation begins.
- Ownership & hosting: The preferred approach is for SESTEK to create and host the repository, and share access with the customer. If this is not possible due to customer policy (e.g., code must stay within the customer's network or Git platform), the customer creates and hosts the repository instead. The hosting decision should be confirmed during the project kickoff.
- Access for SESTEK: SESTEK engineers need write access (or PR/merge-request access on a working branch) so they can publish manifests, version bumps, and configuration updates.
- Access for the in-cluster consumer: Whatever applies the manifests (a GitOps controller or SESTEK's apply scripts) needs read access to the repository. This is typically provided via:
- HTTPS with a Personal Access Token / Deploy Token, or
- SSH with a dedicated deploy key.
- Network reachability: The Git host must be reachable from the environment that applies the manifests on TCP/443 (HTTPS) and/or TCP/22 (SSH). If a GitOps controller is used, this means reachability from the cluster. If the cluster has no direct internet egress and the repository is SESTEK-hosted, a mirror or self-hosted Git server inside the customer network may be required.
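As a concrete illustration of read access for an in-cluster consumer, the following is a minimal sketch of a Flux `GitRepository` object pulling the manifest repository over HTTPS. The URL, branch, namespace, and secret name are placeholders, not real Knovvu values; the referenced Secret would hold the PAT/deploy token.

```yaml
# Sketch: Flux source object for the manifest repository (placeholder values).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: knovvu-manifests
  namespace: flux-system
spec:
  interval: 5m
  url: https://git.example.com/acme/knovvu-manifests.git
  ref:
    branch: main
  secretRef:
    name: knovvu-git-token   # Secret containing the PAT / deploy token
```

An equivalent setup for Argo CD registers the repository with a credential Secret in the `argocd` namespace.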
Applying Manifests to the Cluster
Once manifests are in the Git repository, they are applied to the cluster in one of two ways. A GitOps controller is not required.
Option 1 — Existing GitOps Controller (preferred when available)
If the customer already operates a GitOps controller, it can be used to synchronize the Knovvu manifests.
- Supported controllers: Argo CD, Flux CD, OpenShift GitOps (Argo CD-based).
- Permissions: The controller's service account needs admin permissions within the Knovvu namespaces. Cluster-admin is not required.
- CRDs: The controller should be configured to reconcile standard Kubernetes resources plus Helm releases and/or Kustomize overlays as appropriate (`Application`/`ApplicationSet` for Argo CD; `Kustomization`, `HelmRelease`, `GitRepository` for Flux).
- Sync policy: Auto-sync with prune and self-heal is a reasonable default for non-production environments; manual sync gated by approvals is recommended for production.
- Repository connection: The repository must be registered in the controller and reporting a healthy connection before SESTEK pushes the first manifest set.
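The points above can be sketched as a single Argo CD `Application`. Repository URL, path, and namespace names are placeholders; the automated sync policy shown matches the non-production default described above.

```yaml
# Sketch: Argo CD Application reconciling Knovvu manifests (placeholder values).
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: knovvu
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/acme/knovvu-manifests.git
    targetRevision: main
    path: environments/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: knovvu
  syncPolicy:
    automated:        # suitable for non-production, per the note above
      prune: true
      selfHeal: true
```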
Option 2 — SESTEK Apply Scripts (no controller required)
If the customer does not use a GitOps controller, SESTEK applies the manifests from the repository using its own custom scripts (standard kubectl/helm based tooling). In this case:
- No controller needs to be installed in the cluster.
- The machine running the apply scripts needs network access to the Kubernetes API and read access to the Git repository.
- A kubeconfig (or equivalent access) scoped to the Knovvu namespaces with admin permissions must be provided to SESTEK.
- The Git repository remains the source of truth; the cluster is updated by re-running the apply process whenever manifests change.
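For orientation, the apply flow without a controller looks roughly like the following command sketch. Repository URL, paths, kubeconfig location, and release names are placeholders; SESTEK's actual scripts wrap equivalent kubectl/helm invocations.

```shell
# Sketch of the controller-less apply flow (placeholder names and paths).
git clone https://git.example.com/acme/knovvu-manifests.git
cd knovvu-manifests

# Plain manifests
kubectl --kubeconfig ~/.kube/knovvu-config apply -f environments/prod/

# Helm-packaged components
helm --kubeconfig ~/.kube/knovvu-config upgrade --install \
  knovvu-core ./charts/core -n knovvu-core -f environments/prod/core-values.yaml
```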
The choice between the two options is made with the customer during the project kickoff.
Kubernetes Distributions
Knovvu applications run on any compatible Kubernetes distribution from providers such as AWS, Azure, Red Hat, Rancher, etc., subject to the version restrictions specified in the following section.
Knovvu applications also take advantage of provider-specific features to deliver a better experience for their users.
Tested and verified distributions and providers are listed below; note that this is not an exhaustive list.
| Distro | Provider |
|---|---|
| OpenShift Container Platform | RedHat |
| Rancher | SUSE |
| EKS | AWS |
| AKS | Azure |
| GKE | Google Cloud |
| Anthos | Google Cloud |
| OKE | Oracle Cloud |
| Vanilla | Community |
Kubernetes Versions
The minimum Kubernetes version supported by SESTEK is v1.22.x and the maximum supported version is v1.34.x.
Ingress settings
SSL redirection should be disabled on the ingress by setting the following annotation:
`nginx.ingress.kubernetes.io/ssl-redirect: "false"`
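For context, this is where the annotation sits on an Ingress object. The host, service name, and port below are placeholders, not real Knovvu values.

```yaml
# Sketch: an Ingress with SSL redirection disabled (placeholder values).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: knovvu-example
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: knovvu.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```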
Kubernetes Namespaces
Namespaces (OpenShift projects) for common components and each product should be created before the installation. Names of the namespaces will be shared by SESTEK based on the product being deployed.
Network access between namespaces must be enabled for Knovvu products. This allows necessary communication between different components of the Knovvu system that may be deployed in separate namespaces.
Persistent Volumes
Knovvu applications require persistent volumes to store their data. Therefore persistent volume support should be enabled in the cluster. Additionally, Knovvu applications should have the permission to request persistent volumes from the cluster. If this is not the case, customers can create the requested persistent volumes manually. However, this requires communication between SESTEK and the customer and may slow down the deployment process.
Persistent Volume Access Mode
For optimal performance, Knovvu applications should use ReadWriteOnce (RWO) access mode for persistent volumes. This configuration provides faster deployment and better overall performance. While ReadWriteMany (RWX) access mode is supported, using ReadWriteOnce is recommended to ensure efficient operation of the applications.
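A minimal claim using the recommended access mode looks like the following. The claim name, storage class, and size are placeholders.

```yaml
# Sketch: a PersistentVolumeClaim with the recommended ReadWriteOnce mode.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: knovvu-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: my-storage-class
  resources:
    requests:
      storage: 10Gi
```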
If NFS-backed Persistent Volumes are used for any stateful applications (MinIO, Git repositories, databases, etc.), ensure that the NFS export policy does NOT have all_squash enabled. This setting causes all file operations to be mapped to an anonymous user (typically nobody:nogroup with UID 65534), leading to permission errors. The export should allow proper UID/GID mapping to match the container's running user.
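A suitable NFS export entry might look like the sketch below. The export path and CIDR are placeholders; the key point is `no_all_squash`, which preserves client UIDs/GIDs instead of squashing them to nobody:nogroup.

```
# Sketch of an /etc/exports entry for stateful workloads (placeholder values).
/exports/knovvu 10.0.0.0/16(rw,sync,no_subtree_check,no_all_squash)
```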
Cluster Permissions
Customers should provide admin permissions in SESTEK namespaces. SESTEK applications don't require cluster admin permissions. Service accounts will be created by SESTEK during deployment.
EndpointSlice Management Access Requirement
Managing Kubernetes EndpointSlices (creating, updating, or deleting) requires permissions beyond the default project/namespace admin role in OpenShift. By default, project admins do not have write access to endpointslices.discovery.k8s.io.
To enable these operations, a custom Role granting the necessary verbs (get, list, watch, create, update, patch, delete) must be created and bound, within each target namespace, to the identity that applies the Knovvu manifests — the GitOps controller's service account (Option 1) or the identity in the kubeconfig used by SESTEK's apply scripts (Option 2).
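A minimal sketch of such a Role and RoleBinding is shown below. The namespace and subject are placeholders: bind to the GitOps controller's service account (the Argo CD application controller is used here as an example) or to the identity in the kubeconfig used by SESTEK's apply scripts.

```yaml
# Sketch: per-namespace EndpointSlice write access (placeholder names).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: endpointslice-manager
  namespace: knovvu
rules:
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: endpointslice-manager
  namespace: knovvu
subjects:
  - kind: ServiceAccount
    name: argocd-application-controller   # example: GitOps controller identity
    namespace: argocd
roleRef:
  kind: Role
  name: endpointslice-manager
  apiGroup: rbac.authorization.k8s.io
```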
OpenShift Resource Quota Sizing
It is recommended to set the quota based on the total resource requests of all workloads in the namespace, plus a 30% buffer to account for rolling deployments and pod restarts.
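The sizing rule above is simple arithmetic; the sketch below makes it concrete. The workload figures are hypothetical examples, not real Knovvu resource requests.

```python
# Sketch: namespace quota = sum of workload requests plus a 30% buffer.
# The example request values below are hypothetical.

def quota_with_buffer(cpu_requests_millicores, mem_requests_mib, buffer=0.30):
    """Return (cpu_millicores, memory_mib) quota values including the buffer."""
    total_cpu = sum(cpu_requests_millicores)
    total_mem = sum(mem_requests_mib)
    return round(total_cpu * (1 + buffer)), round(total_mem * (1 + buffer))

# Three hypothetical workloads: 500m/1Gi, 250m/512Mi, 1000m/2Gi
cpu, mem = quota_with_buffer([500, 250, 1000], [1024, 512, 2048])
print(cpu, mem)  # prints: 2275 4659
```

The resulting values would then be expressed in a ResourceQuota (e.g. `requests.cpu: 2275m`, `requests.memory: 4659Mi`).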
LimitRanges
Common Namespace
The pod limits for the common namespace should be as follows:
| Resource | Minimum | Maximum |
|---|---|---|
| CPU | 10m | 3 |
| Memory | 6Mi | 6Gi |
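Expressed as a Kubernetes object, the common-namespace bounds could look like the following sketch. The namespace name is a placeholder.

```yaml
# Sketch: LimitRange enforcing the common-namespace pod bounds from the table.
apiVersion: v1
kind: LimitRange
metadata:
  name: pod-limits
  namespace: knovvu-common
spec:
  limits:
    - type: Pod
      min:
        cpu: 10m
        memory: 6Mi
      max:
        cpu: "3"
        memory: 6Gi
```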
STAI, Core, CA and VA Namespace
The pod limits for those namespaces should be as follows:
| Resource | Minimum | Maximum |
|---|---|---|
| CPU | 10m | 16 |
| Memory | 6Mi | 32Gi |
Network
Worker nodes in the cluster, and the environment that applies the manifests (GitOps controller or SESTEK apply scripts), should be able to access the services using the protocols and ports that are provided below.
| Source | Destination | Port | Protocol | Conditions |
|---|---|---|---|---|
| Manifest applier (controller/script) | Git repository host | 443 / 22 | HTTPS/SSH | Always |
| Manifest applier (controller/script) | Kubernetes API | 6443 | TCP | Always |
| Cluster / worker nodes | Container registry | 443 | HTTPS | Always |
| Worker nodes | License service | 30113 | TCP | Always |
| Worker nodes | MS SQL Server | 1433 | TCP | Required only if MS SQL Server is used as the database. |
| Worker nodes | PostgreSQL | 5432 | TCP | Required only if PostgreSQL is used as the database. |
| Worker nodes | SESTEK Call Recorder | 2050 | TCP | Required only if SESTEK Recorder is used with the Knovvu Analytics product. |
| Worker nodes | SR Servers | 10097 | HTTP | Required only if SESTEK SR Server is used separately. |
Container Registry
Customers are required to configure their container registries as a proxy of the Central Knovvu Container Registry (docker.sestek.com). If this is not possible, customers should pull all the images from the Central Knovvu Container Registry and push them to their own container registry; in that case, the image list will be provided by SESTEK.
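For the pull-and-push approach, mirroring one image looks roughly like the sketch below. Image names and the target registry are placeholders; the steps would be repeated for every image on the list SESTEK provides.

```shell
# Sketch: mirroring a single image into a customer registry (placeholder names).
docker login docker.sestek.com
docker pull docker.sestek.com/knovvu/example-service:1.2.3
docker tag docker.sestek.com/knovvu/example-service:1.2.3 \
  registry.mycompany.com/knovvu/example-service:1.2.3
docker push registry.mycompany.com/knovvu/example-service:1.2.3
```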
Customers are responsible for maintaining copies of all container images used in their environments within their own internal container registry. Keeping local copies ensures operational continuity and independence from Sestek registry changes.
The following container registries are supported:
- Docker Hub (also any container registry supporting Docker Registry v2 API)
- Quay.io
- Nexus
- ghcr (GitHub Container Registry)
- ECR (Elastic Container Registry by AWS)
- ACR (Azure Container Registry)
- gcr (Google Container Registry)
- JFrog
The user account provided for the container registry must have permissions to list and pull images.
Object Storage
If customers already have a managed S3-compatible object store, SESTEK recommends using it instead of deploying MinIO inside the cluster. In our experience, customer-managed object stores are typically well-maintained and high-performing, whereas a SESTEK-managed MinIO instance, mounted on a Persistent Volume that is generally backed by NFS storage (which is inherently slow), may not offer the same level of reliability and performance.
If customers perform regular CVE scanning, MinIO should not be used. The MinIO codebase is in a maintenance-only state, and CVEs, including critical security vulnerabilities, are evaluated and fixed on a case-by-case basis. For such customers, an S3-compatible object storage service must be provided and managed by the customer.
License Server
We will use the Windows license server that the customer provides to set up our license server. The executable below must be downloaded to the license server machines so that we can set up licensing.
Maintenance and Support of Third-Party Applications
When a component is deployed and managed by SESTEK (including open-source third-party applications such as Redis and RabbitMQ), its maintenance and technical support are also provided by SESTEK. For more information, please refer to the Component Responsibility Matrix.
Ownership of certain components may vary depending on customer policies. For these components, ownership must be clearly defined before the start of the project.
Observability
Application Logging
For all deployments, the Elastic Stack is deployed and managed by Sestek within the cluster. For customers who require integration with their own logging solutions (such as Elastic Stack, Splunk, or similar platforms), we support forwarding logs to a customer-managed instance. For this integration, Logstash is used to ship logs, and both data streams and indices are supported.
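As an illustration of the Logstash shipping path, the fragment below sends logs to a customer-managed Elasticsearch instance as a data stream. Host, credentials, and the data-stream setting are placeholders to be agreed with the customer.

```
# Sketch: Logstash output to a customer-managed Elasticsearch (placeholder values).
output {
  elasticsearch {
    hosts => ["https://elastic.mycompany.com:9200"]
    user => "logstash_writer"
    password => "${LOGSTASH_PASSWORD}"
    data_stream => true
  }
}
```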
Metrics and Alerts
Knovvu applications expose metrics in Prometheus format. If the customer already has a Prometheus Operator installed in their cluster, full permissions for the ServiceMonitor, PrometheusRule, and AlertmanagerConfig CRDs must be granted to the identity that applies the manifests. This allows Prometheus to scrape application metrics and apply the associated alert definitions.
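A minimal sketch of such a scrape definition is shown below. The names, label selector, and port name are placeholders, not real Knovvu values.

```yaml
# Sketch: ServiceMonitor letting Prometheus scrape a service's metrics endpoint.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: knovvu-example
  namespace: knovvu
spec:
  selector:
    matchLabels:
      app: example-service
  endpoints:
    - port: metrics
      path: /metrics
      interval: 30s
```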
Distributed Tracing
All Knovvu applications are instrumented with OpenTelemetry for distributed tracing, using vendor-neutral telemetry standards. Customers may integrate any OpenTelemetry-compatible observability platform (commercial or open source) to collect, store, and visualize these traces. Once configured, traces appear in the selected tool with end-to-end request timelines and cross-service correlation for troubleshooting and performance analysis. By default, Sestek deploys Elastic Stack APM to collect traces and uses Kibana for visualization.
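Pointing the instrumentation at a backend is typically done through standard OpenTelemetry environment variables, as in the fragment below. The collector endpoint and service name are placeholders; the endpoint would target the customer's OTLP-compatible backend or the default Elastic APM server.

```yaml
# Sketch: standard OTel exporter settings on a workload (placeholder values).
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.observability:4317"
  - name: OTEL_SERVICE_NAME
    value: "example-service"
```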
Information expected from customer
- Base domain name
Customers should choose a base domain name that SESTEK applications will use (e.g. knovvu.mycompany.com). Changing the domain later in the deployment process requires additional actions that may delay the installation.
- Preferred database (Microsoft SQL Server or PostgreSQL). Version should be verified based on the product being deployed.
- Database server's IP, port, DNS name (if one exists), and credentials with db_owner privileges.
- Kubernetes platform and version (e.g. Openshift v4.12.0)
- Kubernetes Engine version (e.g. 1.32.13)
- Storage class name (e.g. my-storage-class)
- Kubeconfig file for the cluster that Knovvu applications will be installed to
- Address and credentials of the container registry
- Git repository:
- Whether the customer can use a SESTEK-hosted repository, or requires the repository to be hosted on their side (and if so, the provider: GitHub, GitLab, Bitbucket, Azure DevOps, self-hosted, etc.)
- Repository URL, once created
- Authentication method for the manifest applier (PAT, deploy token, SSH key)
- Branch/folder layout per environment, if already defined
- Manifest apply method:
- Whether the customer has an existing GitOps controller that can be used (type — Argo CD / Flux / OpenShift GitOps — and version), or
- Whether SESTEK should apply the manifests with its own scripts. In that case, a kubeconfig scoped to the Knovvu namespaces with admin permissions is required.
- Whether or not a service mesh (e.g. Istio) is deployed in the cluster
- Whether or not network policies are enabled and configured in the cluster
- Whether Redis/RabbitMQ will be deployed inside the cluster by Sestek, or if externally provided instances supplied by the customer will be used (If customer-provided, whether they will be configured in cluster/sentinel mode, and if so, the relevant endpoint information)