k8s: update readme

Signed-off-by: Rash419 <rashesh.padia@collabora.com>
Change-Id: I5c145bc2c5b718266caa40f9177e0b0591ab3522
Rash419 2023-10-13 18:36:16 +05:30 committed by Rashesh Padia
# Collabora Online for Kubernetes
In order for Collaborative Editing and copy/paste to function correctly on Kubernetes, it is vital to ensure that all users editing the same document, and all the clipboard requests for it, end up being served by the same pod. With the WOPI protocol, the https URL includes a unique identifier (WOPISrc) for the document. Load balancing can therefore be done on WOPISrc, ensuring that all URLs that contain the same WOPISrc are sent to the same pod.
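To make the routing key concrete, here is a small shell sketch (the URL and hostnames are hypothetical) that pulls the WOPISrc parameter, i.e. the value the load balancer hashes on, out of a document URL:

```bash
# Hypothetical document URL as the browser would request it:
url='https://collabora.example.com/cool/?WOPISrc=https%3A%2F%2Fnc.example.com%2Fwopi%2Ffiles%2F42'

# Extract the WOPISrc query parameter -- the per-document load-balancing key.
wopisrc=$(printf '%s' "$url" | sed -n 's/.*[?&]WOPISrc=\([^&]*\).*/\1/p')

echo "$wopisrc"   # the URL-encoded WOPI source identifying this document
```

All requests carrying this same WOPISrc value must land on the same pod, which is exactly what the ingress configurations below arrange.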
## Deploying Collabora Online in Kubernetes
1. Install [helm](https://helm.sh/docs/intro/install/)

2. Set up a Kubernetes Ingress Controller:

   A. Nginx: install the [Nginx Ingress Controller](https://kubernetes.github.io/ingress-nginx/deploy/)

   B. HAProxy: install the [HAProxy Kubernetes Ingress Controller](https://www.haproxy.com/documentation/kubernetes-ingress/)
---
**Note:**
**Openshift** uses a minimized version of HAProxy called [Router](https://docs.openshift.com/container-platform/3.11/install_config/router) that doesn't support all of HAProxy's functionality, but for COOL we need advanced annotations. Therefore it is recommended to deploy the [HAProxy Kubernetes Ingress](https://artifacthub.io/packages/helm/haproxytech/kubernetes-ingress) in the `collabora` namespace.
---
3. Create a `my_values.yaml` (if your setup differs, take a look at the [`values.yaml`](./collabora-online/values.yaml) of the helm chart):
   A. HAProxy:
   ``` yaml
   replicaCount: 3

   ingress:
     enabled: true
     className: "haproxy"
     annotations:
       haproxy.org/timeout-tunnel: "3600s"
       haproxy.org/backend-config-snippet: |
         balance url_param WOPISrc check_post
         hash-type consistent
     hosts:
       - host: chart-example.local
         paths:
           - path: /
             pathType: ImplementationSpecific

   image:
     tag: "latest"
     pullPolicy: Always

   autoscaling:
     enabled: false

   collabora:
     aliasgroups:
       - host: "https://example.integrator.com:443"
     extra_params: --o:ssl.enable=false --o:ssl.termination=true

   resources:
     limits:
       cpu: "1800m"
       memory: "2000Mi"
     requests:
       cpu: "1800m"
       memory: "2000Mi"
   ```
   B. Nginx:
   ``` yaml
   replicaCount: 3

   ingress:
     enabled: true
     className: "nginx"
     annotations:
       nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
       nginx.ingress.kubernetes.io/proxy-body-size: "0"
       nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
       nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
     hosts:
       - host: chart-example.local
         paths:
           - path: /
             pathType: ImplementationSpecific

   image:
     tag: "latest"

   autoscaling:
     enabled: false

   collabora:
     aliasgroups:
       - host: "https://example.integrator.com:443"
     extra_params: --o:ssl.enable=false --o:ssl.termination=true

   resources:
     limits:
       cpu: "1800m"
       memory: "2000Mi"
     requests:
       cpu: "1800m"
       memory: "2000Mi"
   ```
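Both the HAProxy and the Nginx annotations implement the same idea: hash the WOPISrc and derive a stable backend from it. A toy shell illustration of that idea (this is not the controllers' actual hash function, just a sketch):

```bash
wopisrc='https%3A%2F%2Fnc.example.com%2Fwopi%2Ffiles%2F42'
replicas=3

# Toy stand-in for the controller's hash: CRC of the key, modulo the replica count.
hash_to_pod() {
  printf '%s' "$1" | cksum | cut -d' ' -f1
}

bucket=$(( $(hash_to_pod "$wopisrc") % replicas ))
bucket_again=$(( $(hash_to_pod "$wopisrc") % replicas ))

# The same WOPISrc always maps to the same pod index,
# so all editors of one document share one pod.
test "$bucket" -eq "$bucket_again" && echo "stable"
```

`hash-type consistent` on the HAProxy side additionally keeps most keys on their pod when the replica count changes.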
---
**Note:**

**Horizontal Pod Autoscaling (HPA) is disabled for now, because after scaling it breaks collaborative editing and copy/paste. Therefore please set `replicaCount` as per your needs.**

---

Important notes:
1. If you have multiple hosts and aliases to set up, set aliasgroups in `my_values.yaml`:
   ```yaml
   collabora:
     aliasgroups:
       - host: "<protocol>://<host-name>:<port>"
         # if there are no aliases you can ignore the below line
         aliases: ["<protocol>://<its-first-alias>:<port>, <protocol>://<its-second-alias>:<port>"]
       # more host and alias entries are possible
   ```
2. Specify `server_name` when the hostname is not reachable directly, for example behind a reverse proxy:
   ```yaml
   collabora:
     server_name: <hostname>:<port>
   ```
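For instance, a filled-in version of the alias-group template in note 1 above, for a hypothetical integrator reachable under two names (all hostnames here are placeholders):

```yaml
collabora:
  aliasgroups:
    - host: "https://nextcloud.example.com:443"
      aliases: ["https://cloud.example.com:443"]
```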
---
**Note:**
In **Openshift**, it is recommended to use the HAProxy deployment instead of the default `Router`. Add `className` in the ingress block so that Openshift uses the HAProxy Ingress Controller:
``` yaml
ingress:
  className: "haproxy"
```
---
4. Install the helm chart using the command below; it should deploy Collabora Online:
``` bash
helm repo add collabora https://collaboraonline.github.io/online/
helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```
5. Follow this step only if you are using the `NodePort` service type in HAProxy and/or minikube for your setup; otherwise skip it.
   A. The HAProxy service is deployed as `NodePort`, so we can access it with the node's IP address. To get the node IP:
```bash
minikube ip
```
Example output:
```
192.168.0.106
```
   B. Each container port is mapped to a `NodePort` port via the `Service` object. To find those ports:
   ``` bash
kubectl get svc --namespace=haproxy-controller
```
Example output:
```
|----------------|---------|--------------|------------|------------------------------------------|
|NAME |TYPE |CLUSTER-IP |EXTERNAL-IP |PORT(S) |
|----------------|---------|--------------|------------|------------------------------------------|
|haproxy-ingress |NodePort |10.108.214.98 |<none> |80:30536/TCP,443:31821/TCP,1024:30480/TCP |
|----------------|---------|--------------|------------|------------------------------------------|
```
In this instance, the following ports were mapped:
- Container port 80 to NodePort 30536
- Container port 443 to NodePort 31821
- Container port 1024 to NodePort 30480
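The `PORT(S)` column encodes each mapping as `<port>:<NodePort>/<protocol>`; a small shell sketch to pull the NodePort for a given service port out of that string (reusing the example output above):

```bash
ports='80:30536/TCP,443:31821/TCP,1024:30480/TCP'

# Split the PORT(S) string on commas, pick the pair for port 443,
# then keep the NodePort between the colon and the protocol.
nodeport=$(printf '%s' "$ports" | tr ',' '\n' | grep '^443:' | cut -d: -f2 | cut -d/ -f1)

echo "$nodeport"   # 31821
```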
6. Additional steps if deploying on minikube for testing:
   1. Get the minikube IP:
      ``` bash
      minikube ip
      ```
      Example output:
      ```
      192.168.0.106
      ```
   2. Add the hostname to `/etc/hosts`:
      ```
      192.168.0.106 chart-example.local
      ```
   3. To check if everything is set up correctly you can run:
      ``` bash
      curl -I -H 'Host: chart-example.local' 'http://192.168.0.106:30536/'
      ```
      It should return output similar to the below:
      ```
      HTTP/1.1 200 OK
      last-modified: Tue, 18 May 2021 10:46:29
      user-agent: COOLWSD WOPI Agent 6.4.8
      content-type: text/plain
      ```
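If you want to script that health check, examining only the status line is enough. A minimal sketch, parsing the example status line above (in practice you would feed it live `curl -I` output):

```bash
# In practice:
# status_line=$(curl -sI -H 'Host: chart-example.local' 'http://192.168.0.106:30536/' | head -n1)
status_line='HTTP/1.1 200 OK'

# The second whitespace-separated field of the status line is the HTTP status code.
code=$(printf '%s' "$status_line" | awk '{print $2}')

test "$code" = "200" && echo "collabora-online is reachable"
```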
## Kubernetes cluster monitoring

1. Install [kube-prometheus-stack](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack), a collection of [Grafana](http://grafana.com/) dashboards and [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) combined with documentation and scripts, to provide easy-to-operate end-to-end Kubernetes cluster monitoring with [Prometheus](https://prometheus.io/) using the [Prometheus Operator](https://prometheus-operator.dev/).

2. Enable the Prometheus service monitor, rules and Grafana in your `my_values.yaml`:

   ``` yaml
   prometheus:
     servicemonitor:
       enabled: true
       labels:
         release: "kube-prometheus-stack"
     rules:
       enabled: true # will deploy alert rules
       additionalLabels:
         release: "kube-prometheus-stack"
   grafana:
     dashboards:
       enabled: true # will deploy default dashboards
   ```

---
**Note:**
Use `kube-prometheus-stack` as the release name when installing the [kube-prometheus-stack](https://artifacthub.io/packages/helm/prometheus-community/kube-prometheus-stack) helm chart, because we have passed the `release=kube-prometheus-stack` label in our `my_values.yaml`. For Grafana dashboards you may need to enable scanning in the correct namespaces (or ALL), set via `sidecar.dashboards.searchNamespace` in the [Helm chart of grafana](https://artifacthub.io/packages/helm/grafana/grafana) (which is part of the Prometheus Operator, so `grafana.sidecar.dashboards.searchNamespace`).

---
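If the default dashboards do not appear in Grafana, the namespace scan mentioned above can be widened in the kube-prometheus-stack values; a sketch (key path as documented in the grafana helm chart):

```yaml
grafana:
  sidecar:
    dashboards:
      searchNamespace: ALL
```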
## Dynamic/Remote configuration in kubernetes

For big setups, you may not want to restart every pod in order to modify WOPI hosts; therefore it is possible to set up an additional webserver to serve a ConfigMap for using [Remote/Dynamic Configuration](https://sdk.collaboraonline.com/docs/installation/Configuration.html#remote-dynamic-configuration):

``` yaml
collabora:
  env:
    - name: remoteconfigurl
      value: https://dynconfig.public.example.com/config/config.json

dynamicConfig:
  enabled: true

  ingress:
    enabled: true
    annotations:
      "cert-manager.io/issuer": letsencrypt-zprod
    hosts:
      - host: "dynconfig.public.example.com"
    tls:
      - secretName: "collabora-online-dynconfig-tls"
        hosts:
          - "dynconfig.public.example.com"

  configuration:
    kind: "configuration"
    storage:
      wopi:
        alias_groups:
          groups:
            - host: "https://domain1\\.xyz\\.abc\\.com/"
              allow: true
            - host: "https://domain2\\.pqr\\.def\\.com/"
              allow: true
              aliases:
                - "https://domain2\\.ghi\\.leno\\.de/"
```

---
**Note:**
In the current state of COOL, `remoteconfigurl` for [Remote/Dynamic Configuration](https://sdk.collaboraonline.com/docs/installation/Configuration.html#remote-dynamic-configuration) only accepts HTTPS. See [here in wsd/COOLWSD.cpp](https://github.com/CollaboraOnline/online/blob/8591d323c6db99e592ac8ac8ebef0e3a95f2e6ba/wsd/COOLWSD.cpp#L1069-L1096)

---
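For illustration, the webserver behind `remoteconfigurl` serves the configuration as JSON; a rough sketch of what `config.json` for the block above could look like (check the Remote/Dynamic Configuration documentation for the exact schema before relying on this shape):

```json
{
  "kind": "configuration",
  "storage": {
    "wopi": {
      "alias_groups": {
        "groups": [
          { "host": "https://domain1\\.xyz\\.abc\\.com/", "allow": true },
          { "host": "https://domain2\\.pqr\\.def\\.com/", "allow": true,
            "aliases": [ "https://domain2\\.ghi\\.leno\\.de/" ] }
        ]
      }
    }
  }
}
```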
## Useful commands to check what is happening
Where are the pods, and are they ready?
``` bash
kubectl -n collabora get pod
```
Example output:
```
NAME READY STATUS RESTARTS AGE
collabora-online-5fb4869564-dnzmk 1/1 Running 0 28h
collabora-online-5fb4869564-fb4cf 1/1 Running 0 28h
collabora-online-5fb4869564-wbrv2 1/1 Running 0 28h
```
On which outside host are the multiple coolwsd servers actually answering?
``` bash
kubectl get ingress -n collabora
```
Example output:
``` 
|-----------|------------------|---------------------|---------|-------|
| NAMESPACE | NAME             | HOSTS               | ADDRESS | PORTS |
|-----------|------------------|---------------------|---------|-------|
| collabora | collabora-online | chart-example.local |         | 80    |
|-----------|------------------|---------------------|---------|-------|
```
To uninstall the helm chart:
``` bash
helm uninstall collabora-online -n collabora
```