- May 14, 2022
-
-
Sheogorath authored
This patch upgrades Calico to version 3.23.0, which is a complicated endeavour since it switches the helm release namespace from default to tigera-operator. Besides the regular upgrade tasks, this requires some explicit adjusting of helm annotations and flux labels in order to convince the cluster that this is how it has always been. The following tasks need to be done:

Before you start
---
Disable flux:
```
kubectl scale deployment -n flux-system source-controller --replicas 0
kubectl scale deployment -n flux-system helm-controller --replicas 0
kubectl scale deployment -n flux-system kustomize-controller --replicas 0
```

The upgrade
---
Push/merge this patch. (!!!)

Update helm release annotations:
```
kubectl patch installation default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
kubectl patch clusterrolebinding tigera-operator --type=merge -p '{"metadata": {"annotations": {"meta.helm.sh/release-namespace": "tigera-operator"}}}'
```

Patch flux labels:
```
kubectl patch installation default --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch apiserver default --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch podsecuritypolicy tigera-operator --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator deployment tigera-operator --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch -n tigera-operator serviceaccount tigera-operator --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch clusterrole tigera-operator --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
kubectl patch clusterrolebinding tigera-operator --type=merge -p '{"metadata": {"labels": {"helm.toolkit.fluxcd.io/namespace": "tigera-operator"}}}'
```

Remove flux labels from namespace:
```
kubectl label namespace tigera-operator helm.toolkit.fluxcd.io/namespace-
```

Get values:
```
helm get values -n default calico > values.yaml
```

Install calico:
```
helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
helm install calico projectcalico/tigera-operator --version v3.23.0 --namespace tigera-operator --values values.yaml
```

Migrate flux helmrelease:
```
kubectl apply -n tigera-operator -f bootstrap/calico/release.yaml
kubectl patch helmrelease calico --type=json -p="[{'op': 'remove', 'path': '/metadata/finalizers'}]" -n default
kubectl delete helmrelease -n default calico
```

Delete old helm install:
```
kubectl delete secret -n default -l name=calico -l owner=helm
```

Starting flux again
---
```
kubectl scale deployment -n flux-system source-controller --replicas 1
kubectl scale deployment -n flux-system helm-controller --replicas 1
kubectl scale deployment -n flux-system kustomize-controller --replicas 1
```

References:
https://projectcalico.docs.tigera.io/archive/v3.23/release-notes/
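As an optional sanity check once flux is scaled back up, something like the following can confirm the migration landed (a sketch; `tigerastatus` is the cluster-scoped status resource provided by the tigera-operator):
```
kubectl get helmrelease -n tigera-operator calico
kubectl get tigerastatus
helm list -n tigera-operator
```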
-
Sheogorath authored
-
Sheogorath authored
-
Sheogorath authored
-
Sheogorath authored
-
Botaniker (Bot) authored
-
Botaniker (Bot) authored
-
Botaniker (Bot) authored
-
Botaniker (Bot) authored
-
- May 11, 2022
-
-
Sheogorath authored
-
Sheogorath authored
-
Sheogorath authored
-
- May 10, 2022
-
-
Sheogorath authored
-
Sheogorath authored
-
Sheogorath authored
In order to make the AMD driver deployment SELinux-aware, this patch sets the SELinux `type` to `spc_t`, which allows the container to access the target paths and install the driver as intended.
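For reference, a minimal sketch of the relevant container stanza (the container name and image below are placeholders, not taken from this repo):
```
containers:
  - name: amdgpu-driver                       # placeholder name
    image: example.org/amdgpu-driver:latest   # placeholder image
    securityContext:
      seLinuxOptions:
        type: spc_t   # "super privileged container" type: exempts the container from SELinux confinement
```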
-
Sheogorath authored
Add a very basic, shared policy to prevent ingress traffic for a namespace.
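A minimal sketch of what such a default-deny policy can look like as a standard Kubernetes NetworkPolicy (whether the repo implements it exactly this way is an assumption):
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-ingress   # placeholder name
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
    - Ingress          # no ingress rules follow, so all ingress traffic is denied
```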
-
Sheogorath authored
This patch enables AMD GPU drivers for usage with pods in the cluster.
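Once the AMD device plugin exposes the GPUs, a pod requests one via the `amd.com/gpu` extended resource; a hypothetical example (names and image are illustrative):
```
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test              # placeholder name
spec:
  containers:
    - name: rocm-workload     # placeholder name
      image: rocm/rocm-terminal   # example ROCm image
      resources:
        limits:
          amd.com/gpu: 1      # schedules the pod onto a node with a free AMD GPU
```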
-
Botaniker (Bot) authored
-
- May 08, 2022
-
-
Sheogorath authored
-
Sheogorath authored
This patch increases the database volume size fivefold; the main goal is to provide more headroom and get rid of the 80% usage alerts.
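For illustration, the kind of change involved (names and sizes below are placeholders; the actual values are not part of this message):
```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database       # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi    # e.g. 5x a previous 10Gi; in-place growth needs allowVolumeExpansion on the StorageClass
```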
-
Sheogorath authored
-
- May 07, 2022
-
-
Sheogorath authored
This patch increases the volume size to accommodate potential growth and provide enough headroom to silence the alert about the volume being 80% full.
-
Botaniker (Bot) authored
-
- May 04, 2022
-
-
Sheogorath authored
-
Sheogorath authored
-
Botaniker (Bot) authored
-
Botaniker (Bot) authored
-
- May 02, 2022
-
-
Sheogorath authored
-
Sheogorath authored
This patch disables presence on Synapse to reduce the number of DNS requests while clients are active: with presence enabled, the server regularly sends presence updates to every homeserver known to a client. Disabling presence prevents these events from being sent out in the first place.
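The corresponding Synapse setting, as a homeserver.yaml excerpt (where exactly this lands in the repo's helm values is an assumption):
```
presence:
  enabled: false   # stop generating and federating presence updates
```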
-
- May 01, 2022
-
-
Sheogorath authored
After digging a bit deeper into the data, it's clear that there is a correlation between the large volume of DNS requests and the `GET` requests to the `_matrix` route: when the latter slow down, the DNS requests also disappear. Notably, `PUT` requests (i.e. incoming events) remain unchanged. The correlation became apparent on 2022-04-30, when it was clearly visible in the 21:30 to 01:55 CEST time window. Using the Synapse dashboard, the `GET` requests can be drilled down further to `SyncRestServlet` requests. These are client requests that keep the client state up-to-date and active, which explains the lack of requests for a few hours while Element Desktop was shut down. This needs some further drill-down into Synapse's internals to figure out exactly what causes these mass requests, but `presence` is a good contender.

References:
https://github.com/matrix-org/synapse/blob/0922462fc7df951e88c8ec0fb35e53e3cd801b76/synapse/rest/client/sync.py#L52
https://github.com/matrix-org/synapse/tree/8d156ec0ba17d848581f18aa40ebfd76dda763d4/contrib/grafana
-
- Apr 30, 2022
-
-
Sheogorath authored
-
Sheogorath authored
This patch changes the upstream server to Quad9, as it seems that dnsproxy doesn't support SNI for DoH or DoT servers.
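For reference, a sketch of pointing dnsproxy at Quad9's DoH endpoint (flags per the dnsproxy README; the listen address and port are placeholders):
```
dnsproxy --listen 0.0.0.0 --port 53 --upstream https://dns.quad9.net/dns-query
```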
-
Sheogorath authored
-
Sheogorath authored
This patch provides some bugfixes as well as fixing the inability to launch `/dnsproxy` due to it not being statically built.
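A common way to produce a statically linked dnsproxy binary for a minimal container image; whether this matches the actual fix in this repo is an assumption:
```
git clone https://github.com/AdguardTeam/dnsproxy && cd dnsproxy
CGO_ENABLED=0 go build -o /dnsproxy   # CGO disabled: no dynamic libc dependency
```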
-
Sheogorath authored
-
Sheogorath authored
It's currently not working as expected and causes more problems than it solves with cert-manager in the cluster.
-
Sheogorath authored
-
Sheogorath authored
This patch refactors the existing setup to use dnsproxy instead of unbound for base DNS. This should improve performance and provide better DNS resolution, since the previous setup produced a lot of failed lookups that succeed with an online resolver.

References:
https://github.com/AdguardTeam/dnsproxy
-
Sheogorath authored
This patch enables Kubernetes updates, which should allow automatic updates for the DNS container app on the k8s01 cluster.