From 545d9ed089d040219b845dfdb906c0bc1314cccf Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pawe=C5=82=20Krupa=20=28paulfantom=29?= <pawel@krupa.net.pl>
Date: Tue, 12 Apr 2022 13:29:33 +0200
Subject: [PATCH] *: rework readme
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Paweł Krupa (paulfantom) <pawel@krupa.net.pl>
---
 README.md               | 352 +++-------------------------------------
 RELEASE.md              |   4 +-
 docs/access-ui.md       |  29 ++++
 docs/customizing.md     | 169 +++++++++++++++++++
 docs/troubleshooting.md |  49 ++++++
 docs/update.md          |  29 ++++
 6 files changed, 299 insertions(+), 333 deletions(-)
 create mode 100644 docs/access-ui.md
 create mode 100644 docs/customizing.md
 create mode 100644 docs/troubleshooting.md
 create mode 100644 docs/update.md

diff --git a/README.md b/README.md
index c0a33ad4..6b158705 100644
--- a/README.md
+++ b/README.md
@@ -22,44 +22,6 @@ Components included in this package:
 
 This stack is meant for cluster monitoring, so it is pre-configured to collect metrics from all Kubernetes components. In addition to that it delivers a default set of dashboards and alerting rules. Many of the useful dashboards and alerts come from the [kubernetes-mixin project](https://github.com/kubernetes-monitoring/kubernetes-mixin), similar to this project it provides composable jsonnet as a library for users to customize to their needs.
 
-## Warning
-
-If you are migrating from `release-0.7` branch or earlier please read [what changed and how to migrate in our guide](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/migration-guide.md).
-
-## Table of contents
-
-- [kube-prometheus](#kube-prometheus)
-  - [Warning](#warning)
-  - [Table of contents](#table-of-contents)
-  - [Prerequisites](#prerequisites)
-    - [minikube](#minikube)
-  - [Compatibility](#compatibility)
-    - [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix)
-  - [Quickstart](#quickstart)
-    - [Access the dashboards](#access-the-dashboards)
-  - [Customizing Kube-Prometheus](#customizing-kube-prometheus)
-    - [Installing](#installing)
-    - [Compiling](#compiling)
-    - [Apply the kube-prometheus stack](#apply-the-kube-prometheus-stack)
-    - [Containerized Installing and Compiling](#containerized-installing-and-compiling)
-  - [Update from upstream project](#update-from-upstream-project)
-    - [Update jb](#update-jb)
-    - [Update kube-prometheus](#update-kube-prometheus)
-    - [Compile the manifests and apply](#compile-the-manifests-and-apply)
-  - [Configuration](#configuration)
-  - [Customization Examples](#customization-examples)
-  - [Minikube Example](#minikube-example)
-  - [Continuous Delivery](#continuous-delivery)
-  - [Security](docs/security.md)
-  - [Troubleshooting](#troubleshooting)
-    - [Error retrieving kubelet metrics](#error-retrieving-kubelet-metrics)
-      - [Authentication problem](#authentication-problem)
-      - [Authorization problem](#authorization-problem)
-    - [kube-state-metrics resource usage](#kube-state-metrics-resource-usage)
-    - [Error retrieving kube-proxy metrics](#error-retrieving-kube-proxy-metrics)
-  - [Contributing](CONTRIBUTING.md)
-  - [License](#license)
-
 ## Prerequisites
 
 You will need a Kubernetes cluster, that's it! By default it is assumed, that the kubelet uses token authentication and authorization, as otherwise Prometheus needs a client certificate, which gives it full access to the kubelet, rather than just the metrics. Token authentication and authorization allows more fine grained and easier access control.
@@ -72,25 +34,9 @@ This means the kubelet configuration must contain these flags:
 This stack provides [resource metrics](https://github.com/kubernetes/metrics#resource-metrics-api) by deploying the [Prometheus Adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter/).
 This adapter is an Extension API Server and Kubernetes needs to be have this feature enabled, otherwise the adapter has no effect, but is still deployed.
 
-### minikube
-
-To try out this stack, start [minikube](https://github.com/kubernetes/minikube) with the following command:
-
-```shell
-$ minikube delete && minikube start --kubernetes-version=v1.23.0 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0
-```
-
-The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:
-
-```shell
-$ minikube addons disable metrics-server
-```
-
 ## Compatibility
 
-### Kubernetes compatibility matrix
-
-The following versions are supported and work as we test against these versions in their respective branches. But note that other versions might work!
+The following Kubernetes versions are supported, as we test against them in their respective branches. Note that other versions might work!
 
 | kube-prometheus stack                                                                      | Kubernetes 1.19 | Kubernetes 1.20 | Kubernetes 1.21 | Kubernetes 1.22 | Kubernetes 1.23 |
 |--------------------------------------------------------------------------------------------|-----------------|-----------------|-----------------|-----------------|-----------------|
@@ -102,7 +48,7 @@ The following versions are supported and work as we test against these versions
 
 ## Quickstart
 
-> Note: For versions before Kubernetes v1.21.z refer to the [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix) in order to choose a compatible branch.
+> Note: For versions before Kubernetes v1.21.z, refer to the [Kubernetes compatibility matrix](#compatibility) to choose a compatible branch.
 
 This project is intended to be used as a library (i.e. the intent is not for you to create your own modified copy of this repository).
 
@@ -127,298 +73,42 @@ be created successfully.
 kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
 ```
 
-### Access the dashboards
-
-Prometheus, Grafana, and Alertmanager dashboards can be accessed quickly using `kubectl port-forward` after running the quickstart via the commands below. Kubernetes 1.10 or later is required.
-
-> Note: There are instructions on how to route to these pods behind an ingress controller in the [Exposing Prometheus/Alermanager/Grafana via Ingress](docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md) section.
-
-Prometheus
-
-```shell
-$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
-```
-
-Then access via [http://localhost:9090](http://localhost:9090)
-
-Grafana
-
-```shell
-$ kubectl --namespace monitoring port-forward svc/grafana 3000
-```
-
-Then access via [http://localhost:3000](http://localhost:3000) and use the default grafana user:password of `admin:admin`.
-
-Alert Manager
-
-```shell
-$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
-```
-
-Then access via [http://localhost:9093](http://localhost:9093)
-
-## Customizing Kube-Prometheus
-
-This section:
-* describes how to customize the kube-prometheus library via compiling the kube-prometheus manifests yourself (as an alternative to the [Quickstart section](#quickstart)).
-* still doesn't require you to make a copy of this entire repository, but rather only a copy of a few select files.
-
-### Installing
-
-The content of this project consists of a set of [jsonnet](http://jsonnet.org/) files making up a library to be consumed.
-
-Install this library in your own project with [jsonnet-bundler](https://github.com/jsonnet-bundler/jsonnet-bundler#install) (the jsonnet package manager):
-
-```shell
-$ mkdir my-kube-prometheus; cd my-kube-prometheus
-$ jb init  # Creates the initial/empty `jsonnetfile.json`
-# Install the kube-prometheus dependency
-$ jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main # Creates `vendor/` & `jsonnetfile.lock.json`, and fills in `jsonnetfile.json`
-
-$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/example.jsonnet -O example.jsonnet
-$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh -O build.sh
-```
-
-> `jb` can be installed with `go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest`
-
-> An e.g. of how to install a given version of this library: `jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main`
-
-In order to update the kube-prometheus dependency, simply use the jsonnet-bundler update functionality:
-
-```shell
-$ jb update
-```
-
-### Compiling
-
-e.g. of how to compile the manifests: `./build.sh example.jsonnet`
-
-> before compiling, install `gojsontoyaml` tool with `go install github.com/brancz/gojsontoyaml@latest` and `jsonnet` with `go install github.com/google/go-jsonnet/cmd/jsonnet@latest`
-
-Here's [example.jsonnet](example.jsonnet):
-
-> Note: some of the following components must be configured beforehand. See [configuration](#configuration) and [customization-examples](#customization-examples).
-
-```jsonnet mdox-exec="cat example.jsonnet"
-local kp =
-  (import 'kube-prometheus/main.libsonnet') +
-  // Uncomment the following imports to enable its patches
-  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
-  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
-  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
-  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
-  // (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
-  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
-  // (import 'kube-prometheus/addons/pyrra.libsonnet') +
-  {
-    values+:: {
-      common+: {
-        namespace: 'monitoring',
-      },
-    },
-  };
-
-{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
-{
-  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
-  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
-} +
-// { 'setup/pyrra-slo-CustomResourceDefinition': kp.pyrra.crd } +
-// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
-{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
-{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
-{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
-{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
-{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
-{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
-// { ['pyrra-' + name]: kp.pyrra[name] for name in std.objectFields(kp.pyrra) if name != 'crd' } +
-{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
-{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) }
-{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
-{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
-{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
-```
-
-And here's the [build.sh](build.sh) script (which uses `vendor/` to render all manifests in a json structure of `{filename: manifest-content}`):
-
-```sh mdox-exec="cat build.sh"
-#!/usr/bin/env bash
-
-# This script uses arg $1 (name of *.jsonnet file to use) to generate the manifests/*.yaml files.
-
-set -e
-set -x
-# only exit with zero if all commands of the pipeline exit successfully
-set -o pipefail
-
-# Make sure to use project tooling
-PATH="$(pwd)/tmp/bin:${PATH}"
-
-# Make sure to start with a clean 'manifests' dir
-rm -rf manifests
-mkdir -p manifests/setup
-
-# Calling gojsontoyaml is optional, but we would like to generate yaml, not json
-jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}
-
-# Make sure to remove json files
-find manifests -type f ! -name '*.yaml' -delete
-rm -f kustomization
-
-```
-
-> Note you need `jsonnet` (`go get github.com/google/go-jsonnet/cmd/jsonnet`) and `gojsontoyaml` (`go get github.com/brancz/gojsontoyaml`) installed to run `build.sh`. If you just want json output, not yaml, then you can skip the pipe and everything afterwards.
-
-This script runs the jsonnet code, then reads each key of the generated json and uses that as the file name, and writes the value of that key to that file, and converts each json manifest to yaml.
-
-### Apply the kube-prometheus stack
-
-The previous steps (compilation) has created a bunch of manifest files in the manifest/ folder.
-Now simply use `kubectl` to install Prometheus and Grafana as per your configuration:
-
-```shell
-# Update the namespace and CRDs, and then wait for them to be available before creating the remaining resources
-$ kubectl apply --server-side -f manifests/setup
-$ kubectl apply -f manifests/
-```
-
-> Note that due to some CRD size we are using kubeclt server-side apply feature which is generally available since
-> kubernetes 1.22. If you are using previous kubernetes versions this feature may not be available and you would need to
-> use `kubectl create` instead.
-
-Alternatively, the resources in both folders can be applied with a single command
-`kubectl apply --server-side -Rf manifests`, but it may be necessary to run the command multiple times for all components to
-be created successfully.
-
-Check the monitoring namespace (or the namespace you have specific in `namespace: `) and make sure the pods are running. Prometheus and Grafana should be up and running soon.
-
-### Containerized Installing and Compiling
-
-If you don't care to have `jb` nor `jsonnet` nor `gojsontoyaml` installed, then use `quay.io/coreos/jsonnet-ci` container image. Do the following from this `kube-prometheus` directory:
-
-```shell
-$ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci jb update
-$ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci ./build.sh example.jsonnet
-```
-
-## Update from upstream project
-
-You may wish to fetch changes made on this project so they are available to you.
-
-### Update jb
+### minikube
 
-`jb` may have been updated so it's a good idea to get the latest version of this binary:
+To try out this stack, start [minikube](https://github.com/kubernetes/minikube) with the following command:
 
 ```shell
-$ go get -u github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
+$ minikube delete && minikube start --kubernetes-version=v1.23.0 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.bind-address=0.0.0.0 --extra-config=controller-manager.bind-address=0.0.0.0
 ```
 
-### Update kube-prometheus
-
-The command below will sync with upstream project:
+The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:
 
 ```shell
-$ jb update
-```
-
-### Compile the manifests and apply
-
-Once updated, just follow the instructions under "Compiling" and "Apply the kube-prometheus stack" to apply the changes to your cluster.
-
-## Configuration
-
-Jsonnet has the concept of hidden fields. These are fields, that are not going to be rendered in a result. This is used to configure the kube-prometheus components in jsonnet. In the example jsonnet code of the above [Customizing Kube-Prometheus section](#customizing-kube-prometheus), you can see an example of this, where the `namespace` is being configured to be `monitoring`. In order to not override the whole object, use the `+::` construct of jsonnet, to merge objects, this way you can override individual settings, but retain all other settings and defaults.
-
-The available fields and their default values can be seen in [main.libsonnet](jsonnet/kube-prometheus/main.libsonnet). Note that many of the fields get their default values from variables, and for example the version numbers are imported from [versions.json](jsonnet/kube-prometheus/versions.json).
-
-Configuration is mainly done in the `values` map. You can see this being used in the `example.jsonnet` to set the namespace to `monitoring`. This is done in the `common` field, which all other components take their default value from. See for example how Alertmanager is configured in `main.libsonnet`:
-
-```
-    alertmanager: {
-      name: 'main',
-      // Use the namespace specified under values.common by default.
-      namespace: $.values.common.namespace,
-      version: $.values.common.versions.alertmanager,
-      image: $.values.common.images.alertmanager,
-      mixin+: { ruleLabels: $.values.common.ruleLabels },
-    },
-```
-
-The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana ), but needed configuration can be customized from the same top level `values` field. For example to allow anonymous access to grafana, add the following `values` section:
-
-```
-      grafana+:: {
-        config: { // http://docs.grafana.org/installation/configuration/
-          sections: {
-            "auth.anonymous": {enabled: true},
-          },
-        },
-      },
+$ minikube addons disable metrics-server
 ```
 
-## Customization Examples
-
-Jsonnet is a turing complete language, any logic can be reflected in it. It also has powerful merge functionalities, allowing sophisticated customizations of any kind simply by merging it into the object the library provides.
-
-To get started, we provide several customization examples in the [docs/customizations/](docs/customizations) section.
+## Getting started
 
-## Minikube Example
+Before deploying kube-prometheus in a production environment, read:
 
-To use an easy to reproduce example, see [minikube.jsonnet](examples/minikube.jsonnet), which uses the minikube setup as demonstrated in [Prerequisites](#prerequisites). Because we would like easy access to our Prometheus, Alertmanager and Grafana UIs, `minikube.jsonnet` exposes the services as NodePort type services.
+1. [Customizing kube-prometheus](docs/customizing.md)
+2. [Customization examples](docs/customizations)
+3. [Accessing Graphical User Interfaces](docs/access-ui.md)
+4. [Troubleshooting kube-prometheus](docs/troubleshooting.md)
 
-## Continuous Delivery
+## Documentation
 
-Working examples of use with continuous delivery tools are found in examples/continuous-delivery.
+1. [Continuous Delivery](examples/continuous-delivery)
+2. [Update to new version](docs/update.md)
+3. For more documentation on the project, refer to the `docs/` directory.
 
-## Troubleshooting
-
-See the general [guidelines](docs/community-support.md) for getting support from the community.
-
-### Error retrieving kubelet metrics
-
-Should the Prometheus `/targets` page show kubelet targets, but not able to successfully scrape the metrics, then most likely it is a problem with the authentication and authorization setup of the kubelets.
-
-As described in the [Prerequisites](#prerequisites) section, in order to retrieve metrics from the kubelet token authentication and authorization must be enabled. Some Kubernetes setup tools do not enable this by default.
-
-- If you are using Google's GKE product, see [cAdvisor support](docs/GKE-cadvisor-support.md).
-- If you are using AWS EKS, see [AWS EKS CNI support](docs/EKS-cni-support.md).
-- If you are using Weave Net, see [Weave Net support](docs/weave-net-support.md).
-
-#### Authentication problem
-
-The Prometheus `/targets` page will show the kubelet job with the error `403 Unauthorized`, when token authentication is not enabled. Ensure, that the `--authentication-token-webhook=true` flag is enabled on all kubelet configurations.
-
-#### Authorization problem
-
-The Prometheus `/targets` page will show the kubelet job with the error `401 Unauthorized`, when token authorization is not enabled. Ensure that the `--authorization-mode=Webhook` flag is enabled on all kubelet configurations.
-
-### kube-state-metrics resource usage
-
-In some environments, kube-state-metrics may need additional
-resources. One driver for more resource needs, is a high number of
-namespaces. There may be others.
-
-kube-state-metrics resource allocation is managed by
-[addon-resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer/nanny)
-You can control it's parameters by setting variables in the
-config. They default to:
-
-```jsonnet
-    kubeStateMetrics+:: {
-      baseCPU: '100m',
-      cpuPerNode: '2m',
-      baseMemory: '150Mi',
-      memoryPerNode: '30Mi',
-    }
-```
+## Contributing
 
-### Error retrieving kube-proxy metrics
+To contribute to kube-prometheus, refer to [Contributing](CONTRIBUTING.md).
 
-By default, kubeadm will configure kube-proxy to listen on 127.0.0.1 for metrics. Because of this prometheus would not be able to scrape these metrics. This would have to be changed to 0.0.0.0 in one of the following two places:
+## Join the discussion
 
-1. Before cluster initialization, the config file passed to kubeadm init should have KubeProxyConfiguration manifest with the field metricsBindAddress set to 0.0.0.0:10249
-2. If the k8s cluster is already up and running, we'll have to modify the configmap kube-proxy in the namespace kube-system and set the metricsBindAddress field. After this kube-proxy daemonset would have to be restarted with
-   `kubectl -n kube-system rollout restart daemonset kube-proxy`
+If you have any questions or feedback regarding kube-prometheus, join the [kube-prometheus discussion](https://github.com/prometheus-operator/kube-prometheus/discussions). Alternatively, consider joining the [Kubernetes Slack #prometheus-operator channel](http://slack.k8s.io/) or the project's bi-weekly [Contributor Office Hours](https://docs.google.com/document/d/1-fjJmzrwRpKmSPHtXN5u6VZnn39M28KqyQGBEJsqUOk/edit#).
 
 ## License
 
diff --git a/RELEASE.md b/RELEASE.md
index 2959117f..682578f8 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -55,7 +55,7 @@ Kubernetes. We need to make sure that the CI on the main branch is testing the
 kube-prometheus configuration against both of these versions by updating the [CI
 worklow](.github/workflows/ci.yaml) to include the latest kind version and the
 2 latest images versions that are attached to the kind release. Once that is
-done, the [compatibility matrix](README.md#kubernetes-compatibility-matrix) in
+done, the [compatibility matrix](README.md#compatibility) in
 the README should also be updated to reflect the CI changes.
 
 ## Create pull request to cut the release
@@ -87,7 +87,7 @@ make generate
 
 ### Update the compatibility matrix
 
-Update the [compatibility matrix](README.md#kubernetes-compatibility-matrix) in
+Update the [compatibility matrix](README.md#compatibility) in
 the README, by adding the new release based on the `main` branch compatibility
 and removing the oldest release branch to only keep the latest 5 branches in the
 matrix.
diff --git a/docs/access-ui.md b/docs/access-ui.md
new file mode 100644
index 00000000..3bdd9f2b
--- /dev/null
+++ b/docs/access-ui.md
@@ -0,0 +1,29 @@
+# Access UIs
+
+Prometheus, Grafana, and Alertmanager dashboards can be accessed quickly using `kubectl port-forward` after running the quickstart via the commands below. Kubernetes 1.10 or later is required.
+
+> Note: There are instructions on how to route to these pods behind an ingress controller in the [Exposing Prometheus/Alertmanager/Grafana via Ingress](customizations/exposing-prometheus-alertmanager-grafana-ingress.md) section.
+
+## Prometheus
+
+```shell
+$ kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
+```
+
+Then access via [http://localhost:9090](http://localhost:9090)
+
+## Grafana
+
+```shell
+$ kubectl --namespace monitoring port-forward svc/grafana 3000
+```
+
+Then access via [http://localhost:3000](http://localhost:3000) and use the default Grafana username and password of `admin:admin`.
+
+## Alertmanager
+
+```shell
+$ kubectl --namespace monitoring port-forward svc/alertmanager-main 9093
+```
+
+Then access via [http://localhost:9093](http://localhost:9093)
diff --git a/docs/customizing.md b/docs/customizing.md
new file mode 100644
index 00000000..fa98e088
--- /dev/null
+++ b/docs/customizing.md
@@ -0,0 +1,169 @@
+# Customizing Kube-Prometheus
+
+This section:
+* describes how to customize the kube-prometheus library by compiling the kube-prometheus manifests yourself (as an alternative to the [README.md quickstart section](../README.md#quickstart)).
+* still doesn't require you to make a copy of this entire repository, but rather only a copy of a few select files.
+
+## Installing
+
+The content of this project consists of a set of [jsonnet](http://jsonnet.org/) files making up a library to be consumed.
+
+Install this library in your own project with [jsonnet-bundler](https://github.com/jsonnet-bundler/jsonnet-bundler#install) (the jsonnet package manager):
+
+```shell
+$ mkdir my-kube-prometheus; cd my-kube-prometheus
+$ jb init  # Creates the initial/empty `jsonnetfile.json`
+# Install the kube-prometheus dependency
+$ jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main # Creates `vendor/` & `jsonnetfile.lock.json`, and fills in `jsonnetfile.json`
+
+$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/example.jsonnet -O example.jsonnet
+$ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/main/build.sh -O build.sh
+```
+
+> `jb` can be installed with `go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest`
+
+> For example, to install a given version of this library (here, the `main` branch): `jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main`
+
+In order to update the kube-prometheus dependency, simply use the jsonnet-bundler update functionality:
+
+```shell
+$ jb update
+```
+
+## Generating
+
+For example, to compile the manifests: `./build.sh example.jsonnet`
+
+> Before compiling, install the `gojsontoyaml` tool with `go install github.com/brancz/gojsontoyaml@latest` and `jsonnet` with `go install github.com/google/go-jsonnet/cmd/jsonnet@latest`.
+
+Here's [example.jsonnet](../example.jsonnet):
+
+> Note: some of the following components must be configured beforehand. See [configuration](#configuring) and [customization examples](customizations).
+
+```jsonnet mdox-exec="cat example.jsonnet"
+local kp =
+  (import 'kube-prometheus/main.libsonnet') +
+  // Uncomment the following imports to enable its patches
+  // (import 'kube-prometheus/addons/anti-affinity.libsonnet') +
+  // (import 'kube-prometheus/addons/managed-cluster.libsonnet') +
+  // (import 'kube-prometheus/addons/node-ports.libsonnet') +
+  // (import 'kube-prometheus/addons/static-etcd.libsonnet') +
+  // (import 'kube-prometheus/addons/custom-metrics.libsonnet') +
+  // (import 'kube-prometheus/addons/external-metrics.libsonnet') +
+  // (import 'kube-prometheus/addons/pyrra.libsonnet') +
+  {
+    values+:: {
+      common+: {
+        namespace: 'monitoring',
+      },
+    },
+  };
+
+{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } +
+{
+  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
+  for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator))
+} +
+// { 'setup/pyrra-slo-CustomResourceDefinition': kp.pyrra.crd } +
+// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready
+{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
+{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } +
+{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
+// { ['pyrra-' + name]: kp.pyrra[name] for name in std.objectFields(kp.pyrra) if name != 'crd' } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) }
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
+```
+
+And here's the [build.sh](../build.sh) script (which uses `vendor/` to render all manifests in a json structure of `{filename: manifest-content}`):
+
+```sh mdox-exec="cat ./build.sh"
+#!/usr/bin/env bash
+
+# This script uses arg $1 (name of *.jsonnet file to use) to generate the manifests/*.yaml files.
+
+set -e
+set -x
+# only exit with zero if all commands of the pipeline exit successfully
+set -o pipefail
+
+# Make sure to use project tooling
+PATH="$(pwd)/tmp/bin:${PATH}"
+
+# Make sure to start with a clean 'manifests' dir
+rm -rf manifests
+mkdir -p manifests/setup
+
+# Calling gojsontoyaml is optional, but we would like to generate yaml, not json
+jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}
+
+# Make sure to remove json files
+find manifests -type f ! -name '*.yaml' -delete
+rm -f kustomization
+
+```
+
+> Note: you need `jsonnet` (`go install github.com/google/go-jsonnet/cmd/jsonnet@latest`) and `gojsontoyaml` (`go install github.com/brancz/gojsontoyaml@latest`) installed to run `build.sh`. If you only want JSON output, not YAML, you can skip the pipe and everything after it.
+
+This script runs the jsonnet code, reads each key of the generated JSON, uses that key as the file name, writes the value of that key to that file, and converts each JSON manifest to YAML.
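+
+The structure `build.sh` expects from the jsonnet code is a top-level object mapping file names to manifests. A minimal, hypothetical input for `jsonnet -m` might look like this (the file name and manifest content are illustrative only):
+
+```jsonnet
+// Each key becomes a file under manifests/ (here: manifests/setup/namespace),
+// and each value is the manifest content written to that file.
+{
+  'setup/namespace': {
+    apiVersion: 'v1',
+    kind: 'Namespace',
+    metadata: { name: 'monitoring' },
+  },
+}
+```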
+
+## Configuring
+
+Jsonnet has the concept of hidden fields: fields that are not rendered in the result. These are used to configure the kube-prometheus components in jsonnet. In the example jsonnet code of the above [Generating section](#generating), you can see an example of this, where the `namespace` is configured to be `monitoring`. In order not to override the whole object, use jsonnet's `+::` construct to merge objects; this way you can override individual settings while retaining all other settings and defaults.
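+
+The `+::` merge can be sketched as follows (this mirrors the `values` block in `example.jsonnet`; only `namespace` is overridden, and every other default from `main.libsonnet` is retained):
+
+```jsonnet
+local kp =
+  (import 'kube-prometheus/main.libsonnet') +
+  {
+    // `values+::` merges into the hidden `values` object instead of
+    // replacing it, so all other defaults are kept.
+    values+:: {
+      common+: {
+        namespace: 'monitoring',
+      },
+    },
+  };
+```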
+
+The available fields and their default values can be seen in [main.libsonnet](../jsonnet/kube-prometheus/main.libsonnet). Note that many of the fields get their default values from variables, and for example the version numbers are imported from [versions.json](../jsonnet/kube-prometheus/versions.json).
+
+Configuration is mainly done in the `values` map. You can see this being used in the `example.jsonnet` to set the namespace to `monitoring`. This is done in the `common` field, which all other components take their default value from. See for example how Alertmanager is configured in `main.libsonnet`:
+
+```
+    alertmanager: {
+      name: 'main',
+      // Use the namespace specified under values.common by default.
+      namespace: $.values.common.namespace,
+      version: $.values.common.versions.alertmanager,
+      image: $.values.common.images.alertmanager,
+      mixin+: { ruleLabels: $.values.common.ruleLabels },
+    },
+```
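+Because each component takes its defaults this way, a single field can be overridden from your own jsonnet without touching the rest. For example (a sketch using the `name` field shown above):
+
+```jsonnet
+local kp = (import 'kube-prometheus/main.libsonnet') + {
+  values+:: {
+    alertmanager+: {
+      name: 'alerts',  // only this field changes; namespace, version and image keep their defaults
+    },
+  },
+};
+```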
+
+The Grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana), but the needed configuration can be customized from the same top-level `values` field. For example, to allow anonymous access to Grafana, add the following `values` section:
+
+```jsonnet
+      grafana+:: {
+        config: { // http://docs.grafana.org/installation/configuration/
+          sections: {
+            "auth.anonymous": {enabled: true},
+          },
+        },
+      },
+```
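+Putting it together, a minimal overlay could look like the following (a sketch following the structure of `example.jsonnet`; adjust the import path to your vendor directory):
+
+```jsonnet
+local kp = (import 'kube-prometheus/main.libsonnet') + {
+  values+:: {
+    common+: { namespace: 'monitoring' },
+    grafana+: {
+      config: {  // http://docs.grafana.org/installation/configuration/
+        sections: {
+          'auth.anonymous': { enabled: true },
+        },
+      },
+    },
+  },
+};
+
+// Render only the Grafana manifests, as example.jsonnet does:
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
+```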
+
+## Apply the kube-prometheus stack
+
+The generation step above created a number of manifest files in the `manifests/` folder.
+Now simply use `kubectl` to install Prometheus and Grafana as per your configuration:
+
+```shell
+# Update the namespace and CRDs, and then wait for them to be available before creating the remaining resources
+$ kubectl apply --server-side -f manifests/setup
+$ kubectl apply -f manifests/
+```
+
+> Note that due to the size of some CRDs we use kubectl's server-side apply feature, which is generally available since
+> Kubernetes 1.22. On earlier Kubernetes versions this feature may not be available, and you would need to
+> use `kubectl create` instead.
+
+Alternatively, the resources in both folders can be applied with a single command
+`kubectl apply --server-side -Rf manifests`, but it may be necessary to run the command multiple times for all components to
+be created successfully.
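+If the second apply races the CRD registration, the CRDs can also be waited on explicitly between the two steps:
+
+```shell
+$ kubectl apply --server-side -f manifests/setup
+$ kubectl wait --for condition=Established --all CustomResourceDefinition
+$ kubectl apply -f manifests/
+```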
+
+Check the `monitoring` namespace (or the namespace you have specified in `namespace:`) and make sure the pods are running. Prometheus and Grafana should be up and running soon.
+
+## Minikube Example
+
+For an easy-to-reproduce example, see [minikube.jsonnet](../examples/minikube.jsonnet), which uses the minikube setup as demonstrated in [Prerequisites](../README.md#prerequisites). Because we would like easy access to our Prometheus, Alertmanager and Grafana UIs, `minikube.jsonnet` exposes the services as NodePort type services.
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
new file mode 100644
index 00000000..06c91909
--- /dev/null
+++ b/docs/troubleshooting.md
@@ -0,0 +1,49 @@
+# Troubleshooting
+
+See the general [guidelines](community-support.md) for getting support from the community.
+
+## Error retrieving kubelet metrics
+
+If the Prometheus `/targets` page shows kubelet targets but the metrics cannot be scraped successfully, it is most likely a problem with the authentication and authorization setup of the kubelets.
+
+As described in the [README.md Prerequisites](../README.md#prerequisites) section, token authentication and authorization must be enabled in order to retrieve metrics from the kubelet. Some Kubernetes setup tools do not enable this by default.
+
+- If you are using Google's GKE product, see [cAdvisor support](GKE-cadvisor-support.md).
+- If you are using AWS EKS, see [AWS EKS CNI support](EKS-cni-support.md).
+- If you are using Weave Net, see [Weave Net support](weave-net-support.md).
+
+### Authentication problem
+
+The Prometheus `/targets` page will show the kubelet job with the error `403 Unauthorized` when token authentication is not enabled. Ensure that the `--authentication-token-webhook=true` flag is enabled on all kubelet configurations.
+
+### Authorization problem
+
+The Prometheus `/targets` page will show the kubelet job with the error `401 Unauthorized` when token authorization is not enabled. Ensure that the `--authorization-mode=Webhook` flag is enabled on all kubelet configurations.
+
+## kube-state-metrics resource usage
+
+In some environments, kube-state-metrics may need additional
+resources. One driver of higher resource needs is a high number of
+namespaces; there may be others.
+
+kube-state-metrics resource allocation is managed by
+[addon-resizer](https://github.com/kubernetes/autoscaler/tree/master/addon-resizer/nanny).
+You can control its parameters by setting variables in the
+config. They default to:
+
+```jsonnet
+    kubeStateMetrics+:: {
+      baseCPU: '100m',
+      cpuPerNode: '2m',
+      baseMemory: '150Mi',
+      memoryPerNode: '30Mi',
+    }
+```
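+With these defaults, a 100-node cluster would be sized at roughly 100m + 100 × 2m = 300m of CPU and 150Mi + 100 × 30Mi = 3150Mi of memory. To raise the allocation, override the fields from your own configuration (a sketch; the surrounding structure depends on how you configure kube-prometheus, see [customizing.md](customizing.md)):
+
+```jsonnet
+{
+  kubeStateMetrics+:: {
+    baseCPU: '200m',      // larger fixed baseline
+    baseMemory: '250Mi',  // per-node increments keep their defaults
+  },
+}
+```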
+
+## Error retrieving kube-proxy metrics
+
+By default, kubeadm configures kube-proxy to listen on 127.0.0.1 for metrics, so Prometheus cannot scrape them. The bind address has to be changed to 0.0.0.0 in one of the following two places:
+
+1. Before cluster initialization: the config file passed to `kubeadm init` should contain a `KubeProxyConfiguration` manifest with the field `metricsBindAddress` set to `0.0.0.0:10249`.
+2. If the cluster is already up and running: modify the `kube-proxy` configmap in the `kube-system` namespace and set the `metricsBindAddress` field, then restart the kube-proxy daemonset with
+   `kubectl -n kube-system rollout restart daemonset kube-proxy`
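+For the first option, a minimal sketch of the `KubeProxyConfiguration` document to include in the file passed to `kubeadm init --config` (alongside your `ClusterConfiguration`, separated by `---`):
+
+```yaml
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+metricsBindAddress: "0.0.0.0:10249"
+```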
diff --git a/docs/update.md b/docs/update.md
new file mode 100644
index 00000000..f33313d9
--- /dev/null
+++ b/docs/update.md
@@ -0,0 +1,29 @@
+# Update kube-prometheus
+
+You may wish to fetch changes made to this project so they are available to you.
+
+## Update jb
+
+`jb` may have been updated so it's a good idea to get the latest version of this binary:
+
+```shell
+$ go install -a github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
+```
+
+## Update kube-prometheus
+
+The command below will sync with the upstream project:
+
+```shell
+$ jb update
+```
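+`jb update` with no arguments updates every dependency listed in `jsonnetfile.json`. To update only kube-prometheus, pass the package URI (a sketch; the branch name is an assumption, use the release branch you track):
+
+```shell
+$ jb update github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main
+```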
+
+## Compile the manifests and apply
+
+Once updated, just follow the instructions under [Generating](customizing.md#generating) and [Apply the kube-prometheus stack](customizing.md#apply-the-kube-prometheus-stack) from [customizing.md doc](customizing.md) to apply the changes to your cluster.
+
+## Migration from previous versions
+
+If you are migrating from `release-0.7` branch or earlier please read [what changed and how to migrate in our guide](https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/migration-guide.md).
+
+Refer to the [migration document](migration-example) for more information about migrating from the 0.3 and 0.8 versions of kube-prometheus.
-- 
GitLab