diff --git a/README.md b/README.md
index ca11dccc9dbc5f0ca001ab6edd8432e8660807d4..8aeb67fd0fa862f70a2c88c878d6d374831af46b 100644
--- a/README.md
+++ b/README.md
@@ -80,8 +80,8 @@ You will need a Kubernetes cluster, that's it! By default it is assumed, that th

 This means the kubelet configuration must contain these flags:

-* `--authentication-token-webhook=true` This flag enables, that a `ServiceAccount` token can be used to authenticate against the kubelet(s). This can also be enabled by setting the kubelet configuration value `authentication.webhook.enabled` to `true`.
-* `--authorization-mode=Webhook` This flag enables, that the kubelet will perform an RBAC request with the API to determine, whether the requesting entity (Prometheus in this case) is allowed to access a resource, in specific for this project the `/metrics` endpoint. This can also be enabled by setting the kubelet configuration value `authorization.mode` to `Webhook`.
+* `--authentication-token-webhook=true` This flag enables, that a `ServiceAccount` token can be used to authenticate against the kubelet(s). This can also be enabled by setting the kubelet configuration value `authentication.webhook.enabled` to `true`.
+* `--authorization-mode=Webhook` This flag enables, that the kubelet will perform an RBAC request with the API to determine, whether the requesting entity (Prometheus in this case) is allowed to access a resource, in specific for this project the `/metrics` endpoint. This can also be enabled by setting the kubelet configuration value `authorization.mode` to `Webhook`.

 This stack provides [resource metrics](https://github.com/kubernetes/metrics#resource-metrics-api) by deploying the [Prometheus Adapter](https://github.com/DirectXMan12/k8s-prometheus-adapter/). This adapter is an Extension API Server and Kubernetes needs to be have this feature enabled, otherwise the adapter has no effect, but is still deployed.

@@ -116,12 +116,12 @@ The following versions are supported and work as we test against these versions

 ## Quickstart

->Note: For versions before Kubernetes v1.21.z refer to the [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix) in order to choose a compatible branch.
+> Note: For versions before Kubernetes v1.21.z refer to the [Kubernetes compatibility matrix](#kubernetes-compatibility-matrix) in order to choose a compatible branch.

 This project is intended to be used as a library (i.e. the intent is not for you to create your own modified copy of this repository).
 Though for a quickstart a compiled version of the Kubernetes [manifests](manifests) generated with this library (specifically with `example.jsonnet`) is checked into this repository in order to try the content out quickly.
 To try out the stack un-customized run:
- * Create the monitoring stack using the config in the `manifests` directory:
+* Create the monitoring stack using the config in the `manifests` directory:

 ```shell
 # Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
@@ -135,7 +135,8 @@ Alternatively, the resources in both folders can be applied with a single comman
 `kubectl create -f manifests/setup -f manifests`, but it may be necessary to run the command multiple times for all components to be created successfullly.

- * And to teardown the stack:
+* And to teardown the stack:
+
 ```shell
 kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
 ```
@@ -173,14 +174,15 @@ Then access via [http://localhost:9093](http://localhost:9093)

 ## Customizing Kube-Prometheus

 This section:
- * describes how to customize the kube-prometheus library via compiling the kube-prometheus manifests yourself (as an alternative to the [Quickstart section](#quickstart)).
- * still doesn't require you to make a copy of this entire repository, but rather only a copy of a few select files.
+* describes how to customize the kube-prometheus library via compiling the kube-prometheus manifests yourself (as an alternative to the [Quickstart section](#quickstart)).
+* still doesn't require you to make a copy of this entire repository, but rather only a copy of a few select files.

 ### Installing

 The content of this project consists of a set of [jsonnet](http://jsonnet.org/) files making up a library to be consumed.
 Install this library in your own project with [jsonnet-bundler](https://github.com/jsonnet-bundler/jsonnet-bundler#install) (the jsonnet package manager):
+
 ```shell
 $ mkdir my-kube-prometheus; cd my-kube-prometheus
 $ jb init # Creates the initial/empty `jsonnetfile.json`
@@ -196,6 +198,7 @@ $ wget https://raw.githubusercontent.com/prometheus-operator/kube-prometheus/rel
 > An e.g. of how to install a given version of this library: `jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@release-0.7`

 In order to update the kube-prometheus dependency, simply use the jsonnet-bundler update functionality:
+
 ```shell
 $ jb update
 ```
@@ -280,6 +283,7 @@ rm -f kustomization
 This script runs the jsonnet code, then reads each key of the generated json and uses that as the file name, and writes the value of that key to that file, and converts each json manifest to yaml.

 ### Apply the kube-prometheus stack
+
 The previous steps (compilation) has created a bunch of manifest files in the manifest/ folder.
 Now simply use `kubectl` to install Prometheus and Grafana as per your configuration:

@@ -288,6 +292,7 @@ $ kubectl apply -f manifests/setup
 $ kubectl apply -f manifests/
 ```
+
 Alternatively, the resources in both folders can be applied with a single command
 `kubectl apply -Rf manifests`, but it may be necessary to run the command multiple times for all components to be created successfullly.

@@ -297,15 +302,18 @@ Check the monitoring namespace (or the namespace you have specific in `namespace

 ### Containerized Installing and Compiling

 If you don't care to have `jb` nor `jsonnet` nor `gojsontoyaml` installed, then use `quay.io/coreos/jsonnet-ci` container image. Do the following from this `kube-prometheus` directory:
+
 ```shell
 $ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci jb update
 $ docker run --rm -v $(pwd):$(pwd) --workdir $(pwd) quay.io/coreos/jsonnet-ci ./build.sh example.jsonnet
 ```

 ## Update from upstream project
+
 You may wish to fetch changes made on this project so they are available to you.

 ### Update jb
+
 `jb` may have been updated so it's a good idea to get the latest version of this binary:

 ```shell
@@ -313,14 +321,16 @@ $ go get -u github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
 ```

 ### Update kube-prometheus
+
 The command below will sync with upstream project:
+
 ```shell
 $ jb update
 ```

 ### Compile the manifests and apply

-Once updated, just follow the instructions under "Compiling" and "Apply the kube-prometheus stack" to apply the changes to your cluster.
+Once updated, just follow the instructions under "Compiling" and "Apply the kube-prometheus stack" to apply the changes to your cluster.

 ## Configuration

@@ -342,6 +352,7 @@ Configuration is mainly done in the `values` map. You can see this being used in
 ```

 The grafana definition is located in a different project (https://github.com/brancz/kubernetes-grafana ), but needed configuration can be customized from the same top level `values` field. For example to allow anonymous access to grafana, add the following `values` section:
+
 ```
 grafana+:: {
   config: { // http://docs.grafana.org/installation/configuration/
@@ -588,7 +599,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {

 ### Monitoring all namespaces

-In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in prometheus so that roleBindings are not created against them.
+In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in prometheus so that roleBindings are not created against them.

 ```jsonnet mdox-exec="cat examples/all-namespaces.jsonnet"
 local kp = (import 'kube-prometheus/main.libsonnet') +
@@ -749,7 +760,7 @@ kube-state-metrics resource allocation is managed by
 You can control it's parameters by setting variables in the
 config. They default to:

-``` jsonnet
+```jsonnet
 kubeStateMetrics+:: {
   baseCPU: '100m',
   cpuPerNode: '2m',
@@ -759,11 +770,12 @@ config. They default to:
 ```

 ### Error retrieving kube-proxy metrics
+
 By default, kubeadm will configure kube-proxy to listen on 127.0.0.1 for metrics. Because of this prometheus would not be able to scrape these metrics. This would have to be changed to 0.0.0.0 in one of the following two places:

 1. Before cluster initialization, the config file passed to kubeadm init should have KubeProxyConfiguration manifest with the field metricsBindAddress set to 0.0.0.0:10249
 2. If the k8s cluster is already up and running, we'll have to modify the configmap kube-proxy in the namespace kube-system and set the metricsBindAddress field. After this kube-proxy daemonset would have to be restarted with
-`kubectl -n kube-system rollout restart daemonset kube-proxy`
+   `kubectl -n kube-system rollout restart daemonset kube-proxy`

 ## Contributing

@@ -775,8 +787,8 @@ the following process:

 2. Commit your changes (This is currently necessary due to our vendoring process. This is likely to change in the future).
 3. Update the pinned kube-prometheus dependency in `jsonnetfile.lock.json`: `jb update`
-3. Generate dependent `*.yaml` files: `make generate`
-4. Commit the generated changes.
+4. Generate dependent `*.yaml` files: `make generate`
+5. Commit the generated changes.

 ## License
diff --git a/RELEASE.md b/RELEASE.md
index 691cf8370caa76fd199a0b98207d095328d51e90..2959117f09abddaba0e25aaab117d5a0619bb875 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -4,12 +4,12 @@
 Kube-prometheus has a somehow predictable release schedule, releases were
 historically cut in sync with OpenShift releases as per downstream needs. So
 far there hasn't been any problem with this schedule since it is also in sync
 with Kubernetes releases. So for every new Kubernetes release, there is a new
-release of kube-prometheus, although it tends to happen later.
+release of kube-prometheus, although it tends to happen later.

 # How to cut a new release

 > This guide is strongly based on the [prometheus-operator release
-instructions](https://github.com/prometheus-operator/prometheus-operator/blob/master/RELEASE.md).
+> instructions](https://github.com/prometheus-operator/prometheus-operator/blob/master/RELEASE.md).

 ## Branch management and versioning strategy

@@ -53,9 +53,9 @@ failed or because the main branch was already up-to-date.
 The main branch of kube-prometheus should support the last 2 versions of
 Kubernetes. We need to make sure that the CI on the main branch is testing the
 kube-prometheus configuration against both of these versions by updating the [CI
-worklow](/.github/workflows/ci.yaml) to include the latest kind version and the
+workflow](.github/workflows/ci.yaml) to include the latest kind version and the
 2 latest images versions that are attached to the kind release. Once that is
-done, the [compatibility matrix](/README.md#kubernetes-compatibility-matrix) in
+done, the [compatibility matrix](README.md#kubernetes-compatibility-matrix) in
 the README should also be updated to reflect the CI changes.

 ## Create pull request to cut the release
@@ -63,9 +63,9 @@ the README should also be updated to reflect the CI changes.

 ### Pin Jsonnet dependencies

 Pin jsonnet dependencies in
-[jsonnetfile.json](/jsonnet/kube-prometheus/jsonnetfile.json). Each dependency
+[jsonnetfile.json](jsonnet/kube-prometheus/jsonnetfile.json). Each dependency
 should be pinned to the latest release branch or if it doesn't have one, pinned
-to the latest commit.
+to the latest commit.

 ### Start with a fresh environment

@@ -87,14 +87,14 @@ make generate

 ### Update the compatibility matrix

-Update the [compatibility matrix](/README.md#kubernetes-compatibility-matrix) in
+Update the [compatibility matrix](README.md#kubernetes-compatibility-matrix) in
 the README, by adding the new release based on the `main` branch compatibility
 and removing the oldest release branch to only keep the latest 5 branches in
 the matrix.

 ### Update changelog

-Iterate over the PRs that were merged between the latest release of kube-prometheus and the HEAD and add the changelog entries to the [CHANGELOG](/CHANGELOG.md).
+Iterate over the PRs that were merged between the latest release of kube-prometheus and the HEAD and add the changelog entries to the [CHANGELOG](CHANGELOG.md).

 ## Create release branch

@@ -111,10 +111,10 @@ the main branch to be in sync with the latest changes of its dependencies.

 ### Update CI workflow

-Update the [versions workflow](/.github/workflows/versions.yaml) to include the latest release branch and remove the oldest one to reflect the list of supported releases.
+Update the [versions workflow](.github/workflows/versions.yaml) to include the latest release branch and remove the oldest one to reflect the list of supported releases.

 ### Update Kubernetes versions used by kubeconform

 Update the versions of Kubernetes used when validating manifests with
-kubeconform in the [Makefile](/Makefile) to align with the compatibility
+kubeconform in the [Makefile](Makefile) to align with the compatibility
 matrix.
diff --git a/code-of-conduct.md b/code-of-conduct.md
index d1adc78033dad3908328a96aa725fcd2333b20dc..c7c166cba83bed14d2c4a86f2ed8516b169d9f64 100644
--- a/code-of-conduct.md
+++ b/code-of-conduct.md
@@ -33,8 +33,8 @@ This code of conduct applies both within project spaces and in public spaces
 when an individual is representing the project or its community.

 Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting a project maintainer listed in
-https://github.com/prometheus-operator/prometheus-operator/blob/master/MAINTAINERS.md.
+reported by contacting a project maintainer listed in
+https://github.com/prometheus-operator/prometheus-operator/blob/master/MAINTAINERS.md.

 This Code of Conduct is adapted from the Contributor Covenant
 (http://contributor-covenant.org), version 1.2.0, available at
diff --git a/developer-workspace/README.md b/developer-workspace/README.md
index d92a91f214403822fd38ff048f7d42a6d7b96667..52e244cb2c289d58ad409cb3c68f879d76353f69 100644
--- a/developer-workspace/README.md
+++ b/developer-workspace/README.md
@@ -20,7 +20,7 @@ After your workspace start, you can deploy a kube-prometheus inside a Kind clust

 If you are reviewing a PR, you'll have a fully-functional kubernetes cluster, generating real monitoring data that can be used to review if the proposed changes works as described.

-If you are working on new features/bug fixes, you can regenerate kube-prometheus's YAML manifests with `make generate` and deploy it again with `make deploy`.
+If you are working on new features/bug fixes, you can regenerate kube-prometheus's YAML manifests with `make generate` and deploy it again with `make deploy`.

 ## Gitpod

@@ -31,4 +31,3 @@ You can use the same workflow as mentioned in the [Codespaces](#codespaces) sect
 To open up a workspace with Gitpod, you can install the [Google Chrome extension](https://www.gitpod.io/docs/browser-extension/) to add a new button to Github UI and use it on PRs or from the main page.

 Or by directly typing in the browser `http://gitpod.io/#https://github.com/prometheus-operator/kube-prometheus/pull/<Pull Request Number>` or just `http://gitpod.io/#https://github.com/prometheus-operator/kube-prometheus`
-
diff --git a/docs/EKS-cni-support.md b/docs/EKS-cni-support.md
index bd170714a1598490b46b89fa300c8eb7311ec15c..b0d3f852c6cfbb374e14be11f367af0711c1fea1 100644
--- a/docs/EKS-cni-support.md
+++ b/docs/EKS-cni-support.md
@@ -4,7 +4,7 @@ AWS EKS uses [CNI](https://github.com/aws/amazon-vpc-cni-k8s) networking plugin

 One fatal issue that can occur is that you run out of IP addresses in your eks cluster. (Generally happens due to error configs where pods keep scheduling).

-You can monitor the `awscni` using kube-promethus with :
+You can monitor the `awscni` using kube-prometheus with:

 ```jsonnet mdox-exec="cat examples/eks-cni-example.jsonnet"
 local kp = (import 'kube-prometheus/main.libsonnet') + {
diff --git a/docs/GKE-cadvisor-support.md b/docs/GKE-cadvisor-support.md
index 0d763ac0eb2d774b0684e0f1684ec9053d5d6d9d..0ff152c260ea3363eeb70f67c7a02ec02c59e5e1 100644
--- a/docs/GKE-cadvisor-support.md
+++ b/docs/GKE-cadvisor-support.md
@@ -5,6 +5,7 @@ authentication. Until it does, Prometheus must use HTTP (not HTTPS) for
 scraping.

 You can configure this behavior through kube-prometheus with:
+
 ```
 local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
   (import 'kube-prometheus/kube-prometheus-insecure-kubelet.libsonnet') +
diff --git a/docs/blackbox-exporter.md b/docs/blackbox-exporter.md
index e6a5272512aec4881a81b10307a5ce4fd1bf4bef..60321e577a2b01dfe097f607d0be5b73f57b0209 100644
--- a/docs/blackbox-exporter.md
+++ b/docs/blackbox-exporter.md
@@ -1,16 +1,16 @@
 ---
-title: "Blackbox Exporter"
-description: "Generated API docs for the Prometheus Operator"
-lead: "This Document documents the types introduced by the Prometheus Operator to be consumed by users."
-date: 2021-03-08T08:49:31+00:00
-lastmod: 2021-03-08T08:49:31+00:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 630
 toc: true
+title: Blackbox Exporter
+menu:
+  docs:
+    parent: kube
+lead: This Document documents the types introduced by the Prometheus Operator to be consumed by users.
+lastmod: "2021-03-08T08:49:31+00:00"
+images: []
+draft: false
+description: Generated API docs for the Prometheus Operator
+date: "2021-03-08T08:49:31+00:00"
 ---

 # Setting up a blackbox exporter
@@ -21,6 +21,7 @@ The `prometheus-operator` defines a `Probe` resource type that can be used to de

 1. Override blackbox-related configuration parameters as needed.
 2. Add the following to the list of renderers to render the blackbox exporter manifests:
+
 ```
 { ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) }
 ```
diff --git a/docs/community-support.md b/docs/community-support.md
index 218eaa74c534876444f7b999dc01503c0574b6f9..ff214b3cd807040b450aa5e9edcb6145eb77f02d 100644
--- a/docs/community-support.md
+++ b/docs/community-support.md
@@ -4,7 +4,7 @@ For bugs, you can use the GitHub [issue tracker](https://github.com/prometheus-o

 For questions, you can use the GitHub [discussions forum](https://github.com/prometheus-operator/kube-prometheus/discussions).

-Many of the `kube-prometheus` project's contributors and users can also be found on the #prometheus-operator channel of the [Kubernetes Slack][Kubernetes Slack].
+Many of the `kube-prometheus` project's contributors and users can also be found on the #prometheus-operator channel of the [Kubernetes Slack](https://slack.k8s.io/).

 `kube-prometheus` is the aggregation of many projects that all have different
 channels to reach out for help and support. This community strives at
@@ -18,7 +18,7 @@ if applicable.

 For documentation, check the project's [documentation directory](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation).

-For questions, use the #prometheus-operator channel on the [Kubernetes Slack][Kubernetes Slack].
+For questions, use the #prometheus-operator channel on the [Kubernetes Slack](https://slack.k8s.io/).

 For bugs, use the GitHub [issue tracker](https://github.com/prometheus-operator/prometheus-operator/issues/new/choose).

@@ -26,19 +26,19 @@ For bugs, use the GitHub [issue tracker](https://github.com/prometheus-operator/

 For documentation, check the Prometheus [online docs](https://prometheus.io/docs/). There
 is a [section](https://prometheus.io/docs/introduction/media/) with links to blog
-posts, recorded talks and presentations. This [repository](https://github.com/roaldnefs/awesome-prometheus)
+posts, recorded talks and presentations. This [repository](https://github.com/roaldnefs/awesome-prometheus)
 (not affiliated to the Prometheus project) has also a list of curated resources
 related to the Prometheus ecosystem.

 For questions, see the Prometheus [community page](https://prometheus.io/community/) for the various channels.

-There is also a #prometheus channel on the [CNCF Slack][CNCF Slack].
+There is also a #prometheus channel on the [CNCF Slack](https://slack.cncf.io/).

 ## kube-state-metrics

 For documentation, see the project's [docs directory](https://github.com/kubernetes/kube-state-metrics/tree/master/docs).

-For questions, use the #kube-state-metrics channel on the [Kubernetes Slack][Kubernetes Slack].
+For questions, use the #kube-state-metrics channel on the [Kubernetes Slack](https://slack.k8s.io/).

 For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kube-state-metrics/issues/new/choose).

@@ -46,7 +46,7 @@ For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kube-stat

 For documentation, check the [Kubernetes docs](https://kubernetes.io/docs/home/).

-For questions, use the [community forums](https://discuss.kubernetes.io/) and the [Kubernetes Slack][Kubernetes Slack]. Check also the [community page](https://kubernetes.io/community/#discuss).
+For questions, use the [community forums](https://discuss.kubernetes.io/) and the [Kubernetes Slack](https://slack.k8s.io/). Check also the [community page](https://kubernetes.io/community/#discuss).

 For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kubernetes/issues/new/choose).

@@ -54,7 +54,7 @@ For bugs, use the GitHub [issue tracker](https://github.com/kubernetes/kubernete

 For documentation, check the project's [README](https://github.com/DirectXMan12/k8s-prometheus-adapter/blob/master/README.md).

-For questions, use the #sig-instrumentation channel on the [Kubernetes Slack][Kubernetes Slack].
+For questions, use the #sig-instrumentation channel on the [Kubernetes Slack](https://slack.k8s.io/).

 For bugs, use the GitHub [issue tracker](https://github.com/DirectXMan12/k8s-prometheus-adapter/issues/new).

@@ -70,7 +70,7 @@ For bugs, use the GitHub [issue tracker](https://github.com/grafana/grafana/issu

 For documentation, check the project's [README](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/README.md).

-For questions, use #monitoring-mixins channel on the [Kubernetes Slack][Kubernetes Slack].
+For questions, use #monitoring-mixins channel on the [Kubernetes Slack](https://slack.k8s.io/).

 For bugs, use the GitHub [issue tracker](https://github.com/kubernetes-monitoring/kubernetes-mixin/issues/new).

@@ -79,6 +79,3 @@ For bugs, use the GitHub [issue tracker](https://github.com/kubernetes-monitorin
 For documentation, check the [Jsonnet](https://jsonnet.org/) website.

 For questions, use the [mailing list](https://groups.google.com/forum/#!forum/jsonnet).
-
-[Kubernetes Slack]: https://slack.k8s.io/
-[CNCF Slack]: https://slack.cncf.io/
diff --git a/docs/deploy-kind.md b/docs/deploy-kind.md
index ea66c59a16e3e9f34e4fe4afebe5a6163236af18..5d85d23a1c497be864e6ba400f5d235d793eef44 100644
--- a/docs/deploy-kind.md
+++ b/docs/deploy-kind.md
@@ -1,15 +1,15 @@
 ---
-title: "Deploy to kind"
-description: "Deploy kube-prometheus to Kubernets kind."
-lead: "Deploy kube-prometheus to Kubernets kind."
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 500
 toc: true
+title: Deploy to kind
+menu:
+  docs:
+    parent: kube
+lead: Deploy kube-prometheus to Kubernetes kind.
+images: []
+draft: false
+description: Deploy kube-prometheus to Kubernetes kind.
+date: "2021-03-08T23:04:32+01:00"
 ---

 ---
diff --git a/docs/developing-prometheus-rules-and-grafana-dashboards.md b/docs/developing-prometheus-rules-and-grafana-dashboards.md
index b80c6694cb21581e177215b19367847aef53cf60..c13ccea6124725f1fd258cf3eb624d1f27945a39 100644
--- a/docs/developing-prometheus-rules-and-grafana-dashboards.md
+++ b/docs/developing-prometheus-rules-and-grafana-dashboards.md
@@ -1,15 +1,15 @@
 ---
-title: "Prometheus Rules and Grafana Dashboards"
-description: "Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus"
-lead: "Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus"
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 650
 toc: true
+title: Prometheus Rules and Grafana Dashboards
+menu:
+  docs:
+    parent: kube
+lead: Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus
+images: []
+draft: false
+description: Create Prometheus Rules and Grafana Dashboards on top of kube-prometheus
+date: "2021-03-08T23:04:32+01:00"
 ---

 `kube-prometheus` ships with a set of default [Prometheus rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) and [Grafana](http://grafana.com/) dashboards. At some point one might like to extend them, the purpose of this document is to explain how to do this.
@@ -213,6 +213,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + {
 { ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
 { ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
 ```
+
 ### Changing default rules

 Along with adding additional rules, we give the user the option to filter or adjust the existing rules imported by `kube-prometheus/main.libsonnet`. The recording rules can be found in [kube-prometheus/components/mixin/rules](../jsonnet/kube-prometheus/components/mixin/rules) and [kubernetes-mixin/rules](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/rules) while the alerting rules can be found in [kube-prometheus/components/mixin/alerts](../jsonnet/kube-prometheus/components/mixin/alerts) and [kubernetes-mixin/alerts](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/alerts).
@@ -220,7 +221,9 @@ Along with adding additional rules, we give the user the option to filter or adj
 Knowing which rules to change, the user can now use functions from the [Jsonnet standard library](https://jsonnet.org/ref/stdlib.html) to make these changes. Below are examples of both a filter and an adjustment being made to the default rules. These changes can be assigned to a local variable and then added to the `local kp` object as seen in the examples above.

 #### Filter
+
 Here the alert `KubeStatefulSetReplicasMismatch` is being filtered out of the group `kubernetes-apps`. The default rule can be seen [here](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/apps_alerts.libsonnet). You first need to find out in which component the rule is defined (here it is kuberentesControlPlane).
+
 ```jsonnet
 local filter = {
   kubernetesControlPlane+: {
@@ -247,7 +250,9 @@ local filter = {
 ```

 #### Adjustment
+
 Here the expression for another alert in the same component is updated from its previous value. The default rule can be seen [here](https://github.com/kubernetes-monitoring/kubernetes-mixin/blob/master/alerts/apps_alerts.libsonnet).
+
 ```jsonnet
 local update = {
   kubernetesControlPlane+: {
@@ -279,6 +284,7 @@ local update = {
 ```

 Using the example from above about adding in pre-rendered rules, the new local variables can be added in as follows:
+
 ```jsonnet
 local add = {
   exampleApplication:: {
@@ -323,6 +329,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') +
 { ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +
 { ['exampleApplication-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
 ```
+
 ## Dashboards

 Dashboards can either be added using jsonnet or simply a pre-rendered json dashboard.
diff --git a/docs/exposing-prometheus-alertmanager-grafana-ingress.md b/docs/exposing-prometheus-alertmanager-grafana-ingress.md
index f231e2c09e45fad590cc8cab1e3b44d3748f9e3b..64706c9643018dcd154a4b201c13a1ac11bbb215 100644
--- a/docs/exposing-prometheus-alertmanager-grafana-ingress.md
+++ b/docs/exposing-prometheus-alertmanager-grafana-ingress.md
@@ -1,15 +1,15 @@
 ---
-title: "Expose via Ingress"
-description: "How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana."
-lead: "How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana."
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 500
 toc: true
+title: Expose via Ingress
+menu:
+  docs:
+    parent: kube
+lead: How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana.
+images: []
+draft: false
+description: How to setup a Kubernetes Ingress to expose the Prometheus, Alertmanager and Grafana.
+date: "2021-03-08T23:04:32+01:00"
 ---

 In order to access the web interfaces via the Internet [Kubernetes Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) is a popular option. This guide explains, how Kubernetes Ingress can be setup, in order to expose the Prometheus, Alertmanager and Grafana UIs, that are included in the [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus) project.
diff --git a/docs/kube-prometheus-on-kubeadm.md b/docs/kube-prometheus-on-kubeadm.md
index 37610593986f3a83d5f82dd0cecdbb7f81ba9f93..d15567e01e13ae5eb9c4037308977bed79424905 100644
--- a/docs/kube-prometheus-on-kubeadm.md
+++ b/docs/kube-prometheus-on-kubeadm.md
@@ -1,15 +1,15 @@
 ---
-title: "Deploy to kubeadm"
-description: "Deploy kube-prometheus to Kubernets kubeadm."
-lead: "Deploy kube-prometheus to Kubernets kubeadm."
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 500
 toc: true
+title: Deploy to kubeadm
+menu:
+  docs:
+    parent: kube
+lead: Deploy kube-prometheus to Kubernetes kubeadm.
+images: []
+draft: false
+description: Deploy kube-prometheus to Kubernetes kubeadm.
+date: "2021-03-08T23:04:32+01:00"
 ---
 The [kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) tool is linked by Kubernetes as the offical way to deploy and manage self-hosted clusters. kubeadm does a lot of heavy lifting by automatically configuring your Kubernetes cluster with some common options.
 This guide is intended to show you how to deploy Prometheus, Prometheus Operator and Kube Prometheus to get you started monitoring your cluster that was deployed with kubeadm.
@@ -93,7 +93,6 @@ Once you complete this guide you will monitor the following:
 * kube-scheduler
 * kube-controller-manager

-
 ## Getting Up and Running Fast with Kube-Prometheus

 To help get started more quickly with monitoring Kubernetes clusters, [kube-prometheus](https://github.com/coreos/kube-prometheus) was created. It is a collection of manifests including dashboards and alerting rules that can easily be deployed. It utilizes the Prometheus Operator and all the manifests demonstrated in this guide.
diff --git a/docs/migration-example/readme.md b/docs/migration-example/readme.md
index 5e9def04d79c4924a6456de8893aba8b09af161b..3b0ba18b00b79aa0b8c717512cbb68d66182c50f 100644
--- a/docs/migration-example/readme.md
+++ b/docs/migration-example/readme.md
@@ -2,9 +2,9 @@

 An example conversion of a legacy custom jsonnet file to release-0.8 format
 can be seen by viewing and comparing this
-[release-0.3 jsonnet file](./my.release-0.3.jsonnet) (when the github
+[release-0.3 jsonnet file](my.release-0.3.jsonnet) (when the github
 repo was under `https://github.com/coreos/kube-prometheus...`)
-and the corresponding [release-0.8 jsonnet file](./my.release-0.8.jsonnet).
+and the corresponding [release-0.8 jsonnet file](my.release-0.8.jsonnet).

 These two files have had necessary blank lines added so that they can be
 compared side-by-side and line-by-line on screen.
@@ -16,8 +16,9 @@ release-0.3 and also the major migration after release-0.7 as described in
 The sample files are intended as an example of format conversion and not
 necessarily best practice for the files in release-0.3 or release-0.8.

-Below are three sample extracts of the conversion as an indication of the
+Below are three sample extracts of the conversion as an indication of the
 changes required.
+
 <table>
 <tr>
 <th> release-0.3 </th>
diff --git a/docs/migration-guide.md b/docs/migration-guide.md
index a33a8b6186fe0785b73b2f0476feac21209730dd..04eed3891fecdf2802d712f52fe2c82d88ad19f6 100644
--- a/docs/migration-guide.md
+++ b/docs/migration-guide.md
@@ -33,14 +33,14 @@ Thanks to our community we identified a lot of short-commings of previous design
 Those concepts were already present in the repository but it wasn't clear which file is holding what. After refactoring we categorized jsonnet code into 3 buckets and put them into separate directories:

 - `components` - main building blocks for kube-prometheus, written as functions responsible for creating multiple objects representing kubernetes manifests. For example all objects for node_exporter deployment are bundled in `components/node_exporter.libsonnet` library
-- `addons` - everything that can enhance kube-prometheus deployment. Those are small snippets of code adding a small feature, for example adding anti-affinity to pods via [`addons/anti-affinity.libsonnet`][antiaffinity]. Addons are meant to be used in object-oriented way like `local kp = (import 'kube-prometheus/main.libsonnet') + (import 'kube-prometheus/addons/all-namespaces.libsonnet')`
+- `addons` - everything that can enhance kube-prometheus deployment. Those are small snippets of code adding a small feature, for example adding anti-affinity to pods via [`addons/anti-affinity.libsonnet`](https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/addons/anti-affinity.libsonnet). Addons are meant to be used in object-oriented way like `local kp = (import 'kube-prometheus/main.libsonnet') + (import 'kube-prometheus/addons/all-namespaces.libsonnet')`
 - `platforms` - currently those are `addons` specialized to allow deploying kube-prometheus project on a specific platform.

 ### Component configuration

-Refactoring main components to use functions allowed us to define APIs for said components. Each function has a default set of parameters that can be overridden or that are required to be set by a user. Those default parameters are represented in each component by `defaults` map at the top of each library file, for example in [`node_exporter.libsonnet`][node_exporter_defaults_example].
+Refactoring main components to use functions allowed us to define APIs for said components. Each function has a default set of parameters that can be overridden or that are required to be set by a user. Those default parameters are represented in each component by `defaults` map at the top of each library file, for example in [`node_exporter.libsonnet`](https://github.com/prometheus-operator/kube-prometheus/blob/1d2a0e275af97948667777739a18b24464480dc8/jsonnet/kube-prometheus/components/node-exporter.libsonnet#L3-L34).

-This API is meant to ease the use of kube-prometheus as parameters can be passed from a JSON file and don't need to be in jsonnet format. However, if you need to modify particular parts of the stack, jsonnet allows you to do this and we are also not restricting such access in any way. An example of such modifications can be seen in any of our `addons`, like the [`addons/anti-affinity.libsonnet`][antiaffinity] one.
+This API is meant to ease the use of kube-prometheus as parameters can be passed from a JSON file and don't need to be in jsonnet format. However, if you need to modify particular parts of the stack, jsonnet allows you to do this and we are also not restricting such access in any way. An example of such modifications can be seen in any of our `addons`, like the [`addons/anti-affinity.libsonnet`](https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/addons/anti-affinity.libsonnet) one.

 ### Mixin integration

@@ -63,25 +63,14 @@ All examples from `examples/` directory were adapted to the new codebase. [Pleas

 ## Legacy migration

-An example of conversion of a legacy release-0.3 my.jsonnet file to release-0.8 can be found in [migration-example](./migration-example)
+An example of conversion of a legacy release-0.3 my.jsonnet file to release-0.8 can be found in [migration-example](migration-example)

 ## Advanced usage examples

 For more advanced usage examples you can take a look at those two, open to public, implementations:

-- [thaum-xyz/ankhmorpork][thaum] - extending kube-prometheus to adapt to a required environment
-- [openshift/cluster-monitoring-operator][openshift] - using kube-prometheus components as standalone libraries to build a custom solution
+- [thaum-xyz/ankhmorpork](https://github.com/thaum-xyz/ankhmorpork/blob/master/apps/monitoring/jsonnet) - extending kube-prometheus to adapt to a required environment
+- [openshift/cluster-monitoring-operator](https://github.com/openshift/cluster-monitoring-operator/pull/1044) - using kube-prometheus components as standalone libraries to build a custom solution

 ## Final note

-Refactoring was a huge undertaking and possibly this document didn't describe in enough detail how to help you with migration to the new stack.
-If that is the case, please reach out to us by using [GitHub discussions][discussions] feature or directly on [#prometheus-operator kubernetes slack channel][slack].
-
-
-[antiaffinity]: https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/addons/anti-affinity.libsonnet
-
-[node_exporter_defaults_example]: https://github.com/prometheus-operator/kube-prometheus/blob/1d2a0e275af97948667777739a18b24464480dc8/jsonnet/kube-prometheus/components/node-exporter.libsonnet#L3-L34
-[openshift]: https://github.com/openshift/cluster-monitoring-operator/pull/1044
-[thaum]: https://github.com/thaum-xyz/ankhmorpork/blob/master/apps/monitoring/jsonnet
-
-[discussions]: https://github.com/prometheus-operator/kube-prometheus/discussions
-[slack]: http://slack.k8s.io/
+Refactoring was a huge undertaking and possibly this document didn't describe in enough detail how to help you with migration to the new stack. If that is the case, please reach out to us by using [GitHub discussions](https://github.com/prometheus-operator/kube-prometheus/discussions) feature or directly on [#prometheus-operator kubernetes slack channel](http://slack.k8s.io/).
diff --git a/docs/monitoring-external-etcd.md b/docs/monitoring-external-etcd.md
index 24ca7c08fd6fc910c34bdfa399d5f1ce10c51cf7..a49642b53c1403a7376ca6b0099e23e6e00e9e77 100644
--- a/docs/monitoring-external-etcd.md
+++ b/docs/monitoring-external-etcd.md
@@ -1,23 +1,23 @@
 ---
-title: "Monitoring external etcd"
-description: "This guide will help you monitor an external etcd cluster."
-lead: "This guide will help you monitor an external etcd cluster."
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 640
 toc: true
+title: Monitoring external etcd
+menu:
+  docs:
+    parent: kube
+lead: This guide will help you monitor an external etcd cluster.
+images: []
+draft: false
+description: This guide will help you monitor an external etcd cluster.
+date: "2021-03-08T23:04:32+01:00"
 ---

 When the etcd cluster is not hosted inside Kubernetes. This is often the case with Kubernetes setups.
 This approach has been tested with kube-aws but the same principals apply to other tools.

 Note that [etcd.jsonnet](../examples/etcd.jsonnet) & [static-etcd.libsonnet](../jsonnet/kube-prometheus/addons/static-etcd.libsonnet) (which are described by a section of the [Readme](../README.md#static-etcd-configuration)) do the following:
- * Put the three etcd TLS client files (CA & cert & key) into a secret in the namespace, and have Prometheus Operator load the secret.
- * Create the following (to expose etcd metrics - port 2379): a Service, Endpoint, & ServiceMonitor.
+* Put the three etcd TLS client files (CA & cert & key) into a secret in the namespace, and have Prometheus Operator load the secret.
+* Create the following (to expose etcd metrics - port 2379): a Service, Endpoint, & ServiceMonitor.

 # Step 1: Open the port

@@ -26,6 +26,7 @@ You now need to allow the nodes Prometheus are running on to talk to the etcd on
 If using kube-aws, you will need to edit the etcd security group inbound, specifying the security group of your Kubernetes node (worker) as the source.

 ## kube-aws and EIP or ENI inconsistency
+
 With kube-aws, each etcd node has two IP addresses:

 * EC2 instance IP
@@ -40,6 +41,7 @@ Another idea woud be to use the DNS entries of etcd, but those are not currently
 # Step 2: verify

 Go to the Prometheus UI on :9090/config and check that you have an etcd job entry:
+
 ```
 - job_name: monitoring/etcd-k8s/0
   scrape_interval: 30s
@@ -48,6 +50,5 @@ Go to the Prometheus UI on :9090/config and check that you have an etcd job entr
 ```

 On the :9090/targets page:
- * You should see "etcd" with the UP state. If not, check the Error column for more information.
- * If no "etcd" targets are even shown on this page, prometheus isn't attempting to scrape it.
-
+* You should see "etcd" with the UP state. If not, check the Error column for more information.
+* If no "etcd" targets are even shown on this page, prometheus isn't attempting to scrape it.
diff --git a/docs/monitoring-other-namespaces.md b/docs/monitoring-other-namespaces.md
index dc111b69d42059ff16e7917002e234ca265e322f..8e7b6599357faeffe0cefd9992ee1c334de2aba6 100644
--- a/docs/monitoring-other-namespaces.md
+++ b/docs/monitoring-other-namespaces.md
@@ -1,24 +1,26 @@
 ---
-title: "Monitoring other Namespaces"
-description: "This guide will help you monitor applications in other Namespaces."
-lead: "This guide will help you monitor applications in other Namespaces."
-date: 2021-03-08T23:04:32+01:00
-draft: false
-images: []
-menu:
-  docs:
-    parent: "kube"
 weight: 640
 toc: true
+title: Monitoring other Namespaces
+menu:
+  docs:
+    parent: kube
+lead: This guide will help you monitor applications in other Namespaces.
+images: []
+draft: false
+description: This guide will help you monitor applications in other Namespaces.
+date: "2021-03-08T23:04:32+01:00"
 ---

 This guide will help you monitor applications in other Namespaces. By default the RBAC rules are only enabled for the `Default` and `kube-system` Namespace during Install.

 # Setup
+
 You have to give the list of the Namespaces that you want to be able to monitor. This is done in the variable `prometheus.roleSpecificNamespaces`. You usually set this in your `.jsonnet` file when building the manifests.

-Example to create the needed `Role` and `RoleBinding` for the Namespace `foo` :
+Example to create the needed `Role` and `RoleBinding` for the Namespace `foo`:
+
 ```
 local kp = (import 'kube-prometheus/main.libsonnet') + {
   _config+:: {
diff --git a/docs/weave-net-support.md b/docs/weave-net-support.md
index 2c9e1d12339cbf6f2e95176375054ac4f5cbc52b..3ffc30fe0b4e45518f6f6b8db98965c46d75e7a8 100644
--- a/docs/weave-net-support.md
+++ b/docs/weave-net-support.md
@@ -1,9 +1,11 @@
 # Setup Weave Net monitoring using kube-prometheus
+
 [Weave Net](https://kubernetes.io/docs/concepts/cluster-administration/networking/#weave-net-from-weaveworks) is a resilient and simple to use CNI provider for Kubernetes. A well monitored and observed CNI provider helps in troubleshooting Kubernetes networking problems.

 [Weave Net](https://www.weave.works/docs/net/latest/concepts/how-it-works/) emits [prometheus metrics](https://www.weave.works/docs/net/latest/tasks/manage/metrics/) for monitoring Weave Net. There are many ways to install Weave Net in your cluster. One of them is using [kops](https://github.com/kubernetes/kops/blob/master/docs/networking.md).

 Following this document, you can setup Weave Net monitoring for your cluster using kube-prometheus.
 ## Contents
+
 Using kube-prometheus and kubectl you will be able install the following for monitoring Weave Net in your cluster:
 1. [Service for Weave Net](https://gist.github.com/alok87/379c6234b582f555c141f6fddea9fbce) The service which the [service monitor](https://coreos.com/operators/prometheus/docs/latest/user-guides/cluster-monitoring.html) scrapes.
@@ -65,6 +67,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') +
 ```

 - After you have the required yamls file please run
+
 ```
 kubectl create -f prometheus-serviceWeaveNet.yaml
 kubectl create -f prometheus-serviceMonitorWeaveNet.yaml
diff --git a/docs/windows.md b/docs/windows.md
index dcdc2be9af41db17ad84259f57b5f039ae2dbcaf..6302a92434d95aa23b42ffc28c768106752fe136 100644
--- a/docs/windows.md
+++ b/docs/windows.md
@@ -1,11 +1,10 @@
 # Windows

-The [Windows addon](../examples/windows.jsonnet) adds the dashboards and rules from [kubernetes-monitoring/kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin#dashboards-for-windows-nodes).
+The [Windows addon](../examples/windows.jsonnet) adds the dashboards and rules from [kubernetes-monitoring/kubernetes-mixin](https://github.com/kubernetes-monitoring/kubernetes-mixin#dashboards-for-windows-nodes).

 Currently, Windows does not support running with [windows_exporter](https://github.com/prometheus-community/windows_exporter) in a pod so this add on uses [additional scrape configuration](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/additional-scrape-config.md) to set up a static config to scrape the node ports where windows_exporter is configured.

-
-The addon requires you to specify the node ips and ports where it can find the windows_exporter. See the [full example](../examples/windows.jsonnet) for setup.
+The addon requires you to specify the node ips and ports where it can find the windows_exporter. See the [full example](../examples/windows.jsonnet) for setup.

 ```
 local kp = (import 'kube-prometheus/main.libsonnet') +
diff --git a/jsonnet/kube-prometheus/platforms/README.md b/jsonnet/kube-prometheus/platforms/README.md
index 0517200bdab3f6f4acf46bbde0eca5bc059a6e12..c9a4b2327dd8d0d47a0a81b0775775c21d1949ac 100644
--- a/jsonnet/kube-prometheus/platforms/README.md
+++ b/jsonnet/kube-prometheus/platforms/README.md
@@ -1,3 +1,3 @@
 # Adding a new platform specific configuration

-Adding a new platform specific configuration requires to update the [README](../../../README.md#cluster-creation-tools) and the [platforms.libsonnet](./platforms.libsonnet) file by adding the platform to the list of existing ones. This allow the new platform to be discoverable and easily configurable by the users.
+Adding a new platform specific configuration requires updating the [README](../../../README.md#cluster-creation-tools) and the [platforms.libsonnet](platforms.libsonnet) file by adding the platform to the list of existing ones. This allows the new platform to be discoverable and easily configurable by the users.