diff --git a/README.md b/README.md
index ea01eb25fc6426bb557114fd7e9fffbfcc87053a..e6945dc2de35b55c76d36b3f554a467fa40ce485 100644
--- a/README.md
+++ b/README.md
@@ -48,21 +48,6 @@ If you are migrating from `release-0.7` branch or earlier please read [what chan
 - [Compile the manifests and apply](#compile-the-manifests-and-apply)
 - [Configuration](#configuration)
 - [Customization Examples](#customization-examples)
-  - [Cluster Creation Tools](#cluster-creation-tools)
-  - [Internal Registry](#internal-registry)
-  - [NodePorts](#nodeports)
-  - [Prometheus Object Name](#prometheus-object-name)
-  - [node-exporter DaemonSet namespace](#node-exporter-daemonset-namespace)
-  - [Alertmanager configuration](#alertmanager-configuration)
-  - [Adding additional namespaces to monitor](#adding-additional-namespaces-to-monitor)
-    - [Defining the ServiceMonitor for each additional Namespace](#defining-the-servicemonitor-for-each-additional-namespace)
-  - [Monitoring all namespaces](#monitoring-all-namespaces)
-  - [Static etcd configuration](#static-etcd-configuration)
-  - [Pod Anti-Affinity](#pod-anti-affinity)
-  - [Stripping container resource limits](#stripping-container-resource-limits)
-  - [Customizing Prometheus alerting/recording rules and Grafana dashboards](#customizing-prometheus-alertingrecording-rules-and-grafana-dashboards)
-  - [Exposing Prometheus/Alermanager/Grafana via Ingress](#exposing-prometheusalermanagergrafana-via-ingress)
-  - [Setting up a blackbox exporter](#setting-up-a-blackbox-exporter)
 - [Minikube Example](#minikube-example)
 - [Continuous Delivery](#continuous-delivery)
 - [Troubleshooting](#troubleshooting)
@@ -145,7 +130,7 @@ kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
 
 Prometheus, Grafana, and Alertmanager dashboards can be accessed quickly using `kubectl port-forward` after running the quickstart via the commands below. Kubernetes 1.10 or later is required.
 
-> Note: There are instructions on how to route to these pods behind an ingress controller in the [Exposing Prometheus/Alermanager/Grafana via Ingress](#exposingprometheusalermanagergrafana-via-ingress) section.
+> Note: There are instructions on how to route to these pods behind an ingress controller in the [Exposing Prometheus/Alertmanager/Grafana via Ingress](docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md) guide.
 
 Prometheus
 
@@ -367,357 +352,7 @@ The grafana definition is located in a different project (https://github.com/bra
 
 Jsonnet is a turing complete language, any logic can be reflected in it. It also has powerful merge functionalities, allowing sophisticated customizations of any kind simply by merging it into the object the library provides.
 
-### Cluster Creation Tools
-
-A common example is that not all Kubernetes clusters are created exactly the same way, meaning the configuration to monitor them may be slightly different. For the following clusters there are mixins available to easily configure them:
-
-* aws
-* bootkube
-* eks
-* gke
-* kops
-* kops_coredns
-* kubeadm
-* kubespray
-
-These mixins are selectable via the `platform` field of kubePrometheus:
-
-```jsonnet mdox-exec="cat examples/jsonnet-snippets/platform.jsonnet"
-(import 'kube-prometheus/main.libsonnet') +
-{
-  values+:: {
-    common+: {
-      platform: 'example-platform',
-    },
-  },
-}
-```
-
-### Internal Registry
-
-Some Kubernetes installations source all their images from an internal registry.
kube-prometheus supports this use case and helps the user synchronize every image it uses to the internal registry and generate manifests pointing at the internal registry. - -To produce the `docker pull/tag/push` commands that will synchronize upstream images to `internal-registry.com/organization` (after having run the `jb` command to populate the vendor directory): - -```shell -$ jsonnet -J vendor -S --tla-str repository=internal-registry.com/organization sync-to-internal-registry.jsonnet -$ docker pull k8s.gcr.io/addon-resizer:1.8.4 -$ docker tag k8s.gcr.io/addon-resizer:1.8.4 internal-registry.com/organization/addon-resizer:1.8.4 -$ docker push internal-registry.com/organization/addon-resizer:1.8.4 -$ docker pull quay.io/prometheus/alertmanager:v0.16.2 -$ docker tag quay.io/prometheus/alertmanager:v0.16.2 internal-registry.com/organization/alertmanager:v0.16.2 -$ docker push internal-registry.com/organization/alertmanager:v0.16.2 -... -``` - -The output of this command can be piped to a shell to be executed by appending `| sh`. - -Then to generate manifests with `internal-registry.com/organization`, use the `withImageRepository` mixin: - -```jsonnet mdox-exec="cat examples/internal-registry.jsonnet" -local mixin = import 'kube-prometheus/addons/config-mixins.libsonnet'; -local kp = (import 'kube-prometheus/main.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - }, -} + mixin.withImageRepository('internal-registry.com/organization'); - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } -``` - -### NodePorts - -Another mixin that may be useful for exploring the stack is to expose the UIs of Prometheus, Alertmanager and Grafana on NodePorts: - -```jsonnet mdox-exec="cat examples/jsonnet-snippets/node-ports.jsonnet" -(import 'kube-prometheus/main.libsonnet') + -(import 'kube-prometheus/addons/node-ports.libsonnet') -``` - -### Prometheus Object Name - -To give another customization example, the name of the `Prometheus` object provided by this library can be overridden: - -```jsonnet mdox-exec="cat examples/prometheus-name-override.jsonnet" -((import 'kube-prometheus/main.libsonnet') + { - prometheus+: { - prometheus+: { - metadata+: { - name: 'my-name', - }, - }, - }, - }).prometheus.prometheus -``` - -### node-exporter DaemonSet namespace - -Standard Kubernetes manifests are all written using [ksonnet-lib](https://github.com/ksonnet/ksonnet-lib/), so they can be modified with the mixins supplied by ksonnet-lib. 
For example to override the namespace of the node-exporter DaemonSet: - -```jsonnet mdox-exec="cat examples/ksonnet-example.jsonnet" -((import 'kube-prometheus/main.libsonnet') + { - nodeExporter+: { - daemonset+: { - metadata+: { - namespace: 'my-custom-namespace', - }, - }, - }, - }).nodeExporter.daemonset -``` - -### Alertmanager configuration - -The Alertmanager configuration is located in the `values.alertmanager.config` configuration field. In order to set a custom Alertmanager configuration simply set this field. - -```jsonnet mdox-exec="cat examples/alertmanager-config.jsonnet" -((import 'kube-prometheus/main.libsonnet') + { - values+:: { - alertmanager+: { - config: ||| - global: - resolve_timeout: 10m - route: - group_by: ['job'] - group_wait: 30s - group_interval: 5m - repeat_interval: 12h - receiver: 'null' - routes: - - match: - alertname: Watchdog - receiver: 'null' - receivers: - - name: 'null' - |||, - }, - }, - }).alertmanager.secret -``` - -In the above example the configuration has been inlined, but can just as well be an external file imported in jsonnet via the `importstr` function. - -```jsonnet mdox-exec="cat examples/alertmanager-config-external.jsonnet" -((import 'kube-prometheus/main.libsonnet') + { - values+:: { - alertmanager+: { - config: importstr 'alertmanager-config.yaml', - }, - }, - }).alertmanager.secret -``` - -### Adding additional namespaces to monitor - -In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$.values.namespace`. This is specified in `$.values.prometheus.namespaces`, to add new namespaces to monitor, simply append the additional namespaces: - -```jsonnet mdox-exec="cat examples/additional-namespaces.jsonnet" -local kp = (import 'kube-prometheus/main.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - - prometheus+: { - namespaces+: ['my-namespace', 'my-second-namespace'], - }, - }, -}; - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } -``` - -#### Defining the ServiceMonitor for each additional Namespace - -In order to Prometheus be able to discovery and scrape services inside the additional namespaces specified in previous step you need to define a ServiceMonitor resource. - -> Typically it is up to the users of a namespace to provision the ServiceMonitor resource, but in case you want to generate it with the same tooling as the rest of the cluster monitoring infrastructure, this is a guide on how to achieve this. - -You can define ServiceMonitor resources in your `jsonnet` spec. 
See the snippet bellow: - -```jsonnet mdox-exec="cat examples/additional-namespaces-servicemonitor.jsonnet" -local kp = (import 'kube-prometheus/main.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - prometheus+:: { - namespaces+: ['my-namespace', 'my-second-namespace'], - }, - }, - exampleApplication: { - serviceMonitorMyNamespace: { - apiVersion: 'monitoring.coreos.com/v1', - kind: 'ServiceMonitor', - metadata: { - name: 'my-servicemonitor', - namespace: 'my-namespace', - }, - spec: { - jobLabel: 'app', - endpoints: [ - { - port: 'http-metrics', - }, - ], - selector: { - matchLabels: { - 'app.kubernetes.io/name': 'myapp', - }, - }, - }, - }, - }, - -}; - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } + -{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) } -``` - -> NOTE: make sure your service resources have the right labels (eg. `'app': 'myapp'`) applied. Prometheus uses kubernetes labels to discover resources inside the namespaces. - -### Monitoring all namespaces - -In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in prometheus so that roleBindings are not created against them. - -```jsonnet mdox-exec="cat examples/all-namespaces.jsonnet" -local kp = (import 'kube-prometheus/main.libsonnet') + - (import 'kube-prometheus/addons/all-namespaces.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - prometheus+: { - namespaces: [], - }, - }, -}; - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } -``` - -> NOTE: This configuration can potentially make your cluster insecure especially in a multi-tenant cluster. This is because this gives Prometheus visibility over the whole cluster which might not be expected in a scenario when certain namespaces are locked down for security reasons. 
- -Proceed with [creating ServiceMonitors for the services in the namespaces](#defining-the-servicemonitor-for-each-additional-namespace) you actually want to monitor - -### Static etcd configuration - -In order to configure a static etcd cluster to scrape there is a simple [static-etcd.libsonnet](jsonnet/kube-prometheus/addons/static-etcd.libsonnet) mixin prepared - see [etcd.jsonnet](examples/etcd.jsonnet) for an example of how to use that mixin, and [Monitoring external etcd](docs/monitoring-external-etcd.md) for more information. - -> Note that monitoring etcd in minikube is currently not possible because of how etcd is setup. (minikube's etcd binds to 127.0.0.1:2379 only, and within host networking namespace.) - -### Pod Anti-Affinity - -To prevent `Prometheus` and `Alertmanager` instances from being deployed onto the same node when -possible, one can include the [kube-prometheus-anti-affinity.libsonnet](jsonnet/kube-prometheus/addons/anti-affinity.libsonnet) mixin: - -```jsonnet mdox-exec="cat examples/anti-affinity.jsonnet" -local kp = (import 'kube-prometheus/main.libsonnet') + - (import 'kube-prometheus/addons/anti-affinity.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - }, -}; - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } -``` - -### Stripping container resource limits - -Sometimes in small clusters, the CPU/memory limits can get high enough for alerts to be fired continuously. To prevent this, one can strip off the predefined limits. -To do that, one can import the following mixin - -```jsonnet mdox-exec="cat examples/strip-limits.jsonnet" -local kp = (import 'kube-prometheus/main.libsonnet') + - (import 'kube-prometheus/addons/strip-limits.libsonnet') + { - values+:: { - common+: { - namespace: 'monitoring', - }, - }, -}; - -{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + -{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + -{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + -{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + -{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + -{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + -{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } -``` - -### Customizing Prometheus alerting/recording rules and Grafana dashboards - -See [developing Prometheus rules and Grafana dashboards](docs/developing-prometheus-rules-and-grafana-dashboards.md) guide. 
-
-### Exposing Prometheus/Alermanager/Grafana via Ingress
-
-See [exposing Prometheus/Alertmanager/Grafana](docs/exposing-prometheus-alertmanager-grafana-ingress.md) guide.
-
-### Setting up a blackbox exporter
-
-```jsonnet
-local kp = (import 'kube-prometheus/kube-prometheus.libsonnet') +
-  // ... all necessary mixins ...
-  {
-    values+:: {
-      // ... configuration for other features ...
-      blackboxExporter+:: {
-        modules+:: {
-          tls_connect: {
-            prober: 'tcp',
-            tcp: {
-              tls: true
-            }
-          }
-        }
-      }
-    }
-  };
-
-{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-// ... other rendering blocks ...
-{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) }
-```
-
-Then describe the actual blackbox checks you want to run using `Probe` resources. Specify `blackbox-exporter.<namespace>.svc.cluster.local:9115` as the `spec.prober.url` field of the `Probe` resource.
-
-See the [blackbox exporter guide](docs/blackbox-exporter.md) for the list of configurable options and a complete example.
+To get started, we provide several customization examples in the [docs/customizations/](docs/customizations) directory.
 
 ## Minikube Example
 
diff --git a/docs/customizations/alertmanager-configuration.md b/docs/customizations/alertmanager-configuration.md
new file mode 100644
index 0000000000000000000000000000000000000000..f74c666cc1fdfb1a0786eb8d16a0a2d07bf422b6
--- /dev/null
+++ b/docs/customizations/alertmanager-configuration.md
@@ -0,0 +1,40 @@
+### Alertmanager configuration
+
+The Alertmanager configuration is located in the `values.alertmanager.config` field. To set a custom Alertmanager configuration, simply set this field.
+
+```jsonnet mdox-exec="cat examples/alertmanager-config.jsonnet"
+((import 'kube-prometheus/main.libsonnet') + {
+   values+:: {
+     alertmanager+: {
+       config: |||
+         global:
+           resolve_timeout: 10m
+         route:
+           group_by: ['job']
+           group_wait: 30s
+           group_interval: 5m
+           repeat_interval: 12h
+           receiver: 'null'
+           routes:
+           - match:
+               alertname: Watchdog
+             receiver: 'null'
+         receivers:
+         - name: 'null'
+       |||,
+     },
+   },
+ }).alertmanager.secret
+```
+
+In the above example the configuration has been inlined, but it can just as well be an external file imported in jsonnet via the `importstr` function.
+
+```jsonnet mdox-exec="cat examples/alertmanager-config-external.jsonnet"
+((import 'kube-prometheus/main.libsonnet') + {
+   values+:: {
+     alertmanager+: {
+       config: importstr 'alertmanager-config.yaml',
+     },
+   },
+ }).alertmanager.secret
+```
diff --git a/docs/customizations/components-name-namespace-overrides.md b/docs/customizations/components-name-namespace-overrides.md
new file mode 100644
index 0000000000000000000000000000000000000000..4f9bf10be7e8a50a0114b860fd038625155ae397
--- /dev/null
+++ b/docs/customizations/components-name-namespace-overrides.md
@@ -0,0 +1,56 @@
+### Components' name and namespace overrides
+
+It is possible to override the namespace where kube-prometheus is going to be deployed, as in the example below:
+
+```jsonnet
+local kp = (import 'kube-prometheus/main.libsonnet') +
+{
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+  },
+};
+```
+
+If preferred, it can be changed individually by component.
It is also possible to change the name of Prometheus and Alertmanager Custom Resources, like shown below: + +```jsonnet mdox-exec="cat examples/name-namespace-overrides.jsonnet" +local kp = (import 'kube-prometheus/main.libsonnet') + + { + values+:: { + common+: { + namespace: 'monitoring', + }, + + prometheus+: { + namespace: 'foo', + name: 'bar', + }, + + alertmanager+: { + namespace: 'bar', + name: 'foo', + }, + }, + }; + +{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } + +// Add the restricted psp to setup +{ + ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name] + for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator)) +} + +// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready +{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } + +{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } + +{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } + +{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + +{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } + +{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } + +{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + +{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + +{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + +{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) } +``` diff --git a/docs/developing-prometheus-rules-and-grafana-dashboards.md b/docs/customizations/developing-prometheus-rules-and-grafana-dashboards.md similarity index 97% rename from docs/developing-prometheus-rules-and-grafana-dashboards.md rename to docs/customizations/developing-prometheus-rules-and-grafana-dashboards.md index c13ccea6124725f1fd258cf3eb624d1f27945a39..8eaf61ea8fa97edb97fe3e04d8928c745138b27f 100644 --- a/docs/developing-prometheus-rules-and-grafana-dashboards.md +++ b/docs/customizations/developing-prometheus-rules-and-grafana-dashboards.md @@ -18,7 +18,7 @@ All manifests of kube-prometheus are generated using [jsonnet](https://jsonnet.o For both the Prometheus rules and the Grafana dashboards Kubernetes `ConfigMap`s are generated within kube-prometheus. In order to add additional rules and dashboards simply merge them onto the existing json objects. This document illustrates examples for rules as well as dashboards. -As a basis, all examples in this guide are based on the base example of the kube-prometheus [readme](../README.md): +As a basis, all examples in this guide are based on the base example of the kube-prometheus [readme](../../README.md): ```jsonnet mdox-exec="cat example.jsonnet" local kp = @@ -216,7 +216,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + { ### Changing default rules -Along with adding additional rules, we give the user the option to filter or adjust the existing rules imported by `kube-prometheus/main.libsonnet`. 
The recording rules can be found in [kube-prometheus/components/mixin/rules](../jsonnet/kube-prometheus/components/mixin/rules) and [kubernetes-mixin/rules](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/rules) while the alerting rules can be found in [kube-prometheus/components/mixin/alerts](../jsonnet/kube-prometheus/components/mixin/alerts) and [kubernetes-mixin/alerts](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/alerts). +Along with adding additional rules, we give the user the option to filter or adjust the existing rules imported by `kube-prometheus/main.libsonnet`. The recording rules can be found in [kube-prometheus/components/mixin/rules](../../jsonnet/kube-prometheus/components/mixin/rules) and [kubernetes-mixin/rules](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/rules) while the alerting rules can be found in [kube-prometheus/components/mixin/alerts](../../jsonnet/kube-prometheus/components/mixin/alerts) and [kubernetes-mixin/alerts](https://github.com/kubernetes-monitoring/kubernetes-mixin/tree/master/alerts). Knowing which rules to change, the user can now use functions from the [Jsonnet standard library](https://jsonnet.org/ref/stdlib.html) to make these changes. Below are examples of both a filter and an adjustment being made to the default rules. These changes can be assigned to a local variable and then added to the `local kp` object as seen in the examples above. @@ -394,7 +394,7 @@ local kp = (import 'kube-prometheus/main.libsonnet') + { ### Pre-rendered Grafana dashboards -As jsonnet is a superset of json, the jsonnet `import` function can be used to include Grafana dashboard json blobs. In this example we are importing a [provided example dashboard](../examples/example-grafana-dashboard.json). +As jsonnet is a superset of json, the jsonnet `import` function can be used to include Grafana dashboard json blobs. In this example we are importing a [provided example dashboard](../../examples/example-grafana-dashboard.json). ```jsonnet mdox-exec="cat examples/grafana-additional-rendered-dashboard-example.jsonnet" local kp = (import 'kube-prometheus/main.libsonnet') + { diff --git a/docs/exposing-prometheus-alertmanager-grafana-ingress.md b/docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md similarity index 96% rename from docs/exposing-prometheus-alertmanager-grafana-ingress.md rename to docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md index 64706c9643018dcd154a4b201c13a1ac11bbb215..ada5e22d4d40398f08f2fc4d0c1ada4a33892d9e 100644 --- a/docs/exposing-prometheus-alertmanager-grafana-ingress.md +++ b/docs/customizations/exposing-prometheus-alertmanager-grafana-ingress.md @@ -102,9 +102,9 @@ k.core.v1.list.new([ ]) ``` -In order to expose Alertmanager and Grafana, simply create additional fields containing an ingress object, but simply pointing at the `alertmanager` or `grafana` instead of the `prometheus-k8s` Service. Make sure to also use the correct port respectively, for Alertmanager it is also `web`, for Grafana it is `http`. Be sure to also specify the appropriate external URL. Note that the external URL for grafana is set in a different way than the external URL for Prometheus or Alertmanager. See [ingress.jsonnet](../examples/ingress.jsonnet) for how to set the Grafana external URL. 
+In order to expose Alertmanager and Grafana, simply create additional fields containing an ingress object, but pointing at the `alertmanager` or `grafana` Service instead of the `prometheus-k8s` Service. Make sure to use the correct port for each: for Alertmanager it is also `web`, while for Grafana it is `http`. Be sure to also specify the appropriate external URL. Note that the external URL for Grafana is set in a different way than the external URL for Prometheus or Alertmanager. See [ingress.jsonnet](../../examples/ingress.jsonnet) for how to set the Grafana external URL.
 
-In order to render the ingress objects similar to the other objects use as demonstrated in the [main readme](../README.md):
+In order to render the ingress objects similarly to the other objects, use the pattern demonstrated in the [main readme](../../README.md):
 
 ```
 { ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
@@ -119,4 +119,4 @@ In order to render the ingress objects similar to the other objects use as demon
 
 Note, that in comparison only the last line was added, the rest is identical to the original.
 
-See [ingress.jsonnet](../examples/ingress.jsonnet) for an example implementation.
+See [ingress.jsonnet](../../examples/ingress.jsonnet) for an example implementation.
diff --git a/docs/customizations/monitoring-additional-namespaces.md b/docs/customizations/monitoring-additional-namespaces.md
new file mode 100644
index 0000000000000000000000000000000000000000..3a8f183b19e1b357dd4e471ed25c5e21746e0ee6
--- /dev/null
+++ b/docs/customizations/monitoring-additional-namespaces.md
@@ -0,0 +1,81 @@
+### Monitoring additional namespaces
+
+In order to monitor additional namespaces, the Prometheus server requires the appropriate `Role` and `RoleBinding` to be able to discover targets from that namespace. By default the Prometheus server is limited to the three namespaces it requires: default, kube-system and the namespace you configure the stack to run in via `$.values.namespace`. This is specified in `$.values.prometheus.namespaces`; to add new namespaces to monitor, simply append the additional namespaces:
+
+```jsonnet mdox-exec="cat examples/additional-namespaces.jsonnet"
+local kp = (import 'kube-prometheus/main.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+
+    prometheus+: {
+      namespaces+: ['my-namespace', 'my-second-namespace'],
+    },
+  },
+};
+
+{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
+```
+
+#### Defining the ServiceMonitor for each additional Namespace
+
+For Prometheus to be able to discover and scrape services inside the additional namespaces specified in the previous step, you need to define a ServiceMonitor resource.
+
+> Typically it is up to the users of a namespace to provision the ServiceMonitor resource, but in case you want to generate it with the same tooling as the rest of the cluster monitoring infrastructure, this guide shows how to achieve it.
+
+You can define ServiceMonitor resources in your `jsonnet` spec. See the snippet below:
+
+```jsonnet mdox-exec="cat examples/additional-namespaces-servicemonitor.jsonnet"
+local kp = (import 'kube-prometheus/main.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+    prometheus+:: {
+      namespaces+: ['my-namespace', 'my-second-namespace'],
+    },
+  },
+  exampleApplication: {
+    serviceMonitorMyNamespace: {
+      apiVersion: 'monitoring.coreos.com/v1',
+      kind: 'ServiceMonitor',
+      metadata: {
+        name: 'my-servicemonitor',
+        namespace: 'my-namespace',
+      },
+      spec: {
+        jobLabel: 'app',
+        endpoints: [
+          {
+            port: 'http-metrics',
+          },
+        ],
+        selector: {
+          matchLabels: {
+            'app.kubernetes.io/name': 'myapp',
+          },
+        },
+      },
+    },
+  },
+
+};
+
+{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +
+{ ['example-application-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
+```
+
+> NOTE: make sure your service resources have the right labels (e.g. `'app.kubernetes.io/name': 'myapp'`) applied. Prometheus uses Kubernetes labels to discover resources inside the namespaces.
diff --git a/docs/customizations/monitoring-all-namespaces.md b/docs/customizations/monitoring-all-namespaces.md
new file mode 100644
index 0000000000000000000000000000000000000000..0db18f1f0e437b51674e7f05c1bfc2ae35424596
--- /dev/null
+++ b/docs/customizations/monitoring-all-namespaces.md
@@ -0,0 +1,29 @@
+### Monitoring all namespaces
+
+In case you want to monitor all namespaces in a cluster, you can add the following mixin. Also, make sure to empty the namespaces defined in the `prometheus` field so that RoleBindings are not created against them.
+
+```jsonnet mdox-exec="cat examples/all-namespaces.jsonnet"
+local kp = (import 'kube-prometheus/main.libsonnet') +
+  (import 'kube-prometheus/addons/all-namespaces.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+    prometheus+: {
+      namespaces: [],
+    },
+  },
+};
+
+{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
+```
+
+> NOTE: This configuration can potentially make your cluster insecure, especially in a multi-tenant cluster. This is because it gives Prometheus visibility over the whole cluster, which might not be expected in scenarios where certain namespaces are locked down for security reasons.
+
+Proceed with [creating ServiceMonitors for the services in the namespaces](monitoring-additional-namespaces.md#defining-the-servicemonitor-for-each-additional-namespace) you actually want to monitor.
diff --git a/docs/customizations/node-ports.md b/docs/customizations/node-ports.md
new file mode 100644
index 0000000000000000000000000000000000000000..6483b29fb1cb490e17152182a09a989b887b9826
--- /dev/null
+++ b/docs/customizations/node-ports.md
@@ -0,0 +1,8 @@
+### NodePorts
+
+A mixin that may be useful for exploring the stack is one that exposes the UIs of Prometheus, Alertmanager and Grafana on NodePorts:
+
+```jsonnet mdox-exec="cat examples/jsonnet-snippets/node-ports.jsonnet"
+(import 'kube-prometheus/main.libsonnet') +
+(import 'kube-prometheus/addons/node-ports.libsonnet')
+```
diff --git a/docs/customizations/platform-specific.md b/docs/customizations/platform-specific.md
new file mode 100644
index 0000000000000000000000000000000000000000..3552d4580b1d3986532477e860512d3e717ba29f
--- /dev/null
+++ b/docs/customizations/platform-specific.md
@@ -0,0 +1,25 @@
+### Running kube-prometheus on specific platforms
+
+Not all Kubernetes clusters are created exactly the same way, so the configuration needed to monitor them may be slightly different.
For the following clusters, there are mixins available to easily configure them:
+
+* aws
+* bootkube
+* eks
+* gke
+* kops
+* kops_coredns
+* kubeadm
+* kubespray
+
+These mixins are selectable via the `platform` field of kubePrometheus:
+
+```jsonnet mdox-exec="cat examples/jsonnet-snippets/platform.jsonnet"
+(import 'kube-prometheus/main.libsonnet') +
+{
+  values+:: {
+    common+: {
+      platform: 'example-platform',
+    },
+  },
+}
+```
diff --git a/docs/customizations/pod-anti-affinity.md b/docs/customizations/pod-anti-affinity.md
new file mode 100644
index 0000000000000000000000000000000000000000..34812257a166b8a7553f242c4b26bd994419870a
--- /dev/null
+++ b/docs/customizations/pod-anti-affinity.md
@@ -0,0 +1,23 @@
+### Pod Anti-Affinity
+
+To prevent `Prometheus` and `Alertmanager` instances from being deployed onto the same node when
+possible, one can include the [anti-affinity.libsonnet](../../jsonnet/kube-prometheus/addons/anti-affinity.libsonnet) mixin:
+
+```jsonnet mdox-exec="cat examples/anti-affinity.jsonnet"
+local kp = (import 'kube-prometheus/main.libsonnet') +
+  (import 'kube-prometheus/addons/anti-affinity.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+  },
+};
+
+{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
+```
diff --git a/docs/customizations/static-etcd-configuration.md b/docs/customizations/static-etcd-configuration.md
new file mode 100644
index 0000000000000000000000000000000000000000..9e8ad49bd6a139a33226be05e03813cea630d016
--- /dev/null
+++ b/docs/customizations/static-etcd-configuration.md
@@ -0,0 +1,66 @@
+### Static etcd configuration
+
+To configure scraping of a static etcd cluster, a simple [static-etcd.libsonnet](../../jsonnet/kube-prometheus/addons/static-etcd.libsonnet) mixin is provided.
+
+An example of how to use it can be seen below:
+
+```jsonnet mdox-exec="cat examples/etcd.jsonnet"
+local kp = (import 'kube-prometheus/main.libsonnet') +
+  (import 'kube-prometheus/addons/static-etcd.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+
+    etcd+: {
+      // Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use commas to separate multiple values).
+      ips: ['127.0.0.1'],
+
+      // Reference info:
+      // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec (has endpoints)
+      // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint (has tlsConfig)
+      // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#tlsconfig (has: caFile, certFile, keyFile, serverName, & insecureSkipVerify)
+
+      // Set these three variables to the fully qualified directory path on your work machine to the certificate files that are valid to scrape etcd metrics with (check the apiserver container).
+ // Most likely these certificates are generated somewhere in an infrastructure repository, so using the jsonnet `importstr` function can + // be useful here. (Kube-aws stores these three files inside the credential folder.) + // All the sensitive information on the certificates will end up in a Kubernetes Secret. + clientCA: importstr 'etcd-client-ca.crt', + clientKey: importstr 'etcd-client.key', + clientCert: importstr 'etcd-client.crt', + + // Note that you should specify a value EITHER for 'serverName' OR for 'insecureSkipVerify'. (Don't specify a value for both of them, and don't specify a value for neither of them.) + // * Specifying serverName: Ideally you should provide a valid value for serverName (and then insecureSkipVerify should be left as false - so that serverName gets used). + // * Specifying insecureSkipVerify: insecureSkipVerify is only to be used (i.e. set to true) if you cannot (based on how your etcd certificates were created) use a Subject Alternative Name. + // * If you specify a value: + // ** for both of these variables: When 'insecureSkipVerify: true' is specified, then also specifying a value for serverName won't hurt anything but it will be ignored. + // ** for neither of these variables: then you'll get authentication errors on the prom '/targets' page with your etcd targets. + + // A valid name (DNS or Subject Alternative Name) that the client (i.e. prometheus) will use to verify the etcd TLS certificate. + // * Note that doing `nslookup etcd.kube-system.svc.cluster.local` (on a pod in a K8s cluster where kube-prometheus has been installed) shows that kube-prometheus sets up this hostname. + // * `openssl x509 -noout -text -in etcd-client.pem` will print the Subject Alternative Names. + serverName: 'etcd.kube-system.svc.cluster.local', + + // When insecureSkipVerify isn't specified, the default value is "false". + //insecureSkipVerify: true, + + // In case you have generated the etcd certificate with kube-aws: + // * If you only have one etcd node, you can use the value from 'etcd.internalDomainName' (specified in your kube-aws cluster.yaml) as the value for 'serverName'. + // * But if you have multiple etcd nodes, you will need to use 'insecureSkipVerify: true' (if using default certificate generators method), as the valid certificate domain + // will be different for each etcd node. (kube-aws default certificates are not valid against the IP - they were created for the DNS.) + }, + }, +}; + +{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + +{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + +{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + +{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + +{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + +{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + +{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +``` + +If you'd like to monitor an etcd instance that lives outside the cluster, see [Monitoring external etcd](../monitoring-external-etcd.md) for more information. + +> Note that monitoring etcd in minikube is currently not possible because of how etcd is setup. (minikube's etcd binds to 127.0.0.1:2379 only, and within host networking namespace.) 
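+
+As a rough sketch, assuming the standard build workflow from the main README (with the customization above saved as `examples/etcd.jsonnet` and the jsonnet dependencies vendored via `jb`), the resulting manifests could then be rendered and applied along these lines:
+
+```shell
+# Render the customized manifests into the manifests/ directory (sketch, paths are assumptions).
+$ ./build.sh examples/etcd.jsonnet
+# Apply CRDs/namespace first, then the rest of the stack.
+$ kubectl apply -f manifests/setup
+$ kubectl apply -f manifests/
+```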
diff --git a/docs/customizations/strip-limits.md b/docs/customizations/strip-limits.md new file mode 100644 index 0000000000000000000000000000000000000000..a7e9e4c652327d05ca0103d19fecb82f4f9c7ef5 --- /dev/null +++ b/docs/customizations/strip-limits.md @@ -0,0 +1,23 @@ +### Stripping container resource limits + +Sometimes in small clusters, the CPU/memory limits can get high enough for alerts to be fired continuously. To prevent this, one can strip off the predefined limits. +To do that, one can import the following mixin + +```jsonnet mdox-exec="cat examples/strip-limits.jsonnet" +local kp = (import 'kube-prometheus/main.libsonnet') + + (import 'kube-prometheus/addons/strip-limits.libsonnet') + { + values+:: { + common+: { + namespace: 'monitoring', + }, + }, +}; + +{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } + +{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } + +{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + +{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + +{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + +{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } + +{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } +``` diff --git a/docs/customizations/using-custom-container-registry.md b/docs/customizations/using-custom-container-registry.md new file mode 100644 index 0000000000000000000000000000000000000000..f4ffab0bb6af29f35d2be2d4c7d7a299f414b654 --- /dev/null +++ b/docs/customizations/using-custom-container-registry.md @@ -0,0 +1,39 @@ +### Internal Registry + +Some Kubernetes installations source all their images from an internal registry. kube-prometheus supports this use case and helps the user synchronize every image it uses to the internal registry and generate manifests pointing at the internal registry. + +To produce the `docker pull/tag/push` commands that will synchronize upstream images to `internal-registry.com/organization` (after having run the `jb` command to populate the vendor directory): + +```shell +$ jsonnet -J vendor -S --tla-str repository=internal-registry.com/organization sync-to-internal-registry.jsonnet +$ docker pull k8s.gcr.io/addon-resizer:1.8.4 +$ docker tag k8s.gcr.io/addon-resizer:1.8.4 internal-registry.com/organization/addon-resizer:1.8.4 +$ docker push internal-registry.com/organization/addon-resizer:1.8.4 +$ docker pull quay.io/prometheus/alertmanager:v0.16.2 +$ docker tag quay.io/prometheus/alertmanager:v0.16.2 internal-registry.com/organization/alertmanager:v0.16.2 +$ docker push internal-registry.com/organization/alertmanager:v0.16.2 +... +``` + +The output of this command can be piped to a shell to be executed by appending `| sh`. 
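+
+For example, the whole synchronization can be run in one go by piping the generated commands straight to a shell (a sketch, reusing the hypothetical `internal-registry.com/organization` repository from above):
+
+```shell
+# Generate the docker pull/tag/push commands and execute them immediately.
+$ jsonnet -J vendor -S --tla-str repository=internal-registry.com/organization sync-to-internal-registry.jsonnet | sh
+```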
+
+Then to generate manifests with `internal-registry.com/organization`, use the `withImageRepository` mixin:
+
+```jsonnet mdox-exec="cat examples/internal-registry.jsonnet"
+local mixin = import 'kube-prometheus/addons/config-mixins.libsonnet';
+local kp = (import 'kube-prometheus/main.libsonnet') + {
+  values+:: {
+    common+: {
+      namespace: 'monitoring',
+    },
+  },
+} + mixin.withImageRepository('internal-registry.com/organization');
+
+{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
+{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
+{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
+{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
+{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
+```
diff --git a/docs/monitoring-external-etcd.md b/docs/monitoring-external-etcd.md
index a49642b53c1403a7376ca6b0099e23e6e00e9e77..ac3c58138ecd4a2629c8a8c68db813d9a74628f9 100644
--- a/docs/monitoring-external-etcd.md
+++ b/docs/monitoring-external-etcd.md
@@ -15,7 +15,7 @@ date: "2021-03-08T23:04:32+01:00"
 When the etcd cluster is not hosted inside Kubernetes. This is often the case with Kubernetes setups. This approach has been tested with kube-aws but the same principals apply to other tools.
 
-Note that [etcd.jsonnet](../examples/etcd.jsonnet) & [static-etcd.libsonnet](../jsonnet/kube-prometheus/addons/static-etcd.libsonnet) (which are described by a section of the [Readme](../README.md#static-etcd-configuration)) do the following:
+Note that [etcd.jsonnet](../examples/etcd.jsonnet) & [static-etcd.libsonnet](../jsonnet/kube-prometheus/addons/static-etcd.libsonnet) (which are described in the [static etcd customization guide](customizations/static-etcd-configuration.md)) do the following:
 
 * Put the three etcd TLS client files (CA & cert & key) into a secret in the namespace, and have Prometheus Operator load the secret.
 * Create the following (to expose etcd metrics - port 2379): a Service, Endpoint, & ServiceMonitor.
diff --git a/examples/etcd.jsonnet b/examples/etcd.jsonnet
index 7126ee314b330a418790056ca1b2ed5453c92193..bcfd93ae0b372fdb7a9d7bd2b31a28ca0220e5ac 100644
--- a/examples/etcd.jsonnet
+++ b/examples/etcd.jsonnet
@@ -5,15 +5,14 @@ local kp = (import 'kube-prometheus/main.libsonnet') +
       namespace: 'monitoring',
     },
 
-    // Reference info: https://github.com/coreos/kube-prometheus/blob/master/README.md#static-etcd-configuration
     etcd+: {
       // Configure this to be the IP(s) to scrape - i.e. your etcd node(s) (use commas to separate multiple values).
ips: ['127.0.0.1'], // Reference info: - // * https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#servicemonitorspec (has endpoints) - // * https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint (has tlsConfig) - // * https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig (has: caFile, certFile, keyFile, serverName, & insecureSkipVerify) + // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitorspec (has endpoints) + // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#endpoint (has tlsConfig) + // * https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#tlsconfig (has: caFile, certFile, keyFile, serverName, & insecureSkipVerify) // Set these three variables to the fully qualified directory path on your work machine to the certificate files that are valid to scrape etcd metrics with (check the apiserver container). // Most likely these certificates are generated somewhere in an infrastructure repository, so using the jsonnet `importstr` function can diff --git a/examples/ksonnet-example.jsonnet b/examples/ksonnet-example.jsonnet deleted file mode 100644 index 36640ab4309282810785c58cb6a1c91e9f59bdd5..0000000000000000000000000000000000000000 --- a/examples/ksonnet-example.jsonnet +++ /dev/null @@ -1,9 +0,0 @@ -((import 'kube-prometheus/main.libsonnet') + { - nodeExporter+: { - daemonset+: { - metadata+: { - namespace: 'my-custom-namespace', - }, - }, - }, - }).nodeExporter.daemonset diff --git a/examples/name-namespace-overrides.jsonnet b/examples/name-namespace-overrides.jsonnet new file mode 100644 index 0000000000000000000000000000000000000000..5c1007ab8d0b73483543ca93c9e887913804dd94 --- /dev/null +++ b/examples/name-namespace-overrides.jsonnet @@ -0,0 +1,37 @@ +local kp = (import 'kube-prometheus/main.libsonnet') + + { + values+:: { + common+: { + namespace: 'monitoring', + }, + + prometheus+: { + namespace: 'foo', + name: 'bar', + }, + + alertmanager+: { + namespace: 'bar', + name: 'foo', + }, + }, + }; + +{ 'setup/0namespace-namespace': kp.kubePrometheus.namespace } + +// Add the restricted psp to setup +{ + ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name] + for name in std.filter((function(name) name != 'serviceMonitor' && name != 'prometheusRule'), std.objectFields(kp.prometheusOperator)) +} + +// serviceMonitor and prometheusRule are separated so that they can be created after the CRDs are ready +{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } + +{ 'prometheus-operator-prometheusRule': kp.prometheusOperator.prometheusRule } + +{ 'kube-prometheus-prometheusRule': kp.kubePrometheus.prometheusRule } + +{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } + +{ ['blackbox-exporter-' + name]: kp.blackboxExporter[name] for name in std.objectFields(kp.blackboxExporter) } + +{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) } + +{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } + +{ ['kubernetes-' + name]: kp.kubernetesControlPlane[name] for name in std.objectFields(kp.kubernetesControlPlane) } +{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } + +{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } 
+
+{ ['prometheus-adapter-' + name]: kp.prometheusAdapter[name] for name in std.objectFields(kp.prometheusAdapter) }
diff --git a/examples/prometheus-name-override.jsonnet b/examples/prometheus-name-override.jsonnet
deleted file mode 100644
index b6c3906059b88c938d60e84dd1a7f603fbfadbc5..0000000000000000000000000000000000000000
--- a/examples/prometheus-name-override.jsonnet
+++ /dev/null
@@ -1,9 +0,0 @@
-((import 'kube-prometheus/main.libsonnet') + {
-   prometheus+: {
-     prometheus+: {
-       metadata+: {
-         name: 'my-name',
-       },
-     },
-   },
- }).prometheus.prometheus
diff --git a/jsonnet/kube-prometheus/platforms/README.md b/jsonnet/kube-prometheus/platforms/README.md
index c9a4b2327dd8d0d47a0a81b0775775c21d1949ac..65cabd70322d71e79ab6755b8b7928fb9380d39e 100644
--- a/jsonnet/kube-prometheus/platforms/README.md
+++ b/jsonnet/kube-prometheus/platforms/README.md
@@ -1,3 +1,3 @@
 # Adding a new platform specific configuration
 
-Adding a new platform specific configuration requires to update the [README](../../../README.md#cluster-creation-tools) and the [platforms.libsonnet](platforms.libsonnet) file by adding the platform to the list of existing ones. This allow the new platform to be discoverable and easily configurable by the users.
+Adding a new platform-specific configuration requires updating the [customization example](../../../docs/customizations/platform-specific.md#running-kube-prometheus-on-specific-platforms) and the [platforms.libsonnet](platforms.libsonnet) file by adding the platform to the list of existing ones. This allows the new platform to be discoverable and easily configurable by the users.