# Monitoring all namespaces

If you want to monitor all namespaces in a cluster, you can add the following mixin. Also make sure to empty the list of namespaces defined in `prometheus`, so that RoleBindings are not created against them.

```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') +
           (import 'kube-prometheus/addons/all-namespaces.libsonnet') + {
  values+:: {
    common+: {
      namespace: 'monitoring',
    },
    prometheus+: {
      // An empty list prevents namespace-scoped RoleBindings from being
      // created; the addon gives Prometheus cluster-wide visibility instead.
      namespaces: [],
    },
  },
};

{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
{ ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
{ ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
{ ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
{ ['prometheus-' + name]: kp.prometheus[name] for name in std.objectFields(kp.prometheus) } +
{ ['grafana-' + name]: kp.grafana[name] for name in std.objectFields(kp.grafana) }
```

NOTE: This configuration can potentially make your cluster insecure, especially in a multi-tenant cluster. It gives Prometheus visibility over the whole cluster, which might not be expected in a scenario where certain namespaces are locked down for security reasons.

Proceed with creating `ServiceMonitor`s for the services in the namespaces you actually want to monitor.
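
As a minimal sketch, such a `ServiceMonitor` could be added to the jsonnet above. All names here (`my-app`, `my-namespace`, the `app` label, and the `http-metrics` port) are placeholders for your own workload, not part of kube-prometheus:

```jsonnet
local kp = (import 'kube-prometheus/main.libsonnet') + {
  // Hypothetical example application; adjust names and labels to your workload.
  exampleApplication: {
    serviceMonitorMyApp: {
      apiVersion: 'monitoring.coreos.com/v1',
      kind: 'ServiceMonitor',
      metadata: {
        name: 'my-app',
        namespace: 'my-namespace',
      },
      spec: {
        selector: {
          // Must match the labels on the Service exposing your metrics.
          matchLabels: { app: 'my-app' },
        },
        endpoints: [{
          // Name of the Service port serving Prometheus metrics.
          port: 'http-metrics',
          interval: '30s',
        }],
      },
    },
  },
};

{ ['my-app-' + name]: kp.exampleApplication[name] for name in std.objectFields(kp.exampleApplication) }
```

With the all-namespaces addon in place, Prometheus can pick up this `ServiceMonitor` without any additional per-namespace RoleBinding.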