Commit c8273cf9 authored by Paul Gier's avatar Paul Gier Committed by Frederic Branczyk

Scripts and readme (#258)


* Avoid race condition when deploying quickstart example

The namespace and CRD creation must happen before any dependent objects
are created, so we put these in a separate directory (manifests/setup)
where they can be created before the other objects.

Also made some minor updates to the README and added a couple of scripts
for the quickstarts.

Update travis script to avoid race condition

Signed-off-by: Paul Gier <pgier@redhat.com>

* simplify the example quickstart script and improve readme

Signed-off-by: Paul Gier <pgier@redhat.com>

* increase minikube memory to 6g for quickstart example
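The ordering fix this commit introduces amounts to a two-phase deployment; a minimal sketch of the intended sequence, assuming the generated `manifests/setup` directory from this commit:

```shell
# Phase 1: namespace and CustomResourceDefinitions only
kubectl create -f manifests/setup
# Wait until the API server actually serves the new CRD-backed resource type
until kubectl get servicemonitors --all-namespaces; do sleep 1; done
# Phase 2: everything else, including ServiceMonitor objects that depend on the CRDs
kubectl create -f manifests/
```

Splitting the apply this way removes the need to re-run a single `kubectl create -f manifests/` until the race resolves.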
parent 24aebaf9
Showing 80 additions and 34 deletions
@@ -71,13 +71,13 @@ This adapter is an Extension API Server and Kubernetes needs to have this fea...
 ### minikube
-In order to just try out this stack, start [minikube](https://github.com/kubernetes/minikube) with the following command:
+To try out this stack, start [minikube](https://github.com/kubernetes/minikube) with the following command:
 ```shell
-$ minikube delete && minikube start --kubernetes-version=v1.14.4 --memory=4096 --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
+$ minikube delete && minikube start --kubernetes-version=v1.16.0 --memory=6g --bootstrapper=kubeadm --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0
 ```
-The kube-prometheus stack includes a resource metrics API server, like the metrics-server does. So ensure the metrics-server plugin is disabled on minikube:
+The kube-prometheus stack includes a resource metrics API server, so the metrics-server addon is not necessary. Ensure the metrics-server addon is disabled on minikube:
 ```shell
 $ minikube addons disable metrics-server
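Once the stack is deployed, the adapter's resource metrics API can be checked directly; a quick sketch (the APIService name below is the standard resource metrics group, not something specific to this commit):

```shell
# The prometheus-adapter registers the resource metrics API; verify it is available
kubectl get apiservice v1beta1.metrics.k8s.io
# If the API is being served, kubectl top should return node metrics
kubectl top nodes
```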
@@ -90,20 +90,23 @@ $ minikube addons disable metrics-server
 This project is intended to be used as a library (i.e. the intent is not for you to create your own modified copy of this repository).
 Though for a quickstart a compiled version of the Kubernetes [manifests](manifests) generated with this library (specifically with `example.jsonnet`) is checked into this repository in order to try the content out quickly. To try out the stack un-customized run:
-* Simply create the stack:
+* Create the monitoring stack using the config in the `manifests` directory:
 ```shell
-$ kubectl create -f manifests/
-# It can take a few seconds for the above 'create manifests' command to fully create the following resources, so verify the resources are ready before proceeding.
-$ until kubectl get customresourcedefinitions servicemonitors.monitoring.coreos.com ; do date; sleep 1; echo ""; done
-$ until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
-$ kubectl apply -f manifests/ # This command sometimes may need to be done twice (to workaround a race condition).
+# Create the namespace and CRDs, and then wait for them to be available before creating the remaining resources
+kubectl create -f manifests/setup
+until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
+kubectl create -f manifests/
 ```
+We create the namespace and CustomResourceDefinitions first to avoid race conditions when deploying the monitoring components.
+Alternatively, the resources in both folders can be applied with a single command
+`kubectl create -f manifests/setup -f manifests`, but it may be necessary to run the command multiple times for all components to
+be created successfully.
 * And to teardown the stack:
 ```shell
-$ kubectl delete -f manifests/
+kubectl delete --ignore-not-found=true -f manifests/ -f manifests/setup
 ```
 ### Access the dashboards
@@ -187,8 +190,13 @@ local kp =
   },
 };
-{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{
+  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
+  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
+} +
+// serviceMonitor is separated so that it can be created after the CRDs are ready
+{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
 { ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
 { ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
@@ -212,7 +220,7 @@ set -o pipefail
 # Make sure to start with a clean 'manifests' dir
 rm -rf manifests
-mkdir manifests
+mkdir -p manifests/setup
 # optional, but we would like to generate yaml, not json
 jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml; rm -f {}' -- {}
...
@@ -9,7 +9,7 @@ set -o pipefail
 # Make sure to start with a clean 'manifests' dir
 rm -rf manifests
-mkdir manifests
+mkdir -p manifests/setup
 # optional, but we would like to generate yaml, not json
 jsonnet -J vendor -m manifests "${1-example.jsonnet}" | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml; rm -f {}' -- {}
...
@@ -24,8 +24,13 @@ local kp =
   },
 };
-{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{
+  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
+  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
+} +
+// serviceMonitor is separated so that it can be created after the CRDs are ready
+{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
 { ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
 { ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
...
@@ -12,8 +12,13 @@ local kp =
   },
 };
-{ ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-{ ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+{ ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+{
+  ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
+  for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
+} +
+// serviceMonitor is separated so that it can be created after the CRDs are ready
+{ 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
 { ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
 { ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
 { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
...
@@ -8,8 +8,13 @@ local kp =
 local manifests =
   // Uncomment line below to enable vertical auto scaling of kube-state-metrics
   //{ ['ksm-autoscaler-' + name]: kp.ksmAutoscaler[name] for name in std.objectFields(kp.ksmAutoscaler) } +
-  { ['00namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
-  { ['0prometheus-operator-' + name]: kp.prometheusOperator[name] for name in std.objectFields(kp.prometheusOperator) } +
+  { ['setup/0namespace-' + name]: kp.kubePrometheus[name] for name in std.objectFields(kp.kubePrometheus) } +
+  {
+    ['setup/prometheus-operator-' + name]: kp.prometheusOperator[name]
+    for name in std.filter((function(name) name != 'serviceMonitor'), std.objectFields(kp.prometheusOperator))
+  } +
+  // serviceMonitor is separated so that it can be created after the CRDs are ready
+  { 'prometheus-operator-serviceMonitor': kp.prometheusOperator.serviceMonitor } +
   { ['node-exporter-' + name]: kp.nodeExporter[name] for name in std.objectFields(kp.nodeExporter) } +
   { ['kube-state-metrics-' + name]: kp.kubeStateMetrics[name] for name in std.objectFields(kp.kubeStateMetrics) } +
   { ['alertmanager-' + name]: kp.alertmanager[name] for name in std.objectFields(kp.alertmanager) } +
...
 apiVersion: kustomize.config.k8s.io/v1beta1
 kind: Kustomization
 resources:
-- ./manifests/00namespace-namespace.yaml
-- ./manifests/0prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
-- ./manifests/0prometheus-operator-0podmonitorCustomResourceDefinition.yaml
-- ./manifests/0prometheus-operator-0prometheusCustomResourceDefinition.yaml
-- ./manifests/0prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
-- ./manifests/0prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
-- ./manifests/0prometheus-operator-clusterRole.yaml
-- ./manifests/0prometheus-operator-clusterRoleBinding.yaml
-- ./manifests/0prometheus-operator-deployment.yaml
-- ./manifests/0prometheus-operator-service.yaml
-- ./manifests/0prometheus-operator-serviceAccount.yaml
-- ./manifests/0prometheus-operator-serviceMonitor.yaml
 - ./manifests/alertmanager-alertmanager.yaml
 - ./manifests/alertmanager-secret.yaml
 - ./manifests/alertmanager-service.yaml
@@ -52,6 +40,7 @@ resources:
 - ./manifests/prometheus-adapter-serviceAccount.yaml
 - ./manifests/prometheus-clusterRole.yaml
 - ./manifests/prometheus-clusterRoleBinding.yaml
+- ./manifests/prometheus-operator-serviceMonitor.yaml
 - ./manifests/prometheus-prometheus.yaml
 - ./manifests/prometheus-roleBindingConfig.yaml
 - ./manifests/prometheus-roleBindingSpecificNamespaces.yaml
@@ -66,3 +55,14 @@ resources:
 - ./manifests/prometheus-serviceMonitorKubeControllerManager.yaml
 - ./manifests/prometheus-serviceMonitorKubeScheduler.yaml
 - ./manifests/prometheus-serviceMonitorKubelet.yaml
+- ./manifests/setup/0namespace-namespace.yaml
+- ./manifests/setup/prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
+- ./manifests/setup/prometheus-operator-0podmonitorCustomResourceDefinition.yaml
+- ./manifests/setup/prometheus-operator-0prometheusCustomResourceDefinition.yaml
+- ./manifests/setup/prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
+- ./manifests/setup/prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
+- ./manifests/setup/prometheus-operator-clusterRole.yaml
+- ./manifests/setup/prometheus-operator-clusterRoleBinding.yaml
+- ./manifests/setup/prometheus-operator-deployment.yaml
+- ./manifests/setup/prometheus-operator-service.yaml
+- ./manifests/setup/prometheus-operator-serviceAccount.yaml
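With the updated kustomization.yaml, the whole stack can also be applied through kustomize; a sketch (kubectl 1.14+ has kustomize built in as `-k`), with the same caveat as the single-command quickstart: a second run may be needed if CRD-backed objects are applied before their CRDs are established.

```shell
# Apply all resources listed in kustomization.yaml, run from the repository root
kubectl apply -k .
# If ServiceMonitor objects failed because the CRDs were not yet ready, re-run once
```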
#!/bin/bash
minikube delete
minikube addons disable metrics-server
minikube start \
--vm-driver=kvm2 \
--kubernetes-version=v1.16.0 \
--memory=6g \
--bootstrapper=kubeadm \
--extra-config=kubelet.authentication-token-webhook=true \
--extra-config=kubelet.authorization-mode=Webhook \
--extra-config=scheduler.address=0.0.0.0 \
--extra-config=controller-manager.address=0.0.0.0
#!/bin/bash
minikube delete
minikube addons disable metrics-server
minikube start \
--kubernetes-version=v1.16.0 \
--memory=6g \
--bootstrapper=kubeadm \
--extra-config=kubelet.authentication-token-webhook=true \
--extra-config=kubelet.authorization-mode=Webhook \
--extra-config=scheduler.address=0.0.0.0 \
--extra-config=controller-manager.address=0.0.0.0