diff --git a/experimental/custom-metrics-api/README.md b/experimental/custom-metrics-api/README.md
index 91375a4294e66c00b979970d42ca3f0b7194fcc2..c5c7102ce6b6732232aa584e55fc44043054cc34 100644
--- a/experimental/custom-metrics-api/README.md
+++ b/experimental/custom-metrics-api/README.md
@@ -1,11 +1,21 @@
 # Custom Metrics API
 
-The custom metrics API allows the HPA v2 to scale on arbirary metrics.
+The custom metrics API allows the HPA v2 to scale based on arbitrary metrics.
 
-This directory contains an example deployment of the custom metrics API adapter using Prometheus as the backing monitoring system.
+This directory contains an example deployment which extends the Prometheus Adapter, deployed with kube-prometheus, to serve the [Custom Metrics API](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md) by talking to Prometheus running inside the cluster.
 
-In order to deploy the custom metrics adapter for Prometheus you need to generate TLS certficates used to serve the API. An example of how these could be generated can be found in `./gencerts.sh`, note that this is _not_ recommended to be used in production. You need to employ a secure PKI strategy, this is merely an example to get started and try it out quickly.
+Make sure you have the Prometheus Adapter up and running in the `monitoring` namespace.
 
-Once the generated `Secret` with the certificates is in place, you can deploy everything in the `monitoring` namespace using `./deploy.sh`.
+You can deploy everything in the `monitoring` namespace using `./deploy.sh`. When you're done, you can tear everything down using the `./teardown.sh` script.
+
+### Sample App
+
+Additionally, this directory contains a sample app that uses the [Horizontal Pod Autoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) to scale the Deployment's Pod replicas up and down as needed.
+Deploy this app by running `kubectl apply -f sample-app.yaml`.
+Make the app accessible on your system, for example by using `kubectl -n monitoring port-forward svc/sample-app 8080`. Next you need to put some load on its HTTP endpoints.
+
+A tool like [hey](https://github.com/rakyll/hey) is helpful for doing so: `hey -c 20 -n 100000000 http://localhost:8080/metrics`
+
+There is even more detailed information on this sample app at [luxas/kubeadm-workshop](https://github.com/luxas/kubeadm-workshop#deploying-the-prometheus-operator-for-monitoring-services-in-the-cluster).
diff --git a/experimental/custom-metrics-api/deploy.sh b/experimental/custom-metrics-api/deploy.sh
index a7324831e931de0d050bdeb5d93390538473cff8..1ac74878a542a58ce4aa5dba874d6484d2a61c7e 100644
--- a/experimental/custom-metrics-api/deploy.sh
+++ b/experimental/custom-metrics-api/deploy.sh
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/usr/bin/env bash
 
 kubectl apply -n monitoring -f custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
 kubectl apply -n monitoring -f custom-metrics-apiservice.yaml
diff --git a/experimental/custom-metrics-api/teardown.sh b/experimental/custom-metrics-api/teardown.sh
index 2287c799734ca83e9ba4b91924f7b649d26bb1f6..a62f685ec7089f7b0fb8395e51f9e332a754366f 100644
--- a/experimental/custom-metrics-api/teardown.sh
+++ b/experimental/custom-metrics-api/teardown.sh
@@ -1,4 +1,4 @@
-#!/usr/bin/env bash
+#!/usr/bin/env bash
 
 kubectl delete -n monitoring -f custom-metrics-apiserver-resource-reader-cluster-role-binding.yaml
 kubectl delete -n monitoring -f custom-metrics-apiservice.yaml
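
Once the adapter and the sample app described in the README above are deployed, a rough way to check that the aggregated API is serving metrics and that the HPA reacts to the `hey` load is sketched below. The metric name `http_requests` and the HPA name `sample-app` are assumptions based on the sample-app manifest and the adapter's configuration, not values confirmed by this diff; adjust them to match your setup (and `jq` is only used for readability).

```bash
# List the resources exposed through the custom metrics API; an error here
# usually means the APIService is not wired up to the adapter yet.
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

# Query a per-pod metric (the metric name `http_requests` is an assumption
# and depends on the adapter's configuration).
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/http_requests" | jq .

# Watch the HorizontalPodAutoscaler scale the sample app while `hey` runs
# (the HPA name `sample-app` is assumed from the sample-app manifest).
kubectl -n monitoring get hpa sample-app --watch
```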