    Quickstart

    This guide aims to give you a quick look and feel for using the Postgres Operator on a local Kubernetes environment.

    Prerequisites

    Since the Postgres Operator is designed for the Kubernetes (K8s) framework, set up a K8s environment first. For local tests we recommend one of the following solutions:

    • minikube, which creates a single-node K8s cluster inside a VM (requires KVM or VirtualBox),
    • kind and k3d, which allow creating multi-node K8s clusters running on Docker (requires Docker)

    To interact with the K8s infrastructure, install its CLI tool kubectl.

    This quickstart assumes that you have started minikube or created a local kind cluster. Note that you can also use the built-in K8s support in Docker Desktop for Mac to follow the steps of this tutorial. In that case, replace minikube start and minikube delete with the corresponding launch and teardown actions of the Docker built-in K8s support.

    Configuration Options

    Configuring the Postgres Operator is only possible before deploying a new Postgres cluster. This can be done in one of two ways: via a ConfigMap or via a custom OperatorConfiguration object. More details on configuration can be found here.
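As an illustration, a minimal CRD-based configuration object might look like the sketch below. The field value is illustrative only; see the configuration reference for the full set of options:

```yaml
apiVersion: "acid.zalan.do/v1"
kind: OperatorConfiguration
metadata:
  name: postgresql-operator-default-configuration
configuration:
  # illustrative value: number of parallel worker routines the operator runs
  workers: 8
```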

    Deployment options

    The Postgres Operator can be deployed in the following ways:

    • Manual deployment
    • Kustomization
    • Helm chart

    Manual deployment setup on Kubernetes

    The Postgres Operator can be installed simply by applying YAML manifests. Note that we provide the /manifests directory as an example only; you should consider adjusting the manifests to your K8s environment (e.g. namespaces).

    # First, clone the repository and change to the directory
    git clone https://github.com/zalando/postgres-operator.git
    cd postgres-operator
    
    # apply the manifests in the following order
    kubectl create -f manifests/configmap.yaml  # configuration
    kubectl create -f manifests/operator-service-account-rbac.yaml  # identity and permissions
    kubectl create -f manifests/postgres-operator.yaml  # deployment
    kubectl create -f manifests/api-service.yaml  # operator API to be used by UI

    There is a Kustomization manifest that combines the mentioned resources (except for the CRD) - it can be used with kubectl 1.14 or newer as easily as:

    kubectl apply -k github.com/zalando/postgres-operator/manifests

    For convenience, we have automated starting the operator with minikube using the run_operator_locally script. It applies the acid-minimal-cluster manifest.

    ./run_operator_locally.sh

    Manual deployment setup on OpenShift

    To install the Postgres Operator in OpenShift, you have to change the config parameter kubernetes_use_configmaps to "true". Otherwise, the operator and Patroni would store leader and config keys in Endpoints, which are not supported in OpenShift. This also requires a slightly different set of rules for the postgres-operator and postgres-pod cluster roles.

    oc create -f manifests/operator-service-account-rbac-openshift.yaml
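In the ConfigMap-based setup, the corresponding entry might look like this (an excerpt sketch of manifests/configmap.yaml; all other keys omitted):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-operator
data:
  # store leader and config keys in ConfigMaps instead of Endpoints (required on OpenShift)
  kubernetes_use_configmaps: "true"
```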

    Helm chart

    Alternatively, the operator can be installed by using the provided Helm chart, which saves you the manual steps. The charts for both the Postgres Operator and its UI are hosted via the gh-pages branch. They work only with Helm v3; Helm v2 support was dropped with v1.8.0.

    # add repo for postgres-operator
    helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
    
    # install the postgres-operator
    helm install postgres-operator postgres-operator-charts/postgres-operator
    
    # add repo for postgres-operator-ui
    helm repo add postgres-operator-ui-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator-ui
    
    # install the postgres-operator-ui
    helm install postgres-operator-ui postgres-operator-ui-charts/postgres-operator-ui

    Check if Postgres Operator is running

    Starting the operator may take a few seconds. Check if the operator pod is running before applying a Postgres cluster manifest.

    # if you've created the operator using yaml manifests
    kubectl get pod -l name=postgres-operator
    
    # if you've created the operator using helm chart
    kubectl get pod -l app.kubernetes.io/name=postgres-operator

    If the operator doesn't get into Running state, either check the latest K8s events of the deployment or pod with kubectl describe or inspect the operator logs:

    kubectl logs "$(kubectl get pod -l name=postgres-operator --output='name')"

    Deploy the operator UI

    In the following paragraphs we describe how to access and manage PostgreSQL clusters from the command line with kubectl. However, this can also be done from the browser-based Postgres Operator UI. Before deploying the UI, make sure the operator is running and its REST API is reachable through a K8s service. The URL of this API must be configured in the deployment manifest of the UI.

    To deploy the UI, simply apply all of its manifest files or use the UI Helm chart:

    # manual deployment
    kubectl apply -f ui/manifests/
    
    # or kustomization
    kubectl apply -k github.com/zalando/postgres-operator/ui/manifests
    
    # or helm chart
    helm install postgres-operator-ui ./charts/postgres-operator-ui

    Like with the operator, check if the UI pod gets into Running state:

    # if you've created the operator using yaml manifests
    kubectl get pod -l name=postgres-operator-ui
    
    # if you've created the operator using helm chart
    kubectl get pod -l app.kubernetes.io/name=postgres-operator-ui

    You can now access the web interface by port-forwarding the UI service and entering localhost:8081 in your browser:

    kubectl port-forward svc/postgres-operator-ui 8081:80

    Available options are explained in detail in the UI docs.

    Create a Postgres cluster

    Once the operator pod is running, it listens for new events regarding postgresql resources. Now it's time to submit your first Postgres cluster manifest.

    # create a Postgres cluster
    kubectl create -f manifests/minimal-postgres-manifest.yaml

    After the cluster manifest is submitted and has passed validation, the operator will create Service and Endpoint resources and a StatefulSet, which spins up new pod(s) given the number of instances specified in the manifest. All resources are named after the cluster. The database pods can be identified by their number suffix, starting from -0. They run the Spilo container image by Zalando. As for the services and endpoints, there will be one for the master pod and another one for all the replicas (-repl suffix). Check if all components are coming up. Use the label application=spilo to filter, and the label spilo-role to see which pod is currently the master.

    # check the deployed cluster
    kubectl get postgresql
    
    # check created database pods
    kubectl get pods -l application=spilo -L spilo-role
    
    # check created service resources
    kubectl get svc -l application=spilo -L spilo-role

    Connect to the Postgres cluster via psql

    You can create a port-forward on a database pod to connect to Postgres. See the user guide for instructions. With minikube it's also easy to retrieve the connection string from the K8s service that is pointing to the master pod:

    export HOST_PORT=$(minikube service acid-minimal-cluster --url | sed 's,.*/,,')
    export PGHOST=$(echo $HOST_PORT | cut -d: -f 1)
    export PGPORT=$(echo $HOST_PORT | cut -d: -f 2)
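As a sanity check, the cut-based parsing above splits a host:port string as follows. The sample value below is made up and not a real minikube endpoint:

```shell
# Made-up sample; `minikube service ... --url` would print something like http://192.168.49.2:31234,
# and the sed step in the snippet above strips everything up to the last slash.
HOST_PORT="192.168.49.2:31234"
PGHOST=$(echo "$HOST_PORT" | cut -d: -f 1)
PGPORT=$(echo "$HOST_PORT" | cut -d: -f 2)
echo "$PGHOST $PGPORT"   # prints: 192.168.49.2 31234
```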

    Retrieve the password from the K8s Secret that is created in your cluster. Non-encrypted connections are rejected by default, so set the SSL mode to require:

    export PGPASSWORD=$(kubectl get secret postgres.acid-minimal-cluster.credentials.postgresql.acid.zalan.do -o 'jsonpath={.data.password}' | base64 -d)
    export PGSSLMODE=require
    psql -U postgres
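The base64 -d step is what turns the Secret's encoded payload into the plain-text password. For example, with a made-up value (not a real credential):

```shell
# Made-up base64 payload for illustration only
ENCODED="c3VwZXJzZWNyZXQ="
echo "$ENCODED" | base64 -d   # prints: supersecret
```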

    Delete a Postgres cluster

    To delete a Postgres cluster simply delete the postgresql custom resource.

    kubectl delete postgresql acid-minimal-cluster

    This should remove the associated StatefulSet, database Pods, Services and Endpoints. The PersistentVolumes are released and the PodDisruptionBudget is deleted. Secrets, however, are not deleted, and backups will remain in place.

    If you delete a cluster while it is still starting up, or while it is stuck in that phase, the postgresql resource may be removed while orphaned components are left behind. This can cause trouble when creating a new Postgres cluster. For a fresh setup, you can delete your local minikube or kind cluster and start again.