
    Administrator Guide

    Learn how to configure and manage the Postgres Operator in your Kubernetes (K8s) environment.

    CRD registration and validation

    On startup, the operator will try to register the necessary CustomResourceDefinitions Postgresql and OperatorConfiguration. The latter will only get created if the POSTGRES_OPERATOR_CONFIGURATION_OBJECT environment variable is set in the deployment yaml and is not empty. If the CRDs already exist they will only be patched. If you do not wish the operator to create or update the CRDs, set the enable_crd_registration config option to false.

    CRDs are defined with an openAPIV3Schema structural schema against which new manifests of postgresql or OperatorConfiguration resources will be validated. On creation you can bypass the validation with kubectl create --validate=false.

    By default, the operator will register the CRDs in the all category so that resources are listed on kubectl get all commands. The crd_categories config option allows for customization of categories.
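
    As a minimal sketch, both options could be set in the operator ConfigMap like this (the values, and the comma-separated format for crd_categories, are assumptions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_crd_registration: "false"   # skip CRD creation/patching on startup
      crd_categories: "all,zalando"      # assumed example of custom categories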

    Upgrading the operator

    The Postgres Operator is upgraded by changing the Docker image within the deployment. Before doing so, it is recommended to check the release notes for new configuration options or changed behavior you might want to reflect in the ConfigMap or config CRD. For example, a new feature might be introduced that is enabled or disabled by default, and you may want to flip it with the corresponding option.

    When using helm, be aware that installing the new chart will not update the Postgresql and OperatorConfiguration CRD. Make sure to update them beforehand with the provided manifests in the crds folder. Otherwise, you might face errors about new Postgres manifest or configuration options being unknown to the CRD schema validation.

    Minor and major version upgrade

    Minor version upgrades for PostgreSQL are handled via updating the Spilo Docker image. The operator will carry out a rolling update of Pods which includes a switchover (planned failover) of the master to the Pod with the new minor version. The switchover should usually take less than 5 seconds, but clients still have to reconnect.
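
    For illustration, this typically means bumping the image in the operator configuration, roughly like this (a sketch using the global docker_image option; the tag is an assumed example):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      # assumed example tag; use the Spilo release matching your target minor version
      docker_image: ghcr.io/zalando/spilo-16:3.2-p3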

    Upgrade on cloning

    With cloning, the new cluster manifest must have a higher version string than the source cluster and will be created from a basebackup. Depending on the cluster size, downtime in this case can be significant as writes to the database should be stopped and all WAL files should be archived first before cloning is started. Therefore, use cloning only to test major version upgrades and check the compatibility of your app with a Postgres server of a higher version.

    In-place major version upgrade

    Starting with Spilo 13, Postgres Operator can run an in-place major version upgrade which is much faster than cloning. First, you need to make sure that the PGVERSION environment variable is set for the database pods. Since v1.6.0 the related option enable_pgversion_env_var is enabled by default.

    In-place major version upgrades can be configured to be executed by the operator with the major_version_upgrade_mode option. By default, it is enabled (mode: manual). In any case, altering the version in the manifest will trigger a rolling update of pods to update the PGVERSION env variable. Spilo's configure_spilo script will notice the version mismatch but start the current version again.
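
    For example, bumping the version in the cluster manifest could look like this (a sketch; the version numbers are assumed examples):

    spec:
      postgresql:
        version: "16"   # previously e.g. "15"; changing this triggers the rolling update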

    Next, the operator would call an upgrade script inside Spilo. When automatic upgrades are disabled (mode: off) the upgrade could still be run by a user from within the primary pod. This gives you full control over the point in time when the upgrade can be started (check also maintenance windows below). Exec into the container and run:

    python3 /scripts/inplace_upgrade.py N

    where N is the number of members of your cluster (see numberOfInstances). The upgrade is usually fast, well under one minute for most DBs. Note that changes become irreversible once pg_upgrade is called. To understand the upgrade procedure, refer to the corresponding PR in Spilo.
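
    From outside the pod, the same call could be issued roughly like this (the pod name and member count are assumed examples):

    kubectl exec -it demo-cluster-0 -- python3 /scripts/inplace_upgrade.py 3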

    When major_version_upgrade_mode is set to full, the operator will compare the version in the manifest with the configured minimal_major_version. If it is lower, the operator will start an automatic upgrade as described above. The configured major_target_version will be used as the new version. This option can be useful if you have to get rid of outdated major versions in your fleet. Please note that the operator does not patch the version in the manifest. Thus, the full mode can create drift between desired and actual state.
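
    A sketch of the relevant options in the operator ConfigMap (the values are illustrative assumptions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      major_version_upgrade_mode: "full"   # or "manual" (default) / "off"
      minimal_major_version: "13"          # clusters below this version get upgraded
      major_target_version: "16"           # version used for automatic upgrades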

    Upgrade during maintenance windows

    When maintenanceWindows are defined in the Postgres manifest the operator will trigger a major version upgrade only during these periods. Make sure they are at least twice as long as your configured resync_period to guarantee that operator actions can be triggered.
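
    For reference, maintenance windows are defined in the cluster manifest; a minimal sketch (the time ranges are assumed examples):

    spec:
      maintenanceWindows:
      - 01:00-06:00        # every day between 1am and 6am
      - Sat:00:00-04:00    # Saturdays between midnight and 4am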

    Upgrade annotations

    When an upgrade is executed, the operator sets an annotation in the PostgreSQL resource, either last-major-upgrade-success if the upgrade succeeds, or last-major-upgrade-failure if it fails. The value of the annotation is a timestamp indicating when the upgrade occurred.

    If a PostgreSQL resource contains a failure annotation, the operator will not attempt to retry the upgrade during a sync event. To remove the failure annotation, you can revert the PostgreSQL version back to the current version. This action will trigger the removal of the failure annotation.

    Non-default cluster domain

    If your cluster uses a DNS domain other than the default cluster.local, this needs to be set in the operator configuration (cluster_domain variable). This is used by the operator to connect to the clusters after creation.
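
    A sketch of setting it in the operator ConfigMap (the domain is an assumed example):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      cluster_domain: "mycluster.local"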

    Namespaces

    Select the namespace to deploy to

    The operator can run in a namespace other than default. For example, to use the test namespace, run the following before deploying the operator's manifests:

    kubectl create namespace test
    kubectl config set-context $(kubectl config current-context) --namespace=test

    All subsequent kubectl commands will work with the test namespace. The operator will run in this namespace and look up needed resources - such as its ConfigMap - there. Please note that the namespace for service accounts and cluster role bindings in operator RBAC rules needs to be adjusted to the non-default value.

    Specify the namespace to watch

    Watching a namespace for an operator means tracking requests to change Postgres clusters in the namespace such as "increase the number of Postgres replicas to 5" and reacting to the requests, in this example by actually scaling up.

    By default, the operator watches the namespace it is deployed to. You can change this by setting the WATCHED_NAMESPACE var in the env section of the operator deployment manifest or by altering the watched_namespace field in the operator configuration. If both are set, the env var takes precedence. To make the operator listen to all namespaces, explicitly set the field/env var to "*".
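
    For example, in the env section of the operator deployment manifest:

    env:
    - name: WATCHED_NAMESPACE
      value: "*"   # watch all namespaces; use a namespace name to restrict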

    Note that for an operator to manage pods in the watched namespace, the operator's service account (as specified in the operator deployment manifest) has to have appropriate privileges to access the watched namespace. The operator may not be able to function if it watches all namespaces but lacks access rights to some of them (except K8s system namespaces like kube-system). The reason is that for multiple namespaces, operations such as 'list pods' execute at the cluster scope and fail at the first violation of access rights.

    Operators with defined ownership of certain Postgres clusters

    By default, multiple operators can only run together in one K8s cluster when isolated into their own namespaces. But it is also possible to define ownership between operator instances and Postgres clusters so that they can all run in the same namespace or K8s cluster without interfering.

    First, define the CONTROLLER_ID environment variable in the operator deployment manifest. Then specify the ID in every Postgres cluster manifest you want this operator to watch using the "acid.zalan.do/controller" annotation:

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: demo-cluster
      annotations:
        "acid.zalan.do/controller": "second-operator"
    spec:
      ...

    Every other Postgres cluster which lacks the annotation will be ignored by this operator. Conversely, operators without a defined CONTROLLER_ID will ignore clusters with defined ownership of another operator.
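
    For reference, the corresponding setting on the operator side is the env var mentioned above; a sketch of the operator deployment's env section:

    env:
    - name: CONTROLLER_ID
      value: "second-operator"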

    Understanding rolling update of Spilo pods

    The operator logs reasons for a rolling update with the info level and a diff between the old and new StatefulSet specs with the debug level. To render the escape characters contained in the latter log entry, view it in the CLI with echo -e. Note that the resulting message will contain some noise because the PodTemplate used by the operator is yet to be updated with the default values used internally in K8s.

    The StatefulSet is replaced if the following properties change:

    • annotations
    • volumeClaimTemplates
    • template volumes

    The StatefulSet is replaced and a rolling update is triggered if the following properties differ between the old and new state:

    • container name, ports, image, resources, env, envFrom, securityContext and volumeMounts
    • template labels, annotations, service account, securityContext, affinity, priority class and termination grace period

    Note that changes in the SPILO_CONFIGURATION env variable under the bootstrap.dcs path are ignored for the diff. They will be applied through Patroni's REST API interface, following a restart of all instances.

    The operator also supports lazy updates of the Spilo image. In this case the StatefulSet is only updated, but no rolling update follows. This feature saves you a switchover - and hence downtime - when you know pods are re-started later anyway, for instance due to node rotation. To force a rolling update, disable this mode by setting enable_lazy_spilo_upgrade to false in the operator configuration and restart the operator pod.
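
    A sketch of the flag in the operator ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_lazy_spilo_upgrade: "true"   # set to "false" to force rolling updates on image changes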

    Delete protection via annotations

    To avoid accidental deletes of Postgres clusters the operator can check the manifest for two existing annotations containing the cluster name and/or the current date (in YYYY-MM-DD format). The name of the annotation keys can be defined in the configuration. By default, they are not set which disables the delete protection. Thus, one could choose to only go with one annotation.

    postgres-operator ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      delete_annotation_date_key: "delete-date"
      delete_annotation_name_key: "delete-clustername"

    OperatorConfiguration

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-operator-configuration
    configuration:
      kubernetes:
        delete_annotation_date_key: "delete-date"
        delete_annotation_name_key: "delete-clustername"

    Now, every cluster manifest must contain the configured annotation keys to trigger the delete process when running kubectl delete pg. Note that the Postgresql resource would still get deleted because the operator does not instruct the K8s API server to block it. Only the operator logs will tell that the delete criteria were not met.

    cluster manifest

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: demo-cluster
      annotations:
        delete-date: "2020-08-31"
        delete-clustername: "demo-cluster"
    spec:
      ...

    In case the resource has been deleted accidentally or the annotations were simply forgotten, it's safe to recreate the cluster with kubectl create. Existing Postgres clusters are not replaced by the operator. But when the original cluster still exists, the status will be CreateFailed at first. On the next sync event it should change to Running. However, because it is in fact a new resource for K8s, the UID and therefore the backup path to S3 will differ and trigger a rolling update of the pods.

    Owner References and Finalizers

    The Postgres Operator can set owner references to most of a cluster's child resources to improve monitoring with GitOps tools and enable cascading deletes. There are two exceptions:

    The operator would clean these resources up with its regular delete loop unless they got synced correctly. If for some reason the initial cluster sync fails, e.g. after a cluster creation or operator restart, a deletion of the cluster manifest might leave orphaned resources behind which the user has to clean up manually.

    Another option is to enable finalizers which first ensures the deletion of all child resources before the cluster manifest gets removed. There is a trade-off though: The deletion is only performed after the next two operator SYNC cycles with the first one setting a deletionTimestamp and the latter reacting to it. The final removal of the custom resource will add a DELETE event to the worker queue but the child resources are already gone at this point. If you do not desire this behavior consider enabling owner references instead.

    postgres-operator ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_finalizers: "false"
      enable_owner_references: "true"

    OperatorConfiguration

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-operator-configuration
    configuration:
      kubernetes:
        enable_finalizers: false
        enable_owner_references: true

    ⚠️ Please note, both options are disabled by default. When enabling owner references the operator cannot block cascading deletes, even when the delete protection annotations are in place. You would need a K8s admission controller that blocks the actual kubectl delete API call, e.g. based on existing annotations.

    Role-based access control for the operator

    The manifest operator-service-account-rbac.yaml defines the service account, cluster roles and bindings needed for the operator to function under access control restrictions. The file also includes a cluster role postgres-pod with privileges for Patroni to watch and manage pods and endpoints. To deploy the operator with these RBAC policies, use:

    kubectl create -f manifests/configmap.yaml
    kubectl create -f manifests/operator-service-account-rbac.yaml
    kubectl create -f manifests/postgres-operator.yaml
    kubectl create -f manifests/minimal-postgres-manifest.yaml

    Namespaced service account and role binding

    For each namespace the operator watches it creates (or reads) a service account and role binding to be used by the Postgres Pods. The service account is bound to the postgres-pod cluster role. The name and definitions of these resources can be configured. Note, that the operator performs no further syncing of namespaced service accounts and role bindings.

    Give K8s users access to create/list postgresqls

    By default postgresql custom resources can only be listed and changed by cluster admins. To allow read and/or write access to other human users apply the user-facing-clusterrole manifest:

    kubectl create -f manifests/user-facing-clusterroles.yaml

    It creates zalando-postgres-operator:user:view, :edit and :admin clusterroles that are aggregated into the K8s default roles.

    For Helm deployments setting rbac.createAggregateClusterRoles: true adds these clusterroles to the deployment.

    Password rotation in K8s secrets

    The operator regularly updates credentials in the K8s secrets if the enable_password_rotation option is set to true in the configuration. It happens only for LOGIN roles with an associated secret (manifest roles, default users from preparedDatabases). Furthermore, there are the following exceptions:

    1. Infrastructure role secrets since rotation should happen by the infrastructure.
    2. Team API roles that connect via OAuth2 and JWT token (no secrets to these roles anyway).
    3. Database owners since ownership on database objects can not be inherited.
    4. System users such as postgres, standby and pooler user.

    The interval of days can be set with password_rotation_interval (default 90 = 90 days, minimum 1). On each rotation the user name and password values are replaced in the K8s secret. They belong to a newly created user named after the original role plus rotation date in YYMMDD format. All privileges are inherited, meaning that migration scripts should still grant and revoke rights against the original role. The timestamp of the next rotation (in RFC 3339 format, UTC timezone) is written to the secret as well. Note, if the rotation interval is decreased it is reflected in the secrets only if the next rotation date is more days away than the new length of the interval.

    Pods still using the previous secret values which they keep in memory continue to connect to the database since the password of the corresponding user is not replaced. However, a retention policy can be configured for users created by the password rotation feature with password_rotation_user_retention. The operator will ensure that this period is at least twice as long as the configured rotation interval, hence the default of 180 = 180 days. When the creation date of a rotated user is older than the retention period it might not get removed immediately. Only on the next user rotation it is checked if users can get removed. Therefore, you might want to configure the retention to be a multiple of the rotation interval.
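
    A sketch of the relevant options in the operator ConfigMap (the values shown are the defaults described above):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_password_rotation: "true"
      password_rotation_interval: "90"          # days
      password_rotation_user_retention: "180"   # days; at least twice the interval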

    Password rotation for single users

    From the configuration, password rotation is enabled for all secrets with the mentioned exceptions. If you wish to first test rotation for a single user (or just have it enabled only for a few secrets) you can specify it in the cluster manifest. The rotation and retention intervals can only be configured globally.

    spec:
      usersWithSecretRotation:
      - foo_user
      - bar_reader_user

    Password replacement without extra users

    For some use cases where the secret is only used rarely - think of a flyway user running a migration script on pod start - we do not need to create extra database users but can replace only the password in the K8s secret. This type of rotation cannot be configured globally but specified in the cluster manifest:

    spec:
      usersWithInPlaceSecretRotation:
      - flyway
      - bar_owner_user

    This would be the recommended option to enable rotation in secrets of database owners, but only if they are not used as application users for regular read and write operations.

    Ignore rotation for certain users

    If you wish to globally enable password rotation but need certain users to opt out from it there are two ways. First, you can remove the user from the manifest's users section. The corresponding secret to this user will no longer be synced by the operator then.

    Secondly, if you want the operator to continue syncing the secret (e.g. to recreate if it got accidentally removed) but cannot allow it being rotated, add the user to the following list in your manifest:

    spec:
      usersIgnoringSecretRotation:
      - bar_user

    Turning off password rotation

    When password rotation is turned off again, the operator will check if the username value in the secret matches the original username and replace it with the latter. A new password is assigned and the nextRotation field is cleared. A final lookup for child (rotation) users to be removed is done, but they will only be dropped if the retention policy allows for it. This is to avoid sudden connection issues in pods which still use credentials of these users in memory. You have to remove these child users manually or re-enable password rotation with a smaller interval so they get cleaned up.

    Use taints and tolerations for dedicated PostgreSQL nodes

    To ensure Postgres pods are running on nodes without any other application pods, you can use taints and tolerations and configure the required toleration in the operator configuration.

    As an example, you can set the following node taint:

    kubectl taint nodes <nodeName> postgres=:NoSchedule

    And configure the toleration for the Postgres pods by adding the following line to the ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      toleration: "key:postgres,operator:Exists,effect:NoSchedule"

    For an OperatorConfiguration resource the toleration should be defined like this:

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-configuration
    configuration:
      kubernetes:
        toleration:
          postgres: "key:postgres,operator:Exists,effect:NoSchedule"

    Note that K8s version 1.13 brings taint-based eviction to the beta stage and enables it by default. Postgres pods by default receive tolerations for unreachable and noExecute taints with a timeout of 5m. Depending on your setup, you may want to adjust these parameters to prevent master pods from being evicted by the K8s runtime. To prevent eviction completely, specify the toleration by leaving out the tolerationSeconds value (similar to how Kubernetes' own DaemonSets are configured).

    Node readiness labels

    The operator can watch on certain node labels to detect e.g. the start of a Kubernetes cluster upgrade procedure and move master pods off the nodes to be decommissioned. Key-value pairs for these node readiness labels can be specified in the configuration (option name is in singular form):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      node_readiness_label: "status1:ready,status2:ready"

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-configuration
    configuration:
      kubernetes:
        node_readiness_label:
          status1: ready
          status2: ready

    The operator will create a nodeAffinity on the pods. This makes the node_readiness_label option the global configuration for defining node affinities for all Postgres clusters. You can have both cluster-specific and global affinity defined, and they will get merged on the pods. If node_readiness_label_merge is configured to "AND" the node readiness affinity will end up under the same matchExpressions section(s) from the manifest affinity.

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: environment
                operator: In
                values:
                - pci
              - key: status1
                operator: In
                values:
                - ready
              - key: status2
                ...

    If node_readiness_label_merge is set to "OR" (default) the readiness label affinity will be appended with its own expressions block:

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: environment
                ...
            - matchExpressions:
              - key: storage
                ...
            - matchExpressions:
              - key: status1
                ...
              - key: status2
                ...

    Enable pod anti affinity

    To ensure Postgres pods are running on different topologies, you can use pod anti affinity and configure the required topology in the operator configuration.

    Enable pod anti affinity by adding following line to the operator ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_pod_antiaffinity: "true"

    Likewise, when using an OperatorConfiguration resource add:

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-configuration
    configuration:
      kubernetes:
        enable_pod_antiaffinity: true

    By default, the type of pod anti affinity is requiredDuringSchedulingIgnoredDuringExecution. You can switch to preferredDuringSchedulingIgnoredDuringExecution by setting pod_antiaffinity_preferred_during_scheduling: true.

    By default, the topology key for the pod anti affinity is set to kubernetes.io/hostname. You can set another topology key, e.g. failure-domain.beta.kubernetes.io/zone. See built-in node labels for available topology keys.
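
    A sketch of both settings in the operator ConfigMap (the pod_antiaffinity_topology_key option name is an assumption here):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      pod_antiaffinity_preferred_during_scheduling: "true"
      pod_antiaffinity_topology_key: "failure-domain.beta.kubernetes.io/zone"   # assumed option name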

    Pod Disruption Budget

    By default the operator uses a PodDisruptionBudget (PDB) to protect the cluster from voluntary disruptions and hence unwanted DB downtime. The MinAvailable parameter of the PDB is set to 1 which prevents killing masters in single-node clusters and/or the last remaining running instance in a multi-node cluster.

    The PDB is only relaxed in two scenarios:

    • If a cluster is scaled down to 0 instances (e.g. for draining nodes)
    • If the PDB is disabled in the configuration (enable_pod_disruption_budget)

    The PDB is still in place, having MinAvailable set to 0. If enabled, it will be automatically set to 1 on scale up. Disabling PDBs helps avoid blocking Kubernetes upgrades in managed K8s environments at the cost of prolonged DB downtime. See PR #384 for the use case.
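
    A sketch of disabling the PDB via the operator ConfigMap:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      enable_pod_disruption_budget: "false"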

    Add cluster-specific labels

    In some cases, you might want to add labels that are specific to a given Postgres cluster, in order to identify its child objects. The typical use case is to add labels that identify the Pods created by the operator, in order to implement fine-grained NetworkPolicies.

    postgres-operator ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      inherited_labels: application,environment

    OperatorConfiguration

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-operator-configuration
    configuration:
      kubernetes:
        inherited_labels:
        - application
        - environment

    cluster manifest

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: demo-cluster
      labels:
        application: my-app
        environment: demo
    spec:
      ...

    network policy

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: netpol-example
    spec:
      podSelector:
        matchLabels:
          application: my-app
          environment: demo

    Custom Pod Environment Variables

    The operator will assign a set of environment variables to the database pods that cannot be overridden to guarantee core functionality. Only variables with 'WAL_' and 'LOG_' prefixes can be customized to allow for backup and log shipping to be specified differently. There are three ways to specify extra environment variables (or override existing ones) for database pods, detailed in the subsections below:

    • a ConfigMap referenced by the pod_environment_configmap option
    • a Secret referenced by the pod_environment_secret option
    • the env section in the cluster manifest

    The first two options must be referenced from the operator configuration making them global settings for all Postgres cluster the operator watches. One use case is a customized Spilo image that must be configured by extra environment variables. Another case could be to provide custom cloud provider or backup settings.

    The last option allows for specifying environment variables individually for every cluster via the env section in the manifest, for example, if you use individual backup locations for each of your clusters, or if you want to disable WAL archiving for a certain cluster by setting WAL_S3_BUCKET, WAL_GS_BUCKET or AZURE_STORAGE_ACCOUNT to an empty string.

    The operator will give precedence to environment variables in the following order (e.g. a variable defined in 4. overrides a variable with the same name in 5.):

    1. Assigned by the operator
    2. env section in cluster manifest
    3. Clone section (with WAL settings from operator config when s3_wal_path is empty)
    4. Standby section
    5. Pod environment secret via operator config
    6. Pod environment config map via operator config
    7. WAL and logical backup settings from operator config

    Via ConfigMap

    The ConfigMap with the additional settings is referenced in the operator's main configuration. A namespace can be specified along with the name. If left out, the configured default namespace of your K8s client will be used; if the ConfigMap is not found there, the Postgres cluster's namespace is used instead (when different):

    postgres-operator ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      # referencing config map with custom settings
      pod_environment_configmap: default/postgres-pod-config

    OperatorConfiguration

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-operator-configuration
    configuration:
      kubernetes:
        # referencing config map with custom settings
        pod_environment_configmap: default/postgres-pod-config

    referenced ConfigMap postgres-pod-config

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-pod-config
      namespace: default
    data:
      MY_CUSTOM_VAR: value

    The key-value pairs of the ConfigMap are then added as environment variables to the Postgres StatefulSet/pods.

    Via Secret

    The Secret with the additional variables is referenced in the operator's main configuration. To protect the values of the secret from being exposed in the pod spec they are each referenced as SecretKeyRef. This does not allow the secret to be in a different namespace than the pods, though.

    postgres-operator ConfigMap

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      # referencing secret with custom environment variables
      pod_environment_secret: postgres-pod-secrets

    OperatorConfiguration

    apiVersion: "acid.zalan.do/v1"
    kind: OperatorConfiguration
    metadata:
      name: postgresql-operator-configuration
    configuration:
      kubernetes:
        # referencing secret with custom environment variables
        pod_environment_secret: postgres-pod-secrets

    referenced Secret postgres-pod-secrets

    apiVersion: v1
    kind: Secret
    metadata:
      name: postgres-pod-secrets
      namespace: default
    data:
      MY_CUSTOM_VAR: dmFsdWU=

    The key-value pairs of the Secret are all accessible as environment variables to the Postgres StatefulSet/pods.

    Via Postgres Cluster Manifest

    It is possible to define environment variables directly in the Postgres cluster manifest to configure it individually. The variables must be listed under the env section in the same way you would do for containers. Global parameters served from a custom config map or secret will be overridden.

    apiVersion: "acid.zalan.do/v1"
    kind: postgresql
    metadata:
      name: acid-test-cluster
    spec:
      env:
      - name: wal_s3_bucket
        value: my-custom-bucket
      - name: minio_secret_key
        valueFrom:
          secretKeyRef:
            name: my-custom-secret
            key: minio_secret_key

    Limiting the number of min and max instances in clusters

    As a preventive measure, one can restrict the minimum and the maximum number of instances permitted by each Postgres cluster managed by the operator. If either min_instances or max_instances is set to a non-zero value, the operator may adjust the number of instances specified in the cluster manifest to match either the min or the max boundary. For instance, if a cluster manifest has 1 instance and min_instances is set to 3, the cluster will be created with 3 instances. By default, both parameters are set to -1.
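
    A sketch of both limits in the operator ConfigMap (the values are illustrative assumptions):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: postgres-operator
    data:
      min_instances: "2"   # -1 disables the lower bound
      max_instances: "5"   # -1 disables the upper bound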

    Load balancers and allowed IP ranges

    For any Postgres/Spilo cluster, the operator creates two separate K8s services: one for the master pod and one for replica pods. To expose these services to an outer network, one can attach load balancers to them by setting enableMasterLoadBalancer and/or enableReplicaLoadBalancer to true in the cluster manifest. In the case any of these variables are omitted from the manifest, the operator configuration settings enable_master_load_balancer and enable_replica_load_balancer apply. Note that the operator settings affect all Postgresql services running in all namespaces watched by the operator. If load balancing is enabled two default annotations will be applied to its services:

    • external-dns.alpha.kubernetes.io/hostname with the value defined by the operator configs master_dns_name_format and replica_dns_name_format. This value can't be overwritten. If any change to its value is needed, it MUST be done by changing the DNS format operator config parameters; and
    • service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout with a default value of "3600".
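
    For illustration, enabling load balancers per cluster could look like this in the manifest (a sketch; the allowedSourceRanges field and the CIDR are assumed examples):

    spec:
      enableMasterLoadBalancer: true
      enableReplicaLoadBalancer: false
      allowedSourceRanges:      # assumed field restricting client IP ranges
      - 10.0.0.0/8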