Commit b7e1b32d authored by Sergey Dudoladov

Merge branch 'test/417' of https://github.com/zalando-incubator/postgres-operator into test/417

parents b50587cc 71a23730
Branches test/417
Showing changed files with 304 additions and 49 deletions
# https://github.com/golangci/golangci/wiki/Configuration
service:
prepare:
- make deps
@@ -4,6 +4,7 @@
[![Coverage Status](https://coveralls.io/repos/github/zalando-incubator/postgres-operator/badge.svg)](https://coveralls.io/github/zalando-incubator/postgres-operator)
[![Go Report Card](https://goreportcard.com/badge/github.com/zalando-incubator/postgres-operator)](https://goreportcard.com/report/github.com/zalando-incubator/postgres-operator)
[![GoDoc](https://godoc.org/github.com/zalando-incubator/postgres-operator?status.svg)](https://godoc.org/github.com/zalando-incubator/postgres-operator)
[![golangci](https://golangci.com/badges/github.com/zalando-incubator/postgres-operator.svg)](https://golangci.com/r/github.com/zalando-incubator/postgres-operator)

## Introduction
@@ -90,6 +91,8 @@ cd postgres-operator
./run_operator_locally.sh
```
Note we provide the `/manifests` directory as an example only; you should consider adjusting the manifests to your particular setting.
## Running and testing the operator

The best way to test the operator is to run it locally in [minikube](https://kubernetes.io/docs/getting-started-guides/minikube/). See the developer docs (`docs/developer.yaml`) for details.
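For a quick local loop, the steps boil down to the commands already shipped in the repo (a sketch, assuming minikube and kubectl are installed and configured):

```bash
# start a local Kubernetes cluster and run the operator against it
minikube start
cd postgres-operator
./run_operator_locally.sh
```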
......
@@ -41,12 +41,12 @@ manifests:
```bash
$ kubectl create namespace test
$ kubectl config set-context $(kubectl config current-context) --namespace=test
```
All subsequent `kubectl` commands will work with the `test` namespace. The
operator will run in this namespace and look up needed resources - such as its
config map - there. Please note that the namespace for service accounts and
cluster role bindings in [operator RBAC rules](manifests/operator-service-account-rbac.yaml)
needs to be adjusted to the non-default value.
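For illustration, the service-account stanza of that manifest would then carry the new namespace (a sketch; the account name mirrors the `pod_service_account_name` default used elsewhere in this commit, and the rest of the file is elided):

```yaml
# manifests/operator-service-account-rbac.yaml (excerpt, adjusted)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: zalando-postgres-operator
  namespace: test  # adjusted from the default namespace
```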
## Specify the namespace to watch
......
@@ -51,7 +51,9 @@ Please, report any issues discovered to https://github.com/zalando-incubator/pos
## Talks
1. "PostgreSQL and Kubernetes: DBaaS without a vendor-lock" talk by Oleksii Kliukin, PostgreSQL Sessions 2018: [slides](https://speakerdeck.com/alexeyklyukin/postgresql-and-kubernetes-dbaas-without-a-vendor-lock)
2. "PostgreSQL High Availability on Kubernetes with Patroni" talk by Oleksii Kliukin, Atmosphere 2018: [video](https://www.youtube.com/watch?v=cFlwQOPPkeg) | [slides](https://speakerdeck.com/alexeyklyukin/postgresql-high-availability-on-kubernetes-with-patroni)
3. "Blue elephant on-demand: Postgres + Kubernetes" talk by Oleksii Kliukin and Jan Mussler, FOSDEM 2018: [video](https://fosdem.org/2018/schedule/event/blue_elephant_on_demand_postgres_kubernetes/) | [slides (pdf)](https://www.postgresql.eu/events/fosdem2018/sessions/session/1735/slides/59/FOSDEM%202018_%20Blue_Elephant_On_Demand.pdf)
......
@@ -97,6 +97,18 @@ Those are parameters grouped directly under the `spec` key in the manifest.
is taken from the `pod_priority_class_name` operator parameter, if not set
then the default priority class is taken. The priority class itself must be defined in advance.

* **enableShmVolume**
Start a database pod without limitations on shm memory. By default Docker
limits `/dev/shm` to `64M` (see e.g. the [docker
issue](https://github.com/docker-library/postgres/issues/416)), which may not
be enough if PostgreSQL uses parallel workers heavily. If this option is
present and set to `true`, a new tmpfs volume is mounted into the target
database pod to remove this limitation. If it's not present, the decision
about mounting the volume is made based on the operator configuration
(`enable_shm_volume`, which is `true` by default). If it's present and set to
`false`, no volume will be mounted no matter how the operator was configured
(so you can override the operator configuration), as sketched below.
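A minimal cluster-manifest sketch that forces the tmpfs mount regardless of the operator-level setting (cluster name and sizes are illustrative; the other keys follow the complete manifest changed later in this commit):

```yaml
apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster
spec:
  teamId: "ACID"
  numberOfInstances: 2
  volume:
    size: 1Gi
  enableShmVolume: true
```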
## Postgres parameters

Those parameters are grouped under the `postgresql` top-level key.
@@ -112,6 +124,7 @@ Those parameters are grouped under the `postgresql` top-level key.
cluster. Optional (Spilo automatically sets reasonable defaults for
parameters like work_mem or max_connections).
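For instance, a cluster manifest might pin the version and override a few settings (a sketch; the `version` and `parameters` keys appear in the example manifest below, the concrete values are illustrative):

```yaml
postgresql:
  version: "10"
  parameters:
    shared_buffers: "32MB"
    max_connections: "100"
    log_statement: "all"
```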
## Patroni parameters

Those parameters are grouped under the `patroni` top-level key. See the [patroni
......
@@ -10,29 +10,37 @@ configuration.
configuration structure. There is an
[example](https://github.com/zalando-incubator/postgres-operator/blob/master/manifests/configmap.yaml)

* CRD-based configuration. The configuration is stored in a custom YAML
manifest. The manifest is an instance of the custom resource definition (CRD) called
`OperatorConfiguration`. The operator registers this CRD
during the start and uses it for configuration if the [operator deployment manifest](https://github.com/zalando-incubator/postgres-operator/blob/master/manifests/postgres-operator.yaml#L21) sets the `POSTGRES_OPERATOR_CONFIGURATION_OBJECT` env variable to a non-empty value. The variable should point to the
`postgresql-operator-configuration` object in the operator's namespace.

The CRD-based configuration is a regular YAML
document; non-scalar keys are simply represented in the usual YAML way.

There are no default values built into the operator; each parameter that is
not supplied in the configuration receives an empty value. To
create your own configuration, just copy the [default
one](https://github.com/zalando-incubator/postgres-operator/blob/master/manifests/postgresql-operator-default-configuration.yaml)
and change it.

To test the CRD-based configuration locally, use the following:
```bash
kubectl create -f manifests/operator-service-account-rbac.yaml
kubectl create -f manifests/postgres-operator.yaml # set the env var as mentioned above
kubectl create -f manifests/postgresql-operator-default-configuration.yaml
kubectl get operatorconfigurations postgresql-operator-default-configuration -o yaml
```
Note that the operator first registers the definition of the CRD `OperatorConfiguration` and then waits for an instance of the CRD to be created. In between these two events the operator pod may keep failing, since it cannot fetch the not-yet-existing `OperatorConfiguration` instance.
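To watch this settle, one can poll the operator pod and its logs (a sketch; the `name=postgres-operator` label is an assumption about the deployment's pod labels):

```bash
# the pod may restart until the configuration object exists
kubectl get pods -l name=postgres-operator
kubectl logs "$(kubectl get pods -l name=postgres-operator -o name | head -n 1)"
```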
The CRD-based configuration is more powerful than the one based on
ConfigMaps and should be used unless there is a compatibility requirement to
use an already existing configuration. Even in that case, it should be rather
straightforward to convert the ConfigMap-based configuration into the CRD-based
one and restart the operator. The ConfigMap-based configuration will be
deprecated and subsequently removed in future releases.

Note that for the CRD-based configuration, the groups of configuration options below correspond
to the non-leaf keys in the target YAML (i.e. for the Kubernetes resources the
key is `kubernetes`). The key is mentioned alongside the group description. The
ConfigMap-based configuration is flat and does not allow non-leaf keys.
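As an illustration of the two layouts (keys taken from the defaults referenced above; the values are placeholders):

```yaml
# CRD-based configuration: non-leaf keys nest under group keys
configuration:
  workers: 4
  postgres_pod_resources:
    default_memory_request: 100Mi
    default_memory_limit: 1Gi
---
# ConfigMap-based configuration: flat, string-valued keys only
data:
  workers: "4"
  default_memory_request: "100Mi"
  default_memory_limit: "1Gi"
```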
@@ -46,7 +54,6 @@ They will be deprecated and removed in the future.
Variable names are underscore-separated words.

## General

Those are top-level keys, containing both leaf keys and groups.
@@ -221,6 +228,17 @@ CRD-based configuration.
memory limits for the postgres containers, unless overridden by cluster-specific
settings. The default is `1Gi`.
* **set_memory_request_to_limit**
Set `memory_request` to `memory_limit` for all Postgres clusters (the default value is also increased). This prevents certain cases of memory overcommitment at the cost of overprovisioning memory and potential scheduling problems for containers with high memory limits due to the lack of memory on Kubernetes cluster nodes. This affects all containers created by the operator (Postgres, Scalyr sidecar, and other sidecars); to set resources for the operator's own container, change the [operator deployment manually](https://github.com/zalando-incubator/postgres-operator/blob/master/manifests/postgres-operator.yaml#L13). The default is `false`.
* **enable_shm_volume**
Instruct the operator to start any new database pod without limitations on shm
memory. If this option is enabled, a new tmpfs volume is mounted into the target
database pod to remove the shm memory limitation (see e.g. the [docker
issue](https://github.com/docker-library/postgres/issues/416)). This option
is global for an operator object, and can be overwritten by the `enableShmVolume`
parameter from the Postgres manifest. The default is `true`. See the sketch below.
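The precedence can be read off the `mountShmVolumeNeeded` helper added later in this commit: a per-cluster `enableShmVolume` wins over the global setting. A sketch of the two levels interacting:

```yaml
# operator ConfigMap: keep the global default
data:
  enable_shm_volume: "true"
---
# one cluster opts out in its own manifest
spec:
  enableShmVolume: false
```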
## Operator timeouts

This set of parameters defines various timeouts related to some operator
@@ -323,7 +341,7 @@ Options to aid debugging of the operator itself. Grouped under the `debug` key.
boolean parameter that toggles verbose debug logs from the operator. The
default is `true`.

* **enable_database_access**
boolean parameter that toggles the functionality of the operator that requires
access to the postgres database, i.e. creating databases and users. The default
is `true`.
@@ -362,6 +380,9 @@ key.
role name to grant to team members created from the Teams API. The default is
`admin`, that role is created by Spilo as a `NOLOGIN` role.
* **enable_admin_role_for_users**
if `true`, the `team_admin_role` will have the rights to grant roles coming from PG manifests. Such roles will be created as in "CREATE ROLE 'role_from_manifest' ... ADMIN 'team_admin_role'". The default is `true`.
* **pam_role_name**
when set, the operator will add all team member roles to this group and add a
`pg_hba` line to authenticate members of that role via `pam`. The default is
......
@@ -6,7 +6,7 @@ metadata:
spec:
  teamId: "ACID"
  volume:
    size: 1Gi
  numberOfInstances: 2
  users: #Application/Robot users
    zalando:
@@ -19,6 +19,7 @@ spec:
  databases:
    foo: zalando
  #Expert section
  enableShmVolume: true
  postgresql:
    version: "10"
    parameters:
@@ -31,7 +32,7 @@ spec:
      memory: 100Mi
    limits:
      cpu: 300m
      memory: 300Mi
  patroni:
    initdb:
      encoding: "UTF8"
......
@@ -10,14 +10,16 @@ data:
  debug_logging: "true"
  workers: "4"
  docker_image: registry.opensource.zalan.do/acid/spilo-cdp-11:1.5-p42
  pod_service_account_name: "zalando-postgres-operator"
  secret_name_template: '{username}.{cluster}.credentials'
  super_username: postgres
  enable_teams_api: "false"
  # set_memory_request_to_limit: "true"
  # postgres_superuser_teams: "postgres_superusers"
  # enable_team_superuser: "false"
  # team_admin_role: "admin"
  # enable_admin_role_for_users: "true"
  # teams_api_url: http://fake-teams-api.default.svc.cluster.local
  # team_api_role_configuration: "log_statement:all"
  # infrastructure_roles_secret_name: postgresql-infrastructure-roles
......
@@ -15,7 +15,8 @@ spec:
    - createdb
  # role for application foo
  foo_user: []
  #databases: name->owner
  databases:
......
@@ -14,6 +14,13 @@ spec:
      - name: postgres-operator
        image: registry.opensource.zalan.do/acid/smoke-tested-postgres-operator:v1.0.0-21-ge39915c
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 500m
            memory: 250Mi
          limits:
            cpu: 2000m
            memory: 500Mi
        env:
        # provided additional ENV vars can overwrite individual config map entries
        - name: CONFIG_MAP_NAME
......
@@ -4,7 +4,7 @@ metadata:
  name: postgresql-operator-default-configuration
configuration:
  etcd_host: ""
  docker_image: registry.opensource.zalan.do/acid/spilo-cdp-11:1.5-p42
  workers: 4
  min_instances: -1
  max_instances: -1
......
@@ -147,6 +147,7 @@ type OperatorConfigurationData struct {
	PostgresUsersConfiguration PostgresUsersConfiguration   `json:"users"`
	Kubernetes                 KubernetesMetaConfiguration  `json:"kubernetes"`
	PostgresPodResources       PostgresPodResourcesDefaults `json:"postgres_pod_resources"`
	SetMemoryRequestToLimit    bool                         `json:"set_memory_request_to_limit,omitempty"`
	Timeouts                   OperatorTimeouts             `json:"timeouts"`
	LoadBalancer               LoadBalancerConfiguration    `json:"load_balancer"`
	AWSGCP                     AWSGCPConfiguration          `json:"aws_or_gcp"`
......
@@ -52,6 +52,7 @@ type PostgresSpec struct {
	Tolerations          []v1.Toleration `json:"tolerations,omitempty"`
	Sidecars             []Sidecar       `json:"sidecars,omitempty"`
	PodPriorityClassName string          `json:"pod_priority_class_name,omitempty"`
	ShmVolume            *bool           `json:"enableShmVolume,omitempty"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
@@ -92,7 +93,7 @@ type ResourceDescription struct {
// Resources describes requests and limits for the cluster resources.
type Resources struct {
	ResourceRequests ResourceDescription `json:"requests,omitempty"`
	ResourceLimits   ResourceDescription `json:"limits,omitempty"`
}
......
@@ -240,7 +240,7 @@ var unmarshalCluster = []struct {
				Slots: map[string]map[string]string{"permanent_logical_1": {"type": "logical", "database": "foo", "plugin": "pgoutput"}},
			},
			Resources: Resources{
				ResourceRequests: ResourceDescription{CPU: "10m", Memory: "50Mi"},
				ResourceLimits:   ResourceDescription{CPU: "300m", Memory: "3000Mi"},
			},
@@ -499,7 +499,7 @@ func TestMarshal(t *testing.T) {
			t.Errorf("Marshal error: %v", err)
		}
		if !bytes.Equal(m, tt.marshal) {
			t.Errorf("Marshal Postgresql \nexpected: %q, \ngot: %q", string(tt.marshal), string(m))
		}
	}
}
@@ -507,11 +507,11 @@ func TestMarshal(t *testing.T) {
func TestPostgresMeta(t *testing.T) {
	for _, tt := range unmarshalCluster {
		if a := tt.out.GetObjectKind(); a != &tt.out.TypeMeta {
			t.Errorf("GetObjectKindMeta \nexpected: %v, \ngot: %v", tt.out.TypeMeta, a)
		}
		if a := tt.out.GetObjectMeta(); reflect.DeepEqual(a, tt.out.ObjectMeta) {
			t.Errorf("GetObjectMeta \nexpected: %v, \ngot: %v", tt.out.ObjectMeta, a)
		}
	}
}
......
@@ -573,7 +573,7 @@ func (in *ResourceDescription) DeepCopy() *ResourceDescription {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Resources) DeepCopyInto(out *Resources) {
	*out = *in
	out.ResourceRequests = in.ResourceRequests
	out.ResourceLimits = in.ResourceLimits
	return
}
......
@@ -709,11 +709,16 @@ func (c *Cluster) initRobotUsers() error {
		if err != nil {
			return fmt.Errorf("invalid flags for user %q: %v", username, err)
		}
		adminRole := ""
		if c.OpConfig.EnableAdminRoleForUsers {
			adminRole = c.OpConfig.TeamAdminRole
		}
		newRole := spec.PgUser{
			Origin:    spec.RoleOriginManifest,
			Name:      username,
			Password:  util.RandomPassword(constants.PasswordLength),
			Flags:     flags,
			AdminRole: adminRole,
		}
		if currentRole, present := c.pgUsers[username]; present {
			c.pgUsers[username] = c.resolveNameConflict(&currentRole, &newRole)
......
@@ -18,6 +18,7 @@ import (
	acidv1 "github.com/zalando-incubator/postgres-operator/pkg/apis/acid.zalan.do/v1"
	"github.com/zalando-incubator/postgres-operator/pkg/spec"
	"github.com/zalando-incubator/postgres-operator/pkg/util"
	"github.com/zalando-incubator/postgres-operator/pkg/util/config"
	"github.com/zalando-incubator/postgres-operator/pkg/util/constants"
	"k8s.io/apimachinery/pkg/labels"
)
@@ -92,18 +93,18 @@ func (c *Cluster) makeDefaultResources() acidv1.Resources {
	defaultRequests := acidv1.ResourceDescription{CPU: config.DefaultCPURequest, Memory: config.DefaultMemoryRequest}
	defaultLimits := acidv1.ResourceDescription{CPU: config.DefaultCPULimit, Memory: config.DefaultMemoryLimit}

	return acidv1.Resources{ResourceRequests: defaultRequests, ResourceLimits: defaultLimits}
}

func generateResourceRequirements(resources acidv1.Resources, defaultResources acidv1.Resources) (*v1.ResourceRequirements, error) {
	var err error

	specRequests := resources.ResourceRequests
	specLimits := resources.ResourceLimits

	result := v1.ResourceRequirements{}

	result.Requests, err = fillResourceList(specRequests, defaultResources.ResourceRequests)
	if err != nil {
		return nil, fmt.Errorf("could not fill resource requests: %v", err)
	}
@@ -338,7 +339,6 @@ func generateSpiloContainer(
	envVars []v1.EnvVar,
	volumeMounts []v1.VolumeMount,
) *v1.Container {
	privilegedMode := true
	return &v1.Container{
		Name: name,
@@ -377,8 +377,8 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
		resources, err := generateResourceRequirements(
			makeResources(
				sidecar.Resources.ResourceRequests.CPU,
				sidecar.Resources.ResourceRequests.Memory,
				sidecar.Resources.ResourceLimits.CPU,
				sidecar.Resources.ResourceLimits.Memory,
			),
@@ -396,6 +396,16 @@ func generateSidecarContainers(sidecars []acidv1.Sidecar,
	return nil, nil
}

// Check whether we are requested to mount a shm volume,
// taking into account that the PostgreSQL manifest has precedence.
func mountShmVolumeNeeded(opConfig config.Config, pgSpec *acidv1.PostgresSpec) bool {
	if pgSpec.ShmVolume != nil {
		return *pgSpec.ShmVolume
	}

	return opConfig.ShmVolume
}
func generatePodTemplate(
	namespace string,
	labels labels.Set,
@@ -407,6 +417,7 @@ func generatePodTemplate(
	podServiceAccountName string,
	kubeIAMRole string,
	priorityClassName string,
	shmVolume bool,
) (*v1.PodTemplateSpec, error) {
	terminateGracePeriodSeconds := terminateGracePeriod
@@ -420,6 +431,10 @@ func generatePodTemplate(
		Tolerations: *tolerationsSpec,
	}

	if shmVolume {
		addShmVolume(&podSpec)
	}

	if nodeAffinity != nil {
		podSpec.Affinity = nodeAffinity
	}
@@ -475,6 +490,18 @@ func (c *Cluster) generateSpiloPodEnvVars(uid types.UID, spiloConfiguration stri
			Name:  "PGUSER_SUPERUSER",
			Value: c.OpConfig.SuperUsername,
		},
		{
			Name:  "KUBERNETES_SCOPE_LABEL",
			Value: c.OpConfig.ClusterNameLabel,
		},
		{
			Name:  "KUBERNETES_ROLE_LABEL",
			Value: c.OpConfig.PodRoleLabel,
		},
		{
			Name:  "KUBERNETES_LABELS",
			Value: labels.Set(c.OpConfig.ClusterLabels).String(),
		},
		{
			Name: "PGPASSWORD_SUPERUSER",
			ValueFrom: &v1.EnvVarSource{
@@ -625,7 +652,7 @@ func getBucketScopeSuffix(uid string) string {
func makeResources(cpuRequest, memoryRequest, cpuLimit, memoryLimit string) acidv1.Resources {
	return acidv1.Resources{
		ResourceRequests: acidv1.ResourceDescription{
			CPU:    cpuRequest,
			Memory: memoryRequest,
		},
@@ -644,6 +671,61 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
		podTemplate         *v1.PodTemplateSpec
		volumeClaimTemplate *v1.PersistentVolumeClaim
	)

	// Improve me. Please.
	if c.OpConfig.SetMemoryRequestToLimit {

		// controller adjusts the default memory request at operator startup

		request := spec.Resources.ResourceRequests.Memory
		if request == "" {
			request = c.OpConfig.DefaultMemoryRequest
		}

		limit := spec.Resources.ResourceLimits.Memory
		if limit == "" {
			limit = c.OpConfig.DefaultMemoryLimit
		}

		isSmaller, err := util.RequestIsSmallerThanLimit(request, limit)
		if err != nil {
			return nil, err
		}
		if isSmaller {
			c.logger.Warningf("The memory request of %v for the Postgres container is increased to match the memory limit of %v.", request, limit)
			spec.Resources.ResourceRequests.Memory = limit
		}

		// controller adjusts the Scalyr sidecar request at operator startup
		// as this sidecar is managed separately

		// adjust sidecar containers defined for that particular cluster;
		// iterate by index so the adjustment is not lost on a loop-variable copy
		for i := range spec.Sidecars {
			// TODO #413
			sidecar := &spec.Sidecars[i]

			sidecarRequest := sidecar.Resources.ResourceRequests.Memory
			if sidecarRequest == "" {
				sidecarRequest = c.OpConfig.DefaultMemoryRequest
			}

			sidecarLimit := sidecar.Resources.ResourceLimits.Memory
			if sidecarLimit == "" {
				sidecarLimit = c.OpConfig.DefaultMemoryLimit
			}

			isSmaller, err := util.RequestIsSmallerThanLimit(sidecarRequest, sidecarLimit)
			if err != nil {
				return nil, err
			}
			if isSmaller {
				c.logger.Warningf("The memory request of %v for the %v sidecar container is increased to match the memory limit of %v.", sidecar.Resources.ResourceRequests.Memory, sidecar.Name, sidecar.Resources.ResourceLimits.Memory)
				sidecar.Resources.ResourceRequests.Memory = sidecar.Resources.ResourceLimits.Memory
			}
		}
	}
	defaultResources := c.makeDefaultResources()

	resourceRequirements, err := generateResourceRequirements(spec.Resources, defaultResources)
@@ -670,8 +752,8 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
	// generate environment variables for the spilo container
	spiloEnvVars := deduplicateEnvVars(
		c.generateSpiloPodEnvVars(c.Postgresql.GetUID(), spiloConfiguration, &spec.Clone,
			customPodEnvVarsList), c.containerName(), c.logger)

	// pickup the docker image for the spilo container
	effectiveDockerImage := util.Coalesce(spec.DockerImage, c.OpConfig.DockerImage)
@@ -679,9 +761,15 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
	volumeMounts := generateVolumeMounts()

	// generate the spilo container
	c.logger.Debugf("Generating Spilo container, environment variables: %v", spiloEnvVars)
	spiloContainer := generateSpiloContainer(c.containerName(),
		&effectiveDockerImage,
		resourceRequirements,
		spiloEnvVars,
		volumeMounts,
	)

	// resolve conflicts between operator-global and per-cluster sidecars
	sideCars := c.mergeSidecars(spec.Sidecars)

	resourceRequirementsScalyrSidecar := makeResources(
@@ -710,7 +798,7 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
	tolerationSpec := tolerations(&spec.Tolerations, c.OpConfig.PodToleration)
	effectivePodPriorityClassName := util.Coalesce(spec.PodPriorityClassName, c.OpConfig.PodPriorityClassName)

	// generate pod template for the statefulset, based on the spilo container and sidecars
	if podTemplate, err = generatePodTemplate(
		c.Namespace,
		c.labelsSet(true),
@@ -721,7 +809,8 @@ func (c *Cluster) generateStatefulSet(spec *acidv1.PostgresSpec) (*v1beta1.State
		int64(c.OpConfig.PodTerminateGracePeriod.Seconds()),
		c.OpConfig.PodServiceAccountName,
		c.OpConfig.KubeIAMRole,
		effectivePodPriorityClassName,
		mountShmVolumeNeeded(c.OpConfig, spec)); err != nil {
		return nil, fmt.Errorf("could not generate pod template: %v", err)
	}
@@ -828,6 +917,32 @@ func (c *Cluster) getNumberOfInstances(spec *acidv1.PostgresSpec) int32 {
	return newcur
}

// To avoid issues with the limited /dev/shm inside a docker environment, when
// PostgreSQL can't allocate enough dsa segments from it, we can
// mount an extra memory volume
//
// see https://docs.okd.io/latest/dev_guide/shared_memory.html
func addShmVolume(podSpec *v1.PodSpec) {
	volumes := append(podSpec.Volumes, v1.Volume{
		Name: constants.ShmVolumeName,
		VolumeSource: v1.VolumeSource{
			EmptyDir: &v1.EmptyDirVolumeSource{
				Medium: "Memory",
			},
		},
	})

	pgIdx := constants.PostgresContainerIdx
	mounts := append(podSpec.Containers[pgIdx].VolumeMounts,
		v1.VolumeMount{
			Name:      constants.ShmVolumeName,
			MountPath: constants.ShmVolumePath,
		})

	podSpec.Containers[pgIdx].VolumeMounts = mounts
	podSpec.Volumes = volumes
}

func generatePersistentVolumeClaimTemplate(volumeSize, volumeStorageClass string) (*v1.PersistentVolumeClaim, error) {
	var storageClassName *string
......
package cluster

import (
	"k8s.io/api/core/v1"

	acidv1 "github.com/zalando-incubator/postgres-operator/pkg/apis/acid.zalan.do/v1"
	"github.com/zalando-incubator/postgres-operator/pkg/util/config"
	"github.com/zalando-incubator/postgres-operator/pkg/util/constants"
	"github.com/zalando-incubator/postgres-operator/pkg/util/k8sutil"

	"testing"
)
@@ -75,3 +78,54 @@ func TestCreateLoadBalancerLogic(t *testing.T) {
		}
	}
}

func TestShmVolume(t *testing.T) {
	testName := "TestShmVolume"
	tests := []struct {
		subTest string
		podSpec *v1.PodSpec
		shmPos  int
	}{
		{
			subTest: "empty PodSpec",
			podSpec: &v1.PodSpec{
				Volumes: []v1.Volume{},
				Containers: []v1.Container{
					{
						VolumeMounts: []v1.VolumeMount{},
					},
				},
			},
			shmPos: 0,
		},
		{
			subTest: "non empty PodSpec",
			podSpec: &v1.PodSpec{
				Volumes: []v1.Volume{{}},
				Containers: []v1.Container{
					{
						VolumeMounts: []v1.VolumeMount{
							{},
						},
					},
				},
			},
			shmPos: 1,
		},
	}
	for _, tt := range tests {
		addShmVolume(tt.podSpec)

		volumeName := tt.podSpec.Volumes[tt.shmPos].Name
		volumeMountName := tt.podSpec.Containers[0].VolumeMounts[tt.shmPos].Name

		if volumeName != constants.ShmVolumeName {
			t.Errorf("%s %s: Expected volume %s was not created, have %s instead",
				testName, tt.subTest, constants.ShmVolumeName, volumeName)
		}
		if volumeMountName != constants.ShmVolumeName {
			t.Errorf("%s %s: Expected mount %s was not created, have %s instead",
				testName, tt.subTest, constants.ShmVolumeName, volumeMountName)
		}
	}
}
@@ -110,6 +110,29 @@ func (c *Controller) initOperatorConfig() {
	c.opConfig = config.NewFromMap(configMapData)
	c.warnOnDeprecatedOperatorParameters()

	if c.opConfig.SetMemoryRequestToLimit {

		isSmaller, err := util.RequestIsSmallerThanLimit(c.opConfig.DefaultMemoryRequest, c.opConfig.DefaultMemoryLimit)
		if err != nil {
			panic(err)
		}
		if isSmaller {
			c.logger.Warningf("The default memory request of %v for Postgres containers is increased to match the default memory limit of %v.", c.opConfig.DefaultMemoryRequest, c.opConfig.DefaultMemoryLimit)
			c.opConfig.DefaultMemoryRequest = c.opConfig.DefaultMemoryLimit
		}

		isSmaller, err = util.RequestIsSmallerThanLimit(c.opConfig.ScalyrMemoryRequest, c.opConfig.ScalyrMemoryLimit)
		if err != nil {
			panic(err)
		}
		if isSmaller {
			c.logger.Warningf("The memory request of %v for the Scalyr sidecar container is increased to match the memory limit of %v.", c.opConfig.ScalyrMemoryRequest, c.opConfig.ScalyrMemoryLimit)
			c.opConfig.ScalyrMemoryRequest = c.opConfig.ScalyrMemoryLimit
		}

		// generateStatefulSet adjusts values for individual Postgres clusters
	}
}

func (c *Controller) modifyConfigFromEnvironment() {
......
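`util.RequestIsSmallerThanLimit` itself is not shown in this diff; a minimal sketch of such a helper, assuming it compares Kubernetes quantity strings via `k8s.io/apimachinery`:

```go
package util

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// RequestIsSmallerThanLimit parses two quantity strings (e.g. "250Mi", "1Gi")
// and reports whether the request is strictly smaller than the limit.
func RequestIsSmallerThanLimit(requestStr, limitStr string) (bool, error) {
	request, err := resource.ParseQuantity(requestStr)
	if err != nil {
		return false, fmt.Errorf("could not parse memory request %q: %v", requestStr, err)
	}
	limit, err := resource.ParseQuantity(limitStr)
	if err != nil {
		return false, fmt.Errorf("could not parse memory limit %q: %v", limitStr, err)
	}
	return request.Cmp(limit) == -1, nil
}
```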
@@ -55,6 +55,7 @@ func (c *Controller) importConfigurationFromCRD(fromCRD *acidv1.OperatorConfigur
	result.DefaultMemoryRequest = fromCRD.PostgresPodResources.DefaultMemoryRequest
	result.DefaultCPULimit = fromCRD.PostgresPodResources.DefaultCPULimit
	result.DefaultMemoryLimit = fromCRD.PostgresPodResources.DefaultMemoryLimit
	result.SetMemoryRequestToLimit = fromCRD.SetMemoryRequestToLimit

	result.ResourceCheckInterval = time.Duration(fromCRD.Timeouts.ResourceCheckInterval)
	result.ResourceCheckTimeout = time.Duration(fromCRD.Timeouts.ResourceCheckTimeout)
......