in your system to take advantage of the improvements introduced in
Barman Cloud (as well as improve the security aspects of your cluster).

!!! Warning "Changes in Barman Cloud 3.16+ and Bucket Creation"
    Starting with Barman Cloud 3.16, most Barman Cloud commands no longer
    automatically create the target bucket; they assume it already exists.
    Only the `barman-cloud-check-wal-archive` command still creates the
    bucket. If any other operation is the first to run against a missing
    bucket, EDB Postgres for Kubernetes will throw an error. To ensure
    reliable, future-proof operations and avoid potential issues, we strongly
    recommend that you create and configure your object store bucket *before*
    creating a `Cluster` resource that references it.
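As a sketch of what pre-creating the bucket can look like with an S3-compatible object store and the AWS CLI (the bucket name and region below are illustrative placeholders, not values used by the operator):

```shell
# Sketch: pre-create the object store bucket before any Cluster references it.
# Bucket name and region are placeholders; adjust them for your environment.
aws s3api create-bucket \
  --bucket my-postgres-backups \
  --region eu-central-1 \
  --create-bucket-configuration LocationConstraint=eu-central-1
```

Once the bucket exists, any Barman Cloud command referenced by the `Cluster` can run against it in any order.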

A backup is performed from a primary or a designated primary instance in a
`Cluster` (please refer to
[replica clusters](replica_cluster.md)

product_docs/docs/postgres_for_kubernetes/1/bootstrap.mdx

file on the source PostgreSQL instance:

```
host replication streaming_replica all md5
```
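When the source is itself managed by the operator, an entry like this is normally declared in the `Cluster` resource rather than written into `pg_hba.conf` by hand. A minimal sketch, assuming an operator-managed source cluster named `source-db`:

```yaml
# Sketch (assumption): declaring the replication HBA entry via the Cluster spec.
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: source-db
spec:
  instances: 3
  storage:
    size: 1Gi
  postgresql:
    pg_hba:
      # Allow the streaming_replica user to connect for replication
      - host replication streaming_replica all md5
```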

The following manifest creates a new PostgreSQL 18.0 cluster,
called `target-db`, using the `pg_basebackup` bootstrap method
to clone an external PostgreSQL cluster defined as `source-db`
(in the `externalClusters` array). As you can see, the `source-db`
```yaml
metadata:
  name: target-db
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:18.0-system-trixie

  bootstrap:
    pg_basebackup:
```

All the requirements must be met for the clone operation to work, including
the same PostgreSQL version (in our case 18.0).

#### TLS certificate authentication

in the same Kubernetes cluster.
This example can be easily adapted to cover an instance that resides
outside the Kubernetes cluster.

The manifest defines a new PostgreSQL 18.0 cluster called `cluster-clone-tls`,
which is bootstrapped using the `pg_basebackup` method from the `cluster-example`
external cluster. The host is identified by the read/write service
in the same cluster, while the `streaming_replica` user is authenticated
```yaml
metadata:
  name: cluster-clone-tls
spec:
  instances: 3
  imageName: quay.io/enterprisedb/postgresql:18.0-system-trixie

  bootstrap:
    pg_basebackup:
```
uses a `Merge` policy to update only the specified fields (`password`, `pgpass`,
`jdbc-uri` and `uri`) in the `cluster-example-app` secret.

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: cluster-example-app-secret
```

named `vault-token` exists in the same namespace, containing the token used to
authenticate with Vault.

```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
```

product_docs/docs/postgres_for_kubernetes/1/cnp_i.mdx

originalFilePath: 'src/cnp_i.md'



The **EDB Postgres for Kubernetes Interface** ([CNPG-I](https://github.com/cloudnative-pg/cnpg-i))
is a standard way to extend and customize EDB Postgres for Kubernetes without modifying its
core codebase.

## Why CNP-I?

EDB Postgres for Kubernetes supports a wide range of use cases, but sometimes its built-in
functionality isn’t enough, or adding certain features directly to the main
project isn’t practical.

Before CNP-I, users had two main options:
Both approaches created maintenance overhead, slowed upgrades, and delayed delivery of critical features.

CNP-I solves these problems by providing a stable, gRPC-based integration
point for extending EDB Postgres for Kubernetes at key points in a cluster's
lifecycle, such as backups, recovery, and sub-resource reconciliation, without
disrupting the core project.

The operator communicates with registered plugins using **gRPC**, following the
[CNPG-I protocol](https://github.com/cloudnative-pg/cnpg-i/blob/main/docs/protocol.md).

EDB Postgres for Kubernetes discovers plugins **at startup**. You can register them in one of two ways:

- Sidecar container – run the plugin inside the operator’s Deployment
- Standalone Deployment – run the plugin as a separate workload in the same
operator's and allows independent scaling. In this setup, the plugin exposes a
TCP gRPC endpoint behind a Service, with **mTLS** for secure communication.

!!! Warning
    EDB Postgres for Kubernetes does **not** discover plugins dynamically. If
    you deploy a new plugin, you must **restart the operator** to detect it.


The related Service for the plugin must include:

- The label `k8s.enterprisedb.io/plugin: <plugin-name>` — required for EDB Postgres for Kubernetes to
discover the plugin
- The annotation `k8s.enterprisedb.io/pluginPort: <port>` — specifies the port where the
plugin’s gRPC server is exposed
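A minimal Service following these requirements might look like the following sketch. The plugin name, selector, and port are illustrative assumptions, not values defined by the operator:

```yaml
# Hypothetical example: Service exposing a CNP-I plugin's gRPC endpoint.
apiVersion: v1
kind: Service
metadata:
  name: my-plugin                                # assumed plugin name
  labels:
    k8s.enterprisedb.io/plugin: my-plugin        # required for discovery
  annotations:
    k8s.enterprisedb.io/pluginPort: "9090"       # assumed gRPC port
spec:
  selector:
    app: my-plugin
  ports:
    - name: grpc
      port: 9090
      targetPort: 9090
```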

### Configuring TLS Certificates

When a plugin runs as a `Deployment`, communication with EDB Postgres for Kubernetes happens
over the network. To secure it, **mTLS is enforced**, requiring TLS
certificates for both sides.
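One common way to provision such certificates, though by no means the only one, is cert-manager. A hedged sketch, in which the certificate, secret, issuer, and DNS names are all assumptions for illustration:

```yaml
# Hypothetical sketch: issuing a server certificate for the plugin's gRPC
# endpoint with cert-manager. All names below are placeholders.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-plugin-server-tls
spec:
  secretName: my-plugin-server-tls
  usages:
    - server auth
  dnsNames:
    - my-plugin.my-namespace.svc
  issuerRef:
    name: my-selfsigned-issuer
    kind: Issuer
```

A matching client certificate for the operator side would be issued the same way, with `client auth` usage.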

## Using a plugin

To enable a plugin, configure the `.spec.plugins` section in your `Cluster`
resource. Refer to the EDB Postgres for Kubernetes API Reference for the full
[PluginConfiguration](https://cloudnative-pg.io/documentation/current/cloudnative-pg.v1/#postgresql-k8s-enterprisedb-io-v1-PluginConfiguration)
specification.
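As a sketch, enabling a hypothetical plugin on a `Cluster` could look like the following. The plugin name and its parameters are illustrative assumptions; consult your plugin's documentation for the actual values it accepts:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: cluster-with-plugin
spec:
  instances: 3
  storage:
    size: 1Gi
  plugins:
    - name: my-plugin          # assumed plugin name
      enabled: true
      parameters:
        key: value             # plugin-specific settings (illustrative)
```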

## Community plugins

The CNP-I protocol has quickly become a proven and reliable pattern for
extending EDB Postgres for Kubernetes while keeping the core project maintainable.
Over time, the community has built and shared plugins that address real-world
needs and serve as examples for developers.

These are the PgBouncer options you can customize, with links to the PgBouncer
documentation for each parameter. Unless stated otherwise, the default values
are the ones directly set by PgBouncer.

- [`auth_type`](https://www.pgbouncer.org/config.html#auth_type)
- [`application_name_add_host`](https://www.pgbouncer.org/config.html#application_name_add_host)
- [`autodb_idle_timeout`](https://www.pgbouncer.org/config.html#autodb_idle_timeout)
- [`cancel_wait_timeout`](https://www.pgbouncer.org/config.html#cancel_wait_timeout)
- [`client_idle_timeout`](https://www.pgbouncer.org/config.html#client_idle_timeout)
- [`client_login_timeout`](https://www.pgbouncer.org/config.html#client_login_timeout)
- [`client_tls_sslmode`](https://www.pgbouncer.org/config.html#client_tls_sslmode)
- [`default_pool_size`](https://www.pgbouncer.org/config.html#default_pool_size)
- [`disable_pqexec`](https://www.pgbouncer.org/config.html#disable_pqexec)
- [`dns_max_ttl`](https://www.pgbouncer.org/config.html#dns_max_ttl)
- [`server_round_robin`](https://www.pgbouncer.org/config.html#server_round_robin)
- [`server_tls_ciphers`](https://www.pgbouncer.org/config.html#server_tls_ciphers)
- [`server_tls_protocols`](https://www.pgbouncer.org/config.html#server_tls_protocols)
- [`server_tls_sslmode`](https://www.pgbouncer.org/config.html#server_tls_sslmode)
- [`stats_period`](https://www.pgbouncer.org/config.html#stats_period)
- [`suspend_timeout`](https://www.pgbouncer.org/config.html#suspend_timeout)
- [`tcp_defer_accept`](https://www.pgbouncer.org/config.html#tcp_defer_accept)
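For instance, a `Pooler` manifest could tune a few of these parameters as follows. The cluster name, pooler name, and parameter values are illustrative, not recommendations:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: pooler-example-rw
spec:
  cluster:
    name: cluster-example        # assumed Cluster name
  instances: 3
  type: rw
  pgbouncer:
    poolMode: session
    parameters:
      # Illustrative values; defaults are the ones set by PgBouncer
      default_pool_size: "10"
      client_idle_timeout: "60"
```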
For a better understanding of the metrics, please refer to the PgBouncer
documentation.

As for clusters, a specific pooler can be monitored using the
[Prometheus operator's](https://github.com/prometheus-operator/prometheus-operator)
[`PodMonitor` resource](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api-reference/api.md#monitoring.coreos.com/v1.PodMonitor).

You can deploy a `PodMonitor` for a specific pooler using the following basic example, and change it as needed:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pooler-example-rw
spec:
  selector:
    matchLabels:
      k8s.enterprisedb.io/poolerName: pooler-example-rw
  podMetricsEndpoints:
    - port: metrics
```

### Deprecation of Automatic `PodMonitor` Creation

!!! Warning "Feature Deprecation Notice"
    The `.spec.monitoring.enablePodMonitor` field in the `Pooler` resource is
    now deprecated and will be removed in a future version of the operator.

    If you are currently using this feature, we strongly recommend that you
    either remove `.spec.monitoring.enablePodMonitor` or set it to `false`,
    and manually create a `PodMonitor` resource for your pooler as described
    above. This change ensures that you have complete ownership of your
    monitoring configuration, preventing it from being managed or overwritten
    by the operator.

## Logging

Logs are directly sent to standard output, in JSON format, like in the
Examples of accepted image tags:
`latest` is not considered a valid tag for the image.

!!! Note
    Image tag requirements do not apply for images defined in a catalog.
$ kubectl cnp status <cluster-name>
Cluster Summary
Name: cluster-example
Namespace: default
PostgreSQL Image: quay.io/enterprisedb/postgresql:18.0-system-trixie
Primary instance: cluster-example-2
Status: Cluster in healthy state
Instances: 3

product_docs/docs/postgres_for_kubernetes/1/image_catalog.mdx

```yaml
spec:
  images:
    - major: 15
      image: quay.io/enterprisedb/postgresql:15.14-system-trixie
    - major: 16
      image: quay.io/enterprisedb/postgresql:16.10-system-trixie
    - major: 17
      image: quay.io/enterprisedb/postgresql:17.6-system-trixie
    - major: 18
      image: quay.io/enterprisedb/postgresql:18.0-system-trixie
```

**Example of a Cluster-Wide Catalog using `ClusterImageCatalog` Resource:**
```yaml
spec:
  images:
    - major: 15
      image: quay.io/enterprisedb/postgresql:15.14-system-trixie
    - major: 16
      image: quay.io/enterprisedb/postgresql:16.10-system-trixie
    - major: 17
      image: quay.io/enterprisedb/postgresql:17.6-system-trixie
    - major: 18
      image: quay.io/enterprisedb/postgresql:18.0-system-trixie
```

A `Cluster` resource has the flexibility to reference either an `ImageCatalog`
(like in the following example) or a `ClusterImageCatalog` to precisely specify
the desired image.

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
spec:
  instances: 3
  imageCatalogRef:
    apiGroup: postgresql.k8s.enterprisedb.io
    # Change the following to `ClusterImageCatalog` if needed
    kind: ImageCatalog
    name: postgresql
    major: 16
```
Any alterations to the images within a catalog trigger automatic updates for
all the clusters referencing that catalog.

## {{name.ln}} Catalogs

The EDB Postgres for Kubernetes project maintains `ClusterImageCatalog`
manifests for all supported images.

These catalogs are regularly updated and published in the
[artifacts repository](https://github.com/cloudnative-pg/artifacts/tree/main/image-catalogs).

Each catalog corresponds to a specific combination of image type (e.g.
`minimal`) and Debian release (e.g. `trixie`). It lists the most up-to-date
container images for every supported PostgreSQL major version.

By installing these catalogs, cluster administrators can ensure that their
PostgreSQL clusters are automatically updated to the latest patch release
within a given PostgreSQL major version, for the selected Debian distribution
and image type.

For example, to install the latest catalog for the `minimal` PostgreSQL
container images on Debian `trixie`, run:

```shell
kubectl apply -f \
https://raw.githubusercontent.com/cloudnative-pg/artifacts/refs/heads/main/image-catalogs/catalog-minimal-trixie.yaml
```

You can install all the available catalogs by using the `kustomization` file
present in the `image-catalogs` directory:

```shell
kubectl apply -k https://github.com/cloudnative-pg/artifacts//image-catalogs?ref=main
```

You can then view all the catalogs deployed with:

```shell
kubectl get clusterimagecatalogs.postgresql.k8s.enterprisedb.io
```

For example, you can create a cluster with the latest `minimal` image for PostgreSQL 18 on `trixie` with:

```yaml
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: angus
spec:
  instances: 3
  imageCatalogRef:
    apiGroup: postgresql.k8s.enterprisedb.io
    kind: ClusterImageCatalog
    name: postgresql-minimal-trixie
    major: 18
  storage:
    size: 1Gi
```
through a YAML manifest applied via `kubectl`.

There are two different manifests available depending on your subscription plan:

- Standard: The [latest standard operator manifest](https://get.enterprisedb.io/pg4k/pg4k-standard-1.26.2.yaml).
- Enterprise: The [latest enterprise operator manifest](https://get.enterprisedb.io/pg4k/pg4k-enterprise-1.26.2.yaml).

You can install the manifest for the latest version of the operator by running:

```sh
kubectl apply --server-side -f \
  https://get.enterprisedb.io/pg4k/pg4k-$EDB_SUBSCRIPTION_PLAN-1.26.2.yaml
```

You can verify that with: