Sync docs from Discourse (#589)
Co-authored-by: GitHub Actions <41898282+github-actions[bot]@users.noreply.github.com>
github-actions[bot] committed Aug 20, 2024
1 parent 3316ac8 commit 9c0fbae
Showing 40 changed files with 399 additions and 376 deletions.
2 changes: 1 addition & 1 deletion docs/explanation/e-legacy-charm.md
@@ -1,4 +1,4 @@
-## Charm types "legacy" vs "modern"
+## Charm types: "legacy" vs "modern"
There are [two types of charms](https://juju.is/docs/sdk/charm-taxonomy#heading--charm-types-by-generation) stored under the same charm name `postgresql`:

1. [Reactive](https://juju.is/docs/sdk/charm-taxonomy#heading--reactive) charm in the channel `latest/stable` (called `legacy`)
1 change: 1 addition & 0 deletions docs/explanation/e-statuses.md
@@ -9,6 +9,7 @@ The charm follows [standard Juju applications statuses](https://juju.is/docs/olm
| **active** | any | Normal charm operations | No actions required |
| **waiting** | any | Charm is waiting for relations to be finished | No actions required |
| **maintenance** | any | Charm is performing internal maintenance (e.g. cluster re-configuration, upgrade, ...) | No actions required |
| **blocked** | the S3 repository has backups from another cluster | The bucket contains a foreign backup. To avoid accidental DB corruption, use a clean bucket. The cluster is identified by the Juju app name + DB UUID. | Choose/change the new S3 [bucket](https://charmhub.io/s3-integrator/configuration#bucket)/[path](https://charmhub.io/s3-integrator/configuration#path) OR clean the current one. |
| **blocked** | failed to update cluster members on member | TODO: error/retry? | |
| **blocked** | failed to install snap packages | There are issues with the network connection and/or the Snap Store | Check your internet connection and https://status.snapcraft.io/. Remove the application and, once everything is OK, deploy the charm again |
| **blocked** | failed to patch snap seccomp profile | The charm failed to patch one issue that happens when pgBackRest restores a backup (this blocked status should be removed when https://github.com/pgbackrest/pgbackrest/releases/tag/release%2F2.46 is added to the snap) | Remove the unit and add it back again |
@@ -1,9 +1,4 @@
# Integrate with a client application
-[note type="caution"]
-This is an internal article. **Do not use it in production!**
-
-Contact the [Canonical Data Platform team](https://chat.charmhub.io/charmhub/channels/data-platform) if you are interested in this topic.
-[/note]

This guide will show you how to integrate a client application with a cross-regional async setup using an example PostgreSQL deployment with two servers: one in Rome and one in Lisbon.

@@ -13,13 +8,13 @@ This guide will show you how to integrate a client application with a cross-regi
* Refer to the page [How to set up clusters](/t/13991)

## Summary
-* [Configure database endpoints](#heading--configure-endpoints)
-* [Internal client](#heading--internal-client)
-* [External client](#heading--external-client)
+* [Configure database endpoints](#configure-database-endpoints)
+* [Internal client](#internal-client)
+* [External client](#external-client)

---

-<a href="#heading--configure-endpoints"><h2 id="heading--configure-endpoints"> Configure database endpoints </h2></a>
+## Configure database endpoints

To make your database available to a client application, you must first offer and consume database endpoints.
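For example, the offer side of this step might look like the following sketch. The relation endpoint name (`database`) is an assumption for illustration; the offer names match the `db1database`/`db2database` offers consumed below.

```shell
# Hypothetical offer commands, one per model.
# The ":database" endpoint name is an assumption for illustration.
juju offer -m rome db1:database db1database
juju offer -m lisbon db2:database db2database
```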

@@ -46,7 +41,7 @@ juju consume rome.db1database
juju consume lisbon.db2database
```

-<a href="#heading--internal-client"><h2 id="heading--internal-client"> Internal client </h2></a>
+## Internal client

If the client application is another charm, deploy it and connect it with `juju integrate`.

@@ -62,7 +57,7 @@ juju integrate postgresql-test-app:first-database pgbouncer
juju integrate pgbouncer db1database
```

-<a href="#heading--external-client"><h2 id="heading--external-client"> External client </h2></a>
+## External client

If the client application is external, it must be integrated via the [`data-integrator` charm](https://charmhub.io/data-integrator).
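A minimal sketch of that setup, assuming a hypothetical database name and the `db1database` offer consumed earlier:

```shell
# Deploy data-integrator with an example (hypothetical) database name
juju deploy data-integrator --config database-name=test-db
# Integrate it with the consumed database offer
juju integrate data-integrator db1database
# Retrieve connection credentials for the external client
juju run data-integrator/leader get-credentials
```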

@@ -1,9 +1,4 @@
# Remove or recover a cluster
-[note type="caution"]
-This is an internal article. **Do not use it in production!**
-
-Contact the [Canonical Data Platform team](https://chat.charmhub.io/charmhub/channels/data-platform) if you are interested in this topic.
-[/note]

This guide will cover how to manage clusters using an example PostgreSQL deployment with two servers: one in Rome and one in Lisbon.

@@ -13,16 +8,16 @@ This guide will cover how to manage clusters using an example PostgreSQL deploym
* Refer to the page [How to set up clusters](/t/13991)

## Summary
-* [Switchover](#heading--switchover)
-* [Detach a cluster](#heading--detach)
-* [Reuse a detached cluster](#heading--reuse)
-* [Remove a detached cluster](#heading--remove)
-* [Recover a cluster](#heading--recover)
+* [Switchover](#switchover)
+* [Detach a cluster](#detach-a-cluster)
+* [Reuse a detached cluster](#reuse-a-detached-cluster)
+* [Remove a detached cluster](#remove-a-detached-cluster)
+* [Recover a cluster](#recover-a-cluster)

<!-- TODO: Rethink sections, especially "recover" -->
---

-<a href="#heading--switchover"><h2 id="heading--switchover"> Switchover </h2></a>
+## Switchover

If the primary cluster fails or is removed, it is necessary to appoint a new cluster as primary.

@@ -32,7 +27,7 @@ To switchover and promote `lisbon` to primary, one would run the command:
juju run -m lisbon db2/leader promote-to-primary
```

-<a href="#heading--detach"><h2 id="heading--detach"> Detach a cluster </h2></a>
+## Detach a cluster

Clusters in an async replica set can be detached. The detached cluster can then be either removed or reused.

@@ -44,22 +39,22 @@ juju remove-relation -m lisbon replication-offer db2:replication

The command above will move the `rome` cluster into a detached state (`blocked`) keeping all the data in place.
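You can confirm the detached state from the model, for example (the exact status output varies by Juju version, so no output is shown here):

```shell
# Check that the detached cluster's application reports a blocked status
juju status -m rome db1
```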

-<a href="#heading--reuse"><h3 id="heading--reuse"> Reuse a detached cluster </h3></a>
+### Reuse a detached cluster

The following command creates a new cluster in the replica set from the detached `rome` cluster, keeping its existing data in use:

```shell
juju run -m rome db1/leader promote-to-primary
```
-<a href="#heading--remove"><h3 id="heading--remove"> Remove a detached cluster </h3></a>
+### Remove a detached cluster

The following command removes the detached `rome` cluster and **destroys its stored data** with the optional `--destroy-storage` flag:

```shell
juju remove-application -m rome db1 --destroy-storage
```

-<a href="#heading--recover"><h2 id="heading--recover"> Recover a cluster </h2></a>
+## Recover a cluster

**If the integration between clusters was removed** and one side went into a `blocked` state, integrate both clusters again and call the `promote-cluster` action to restore async replication, similar to the "Reuse a detached cluster" step above.
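As a sketch, re-establishing the integration and promoting might look like this. The relation names are reused from the detach step above; the exact action name differs between charm revisions (`promote-cluster` vs `promote-to-primary`), so treat this as an assumption to verify against your deployed revision.

```shell
# Re-integrate the clusters (names from the detach step)
juju integrate -m lisbon replication-offer db2:replication
# Promote the blocked side back into the replica set
juju run -m rome db1/leader promote-to-primary
```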

75 changes: 75 additions & 0 deletions docs/how-to/h-async-set-up.md
@@ -0,0 +1,75 @@
# Set up clusters for cross-regional async replication

Cross-regional (or multi-server) asynchronous replication focuses on disaster recovery by distributing data across different servers.

This guide will show you the basics of initiating a cross-regional async setup using an example PostgreSQL deployment with two servers: one in Rome and one in Lisbon.

## Summary
* [Deploy](#deploy)
* [Offer](#offer)
* [Consume](#consume)
* [Promote or switchover a cluster](#promote-or-switchover-a-cluster)
* [Scale a cluster](#scale-a-cluster)

---

## Deploy

To deploy two clusters on different servers, create two Juju models: one for the `rome` cluster and one for the `lisbon` cluster. In the example below, we use the config flag `profile=testing` to limit memory usage.

```shell
juju add-model rome
juju add-model lisbon

juju switch rome # active model must correspond to cluster
juju deploy postgresql db1 --channel=14/edge/async-replication --config profile=testing --base ubuntu@22.04

juju switch lisbon
juju deploy postgresql db2 --channel=14/edge/async-replication --config profile=testing --base ubuntu@22.04
```

## Offer

[Offer](https://juju.is/docs/juju/offer) asynchronous replication in one of the clusters.

```shell
juju switch rome
juju offer db1:async-primary async-primary
```

## Consume

Consume asynchronous replication on the planned standby cluster (Lisbon):
```shell
juju switch lisbon
juju consume rome.async-primary
juju integrate async-primary db2:async-replica
```

## Promote or switchover a cluster

To define the primary cluster, use the `promote-cluster` action.

```shell
juju run -m rome db1/leader promote-cluster
```

To switch over and use `lisbon` as the primary instead, run:

```shell
juju run -m lisbon db2/leader promote-cluster force-promotion=true
```

## Scale a cluster

The two clusters work independently, which means that it’s possible to scale each cluster separately. The `-m` flag defines the target model, so the command can be run from any active model.

For example:

```shell
juju add-unit db1 -n 2 -m rome
juju add-unit db2 -n 2 -m lisbon
```
[note]
**Note:** Scaling is possible both before and after asynchronous replication is established.
[/note]
82 changes: 0 additions & 82 deletions docs/how-to/h-async/h-async-set-up.md

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.