Explicit row-level locking in YSQL.
diff --git a/docs/content/preview/explore/transactions/distributed-transactions-ycql.md b/docs/content/preview/explore/transactions/distributed-transactions-ycql.md
index f60d4e2029d7..a369ee93661b 100644
--- a/docs/content/preview/explore/transactions/distributed-transactions-ycql.md
+++ b/docs/content/preview/explore/transactions/distributed-transactions-ycql.md
@@ -1,12 +1,12 @@
---
-title: Distributed Transactions
-headerTitle: Distributed Transactions
-linkTitle: Distributed Transactions
-description: Distributed Transactions in YugabyteDB.
-headcontent: Distributed Transactions in YugabyteDB.
+title: Distributed transactions
+headerTitle: Distributed transactions
+linkTitle: Distributed transactions
+description: Distributed transactions in YugabyteDB.
+headcontent: Explore distributed transactions in YugabyteDB.
menu:
preview:
- name: Distributed Transactions
+ name: Distributed transactions
identifier: explore-transactions-distributed-transactions-2-ycql
parent: explore-transactions
weight: 230
@@ -31,15 +31,19 @@ type: docs
-## Creating the table
+This example shows how a distributed transaction works in YugabyteDB.
-Create a keyspace.
+{{% explore-setup-single %}}
+
+## Create a table
+
+Create a keyspace as follows:
```sql
ycqlsh> CREATE KEYSPACE banking;
```
-The YCQL table should be created with the `transactions` property enabled. The statement should look something as follows.
+The YCQL table should be created with the `transactions` property enabled. The statement should be similar to the following:
```sql
ycqlsh> CREATE TABLE banking.accounts (
@@ -50,14 +54,14 @@ ycqlsh> CREATE TABLE banking.accounts (
) with transactions = { 'enabled' : true };
```
-You can verify that this table has transactions enabled on it by running the following query.
+You can verify that this table has transactions enabled by running the following query:
```sql
ycqlsh> select keyspace_name, table_name, transactions from system_schema.tables
where keyspace_name='banking' AND table_name = 'accounts';
```
-```
+```output
keyspace_name | table_name | transactions
---------------+------------+---------------------
banking | accounts | {'enabled': 'true'}
@@ -67,7 +71,7 @@ where keyspace_name='banking' AND table_name = 'accounts';
## Insert sample data
-Let us seed this table with some sample data.
+Seed the table with some sample data as follows:
```sql
INSERT INTO banking.accounts (account_name, account_type, balance) VALUES ('John', 'savings', 1000);
@@ -82,7 +86,7 @@ Here are the balances for John and Smith.
ycqlsh> select * from banking.accounts;
```
-```
+```output
account_name | account_type | balance
--------------+--------------+---------
John | checking | 100
@@ -91,25 +95,25 @@ ycqlsh> select * from banking.accounts;
Smith | savings | 2000
```
-Check John's balance.
+Check John's balance as follows:
```sql
ycqlsh> SELECT SUM(balance) as Johns_balance FROM banking.accounts WHERE account_name='John';
```
-```
+```output
johns_balance
---------------
1100
```
-Check Smith's balance.
+Check Smith's balance as follows:
```sql
ycqlsh> SELECT SUM(balance) as smiths_balance FROM banking.accounts WHERE account_name='Smith';
```
-```
+```output
smiths_balance
----------------
2050
@@ -118,9 +122,7 @@ ycqlsh> SELECT SUM(balance) as smiths_balance FROM banking.accounts WHERE accoun
## Execute a transaction
-Here are a couple of examples of executing transactions.
-
-Let us say John transfers $200 from his savings account to his checking account. This has to be a transactional operation. This can be achieved as follows.
+Suppose John transfers $200 from his savings account to his checking account. This must be a transactional operation, which can be achieved as follows:
```sql
BEGIN TRANSACTION
@@ -129,26 +131,26 @@ BEGIN TRANSACTION
END TRANSACTION;
```
-If you now selected the value of John's account, you should see the amounts reflected. The total balance should be the same $1100 as before.
+If you now select the value of John's account, you should see the amounts reflected. The total balance should be the same $1100 as before.
```sql
ycqlsh> select * from banking.accounts where account_name='John';
```
-```
+```output
account_name | account_type | balance
--------------+--------------+---------
John | checking | 300
John | savings | 800
```
-Check John's balance.
+Check John's balance as follows:
```sql
ycqlsh> SELECT SUM(balance) as Johns_balance FROM banking.accounts WHERE account_name='John';
```
-```
+```output
johns_balance
---------------
1100
@@ -161,14 +163,14 @@ ycqlsh> select account_name, account_type, balance, writetime(balance)
from banking.accounts where account_name='John';
```
-```
+```output
account_name | account_type | balance | writetime(balance)
--------------+--------------+---------+--------------------
John | checking | 300 | 1517898028890171
John | savings | 800 | 1517898028890171
```
-Now let us say John transfers the $200 from his checking account to Smith's checking account. We can accomplish that with the following transaction.
+Now suppose John transfers the $200 from his checking account to Smith's checking account. Run the following transaction:
```sql
BEGIN TRANSACTION
@@ -179,13 +181,13 @@ END TRANSACTION;
## Verify
-We can verify the transfer was made as we intended, and also verify that the time at which the two accounts were updated are identical by performing the following query.
+Verify that the transfer was made as intended, and that the times at which the two accounts were updated are identical, by performing the following query:
```sql
ycqlsh> select account_name, account_type, balance, writetime(balance) from banking.accounts;
```
-```
+```output
account_name | account_type | balance | writetime(balance)
--------------+--------------+---------+--------------------
John | checking | 100 | 1517898167629366
@@ -200,19 +202,19 @@ The net balance for John should have decreased by $200 which that of Smith shoul
ycqlsh> SELECT SUM(balance) as Johns_balance FROM banking.accounts WHERE account_name='John';
```
-```
+```output
johns_balance
---------------
900
```
-Check Smith's balance.
+Check Smith's balance as follows:
```sql
ycqlsh> SELECT SUM(balance) as smiths_balance FROM banking.accounts WHERE account_name='Smith';
```
-```
+```output
smiths_balance
----------------
2250
diff --git a/docs/content/preview/explore/transactions/distributed-transactions-ysql.md b/docs/content/preview/explore/transactions/distributed-transactions-ysql.md
index 756c442c63ba..9ee044ce9053 100644
--- a/docs/content/preview/explore/transactions/distributed-transactions-ysql.md
+++ b/docs/content/preview/explore/transactions/distributed-transactions-ysql.md
@@ -1,12 +1,12 @@
---
-title: Distributed Transactions
-headerTitle: Distributed Transactions
-linkTitle: Distributed Transactions
-description: Distributed Transactions in YugabyteDB.
-headcontent: Distributed Transactions in YugabyteDB.
+title: Distributed transactions
+headerTitle: Distributed transactions
+linkTitle: Distributed transactions
+description: Distributed transactions in YugabyteDB.
+headcontent: Explore distributed transactions in YugabyteDB.
menu:
preview:
- name: Distributed Transactions
+ name: Distributed transactions
identifier: explore-transactions-distributed-transactions-1-ysql
parent: explore-transactions
weight: 230
@@ -31,9 +31,13 @@ type: docs
-## Overview of transaction control
+This example shows how a distributed transaction works in YugabyteDB.
-This section explains how a distributed transaction works in YugabyteDB. We will use the example table below to describe the control flow of a simple transaction.
+{{% explore-setup-single %}}
+
+## Create a table
+
+Create the following table:
```sql
CREATE TABLE accounts (
@@ -55,8 +59,11 @@ INSERT INTO accounts VALUES ('Smith', 'checking', 50);
The table should look as follows:
+```sql
+yugabyte=# SELECT * FROM accounts;
```
-yugabyte=# select * from accounts;
+
+```output
account_name | account_type | balance
--------------+--------------+---------
John | checking | 100
@@ -66,7 +73,9 @@ yugabyte=# select * from accounts;
(4 rows)
```
-Now, we will run the following transaction and explain what happens at each step.
+## Run a transaction
+
+Run the following transaction:
```sql
BEGIN TRANSACTION;
@@ -77,6 +86,7 @@ BEGIN TRANSACTION;
COMMIT;
```
+The following table explains what happens at each step.
@@ -104,7 +114,7 @@ UPDATE accounts SET balance = balance - 200
- The transaction coordinator writes a *provisional record* to the tablet that contains this row. The provisional record consists of the transaction id, so the state of the transaction can be determined. If there already exists a provisional record written by another transaction, then the current transaction would use the transaction id that is present in the provisional record to fetch details and check if there is a potential conflict.
+ The transaction coordinator writes a provisional record to the tablet that contains this row. The provisional record contains the transaction ID, so the state of the transaction can be determined. If a provisional record written by another transaction already exists, the current transaction uses the transaction ID present in that provisional record to fetch details and check for a potential conflict.
|
@@ -128,20 +138,23 @@ COMMIT;
- Note that in order to COMMIT , all the provisional writes must have successfully completed. The COMMIT statement causes the transaction coordinator to update the transaction status in the transaction status table to COMMITED , at which point it is assigned the commit timestamp (which is a *hybrid timestamp* to be precise). At this point, the transaction is completed. In the background, the COMMIT record along with the commit timestamp is applied to each of the rows that participated to make future lookups of these rows efficient.
+ Note that to COMMIT, all the provisional writes must have successfully completed. The COMMIT statement causes the transaction coordinator to update the transaction status in the transaction status table to COMMITTED, at which point it is assigned the commit timestamp (which is a hybrid timestamp, to be precise). At this point, the transaction is completed. In the background, the COMMIT record, along with the commit timestamp, is applied to each of the rows that participated, to make future lookups of these rows efficient.
|
-This is shown diagrammatically below.
-![distributed_txn_write_path](/images/architecture/txn/distributed_txn_write_path.svg)
+This is shown diagrammatically in the following illustration.
+![Distributed transaction write path](/images/architecture/txn/distributed_txn_write_path.svg)
-After the above transaction succeeds, the table should look as follows.
+After the preceding transaction succeeds, the table should look as follows:
+```sql
+yugabyte=# SELECT * FROM accounts;
```
-yugabyte=# select * from accounts;
+
+```output
account_name | account_type | balance
--------------+--------------+---------
John | checking | 300
@@ -151,10 +164,9 @@ yugabyte=# select * from accounts;
(4 rows)
```
-
### Scalability
-Since all nodes of the cluster can process transactions by becoming transaction coordinators, horizontal scalability can simply be achieved by distributing the queries evenly across the nodes of the cluster.
+Because all nodes of the cluster can process transactions by becoming transaction coordinators, horizontal scalability can be achieved by distributing the queries evenly across the nodes of the cluster.
### Resilience
@@ -166,15 +178,17 @@ Each update performed as a part of the transaction is replicated across multiple
{{< note title="Note" >}}
YugabyteDB currently supports optimistic concurrency control, with pessimistic concurrency control being worked on actively.
-{{< /note >}}
-
+{{< /note >}}
-## Transaction Options
+## Transaction options
-You can see the various options supported by transactions by running the `\h BEGIN` statement, as shown below.
+You can see the various options supported by transactions by running the following `\h BEGIN` meta-command:
-```
+```sql
yugabyte=# \h BEGIN
+```
+
+```output
Command: BEGIN
Description: start a transaction block
Syntax:
@@ -196,21 +210,28 @@ The `transaction_mode` can be set to one of the following options:
As an example, trying to do a write operation such as creating a table or inserting a row in a `READ ONLY` transaction would result in an error as shown below.
-```
+```sql
yugabyte=# BEGIN READ ONLY;
+```
+
+```output
BEGIN
+```
+```sql
yugabyte=# CREATE TABLE example(k INT PRIMARY KEY);
-ERROR: 25P02: current transaction is aborted, commands ignored until end of
- transaction block
+```
+
+```output
+ERROR: cannot execute CREATE TABLE in a read-only transaction
```
### `DEFERRABLE` transactions
The `DEFERRABLE` transaction property in YSQL is similar to PostgreSQL in that it has no effect unless the transaction is also `SERIALIZABLE` and `READ ONLY`.
-When all three of these properties (`SERIALIZABLE`, `DEFERRABLE` and `READ ONLY`) are set for a transaction, the transaction may block when first acquiring its snapshot, after which it is able to run without the normal overhead of a `SERIALIZABLE` transaction and without any risk of contributing to or being canceled by a serialization failure.
+When all three of these properties (`SERIALIZABLE`, `DEFERRABLE`, and `READ ONLY`) are set for a transaction, the transaction may block when first acquiring its snapshot, after which it is able to run without the typical overhead of a `SERIALIZABLE` transaction and without any risk of contributing to or being canceled by a serialization failure.
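A minimal sketch of such a transaction, using the `accounts` table from the earlier example (syntax follows PostgreSQL; this is an illustration, not output from the source):

```sql
-- May block briefly while acquiring a safe snapshot, then reads without
-- SERIALIZABLE overhead or any risk of serialization failure.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
SELECT account_name, SUM(balance) FROM accounts GROUP BY account_name;
COMMIT;
```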
{{< tip title="Tip" >}}
-This mode is well suited for long-running reports or backups without being impacting or impacted by other transactions.
+This mode is well suited for long-running reports or backups without impacting or being impacted by other transactions.
{{< /tip >}}
diff --git a/docs/content/preview/explore/transactions/explicit-locking.md b/docs/content/preview/explore/transactions/explicit-locking.md
index f80676ca2e19..0319d3b56fd1 100644
--- a/docs/content/preview/explore/transactions/explicit-locking.md
+++ b/docs/content/preview/explore/transactions/explicit-locking.md
@@ -1,13 +1,11 @@
---
-title: Explicit Locking
-headerTitle: Explicit Locking
-linkTitle: Explicit Locking
-description: Explicit Locking in YugabyteDB.
-headcontent: Explicit Locking in YugabyteDB.
-image:
+title: Explicit locking
+headerTitle: Explicit locking
+linkTitle: Explicit locking
+description: Explicit locking in YugabyteDB.
+headcontent: Explore row locking in YugabyteDB
menu:
preview:
- name: Explicit Locking
identifier: explore-transactions-explicit-locking-1-ysql
parent: explore-transactions
weight: 245
@@ -30,8 +28,8 @@ This section describes how explicit locking works in YugabyteDB.
YugabyteDB supports most row-level locks, similar to PostgreSQL. Explicit row-locks use transaction priorities to ensure that two transactions can never hold conflicting locks on the same row. To do this, the query layer acquires the row lock by assigning a very high value for the priority of the transaction that is being run. This causes all other transactions that conflict with the current transaction to fail, because they have a lower transaction priority.
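As an illustrative sketch of influencing this behavior (assuming the YSQL session variables `yb_transaction_priority_lower_bound` and `yb_transaction_priority_upper_bound`, which bound the randomly chosen priority of subsequent transactions in a session):

```sql
-- Give transactions in this session a high priority range (values in [0, 1]),
-- so conflicting lower-priority transactions are the ones that get aborted.
SET yb_transaction_priority_lower_bound = 0.9;
SET yb_transaction_priority_upper_bound = 1.0;
```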
{{< note title="Note" >}}
-Explicit locking is an area of active development in YugabyteDB. A number of enhancements are planned in this area. Unlike PostgreSQL, YugabyteDB uses optimistic concurrency control and does not block / wait for currently held locks, instead opting to abort the conflicting transaction with a lower priority. Pessimistic concurrency control is currently under development.
-{{< /note >}}
+Explicit locking is an area of active development in YugabyteDB. A number of enhancements are planned. Unlike PostgreSQL, YugabyteDB uses optimistic concurrency control and does not block or wait for currently held locks, instead opting to abort the conflicting transaction with a lower priority. Pessimistic concurrency control is currently under development.
+{{< /note >}}
The types of row locks currently supported are:
@@ -40,8 +38,12 @@ The types of row locks currently supported are:
* `FOR SHARE`
* `FOR KEY SHARE`
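For illustration, the four clauses look as follows on a hypothetical table `t` (the table and filter are placeholders; each clause locks the selected rows with decreasing strength, from exclusive to shared):

```sql
SELECT * FROM t WHERE k = 1 FOR UPDATE;
SELECT * FROM t WHERE k = 1 FOR NO KEY UPDATE;
SELECT * FROM t WHERE k = 1 FOR SHARE;
SELECT * FROM t WHERE k = 1 FOR KEY SHARE;
```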
+## Example
+
The following example uses the `FOR UPDATE` row lock. First, a row is selected for update, thereby locking it, and subsequently updated. A concurrent transaction should not be able to abort this transaction by updating the value of that row after the row is locked.
+{{% explore-setup-single %}}
+
To try out this scenario, first create an example table with sample data, as follows:
```sql
diff --git a/docs/content/preview/explore/transactions/isolation-levels.md b/docs/content/preview/explore/transactions/isolation-levels.md
index 54d08ea4540f..16b9f855ef4e 100644
--- a/docs/content/preview/explore/transactions/isolation-levels.md
+++ b/docs/content/preview/explore/transactions/isolation-levels.md
@@ -1,13 +1,13 @@
---
-title: Isolation Levels
-headerTitle: Isolation Levels
-linkTitle: Isolation Levels
+title: Isolation levels
+headerTitle: Isolation levels
+linkTitle: Isolation levels
description: Isolation Levels in YugabyteDB.
-headcontent: Isolation Levels in YugabyteDB.
+headcontent: Explore isolation levels in YugabyteDB.
image:
menu:
preview:
- name: Isolation Levels
+ name: Isolation levels
identifier: explore-transactions-isolation-levels-1-ysql
parent: explore-transactions
weight: 235
@@ -32,46 +32,47 @@ type: docs
-->
+YugabyteDB supports three isolation levels in the transactional layer - Serializable, Snapshot, and Read Committed. PostgreSQL (and the SQL standard) has four isolation levels - Serializable, Repeatable read, Read Committed, and Read uncommitted. The mapping of the PostgreSQL isolation levels to their YugabyteDB equivalents, along with the transaction anomalies that can occur at each isolation level, is shown in the following table.
-Yugabyte supports three isolation levels in the transactional layer - Serializable, Snapshot and Read Committed. PostgreSQL (and the SQL standard) have four isolation levels - `Serializable`, `Repeatable read`, `Read Committed` and `Read uncommitted`. The mapping between the PostgreSQL isolation levels in YSQL, along with which transaction anomalies can occur at each isolation level are shown below.
-
-PostgreSQL Isolation | YugabyteDB Equivalent | Dirty Read | Nonrepeatable Read | Phantom Read | Serialization Anomaly
------------------|--------------|------------------------|---|---|---
-Read uncommitted | Read Committed $ | Allowed, but not in YSQL | Possible | Possible | Possible
-Read committed | Read Committed $ | Not possible | Possible | Possible | Possible
-Repeatable read | Snapshot | Not possible | Not possible | Allowed, but not in YSQL | Possible
+| PostgreSQL Isolation | YugabyteDB Equivalent | Dirty Read | Non-repeatable Read | Phantom Read | Serialization Anomaly |
+| :------------------- | :-------------------- | :--------- | :------------------ | :----------- | :-------------------- |
+| Read uncommitted | Read Committed $ | Allowed, but not in YSQL | Possible | Possible | Possible |
+| Read committed | Read Committed $ | Not possible | Possible | Possible | Possible |
+| Repeatable read | Snapshot | Not possible | Not possible | Allowed, but not in YSQL | Possible |
Serializable | Serializable | Not possible | Not possible | Not possible | Not possible
-$ Read Committed Isolation is supported only if the tserver gflag `yb_enable_read_committed_isolation` is set to `true`. By default this gflag is `false` and in this case the Read Committed isolation level of Yugabyte's transactional layer falls back to the stricter Snapshot Isolation (in which case `READ COMMITTED` and `READ UNCOMMITTED` of YSQL also in turn use Snapshot Isolation). Read Committed support is currently in [Beta](/preview/faq/general/#what-is-the-definition-of-the-beta-feature-tag).
+$ Read Committed Isolation is supported only if the YB-TServer flag `yb_enable_read_committed_isolation` is set to `true`. By default, this flag is `false`, in which case the Read Committed isolation level of the YugabyteDB transactional layer falls back to the stricter Snapshot Isolation (and `READ COMMITTED` and `READ UNCOMMITTED` of YSQL in turn also use Snapshot Isolation). Read Committed support is currently in [Beta](/preview/faq/general/#what-is-the-definition-of-the-beta-feature-tag).
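For example, the flag can be set when starting each YB-TServer (a sketch; other required flags for your deployment are omitted):

```shell
# Hypothetical invocation; only the relevant flag is shown.
yb-tserver --yb_enable_read_committed_isolation=true
```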
{{< note title="Note" >}}
-The default isolation level for the YSQL API is essentially Snapshot (i.e., same as PostgreSQL's `REPEATABLE READ`) because `READ COMMITTED`, which is the YSQL API's (and also PostgreSQL's) syntactic default, maps to Snapshot Isolation (unless the tserver gflag `yb_enable_read_committed_isolation` is set to `true`).
+The default isolation level for the YSQL API is essentially Snapshot (that is, the same as PostgreSQL's `REPEATABLE READ`) because `READ COMMITTED`, which is the YSQL API's (and also PostgreSQL's) syntactic default, maps to Snapshot Isolation (unless the YB-TServer flag `yb_enable_read_committed_isolation` is set to `true`).
To set the transaction isolation level of a transaction, use the command `SET TRANSACTION`.
{{< /note >}}
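For example, a sketch of switching a single transaction to Serializable isolation:

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- Statements here run at SERIALIZABLE isolation.
COMMIT;
```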
-As seen from the table above, the most strict isolation level is `Serializable`, which requires that any concurrent execution of a set of `Serializable` transactions is guaranteed to produce the same effect as running them in some serial (one transaction at a time) order. The other levels are defined by which anomalies must not occur as a result of interactions between concurrent transactions. Due to the definition of Serializable isolation, none of these anomalies are possible at that level. For reference, the various transaction anomalies are described briefly below:
+As seen from the preceding table, the most strict isolation level is `Serializable`, which requires that any concurrent execution of a set of `Serializable` transactions is guaranteed to produce the same effect as running them in some serial (one transaction at a time) order. The other levels are defined by which anomalies must not occur as a result of interactions between concurrent transactions. Due to the definition of Serializable isolation, none of these anomalies are possible at that level. For reference, the various transaction anomalies are briefly described as follows:
* `Dirty read`: A transaction reads data written by a concurrent uncommitted transaction.
-* `Nonrepeatable read`: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
+* `Nonrepeatable read`: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed after the initial read).
* `Phantom read`: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
* `Serialization anomaly`: The result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time.
-Let us now look at how Serializable, Snapshot and Read Committed isolation works in YSQL.
-
-## Serializable Isolation
+## Serializable isolation
The *Serializable* isolation level provides the strictest transaction isolation. This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently. Serializable isolation can detect read-write conflicts in addition to write-write conflicts. This is accomplished by writing *provisional records* for read operations as well.
-Let's use a bank overdraft protection example to illustrate this case. The hypothetical case is that there is a bank which allows depositors to withdraw money up to the total of what they have in all accounts. The bank will later automatically transfer funds as needed to close the day with a positive balance in each account. Within a single transaction they check that the total of all accounts exceeds the amount requested.
+### Example
+
+The following bank overdraft protection example illustrates this case. Suppose a bank allows depositors to withdraw money up to the total of what they have in all accounts, and later automatically transfers funds as needed to close the day with a positive balance in each account. Within a single transaction, the bank checks that the total of all accounts exceeds the amount requested.
-Let's say someone tries to withdraw $900 from two of their accounts simultaneously, each with $500 balances. At the `REPEATABLE READ` transaction isolation level, that could work; but if the `SERIALIZABLE` transaction isolation level is used, a read/write conflict will be detected and one of the transactions will be rejected.
+{{% explore-setup-single %}}
-The example can be set up with these statements:
+Suppose someone tries to withdraw $900 from two of their accounts simultaneously, each with $500 balances. At the `REPEATABLE READ` transaction isolation level, that could work; but if the `SERIALIZABLE` transaction isolation level is used, a read/write conflict is detected and one of the transactions is rejected.
+
+Set up the example with the following statements:
```sql
create table account
@@ -86,98 +87,125 @@ insert into account values
('kevin','checking', 500);
```
-
+Next, connect to the cluster using two independent `ysqlsh` instances, referred to as session #1 and session #2.
+
+
- session #1 |
- session #2 |
+ session #1 |
+ session #2 |
-
- Begin a transaction in session #1 with the Serializable isolation level. The account total is $1000, so a $900 withdrawal is OK.
-
-begin isolation level serializable;
+
+
+Begin a transaction in session #1 with the Serializable isolation level. The account total is $1000, so a $900 withdrawal is OK.
+
+```sql
+begin isolation level serializable;
select type, balance from account
- where name = 'kevin';
+ where name = 'kevin';
+```
+
+```output
type | balance
----------+---------
saving | $500.00
checking | $500.00
(2 rows)
-
- |
-
+```
+
+ |
+
|
|
-
+ |
|
-
- Begin a transaction in session #2 with the Serializable isolation level as well. Once again, the account total is $1000, so a $900 withdrawal is OK.
-
-begin isolation level serializable;
+
+
+Begin a transaction in session #2 with the Serializable isolation level as well. Once again, the account total is $1000, so a $900 withdrawal is OK.
+
+```sql
+begin isolation level serializable;
select type, balance from account
- where name = 'kevin';
+ where name = 'kevin';
+```
+
+```output
type | balance
----------+---------
saving | $500.00
checking | $500.00
(2 rows)
-
- |
+```
+
+
|
-
- Withdraw $900 from the savings account, given the total is $1000 this should be OK.
-
+
+
+Withdraw $900 from the savings account. Given that the total is $1000, this should be OK.
+
+```sql
update account
set balance = balance - 900::money
where name = 'kevin' and type = 'saving';
-
- |
-
+```
+
+ |
+
|
|
- |
-
- Simultaneously, withdrawing $900 from the checking account is going to be a problem. This cannot co-exist with the other transaction's activity. This transaction would fail immediately.
-
+ |
+
+
+Simultaneously, withdrawing $900 from the checking account is going to be a problem. This cannot coexist with the other transaction's activity, so this transaction fails immediately.
+
+```sql
update account
set balance = balance - 900::money
where name = 'kevin' and type = 'checking';
-
-ERROR: 40001: Operation failed. Try again.: Transaction aborted: XXXX
-
- |
+```
+
+```output
+ERROR: 40001: Operation failed.
+ Try again.: Transaction aborted: XXXX
+```
+
+
|
-
- This transaction can now be committed.
-
-commit;
+
+
+This transaction can now be committed.
+
+```sql
+commit;
select type, balance from account
- where name = 'kevin';
+ where name = 'kevin';
+```
+
+```output
type | balance
----------+----------
checking | $500.00
saving | -$400.00
(2 rows)
-
- |
-
+```
+
+ |
+
|
|
-
-
-## Snapshot Isolation
+## Snapshot isolation
The Snapshot isolation level only sees data committed before the transaction began (or in other words, it works on a "snapshot" of the table). Transactions running under Snapshot isolation do not see either uncommitted data or changes committed during transaction execution by other concurrently running transactions. Note that the query does see the effects of previous updates executed within its own transaction, even though they are not yet committed. This is a stronger guarantee than is required by the SQL standard for the `REPEATABLE READ` isolation level.
@@ -190,205 +218,261 @@ Snapshot isolation detects only write-write conflicts, it does not detect read-w
Applications using this level must be prepared to retry transactions due to serialization failures.
{{< /note >}}
-Let's run through the scenario below to understand how transactions behave under the snapshot isolation level (which PostgreSQL's *Repeatable Read* maps to).
+### Example
+
+The following scenario shows how transactions behave under the snapshot isolation level (which PostgreSQL's *Repeatable Read* maps to).
-First, create an example table with sample data.
+Create an example table with sample data as follows:
```sql
CREATE TABLE IF NOT EXISTS example (k INT PRIMARY KEY);
TRUNCATE TABLE example;
```
-Next, connect to the cluster using two independent `ysqlsh` instances called *session #1* and *session #2* below.
+Next, connect to the cluster using two independent `ysqlsh` instances, referred to as *session #1* and *session #2*.
-{{< note title="Note" >}}
-You can connect the session #1 and session #2 `ysqlsh` instances to the same server, or to different servers.
-{{< /note >}}
-
-
+
- session #1 |
- session #2 |
+ session #1 |
+ session #2 |
-
- Begin a transaction in session #1. This will be snapshot isolation by default, meaning it will work against a snapshot of the database as of this point.
-
+
+
+Begin a transaction in session #1. This is snapshot isolation by default, meaning it works against a snapshot of the database as of this point.
+
+```sql
BEGIN TRANSACTION;
-
- |
-
+```
+
+ |
+
|
|
-
- Insert a row, but let's not commit the transaction. This row should be visible only to this transaction.
-
-INSERT INTO example VALUES (1);
+
+
+Insert a row, but don't commit the transaction. This row should be visible only to this transaction.
+
+```sql
+INSERT INTO example VALUES (1);
SELECT * FROM example;
+```
+
+```output
k
---
1
(1 row)
-
- |
-
+```
+
+ |
+
|
|
-
+ |
|
-
- Insert a different row here. Verify that the row inserted in the transaction in session #1 is not visible in this session.
-
-INSERT INTO example VALUES (2);
+
+
+Insert a different row. Verify that the row inserted in the transaction in session #1 is not visible in this session as follows:
+
+```sql
+INSERT INTO example VALUES (2);
SELECT * FROM example;
+```
+
+```output
k
---
2
(1 row)
-
- |
+```
+
+
|
-
- The row inserted in the other session would not be visible here, because we're working against an older snapshot of the database. Let's verify that.
-
+
+
+The row inserted in the other session is not visible here, because you're working against an older snapshot of the database. Verify that as follows:
+
+```sql
SELECT * FROM example;
+```
+
+```output
k
---
1
(1 row)
-
- |
-
+```
+
+ |
+
|
|
-
- Now let's commit this transaction. As long as the row(s) we're writing as a part of this transaction are not modified during the lifetime of the transaction, there would be no conflicts. Let's verify we can see all rows after the commit.
-
-COMMIT;
+
+
+Now commit this transaction. As long as the rows being written as part of this transaction are not modified by another transaction during its lifetime, there are no conflicts. Verify that you can see all rows after the commit as follows:
+
+```sql
+COMMIT;
SELECT * FROM example;
+```
+
+```output
k
---
1
2
(2 rows)
-
- |
-
+```
+
+ |
+
|
|
-
-## Read Committed Isolation
+## Read committed isolation
-This is same as Snapshot Isolation except that every statement in the transaction will see all data that has been committed before it is issued (note that this implicitly also means that the statement will see a consistent snapshot). In other words, each statement works on a new "snapshot" of the database that includes everything that is committed before the statement is issued. Conflict detection is the same as in Snapshot Isolation.
+Read Committed isolation is the same as Snapshot Isolation, except that every statement in the transaction sees all data that was committed before the statement is issued (note that this implicitly also means that the statement sees a consistent snapshot). In other words, each statement works on a new "snapshot" of the database that includes everything committed before the statement is issued. Conflict detection is the same as in Snapshot Isolation.
-
-
- session #1 |
- session #2 |
-
+### Example
-
-
- Create a sample table.
-
+Create a sample table as follows:
+
+```sql
CREATE TABLE test (k int PRIMARY KEY, v int);
INSERT INTO test VALUES (1, 2);
-
- |
-
- |
+```
+
+Next, connect to the cluster using two independent `ysqlsh` instances, referred to as session #1 and session #2.
+
+
+
+ session #1 |
+ session #2 |
-
- By default, the tserver gflag yb_enable_read_committed_isolation=false. In this case, Read Committed maps to Snapshot Isolation at the transactional layer. So, READ COMMITTED of YSQL API in turn maps to Snapshot Isolation.
- BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
-SELECT * FROM test;
+
+
+By default, the YB-TServer flag `yb_enable_read_committed_isolation` is `false`. In this case, Read Committed maps to Snapshot Isolation at the transactional layer, so the READ COMMITTED isolation level of the YSQL API in turn maps to Snapshot Isolation.
+
+```sql
+BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
+SELECT * FROM test;
+```
+
+```output
k | v
---+---
1 | 2
-(1 row)
- |
-
+(1 row)
+```
+
+ |
+
|
|
-
- |
-
- Insert a new row.
- INSERT INTO test VALUES (2, 3);
+ |
|
+
+
+Insert a new row.
+
+```sql
+INSERT INTO test VALUES (2, 3);
+```
+
+ |
-
- Perform read again in the same transaction. Note that the recently inserted row (2, 3) isn't
- visible to the statement because Read Committed is disabled at the transactional layer and maps to
- Snapshot (in which the whole transaction sees a consistent snapshot of the database).
- SELECT * FROM test;
-COMMIT;
+
+
+Perform the read again in the same transaction.
+
+```sql
+SELECT * FROM test;
+COMMIT;
+```
+
+```output
k | v
---+---
1 | 2
-(1 row)
- |
-
- |
-
|
+(1 row)
+```
-
-
- Set tserver gflag yb_enable_read_committed_isolation=true
- BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
-SELECT * FROM test;
+The inserted row (2, 3) isn't visible because Read Committed is disabled at the transactional layer and maps to Snapshot Isolation (in which the whole transaction sees a consistent snapshot of the database).
+
+Set the YB-TServer flag `yb_enable_read_committed_isolation` to `true`.
+
+```sql
+BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
+SELECT * FROM test;
+```
+
+```output
k | v
---+---
1 | 2
2 | 3
-(2 rows)
- |
-
+(2 rows)
+```
+
+ |
+
|
-
- |
-
- In another session, insert a new row.
- INSERT INTO test VALUES (3, 4);
+ |
|
+
+
+Insert a new row.
+
+```sql
+INSERT INTO test VALUES (3, 4);
+```
+
+ |
-
- Perform read again in the same transaction. This time, the statement will be able to see the
- row (3, 4) that was committed after this transaction was started but before the statement was issued.
-
+
+
+Perform the read again in the same transaction.
+
+```sql
SELECT * FROM test;
+```
+
+```output
k | v
---+---
1 | 2
2 | 3
3 | 4
(3 rows)
-
- |
-
+```
+
+This time, the statement can see the row (3, 4) that was committed after this transaction was started but before the statement was issued.
+
+ |
+
|
|
diff --git a/docs/content/preview/explore/ysql-language-features/_index.md b/docs/content/preview/explore/ysql-language-features/_index.md
index 41d536fec353..5267c801b207 100644
--- a/docs/content/preview/explore/ysql-language-features/_index.md
+++ b/docs/content/preview/explore/ysql-language-features/_index.md
@@ -24,7 +24,7 @@ The following diagram shows how YugabyteDB reuses the PostgreSQL query layer, sp
![Reusing the PostgreSQL query layer in YSQL](/images/section_icons/architecture/Reusing-PostgreSQL-query-layer.png)
-## SQL Features in YSQL
+## SQL features in YSQL
The following table lists the most important YSQL features which you would find familiar if you have worked with PostgreSQL.
diff --git a/docs/content/preview/explore/ysql-language-features/advanced-features/collations.md b/docs/content/preview/explore/ysql-language-features/advanced-features/collations.md
index 447b8371741b..a73f0733ba6d 100644
--- a/docs/content/preview/explore/ysql-language-features/advanced-features/collations.md
+++ b/docs/content/preview/explore/ysql-language-features/advanced-features/collations.md
@@ -79,7 +79,7 @@ select count(collname) from pg_collation where collprovider = 'i';
(1 row)
```
-## Collation Creation
+## Collation creation
In addition to predefined collations, you can define new collations. For example,
@@ -139,7 +139,7 @@ select * from coll_tab4 order by name;
(2 rows)
```
-## Index Collation
+## Index collation
When a table column has an explicit collation, an index built on the column will be sorted according to the column collation. YSQL also allows the index to have its own explicit collation that is different from that of the table column. For example:
@@ -148,9 +148,9 @@ create table coll_tab5(name text collate "en-US-x-icu");
create index name_idx on coll_tab5(name collate "C" asc);
```
-This can be useful to speed up queries that involve pattern matching operators such as LIKE because a regular index will be sorted according to collation "en-US-x-icu" and such an index cannot be used by pattern matching operators.
+This can speed up queries that involve pattern matching operators such as LIKE because a regular index will be sorted according to collation "en-US-x-icu" and such an index cannot be used by pattern matching operators.
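+
+For example, with the "C"-collated `name_idx` index above, a prefix match can use the index (a sketch; the pattern literal is illustrative):
+
+```sql
+-- Can use name_idx because it is sorted in "C" collation order.
+SELECT * FROM coll_tab5 WHERE name LIKE 'ab%';
+```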
-## Collation Strength
+## Collation strength
YSQL uses the same rules as in PostgreSQL to determine which collation is used in sorting character strings. An explicitly specified collation has more _strength_ than a referenced column, which has more strength than a text expression without an explicit collation. For example:
@@ -255,7 +255,7 @@ There are a number of YSQL limitations on collation due to the internal implemen
yugabyte=#
```
-* Libc collations are very limited:
+* libc collations are very limited:
```sql
select collname from pg_collation where collprovider = 'c';
diff --git a/docs/content/preview/explore/ysql-language-features/advanced-features/cursor.md b/docs/content/preview/explore/ysql-language-features/advanced-features/cursor.md
index 0d9ff8c4e5d6..e8ccc3d6c653 100644
--- a/docs/content/preview/explore/ysql-language-features/advanced-features/cursor.md
+++ b/docs/content/preview/explore/ysql-language-features/advanced-features/cursor.md
@@ -45,15 +45,15 @@ Operations involving cursors must be performed inside transactions.
You need to declare a cursor before you can open and use it. There are two ways to declare a cursor:
-- As a variable of type `refcursor` placed within the YSQL block's declaration section, as demonstrated by the following syntax:
+- As a variable of type `refcursor` placed in the YSQL block's declaration section, as demonstrated by the following syntax:
- ```
+ ```sql
DECLARE new_cursor refcursor;
```
- As an element bound to a query, based on the following syntax:
- ```
+ ```sql
new_cursor CURSOR [( arguments )] FOR a_query;
```
@@ -197,7 +197,7 @@ For more information and examples, refer to the "Returning Cursors" section in [
You can iterate through the result set of a bound cursor using a certain form of the `FOR` statement, as per the following syntax:
-```
+```sql
FOR rec_var
IN bound_cursor_var [ ( [ argument_name := ] argument_value [, ...] ) ]
LOOP
@@ -221,6 +221,8 @@ CLOSE employees_cursor_2;
## Examples
+{{% explore-setup-single %}}
+
Suppose you work with a database that includes the following table populated with data:
```sql
diff --git a/docs/content/preview/explore/ysql-language-features/advanced-features/partitions.md b/docs/content/preview/explore/ysql-language-features/advanced-features/partitions.md
index 0180467e5f06..1cdb32d6a98c 100644
--- a/docs/content/preview/explore/ysql-language-features/advanced-features/partitions.md
+++ b/docs/content/preview/explore/ysql-language-features/advanced-features/partitions.md
@@ -1,7 +1,7 @@
---
-title: Table Partitioning
-linkTitle: Table Partitioning
-description: Table Partitioning in YSQL
+title: Table partitioning
+linkTitle: Table partitioning
+description: Table partitioning in YSQL
image: /images/section_icons/secure/create-roles.png
menu:
preview:
@@ -15,11 +15,13 @@ type: docs
This section describes how to partition tables in YugabyteDB using YSQL.
+{{% explore-setup-single %}}
+
## Overview
Partitioning is another term for physically dividing large tables in YugabyteDB into smaller, more manageable tables to improve performance. Typically, tables with columns containing timestamps are subject to partitioning because of the historical and predictable nature of their data.
-Since partitioned tables do not appear nor act differently from the original table, applications accessing the database are not always aware of the fact that partitioning has taken place.
+Because partitioned tables neither appear nor act differently from the original table, applications accessing the database are not always aware that partitioning has taken place.
YSQL supports the following types of partitioning:
@@ -27,9 +29,9 @@ YSQL supports the following types of partitioning:
- List partitioning, when a table is partitioned via listing key values to appear in each partition.
- Hash partitioning, when a table is partitioned by specifying a modulus and remainder for each partition.
-For supplementary information on partitioning, see [Row-Level Geo-Partitioning](../../../multi-region-deployments/row-level-geo-partitioning/).
+For supplementary information on partitioning, see [Row-level geo-partitioning](../../../multi-region-deployments/row-level-geo-partitioning/).
-## Declarative Table Partitioning
+## Declarative table partitioning
YSQL allows you to specify exactly how to divide a table. You provide a partitioning method and a partition key consisting of a list of columns or expressions. The divided table is called a partitioned table, and the resulting tables are called partitions. When you insert rows into a partitioned table, they are redirected to a partition depending on the value of the partition key. You can also insert rows directly into a partition, and those rows can be fetched by querying the parent table.
@@ -89,8 +91,7 @@ CREATE TABLE order_changes_2021_01 PARTITION OF order_changes
FOR VALUES FROM ('2021-01-01') TO ('2021-02-01');
```
-Partitioning ranges are inclusive at the lower ( `FROM` ) bound and exclusive at the upper ( `TO` ) bound.
-Each month range in the preceding examples includes the start of the month, but does not include the start of the following month.
+Partitioning ranges are inclusive at the lower ( `FROM` ) bound and exclusive at the upper ( `TO` ) bound. Each month range in the preceding examples includes the start of the month, but does not include the start of the following month.
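+
+For example, with the `order_changes_2021_01` partition defined above, a row whose `change_date` falls on the last day of January is routed to that partition, while a row dated the first day of February is not (a sketch; this assumes any other columns in `order_changes` accept defaults):
+
+```sql
+-- Routed to order_changes_2021_01: the lower (FROM) bound is inclusive.
+INSERT INTO order_changes (change_date) VALUES (DATE '2021-01-31');
+
+-- Not routed to order_changes_2021_01: the upper (TO) bound is exclusive,
+-- so this row goes to the February partition or the default partition.
+INSERT INTO order_changes (change_date) VALUES (DATE '2021-02-01');
+```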
To create a new partition that contains only the rows that don't match the specified partitions, add a default partition as follows:
@@ -106,8 +107,11 @@ yugabyte=# CREATE INDEX ON order_changes (change_date);
This automatically creates indexes on each partition, as demonstrated by the following output:
-```output
+```sql
yugabyte=# \d order_changes_2019_02
+```
+
+```output
Table "public.order_changes_2019_02"
Column | Type | Collation | Nullable | Default
-------------+------+-----------+----------+---------
@@ -117,10 +121,13 @@ yugabyte=# \d order_changes_2019_02
Partition of: order_changes FOR VALUES FROM ('2019-02-01') TO ('2019-03-01')
Indexes:
"order_changes_2019_02_change_date_idx" lsm (change_date HASH)
+```
-...
-
+```sql
yugabyte=# \d order_changes_2021_01
+```
+
+```output
Table "public.order_changes_2021_01"
Column | Type | Collation | Nullable | Default
-------------+------+-----------+----------+---------
@@ -179,7 +186,7 @@ SELECT count(*) FROM order_changes WHERE change_date >= DATE '2020-01-01';
If the `order_changes` table is partitioned by `change_date`, it is likely that only a subset of partitions needs to be queried. When enabled, both partition pruning and constraint exclusion can provide significant performance improvements for such queries by filtering out partitions that do not satisfy the criteria.
-Even though partition pruning and constraint exclusion target the same goal, the underlying mechanisms are different. Specifically, constraint exclusion is applied during query planning, and therefore only works if the `WHERE` clause contains constants or externally supplied parameters. For example, a comparison against a non-immutable function such as `CURRENT_TIMESTAMP` cannot be optimized, since the planner cannot know which child table the function's value might fall into at run time. On the other hand, partition pruning is applied during query execution, and therefore can be more flexible. However, it is only used for `SELECT` queries. Updates can only benefit from constraint exclusion.
+Even though partition pruning and constraint exclusion target the same goal, the underlying mechanisms are different. Specifically, constraint exclusion is applied during query planning, and therefore only works if the `WHERE` clause contains constants or externally supplied parameters. For example, a comparison against a non-immutable function such as `CURRENT_TIMESTAMP` cannot be optimized, because the planner cannot know which child table the function's value might fall into at run time. On the other hand, partition pruning is applied during query execution, and therefore can be more flexible. However, it is only used for `SELECT` queries. Updates can only benefit from constraint exclusion.
Both optimizations are enabled by default, which is the recommended setting for the majority of cases. However, if you know for certain that one of your queries will have to scan all the partitions, you can consider disabling the optimizations for that query:
@@ -191,4 +198,4 @@ SELECT count(*) FROM order_changes WHERE change_date >= DATE '2019-01-01';
To re-enable partition pruning, set the `enable_partition_pruning` setting to `on`.
-For constraint exclusion, the recommended (and default) setting is neither `off` nor `on`, but rather an intermediate value `partition`, which means that it’s applied only to queries that are executed on partitioned tables.
+For constraint exclusion, the recommended (and default) setting is neither `off` nor `on`, but rather an intermediate value `partition`, which means that it's applied only to queries that are executed on partitioned tables.
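+
+For example, you can check the current setting and restore the recommended value in a session as follows (`constraint_exclusion` is the standard PostgreSQL setting that YSQL reuses):
+
+```sql
+-- Display the current setting.
+SHOW constraint_exclusion;
+
+-- Restore the recommended (default) value.
+SET constraint_exclusion = partition;
+```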
diff --git a/docs/content/preview/explore/ysql-language-features/advanced-features/savepoints.md b/docs/content/preview/explore/ysql-language-features/advanced-features/savepoints.md
index c7e21bdd2cfc..4f0c00a668f0 100644
--- a/docs/content/preview/explore/ysql-language-features/advanced-features/savepoints.md
+++ b/docs/content/preview/explore/ysql-language-features/advanced-features/savepoints.md
@@ -13,7 +13,7 @@ aliases:
type: docs
---
-This document provides an overview of YSQL savepoints, and demonstrates how to use them to checkpoint your progress within a transaction. The `SAVEPOINT` command establishes a new savepoint within the current transaction.
+This document provides an overview of YSQL savepoints, and demonstrates how to use them to checkpoint your progress within a transaction. The `SAVEPOINT` command establishes a new savepoint in the current transaction.
## Overview
@@ -31,6 +31,8 @@ The relevant savepoint commands are:
## Example
+{{% explore-setup-single %}}
+
1. Create a sample table.
```plpgsql
diff --git a/docs/content/preview/explore/ysql-language-features/advanced-features/views.md b/docs/content/preview/explore/ysql-language-features/advanced-features/views.md
index 629ac8f9039e..6a1afb4d5533 100644
--- a/docs/content/preview/explore/ysql-language-features/advanced-features/views.md
+++ b/docs/content/preview/explore/ysql-language-features/advanced-features/views.md
@@ -15,11 +15,11 @@ type: docs
This document describes how to create, use, and manage views in YSQL.
-## Overview
-
Regular views allow you to present data in YugabyteDB tables through named queries. In essence, a view is a proxy for a complex query to which you assign a name. In YSQL, views do not store data. However, YSQL also supports materialized views, which _do_ store the results of the query.
-## Creating Views
+{{% explore-setup-single %}}
+
+## Create views
You create views based on the following syntax:
@@ -27,7 +27,7 @@ You create views based on the following syntax:
CREATE VIEW view_name AS query_definition;
```
-*query_definition* can be a simple `SELECT` statement or a `SELECT` statement with joins.
+*query_definition* can be a basic `SELECT` statement or a `SELECT` statement with joins.
Suppose you work with a database that includes the following table populated with data:
@@ -74,7 +74,7 @@ employee_no | name
If you create a view based on multiple tables with joins, using this view in your queries would significantly simplify the process.
-## Modifying Views
+## Modify views
You can modify the query based on which a view was created by combining the `CREATE VIEW` statement with `OR REPLACE`, as demonstrated by the following syntax:
@@ -106,7 +106,7 @@ The preceding query produces the following output:
1222 | Bette Davis | Sales
```
-## Deleting Views
+## Delete views
You can remove (drop) an existing view by using the `DROP VIEW` statement, as demonstrated by the following syntax:
@@ -124,7 +124,7 @@ DROP VIEW IF EXISTS employees_view;
You can also remove more than one view by providing a comma-separated list of view names.
-## Using Updatable Views
+## Use updatable views
Some YSQL views are updatable. The defining query of such views (1) must have only one entry (either a table or another updatable view) in its `FROM` clause; and (2) cannot contain `DISTINCT`, `GROUP BY`, `HAVING`, `EXCEPT`, `INTERSECT`, or `LIMIT` clauses at the top level. In addition, the view's selection list cannot contain window functions, set-returning or aggregate functions.
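+
+For example, assuming the `employees` table with `employee_no`, `name`, and `department` columns used in the preceding examples, the first of the following views is updatable and the second is not (a sketch):
+
+```sql
+-- Updatable: a single table in FROM, no aggregates or restricted clauses,
+-- so UPDATE and DELETE statements against the view are allowed.
+CREATE VIEW sales_employees AS
+  SELECT employee_no, name FROM employees WHERE department = 'Sales';
+
+-- Not updatable: the GROUP BY and aggregate in the defining query prevent
+-- mapping changes back to individual rows of the underlying table.
+CREATE VIEW department_headcount AS
+  SELECT department, count(*) AS headcount FROM employees GROUP BY department;
+```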
@@ -162,7 +162,7 @@ DELETE FROM employees_view
WHERE employee_no = 1227;
```
-## Materialized Views
+## Materialized views
Materialized views are relations that persist the results of a query. They can be created using the `CREATE MATERIALIZED VIEW` command, and their contents can be updated using the `REFRESH MATERIALIZED VIEW` command.
@@ -216,8 +216,10 @@ employee_no | name
1222 | Bette Davis
```
-For detailed documentation on materialized views please refer to the following links:
+## Read more
+
+For detailed documentation on materialized views, refer to the following topics:
-- [`CREATE MATERIALIZED VIEW`](../../../../api/ysql/the-sql-language/statements/ddl_create_matview/)
-- [`REFRESH MATERIALIZED VIEW`](../../../../api/ysql/the-sql-language/statements/ddl_refresh_matview/)
-- [`DROP MATERIALIZED VIEW`](../../../../api/ysql/the-sql-language/statements/ddl_drop_matview/)
+- [CREATE MATERIALIZED VIEW](../../../../api/ysql/the-sql-language/statements/ddl_create_matview/)
+- [REFRESH MATERIALIZED VIEW](../../../../api/ysql/the-sql-language/statements/ddl_refresh_matview/)
+- [DROP MATERIALIZED VIEW](../../../../api/ysql/the-sql-language/statements/ddl_drop_matview/)
diff --git a/docs/content/preview/explore/ysql-language-features/data-manipulation.md b/docs/content/preview/explore/ysql-language-features/data-manipulation.md
index 3bd2ffc4850e..eb82a6171435 100644
--- a/docs/content/preview/explore/ysql-language-features/data-manipulation.md
+++ b/docs/content/preview/explore/ysql-language-features/data-manipulation.md
@@ -13,6 +13,8 @@ type: docs
This section describes how to manipulate data in YugabyteDB using the YSQL `INSERT`, `UPDATE`, and `DELETE` statements.
+{{% explore-setup-single %}}
+
## Insert rows
Initially, database tables are not populated with data. Using YSQL, you can add one or more rows containing complete or partial data by inserting one row at a time.
diff --git a/docs/content/preview/explore/ysql-language-features/data-types.md b/docs/content/preview/explore/ysql-language-features/data-types.md
index 7cbbd0c3d5ee..30c138ebb1dd 100644
--- a/docs/content/preview/explore/ysql-language-features/data-types.md
+++ b/docs/content/preview/explore/ysql-language-features/data-types.md
@@ -15,6 +15,8 @@ This document describes the data types supported in YSQL, from the basic data ty
The [JSONB document data type](../../json-support/jsonb-ysql/) is described in a separate section.
+{{% explore-setup-single %}}
+
## Strings
The following character types are supported:
diff --git a/docs/content/preview/explore/ysql-language-features/databases-schemas-tables.md b/docs/content/preview/explore/ysql-language-features/databases-schemas-tables.md
index 39f66aa66074..40be74bf5e80 100644
--- a/docs/content/preview/explore/ysql-language-features/databases-schemas-tables.md
+++ b/docs/content/preview/explore/ysql-language-features/databases-schemas-tables.md
@@ -13,6 +13,8 @@ type: docs
This section covers basic topics including how to connect to your cluster using the YSQL shell, and use the shell to manage databases, schemas, and tables.
+{{% explore-setup-single %}}
+
## YSQL shell
Use the [ysqlsh shell](../../../admin/ysqlsh/) to interact with a Yugabyte database cluster using the [YSQL API](../../../api/ysql/). Because `ysqlsh` is derived from the PostgreSQL shell `psql` code base, all `psql` commands work as is in `ysqlsh`. Some default settings such as the database default port and the output format of some of the schema commands have been modified for YugabyteDB.
diff --git a/docs/content/preview/explore/ysql-language-features/expressions-operators.md b/docs/content/preview/explore/ysql-language-features/expressions-operators.md
index ad4586dbb3b3..2f1ed1afe4de 100644
--- a/docs/content/preview/explore/ysql-language-features/expressions-operators.md
+++ b/docs/content/preview/explore/ysql-language-features/expressions-operators.md
@@ -13,6 +13,8 @@ type: docs
This document describes how to use boolean, numeric, and date expressions, as well as basic operators. In addition, it provides information on conditional expressions and operators.
+{{% explore-setup-single %}}
+
## Basic operators
A large number of YSQL types have corresponding mathematical operators that are typically used for performing comparisons and mathematical operations. Operators allow you to specify conditions in YSQL statements and create links between conditions.
diff --git a/docs/content/preview/explore/ysql-language-features/going-beyond-sql/follower-reads-ycql.md b/docs/content/preview/explore/ysql-language-features/going-beyond-sql/follower-reads-ycql.md
index 94837e8ace27..90cbffa00bfe 100644
--- a/docs/content/preview/explore/ysql-language-features/going-beyond-sql/follower-reads-ycql.md
+++ b/docs/content/preview/explore/ysql-language-features/going-beyond-sql/follower-reads-ycql.md
@@ -51,15 +51,15 @@ You can specify the maximum staleness of data when reading from tablet followers
In this tutorial, you update a single key-value over and over, and read it from the tablet leader. While that workload is running, you start another workload to read from a follower and verify that you are able to read from a tablet follower.
-### Create universe
+### Create a cluster
-If you have a previously running local universe, destroy it by executing the following command:
+If you have a previously running cluster, destroy it by executing the following command:
```sh
$ ./bin/yb-ctl destroy
```
-Start a new local universe with three nodes and a replication factor (RF) of `3`, as follows:
+Start a new local cluster with three nodes and a replication factor (RF) of `3`, as follows:
```sh
$ ./bin/yb-ctl --rf 3 create
@@ -79,9 +79,9 @@ Download the [YugabyteDB workload generator](https://github.com/yugabyte/yb-samp
$ wget https://github.com/yugabyte/yb-sample-apps/releases/download/1.3.9/yb-sample-apps.jar?raw=true -O yb-sample-apps.jar
```
-By default, the YugabyteDB workload generator runs with strong read consistency, where all data is read from the tablet leader. Note that the `yb-sample-apps.jar` sets the [consistency](../../../../admin/ycqlsh/#consistency) level to ONE by default. You can populate exactly one key with a `10KB` value into the system. Because the replication factor is `3`, this key is replicated to only three of the four nodes in the universe.
+By default, the YugabyteDB workload generator runs with strong read consistency, where all data is read from the tablet leader. Note that the `yb-sample-apps.jar` sets the [consistency](../../../../admin/ycqlsh/#consistency) level to ONE by default. You can populate exactly one key with a `10KB` value into the system. Because the replication factor is `3`, this key is replicated to only three of the four nodes in the cluster.
-Run the `CassandraKeyValue` workload application to constantly update this key-value, as well as perform reads with strong consistency against the local universe, as follows:
+Run the `CassandraKeyValue` workload application to constantly update this key-value, as well as perform reads with strong consistency against the local cluster, as follows:
```sh
$ java -jar ./yb-sample-apps.jar --workload CassandraKeyValue \
@@ -154,6 +154,5 @@ $ ./bin/yb-ctl destroy
## Read more
-- [Read replica deployment](../../../../deploy/multi-dc/read-replica-clusters/).
-
-- [Read replicas](../../../multi-region-deployments/read-replicas-ycql/) in YCQL.
+- [Read replica deployment](../../../../deploy/multi-dc/read-replica-clusters/)
+- [Read replicas](../../../multi-region-deployments/read-replicas-ycql/)
diff --git a/docs/content/preview/explore/ysql-language-features/queries.md b/docs/content/preview/explore/ysql-language-features/queries.md
index 5c8a2a947d18..c337b767b623 100644
--- a/docs/content/preview/explore/ysql-language-features/queries.md
+++ b/docs/content/preview/explore/ysql-language-features/queries.md
@@ -39,6 +39,8 @@ The following `SELECT` statement clauses provide flexibility and allow you to fi
### SELECT examples
+{{% explore-setup-single %}}
+
Suppose you work with a database that includes the following table populated with data:
```sql
diff --git a/docs/content/preview/explore/ysql-language-features/sql-feature-support.md b/docs/content/preview/explore/ysql-language-features/sql-feature-support.md
index 0eda32b67ba4..213b282f387a 100644
--- a/docs/content/preview/explore/ysql-language-features/sql-feature-support.md
+++ b/docs/content/preview/explore/ysql-language-features/sql-feature-support.md
@@ -18,23 +18,23 @@ To understand which standard SQL features we support, refer to the following tab
| Data type | Supported | Documentation |
| :-------- | :-------: | :------------ |
-| `ARRAY` | ✓ | [Array documentation](../../../api/ysql/datatypes/type_array/) |
-| `BINARY` | ✓ | [Binary documentation](../../../api/ysql/datatypes/type_binary/) |
+| `ARRAY` | ✓ | [Array data types](../../../api/ysql/datatypes/type_array/) |
+| `BINARY` | ✓ | [Binary data types](../../../api/ysql/datatypes/type_binary/) |
| `BIT`,`BYTES` | ✓ | |
-| `BOOLEAN` | ✓ | [Boolean documentation](../../../api/ysql/datatypes/type_bool/) |
-| `CHAR`, `VARCHAR`, `TEXT` | ✓ | [Character data types documentation](../../../api/ysql/datatypes/type_character/) |
-| `COLLATE` | ✓ | [Collate documentation](../../ysql-language-features/advanced-features/collations/#root) |
-| `DATE`, `TIME`, `TIMESTAMP`, `INTERVAL` | ✓ | [Date and time data types documentation](../../../api/ysql/datatypes/type_datetime/) |
-| `DEC`, `DECIMAL`, `NUMERIC` | ✓ | [Fixed point numbers documentation](../../../api/ysql/datatypes/type_numeric/#fixed-point-numbers) |
-| `ENUM` | ✓ |[ENUM documentation](../../ysql-language-features/data-types/#enumerations-enum-type) |
-| `FLOAT`, `REAL`, `DOUBLE PRECISION` | ✓ | [Floating point numbers documentation](../../../api/ysql/datatypes/type_numeric/) |
-| `JSON`, `JSONB` | ✓ | [JSON data types documentation](../../../api/ysql/datatypes/type_json/) |
-| `MONEY` | ✓ | [Money data type documentation](../../../api/ysql/datatypes/type_money/) |
-| `SERIAL`, `SMALLSERIAL`, `BIGSERIAL`| ✓ | [Serial documentation](../../../api/ysql/datatypes/type_serial/) |
+| `BOOLEAN` | ✓ | [Boolean data types](../../../api/ysql/datatypes/type_bool/) |
+| `CHAR`, `VARCHAR`, `TEXT` | ✓ | [Character data types](../../../api/ysql/datatypes/type_character/) |
+| `COLLATE` | ✓ | [Collations](../../ysql-language-features/advanced-features/collations/#root) |
+| `DATE`, `TIME`, `TIMESTAMP`, `INTERVAL` | ✓ | [Date and time data types](../../../api/ysql/datatypes/type_datetime/) |
+| `DEC`, `DECIMAL`, `NUMERIC` | ✓ | [Fixed point numbers](../../../api/ysql/datatypes/type_numeric/#fixed-point-numbers) |
+| `ENUM` | ✓ |[Enumerations](../../ysql-language-features/data-types/#enumerations-enum-type) |
+| `FLOAT`, `REAL`, `DOUBLE PRECISION` | ✓ | [Floating-point numbers](../../../api/ysql/datatypes/type_numeric/#floating-point-numbers) |
+| `JSON`, `JSONB` | ✓ | [JSON data types](../../../api/ysql/datatypes/type_json/) |
+| `MONEY` | ✓ | [Money data types](../../../api/ysql/datatypes/type_money/) |
+| `SERIAL`, `SMALLSERIAL`, `BIGSERIAL`| ✓ | [Serial data types](../../../api/ysql/datatypes/type_serial/) |
| `SET`| ✗ | |
-| `SMALLINT, INT, INTEGER, BIGINT` | ✓ | [Integers documentation](../../../api/ysql/datatypes/type_numeric/) |
-| `INT4RANGE`, `INT8RANGE`, `NUMRANGE`, `TSRANGE`, `TSTZRANGE`, `DATERANGE` | ✓ | [Range data types documentation](../../../api/ysql/datatypes/type_range/) |
-| `UUID` | ✓ | [UUID documentation](../../../api/ysql/datatypes/type_uuid/) |
+| `SMALLINT, INT, INTEGER, BIGINT` | ✓ | [Integers](../../../api/ysql/datatypes/type_numeric/#integers) |
+| `INT4RANGE`, `INT8RANGE`, `NUMRANGE`, `TSRANGE`, `TSTZRANGE`, `DATERANGE` | ✓ | [Range data types](../../../api/ysql/datatypes/type_range/) |
+| `UUID` | ✓ | [UUID data type](../../../api/ysql/datatypes/type_uuid/) |
| `XML`| ✗ | |
| `TSVECTOR` | ✓ | |
| UDT(Base, Enumerated, Range, Composite, Array, Domain types) | ✓ | |
@@ -43,19 +43,19 @@ To understand which standard SQL features we support, refer to the following tab
| Operation | Supported | Documentation |
| :-------- | :-------: | :------------ |
-| Altering tables | ✓ | [`ALTER TABLE` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_table/) |
-| Altering databases | ✓ | [`ALTER DATABASE` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_db/) |
+| Altering tables | ✓ | [ALTER TABLE](../../../api/ysql/the-sql-language/statements/ddl_alter_table/) |
+| Altering databases | ✓ | [ALTER DATABASE](../../../api/ysql/the-sql-language/statements/ddl_alter_db/) |
| Altering columns | ✗ | |
| Altering a column's data type | ✗ | |
-| Adding columns | ✓ | [`ADD COLUMN` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#add-column-column-name-data-type-constraint-constraints) |
-| Removing columns | ✓ | [`DROP COLUMN` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#drop-column-column-name-restrict-cascade) |
-| Adding constraints | ✓ | [`ADD CONSTRAINT` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#add-alter-table-constraint-constraints) |
-| Removing constraints | ✓ | [`DROP CONSTRAINT` documentation](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#drop-constraint-constraint-name-restrict-cascade) |
+| Adding columns | ✓ | [ADD COLUMN](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#add-column-column-name-data-type-constraint-constraints) |
+| Removing columns | ✓ | [DROP COLUMN](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#drop-column-column-name-restrict-cascade) |
+| Adding constraints | ✓ | [ADD CONSTRAINT](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#add-alter-table-constraint-constraints) |
+| Removing constraints | ✓ | [DROP CONSTRAINT](../../../api/ysql/the-sql-language/statements/ddl_alter_table/#drop-constraint-constraint-name-restrict-cascade) |
| Altering indexes | ✗ | |
-| Adding indexes | ✓ | [`CREATE INDEX` documentation](../../../api/ysql/the-sql-language/statements/ddl_create_index/) |
+| Adding indexes | ✓ | [CREATE INDEX](../../../api/ysql/the-sql-language/statements/ddl_create_index/) |
| Removing indexes | ✗ | |
| Altering a primary key | ✗ | |
-| Adding user-defined schemas | ✓ | [`CREATE SCHEMA` documentation](../../../api/ysql/the-sql-language/statements/ddl_create_schema/) |
+| Adding user-defined schemas | ✓ | [CREATE SCHEMA](../../../api/ysql/the-sql-language/statements/ddl_create_schema/) |
| Removing user-defined schemas | ✗ | |
| Altering user-defined schemas | ✗ | |
@@ -63,11 +63,11 @@ To understand which standard SQL features we support, refer to the following tab
| Feature | Supported | Documentation |
| :------ | :-------: | :------------ |
-| Check | ✓ | [Check documentation](../../indexes-constraints/other-constraints/#check-constraint) |
-| Unique | ✓ | [Unique documentation](../../indexes-constraints/other-constraints/#unique-constraint) |
-| Not Null | ✓ | [Not Null documentation](../../indexes-constraints/other-constraints/#not-null-constraint) |
-| Primary Key | ✓ | [Primary Key documentation](../../indexes-constraints/primary-key-ysql/) |
-| Foreign Key | ✓ | [Foreign Key documentation](../../indexes-constraints/foreign-key-ysql/) |
+| Check | ✓ | [Check constraint](../../indexes-constraints/other-constraints/#check-constraint) |
+| Unique | ✓ | [Unique constraint](../../indexes-constraints/other-constraints/#unique-constraint) |
+| Not Null | ✓ | [Not Null constraint](../../indexes-constraints/other-constraints/#not-null-constraint) |
+| Primary Key | ✓ | [Primary keys](../../indexes-constraints/primary-key-ysql/) |
+| Foreign Key | ✓ | [Foreign keys](../../indexes-constraints/foreign-key-ysql/) |
| Default Value | ✗ | |
| Deferrable Foreign Key constraints | ✓ | |
| Deferrable Primary Key and Unique constraints | ✗ | |
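+
+For illustration, a pair of hypothetical tables (names invented) can exercise most of the supported constraints:
+
+```sql
+CREATE TABLE departments (
+    dept_id INT PRIMARY KEY
+);
+
+CREATE TABLE employees (
+    emp_id INT PRIMARY KEY,                        -- Primary Key
+    email TEXT UNIQUE NOT NULL,                    -- Unique, Not Null
+    salary NUMERIC CHECK (salary > 0),             -- Check
+    dept_id INT REFERENCES departments (dept_id)   -- Foreign Key
+);
+```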
@@ -77,9 +77,9 @@ To understand which standard SQL features we support, refer to the following tab
| Component | Supported | Documentation |
| :-------- | :-------: | :------------ |
-| Indexes | ✓ | [Indexes documentation](../../indexes-constraints/) |
-| GIN indexes | ✓ | [GIN Indexes documentation](../../indexes-constraints/gin/) |
-| Partial indexes | ✓ | [Partial indexes documentation](../../indexes-constraints/partial-index-ysql/) |
+| Indexes | ✓ | [Indexes and constraints](../../indexes-constraints/) |
+| GIN indexes | ✓ | [GIN indexes](../../indexes-constraints/gin/) |
+| Partial indexes | ✓ | [Partial indexes](../../indexes-constraints/partial-index-ysql/) |
| Expression indexes | ✓ | [Expression indexes](../../indexes-constraints/expression-index-ysql/) |
| Multi-column indexes | ✓ | |
| Covering indexes | ✓ | [Covering indexes](../../indexes-constraints/covering-index-ysql/) |
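+
+As a sketch (assuming a hypothetical `orders` table with `customer_id`, `status`, and `total` columns), partial and covering indexes can be created as follows:
+
+```sql
+-- Partial index: only rows matching the predicate are indexed
+CREATE INDEX idx_orders_active ON orders (customer_id) WHERE status = 'active';
+
+-- Covering index: stores an extra column so matching queries can be
+-- answered from the index alone
+CREATE INDEX idx_orders_cover ON orders (customer_id) INCLUDE (total);
+```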
@@ -94,50 +94,50 @@ To understand which standard SQL features we support, refer to the following tab
| Feature | Supported | Documentation |
| :------ | :-------: | :------------ |
-| Transactions | ✓ | [Transactions documentation](../../transactions/) |
-| `BEGIN` | ✓ | [`BEGIN` documentation](../../../api/ysql/the-sql-language/statements/txn_begin/) |
-| `COMMIT` | ✓ | [`COMMIT` documentation](../../../api/ysql/the-sql-language/statements/txn_commit/) |
-| `ROLLBACK` | ✓ | [`ROLLBACK` documentation](../../../api/ysql/the-sql-language/statements/txn_rollback/) |
-| `SAVEPOINT` | ✓ | [`SAVEPOINT` documentation](../../../api/ysql/the-sql-language/statements/savepoint_create/) |
-| `ROLLBACK TO SAVEPOINT` | ✓ | [`ROLLBACK TO SAVEPOINT` documentation](../../../api/ysql/the-sql-language/statements/savepoint_create/) |
+| Transactions | ✓ | [Transactions](../../transactions/) |
+| `BEGIN` | ✓ | [BEGIN](../../../api/ysql/the-sql-language/statements/txn_begin/) |
+| `COMMIT` | ✓ | [COMMIT](../../../api/ysql/the-sql-language/statements/txn_commit/) |
+| `ROLLBACK` | ✓ | [ROLLBACK](../../../api/ysql/the-sql-language/statements/txn_rollback/) |
+| `SAVEPOINT` | ✓ | [SAVEPOINT](../../../api/ysql/the-sql-language/statements/savepoint_create/) |
+| `ROLLBACK TO SAVEPOINT` | ✓ | [ROLLBACK TO SAVEPOINT](../../../api/ysql/the-sql-language/statements/savepoint_rollback/) |
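+
+A minimal sketch of these statements working together (assuming a hypothetical table `demo (k INT PRIMARY KEY, v INT)`):
+
+```sql
+BEGIN;
+INSERT INTO demo (k, v) VALUES (1, 10);
+SAVEPOINT before_update;
+UPDATE demo SET v = 20 WHERE k = 1;
+ROLLBACK TO SAVEPOINT before_update;   -- undoes the UPDATE but keeps the INSERT
+COMMIT;
+```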
### Roles and Permissions
| Component | Supported | Details |
| :-------- | :-------: | :------ |
-| Users | ✓ | |
-| Roles | ✓ | |
+| Users | ✓ | [Manage users and roles](../../../secure/authorization/create-roles/) |
+| Roles | ✓ | [Manage users and roles](../../../secure/authorization/create-roles/) |
| Object ownership | ✓ | |
-| Privileges | ✓ | |
+| Privileges | ✓ | [Grant privileges](../../../secure/authorization/ysql-grant-permissions/) |
| Default privileges | ✗ | |
### Queries
| Component | Supported | Details |
| :-------- | :-------: | :------ |
-| FROM, WHERE, GROUP BY, HAVING, DISTINCT, LIMIT/OFFSET, WITH queries| ✓ | |
-| EXPLAIN query plans| ✓ | |
-| JOINs (INNER/OUTER, LEFT/RIGHT) | ✓ | |
-| Expressions and Operators| ✓ | |
-| Common Table Expressions (CTE) and Recursive Queries| ✓ | |
-| Upserts (INSERT ... ON CONFLICT DO NOTHING/UPDATE) | ✓ | |
+| FROM, WHERE, GROUP BY, HAVING, DISTINCT, LIMIT/OFFSET, WITH queries | ✓ | [Group data](../queries/#group-data) |
+| EXPLAIN query plans | ✓ | [Analyze queries with EXPLAIN](../../query-1-performance/explain-analyze/) |
+| JOINs (INNER/OUTER, LEFT/RIGHT) | ✓ | [Join columns](../queries/#join-columns) |
+| Expressions and Operators | ✓ | [Expressions and operators](../expressions-operators/) |
+| Common Table Expressions (CTE) and Recursive Queries | ✓ | [Recursive queries and CTEs](../queries/#recursive-queries-and-ctes) |
+| Upserts (INSERT ... ON CONFLICT DO NOTHING/UPDATE) | ✓ | [Upsert](../data-manipulation/#upsert) |
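+
+For example, an upsert that increments a counter on conflict might look like the following (assuming a hypothetical `counters (k TEXT PRIMARY KEY, v INT)` table):
+
+```sql
+INSERT INTO counters (k, v) VALUES ('hits', 1)
+ON CONFLICT (k) DO UPDATE SET v = counters.v + 1;
+```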
### Advanced SQL
| Component | Supported | Details |
| :-------- | :-------: | :------ |
-| Stored procedures | ✓ | |
-| User-defined functions| ✓ | |
-| Cursors | ✓ | |
+| Stored procedures | ✓ | [Stored procedures](../stored-procedures/) |
+| User-defined functions | ✓ | [Functions](../../../api/ysql/user-defined-subprograms-and-anon-blocks/#functions) |
+| Cursors | ✓ | [Cursors](../advanced-features/cursor/) |
| Row-level triggers (BEFORE, AFTER, INSTEAD OF) | ✓ | |
| Statement-level triggers (BEFORE, AFTER, INSTEAD OF) | ✓ | |
| Deferrable triggers | ✗ | |
| Transition tables (REFERENCING clause for triggers) | ✗ | |
-| Sequences | ✓ | |
+| Sequences | ✓ | [Auto-increment column values](../data-manipulation/#auto-increment-column-values) |
| Identity columns | ✓ | |
-| Views | ✓ | |
-| Materialized views | ✓ | |
-| Window functions | ✓ | |
+| Views | ✓ | [Views](../advanced-features/views/) |
+| Materialized views | ✓ | [Materialized views](../advanced-features/views/#materialized-views) |
+| Window functions | ✓ | [Window functions](../../../api/ysql/exprs/window_functions/) |
-| Common table expressions | ✓| |
+| Common table expressions | ✓ | [Recursive queries and CTEs](../queries/#recursive-queries-and-ctes) |
-| Extensions| ✓| |
-| Foreign data wrappers| ✓| |
+| Extensions | ✓ | [PostgreSQL extensions](../pg-extensions/) |
+| Foreign data wrappers | ✓ | [Foreign data wrappers](../advanced-features/foreign-data-wrappers/) |
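+
+As a brief sketch of a window function (assuming a hypothetical `employees` table with a `salary` column):
+
+```sql
+SELECT emp_id, salary,
+       rank() OVER (ORDER BY salary DESC) AS salary_rank
+FROM employees;
+```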
diff --git a/docs/content/preview/explore/ysql-language-features/stored-procedures.md b/docs/content/preview/explore/ysql-language-features/stored-procedures.md
index 8c457b6f4546..32c775107bfe 100644
--- a/docs/content/preview/explore/ysql-language-features/stored-procedures.md
+++ b/docs/content/preview/explore/ysql-language-features/stored-procedures.md
@@ -13,13 +13,9 @@ type: docs
This section describes how to use stored procedures to perform transactions.
-## Overview
-
-Stored procedures, in large part, are just functions that support transactions. PostgreSQL 11 introduced stored procedures, and Yugabyte supports them as well.
-
## Create a stored procedure
-To create a stored procedure in YSQL, use the [`CREATE PROCEDURE`](../../../api/ysql/the-sql-language/statements/ddl_create_procedure/) statement, which has the following syntax:
+Stored procedures, in large part, are just functions that support transactions. To create a stored procedure in YSQL, use the [`CREATE PROCEDURE`](../../../api/ysql/the-sql-language/statements/ddl_create_procedure/) statement, which has the following syntax:
```sql
CREATE [OR REPLACE] PROCEDURE procedure_name(parameter_list)
@@ -74,6 +70,8 @@ If the name of the stored procedure is not unique (for example, if you had two `
## Example workflow
+{{% explore-setup-single %}}
+
In the following example, you create a new table and a stored procedure to perform operations on that table. Finally, you clean up by removing the procedure and the table.
1. Create an `accounts` table with two users, and set the balance of both accounts to $10,000:
diff --git a/docs/content/preview/explore/ysql-language-features/triggers.md b/docs/content/preview/explore/ysql-language-features/triggers.md
index 228799b0bf73..37fe423c04ab 100644
--- a/docs/content/preview/explore/ysql-language-features/triggers.md
+++ b/docs/content/preview/explore/ysql-language-features/triggers.md
@@ -13,6 +13,8 @@ type: docs
This document describes how to use triggers when performing data manipulation and definition.
+{{% explore-setup-single %}}
+
## Overview
-In YSQL, a function invoked automatically when an event associated with a table occurs is called a trigger. The event is typically caused by modification of data during `INSERT`, `UPDATE`, and `DELETE`. The even can also be caused by schema changes.
+In YSQL, a function invoked automatically when an event associated with a table occurs is called a trigger. The event is typically caused by modification of data during `INSERT`, `UPDATE`, and `DELETE`. The event can also be caused by schema changes.
@@ -52,6 +54,8 @@ ON tbl_name [FOR [EACH] { ROW | STATEMENT }]
-The trigger *tr_name* fires before or after *event* which can be set to `INSERT` , `DELETE`, `UPDATE`, or `TRUNCATE`. *tbl_name* represents the table associated with the trigger. If you use the `FOR EACH ROW` clause, the scope of the trigger would be one row. If you use the `FOR EACH STATEMENT` clause, the trigger would be fired for each statement. *trigger_function* represents the procedure to be performed when the trigger is fired.
+The trigger *tr_name* fires before or after *event*, which can be set to `INSERT`, `DELETE`, `UPDATE`, or `TRUNCATE`. *tbl_name* represents the table associated with the trigger. If you use the `FOR EACH ROW` clause, the trigger fires once for each affected row. If you use the `FOR EACH STATEMENT` clause, the trigger fires once per statement. *trigger_function* represents the procedure to be performed when the trigger is fired.
+### Example
+
Suppose you work with a database that includes the following table populated with data:
```sql
@@ -159,6 +163,8 @@ DROP TRIGGER [IF EXISTS] tr_name
-*tr_name* represents the trigger to be deleted if it exists. If you try to delete a non-existing trigger without using the `IF EXISTS` statement, the `DROP TRIGGER` statement results in an error, whereas using `IF EXISTS` to delete a non-existing trigger results in a notice. *tbl_name* represents the table associated with the trigger. The `CASCADE` option allows you to automatically delete objects that depend on the trigger and the `RESTRICT` option (default) allows you to refuse to delete the trigger if it has dependent objects.
+*tr_name* represents the trigger to be deleted if it exists. If you try to delete a non-existent trigger without using the `IF EXISTS` clause, the `DROP TRIGGER` statement results in an error, whereas using `IF EXISTS` to delete a non-existent trigger results in a notice. *tbl_name* represents the table associated with the trigger. The `CASCADE` option automatically deletes objects that depend on the trigger, while the `RESTRICT` option (default) refuses to delete the trigger if it has dependent objects.
+### Example
+
The following example demonstrates how to delete the `dept_changes` trigger used in the examples from [Create triggers](#create-triggers):
```sql
@@ -176,6 +182,8 @@ ALTER TABLE tbl_name
*tbl_name* represents the table whose trigger represented by *tr_name* you are disabling. Using the `ALL` option allows you to disable all triggers associated with the table. A disabled trigger, even though it exists in the database, cannot fire on an event associated with this trigger.
+### Examples
+
The following example shows how to disable a trigger on the `employees` table:
```sql
@@ -217,7 +225,7 @@ ALTER TABLE employees
The main difference between regular triggers and event triggers is that the former capture data manipulation events on a single table, whereas the latter can capture data definition events on a database.
-The `CREATE EVENT TRIGGER` statement has he following syntax:
+The `CREATE EVENT TRIGGER` statement has the following syntax:
```sql
CREATE EVENT TRIGGER tr_name ON event
@@ -227,6 +235,8 @@ CREATE EVENT TRIGGER tr_name ON event
-*tr_name*, which is unique in the database, represents the new trigger. *event* represents the event that triggers a call to the function *function_name* whose return type is `event_trigger` (optional). You can define more than one trigger for the same event, in which case the triggers fire in alphabetical order based on the name of the trigger. If a `WHEN` condition is included in the `CREATE EVENT TRIGGER` statement, then the trigger is fired for specific commands. *filter_variable* needs to be set to`TAG`, as this is the only supported variable, and *filter_value* represents a list of values for *filter_variable*.
+*tr_name*, which is unique in the database, represents the new trigger. *event* represents the event that triggers a call to the function *function_name*, whose return type is `event_trigger` (optional). You can define more than one trigger for the same event, in which case the triggers fire in alphabetical order based on the name of the trigger. If a `WHEN` condition is included in the `CREATE EVENT TRIGGER` statement, then the trigger is fired for specific commands. *filter_variable* needs to be set to `TAG`, as this is the only supported variable, and *filter_value* represents a list of values for *filter_variable*.
+### Example
+
The following example is based on examples from [Create triggers](#create-triggers), except that the `record_dept_changes` function returns an event trigger instead of a regular trigger. The example shows how to create an `sql_drop` trigger for one of the events currently supported by YSQL:
```sql
diff --git a/docs/content/stable/deploy/multi-dc/async-replication.md b/docs/content/stable/deploy/multi-dc/async-replication.md
index 201c91836ce4..7311411e5496 100644
--- a/docs/content/stable/deploy/multi-dc/async-replication.md
+++ b/docs/content/stable/deploy/multi-dc/async-replication.md
@@ -132,7 +132,7 @@ Replication lag is computed at the tablet level as follows:
*hybrid_clock_time* is the hybrid clock timestamp on the source's tablet-server, and *last_read_hybrid_time* is the hybrid clock timestamp of the latest record pulled from the source.
-To obtain information about the overall maximum lag, you should check `/metrics` or `/prometheus-metrics` for `async_replication_sent_lag_micros` or `async_replication_committed_lag_micros` and take the maximum of these values across each source's T-Server. For information on how to set up the node exporter and Prometheus manually, see [Prometheus integration](../../../explore/observability/prometheus-integration/linux/).
+To obtain information about the overall maximum lag, you should check `/metrics` or `/prometheus-metrics` for `async_replication_sent_lag_micros` or `async_replication_committed_lag_micros` and take the maximum of these values across each source's T-Server. For information on how to set up the node exporter and Prometheus manually, see [Prometheus integration](../../../explore/observability/prometheus-integration/macos/).
## Set up replication with TLS
diff --git a/docs/content/stable/explore/observability/prometheus-integration/docker.md b/docs/content/stable/explore/observability/prometheus-integration/docker.md
deleted file mode 100644
index b7e037207e2f..000000000000
--- a/docs/content/stable/explore/observability/prometheus-integration/docker.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- stable:
- identifier: observability-3-docker
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
-
-This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted/) local cluster management utility.
-
-## 1. Create universe
-
-Start a new local universe with replication factor of `3`.
-
-```sh
-$ docker network create -d bridge yb-net
-```
-
-```sh
-$ docker run -d --name yugabyte-node1 \
- --network yb-net \
- -p 127.0.0.1:7000:7000 \
- -p 127.0.0.1:9000:9000 \
- -p 127.0.0.1:5433:5433 \
- -p 127.0.0.1:9042:9042 \
- -p 127.0.0.1:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ docker run -d --name yugabyte-node2 \
- --network yb-net \
- -p 127.0.0.2:7000:7000 \
- -p 127.0.0.2:9000:9000 \
- -p 127.0.0.2:5433:5433 \
- -p 127.0.0.2:9042:9042 \
- -p 127.0.0.2:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node2 --join=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ docker run -d --name yugabyte-node3 \
- --network yb-net \
- -p 127.0.0.3:7000:7000 \
- -p 127.0.0.3:9000:9000 \
- -p 127.0.0.3:5433:5433 \
- -p 127.0.0.3:9042:9042 \
- -p 127.0.0.3:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node3 --join=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-## 2. Run the YugabyteDB workload generator
-
-Pull the [yb-sample-apps](https://github.com/yugabyte/yb-sample-apps) Docker container image. This container image has built-in Java client programs for various workloads including SQL inserts and updates.
-
-```sh
-$ docker pull yugabytedb/yb-sample-apps
-```
-
-Run the `CassandraKeyValue` workload application in a separate shell.
-
-```sh
-$ docker run --name yb-sample-apps --hostname yb-sample-apps --net yb-net yugabytedb/yb-sample-apps \
- --workload CassandraKeyValue \
- --nodes yugabyte-node1:9042 \
- --num_threads_write 1 \
- --num_threads_read 4
-```
-
-## 3. Prepare Prometheus configuration file
-
-Copy the following into a file called `yugabytedb.yml`. Move this file to the `/tmp` directory so that you can bind the file to the Prometheus container later on.
-
-```yaml
-global:
- scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
- evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
- # scrape_timeout is set to the global default (10s).
-
-# YugabyteDB configuration to scrape Prometheus time-series metrics
-scrape_configs:
- - job_name: "yugabytedb"
- metrics_path: /prometheus-metrics
- relabel_configs:
- - target_label: "node_prefix"
- replacement: "cluster-1"
- metric_relabel_configs:
- # Save the name of the metric so we can group_by since we cannot by __name__ directly...
- - source_labels: ["__name__"]
- regex: "(.*)"
- target_label: "saved_name"
- replacement: "$1"
- # The following basically retrofit the handler_latency_* metrics to label format.
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "server_type"
- replacement: "$1"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "service_type"
- replacement: "$2"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "service_method"
- replacement: "$3"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "__name__"
- replacement: "rpc_latency$4"
-
- static_configs:
- - targets: ["yugabyte-node1:7000", "yugabyte-node2:7000", "yugabyte-node3:7000"]
- labels:
- export_type: "master_export"
-
- - targets: ["yugabyte-node1:9000", "yugabyte-node2:9000", "yugabyte-node3:9000"]
- labels:
- export_type: "tserver_export"
-
- - targets: ["yugabyte-node1:12000", "yugabyte-node2:12000", "yugabyte-node3:12000"]
- labels:
- export_type: "cql_export"
-
- - targets: ["yugabyte-node1:13000", "yugabyte-node2:13000", "yugabyte-node3:13000"]
- labels:
- export_type: "ysql_export"
-
- - targets: ["yugabyte-node1:11000", "yugabyte-node2:11000", "yugabyte-node3:11000"]
- labels:
- export_type: "redis_export"
-```
-
-## 4. Start Prometheus server
-
-Start the Prometheus server as below. The `prom/prometheus` container image will be pulled from the Docker registry if not already present on the localhost.
-
-```sh
-$ docker run \
- -p 9090:9090 \
- -v /tmp/yugabytedb.yml:/etc/prometheus/prometheus.yml \
- --net yb-net \
- prom/prometheus
-```
-
-Open the Prometheus UI at and then navigate to the Targets page under Status.
-
-![Prometheus Targets](/images/ce/prom-targets-docker.png)
-
-## 5. Analyze key metrics
-
-On the Prometheus Graph UI, you can now plot the read/write throughput and latency for the `CassandraKeyValue` sample app. As you can see from the [source code](https://github.com/yugabyte/yugabyte-db/blob/master/java/yb-loadtester/src/main/java/com/yugabyte/sample/apps/CassandraKeyValue.java) of the app, it uses only SELECT statements for reads and INSERT statements for writes (aside from the initial CREATE TABLE). This means you can measure throughput and latency by simply using the metrics corresponding to the SELECT and INSERT statements.
-
-Paste the following expressions into the **Expression** box and click **Execute** followed by **Add Graph**.
-
-### Throughput
-
-> Read IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-iops.png)
-
-> Write IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-write-iops.png)
-
-### Latency
-
-> Read Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-latency.png)
-
-> Write Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-write-latency.png)
-
-## 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ docker stop yugabyte-node1 yugabyte-node2 yugabyte-node3
-$ docker rm yugabyte-node1 yugabyte-node2 yugabyte-node3
-$ docker network remove yb-net
-```
-
-## What's next?
-
-Set up [Grafana dashboards](../../grafana-dashboard/grafana/) for better visualization of the metrics being collected by Prometheus.
diff --git a/docs/content/stable/explore/observability/prometheus-integration/kubernetes.md b/docs/content/stable/explore/observability/prometheus-integration/kubernetes.md
deleted file mode 100644
index 06714ee9ebec..000000000000
--- a/docs/content/stable/explore/observability/prometheus-integration/kubernetes.md
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- stable:
- identifier: observability-4-kubernetes
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint.
-
-For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
-
-If you haven't installed YugabyteDB yet, do so first by following the [Quick Start](../../../../quick-start/install/macos/) guide.
-
-## 1. Create universe
-
-If you have a previously running local universe, destroy it using the following.
-
-```sh
-$ kubectl delete -f yugabyte-statefulset.yaml
-```
-
-Start a new local cluster - by default, this will create a three-node universe with a replication factor of `3`.
-
-```sh
-$ kubectl apply -f yugabyte-statefulset.yaml
-```
-
-## Step 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ kubectl delete -f yugabyte-statefulset.yaml
-```
-
-Further, to destroy the persistent volume claims (**you will lose all the data if you do this**), run:
-
-```sh
-kubectl delete pvc -l app=yb-master
-kubectl delete pvc -l app=yb-tserver
-```
diff --git a/docs/content/stable/explore/observability/prometheus-integration/linux.md b/docs/content/stable/explore/observability/prometheus-integration/linux.md
deleted file mode 100644
index 9f28f757a640..000000000000
--- a/docs/content/stable/explore/observability/prometheus-integration/linux.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- stable:
- identifier: observability-2-linux
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
-
-This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted/) cluster management utility.
-
-## Prerequisite
-
-Prometheus is installed on your local machine. If you have not done so already, follow the links below.
-
-- [Download Prometheus](https://prometheus.io/download/)
-- [Get Started with Prometheus](https://prometheus.io/docs/prometheus/latest/getting_started/)
-
-## 1. Create universe
-
-Start a new local three-node universe with a replication factor of `3`.
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-1 \
- --listen=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-2 \
- --listen=127.0.0.2 \
- --join=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-3 \
- --listen=127.0.0.3 \
- --join=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-## 2. Run the YugabyteDB workload generator
-
-Download the [YugabyteDB workload generator](https://github.com/yugabyte/yb-sample-apps) JAR file (`yb-sample-apps.jar`) by running the following command.
-
-```sh
-$ wget https://github.com/yugabyte/yb-sample-apps/releases/download/1.3.9/yb-sample-apps.jar?raw=true -O yb-sample-apps.jar
-```
-
-Run the `CassandraKeyValue` workload application in a separate shell.
-
-```sh
-$ java -jar ./yb-sample-apps.jar \
- --workload CassandraKeyValue \
- --nodes 127.0.0.1:9042 \
- --num_threads_read 1 \
- --num_threads_write 1
-```
-
-## 3. Prepare Prometheus configuration file
-
-Copy the following into a file called `yugabytedb.yml`.
-
-```yaml
-global:
- scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
- evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
- # scrape_timeout is set to the global default (10s).
-
-# YugabyteDB configuration to scrape Prometheus time-series metrics
-scrape_configs:
- - job_name: "yugabytedb"
- metrics_path: /prometheus-metrics
- relabel_configs:
- - target_label: "node_prefix"
- replacement: "cluster-1"
- metric_relabel_configs:
- # Save the name of the metric so we can group_by since we cannot by __name__ directly...
- - source_labels: ["__name__"]
- regex: "(.*)"
- target_label: "saved_name"
- replacement: "$1"
- # The following basically retrofit the handler_latency_* metrics to label format.
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "server_type"
- replacement: "$1"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "service_type"
- replacement: "$2"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "service_method"
- replacement: "$3"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "__name__"
- replacement: "rpc_latency$4"
-
- static_configs:
- - targets: ["127.0.0.1:7000", "127.0.0.2:7000", "127.0.0.3:7000"]
- labels:
- export_type: "master_export"
-
- - targets: ["127.0.0.1:9000", "127.0.0.2:9000", "127.0.0.3:9000"]
- labels:
- export_type: "tserver_export"
-
- - targets: ["127.0.0.1:12000", "127.0.0.2:12000", "127.0.0.3:12000"]
- labels:
- export_type: "cql_export"
-
- - targets: ["127.0.0.1:13000", "127.0.0.2:13000", "127.0.0.3:13000"]
- labels:
- export_type: "ysql_export"
-
- - targets: ["127.0.0.1:11000", "127.0.0.2:11000", "127.0.0.3:11000"]
- labels:
- export_type: "redis_export"
-```
-
-## 4. Start Prometheus server
-
-Go to the directory where Prometheus is installed and start the Prometheus server as below.
-
-```sh
-$ ./prometheus --config.file=yugabytedb.yml
-```
-
-Open the Prometheus UI at and then navigate to the Targets page under Status.
-
-![Prometheus Targets](/images/ce/prom-targets.png)
-
-## 5. Analyze key metrics
-
-On the Prometheus Graph UI, you can now plot the read/write throughput and latency for the `CassandraKeyValue` sample app. As you can see from the [source code](https://github.com/yugabyte/yugabyte-db/blob/master/java/yb-loadtester/src/main/java/com/yugabyte/sample/apps/CassandraKeyValue.java) of the app, it uses only SELECT statements for reads and INSERT statements for writes (aside from the initial CREATE TABLE). This means you can measure throughput and latency by simply using the metrics corresponding to the SELECT and INSERT statements.
-
-Paste the following expressions into the **Expression** box and click **Execute** followed by **Add Graph**.
-
-### Throughput
-
-> Read IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-iops.png)
-
-> Write IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-write-iops.png)
-
-### Latency
-
-> Read Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-latency.png)
-
-> Write Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-write-latency.png)
-
-## 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-1
-```
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-2
-```
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-3
-```
-
-## What's next?
-
-Set up [Grafana dashboards](../../grafana-dashboard/grafana/) for better visualization of the metrics being collected by Prometheus.
diff --git a/docs/content/stable/explore/observability/prometheus-integration/macos.md b/docs/content/stable/explore/observability/prometheus-integration/macos.md
index 5a8b46b4ef68..f53d404c2304 100644
--- a/docs/content/stable/explore/observability/prometheus-integration/macos.md
+++ b/docs/content/stable/explore/observability/prometheus-integration/macos.md
@@ -11,38 +11,6 @@ menu:
type: docs
---
-
-
You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted/) cluster management utility.
diff --git a/docs/content/v2.12/explore/observability/prometheus-integration/docker.md b/docs/content/v2.12/explore/observability/prometheus-integration/docker.md
deleted file mode 100644
index 639ee21bde78..000000000000
--- a/docs/content/v2.12/explore/observability/prometheus-integration/docker.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- v2.12:
- identifier: observability-3-docker
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
-
-This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted) local cluster management utility.
-
-## 1. Create universe
-
-Start a new local universe with replication factor of `3`.
-
-```sh
-$ docker network create -d bridge yb-net
-```
-
-```sh
-$ docker run -d --name yugabyte-node1 \
- --network yb-net \
- -p 127.0.0.1:7000:7000 \
- -p 127.0.0.1:9000:9000 \
- -p 127.0.0.1:5433:5433 \
- -p 127.0.0.1:9042:9042 \
- -p 127.0.0.1:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ docker run -d --name yugabyte-node2 \
- --network yb-net \
- -p 127.0.0.2:7000:7000 \
- -p 127.0.0.2:9000:9000 \
- -p 127.0.0.2:5433:5433 \
- -p 127.0.0.2:9042:9042 \
- -p 127.0.0.2:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node2 --join=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ docker run -d --name yugabyte-node3 \
- --network yb-net \
- -p 127.0.0.3:7000:7000 \
- -p 127.0.0.3:9000:9000 \
- -p 127.0.0.3:5433:5433 \
- -p 127.0.0.3:9042:9042 \
- -p 127.0.0.3:6379:6379 \
- yugabytedb/yugabyte:latest bin/yugabyted start --daemon=false --listen=yugabyte-node3 --join=yugabyte-node1 --tserver_flags="start_redis_proxy=true"
-```
-
-## 2. Run the YugabyteDB workload generator
-
-Pull the [yb-sample-apps](https://github.com/yugabyte/yb-sample-apps) Docker container image. This container image has built-in Java client programs for various workloads including SQL inserts and updates.
-
-```sh
-$ docker pull yugabytedb/yb-sample-apps
-```
-
-Run the `CassandraKeyValue` workload application in a separate shell.
-
-```sh
-$ docker run --name yb-sample-apps --hostname yb-sample-apps --net yb-net yugabytedb/yb-sample-apps \
- --workload CassandraKeyValue \
- --nodes yugabyte-node1:9042 \
- --num_threads_write 1 \
- --num_threads_read 4
-```
-
-## 3. Prepare Prometheus configuration file
-
-Copy the following into a file called `yugabytedb.yml`, and move it to the `/tmp` directory so that you can bind mount it into the Prometheus container later.
-
-```yaml
-global:
- scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
- evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
- # scrape_timeout is set to the global default (10s).
-
-# YugabyteDB configuration to scrape Prometheus time-series metrics
-scrape_configs:
- - job_name: "yugabytedb"
- metrics_path: /prometheus-metrics
- relabel_configs:
- - target_label: "node_prefix"
- replacement: "cluster-1"
- metric_relabel_configs:
- # Save the metric name in a label so it can be used in group_by, since grouping by __name__ directly is not supported.
- - source_labels: ["__name__"]
- regex: "(.*)"
- target_label: "saved_name"
- replacement: "$1"
- # The following rules rewrite the handler_latency_* metric names into the label format used below.
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "server_type"
- replacement: "$1"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "service_type"
- replacement: "$2"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "service_method"
- replacement: "$3"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "__name__"
- replacement: "rpc_latency$4"
-
- static_configs:
- - targets: ["yugabyte-node1:7000", "yugabyte-node2:7000", "yugabyte-node3:7000"]
- labels:
- export_type: "master_export"
-
- - targets: ["yugabyte-node1:9000", "yugabyte-node2:9000", "yugabyte-node3:9000"]
- labels:
- export_type: "tserver_export"
-
- - targets: ["yugabyte-node1:12000", "yugabyte-node2:12000", "yugabyte-node3:12000"]
- labels:
- export_type: "cql_export"
-
- - targets: ["yugabyte-node1:13000", "yugabyte-node2:13000", "yugabyte-node3:13000"]
- labels:
- export_type: "ysql_export"
-
- - targets: ["yugabyte-node1:11000", "yugabyte-node2:11000", "yugabyte-node3:11000"]
- labels:
- export_type: "redis_export"
-```
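The `metric_relabel_configs` rules above parse the `handler_latency_*` metric names into labels and rename the metric itself to `rpc_latency*`. A hedged Python approximation of their combined effect on one metric name (Prometheus applies each rule independently and anchors its regexes at both ends; this single-function sketch is a simplification):

```python
import re

# Regex from the metric_relabel_configs above.
PATTERN = re.compile(r"handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?")

def relabel(metric_name):
    """Approximate the combined effect of the relabel rules on one metric name."""
    m = PATTERN.fullmatch(metric_name)  # Prometheus regexes are fully anchored
    if m is None:
        return None  # non-matching metrics keep their original name
    server_type, service_type, service_method, suffix = m.groups()
    return {
        "server_type": server_type,
        "service_type": service_type,
        "service_method": service_method,
        "__name__": "rpc_latency" + (suffix or ""),
    }

labels = relabel("handler_latency_yb_cqlserver_SQLProcessor_SelectStmt_count")
print(labels["__name__"])  # rpc_latency_count
```

This is what makes expressions such as `rpc_latency_count{server_type="yb_cqlserver", service_method="SelectStmt"}` possible in the queries that follow.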
-
-## 4. Start Prometheus server
-
-Start the Prometheus server as follows. The `prom/prometheus` container image is pulled from the Docker registry if it is not already present locally.
-
-```sh
-$ docker run \
- -p 9090:9090 \
- -v /tmp/yugabytedb.yml:/etc/prometheus/prometheus.yml \
- --net yb-net \
- prom/prometheus
-```
-
-Open the Prometheus UI at http://localhost:9090 and navigate to the Targets page under Status.
-
-![Prometheus Targets](/images/ce/prom-targets-docker.png)
-
-## 5. Analyze key metrics
-
-On the Prometheus Graph UI, you can now plot the read/write throughput and latency for the `CassandraKeyValue` sample app. As you can see from the [source code](https://github.com/yugabyte/yugabyte-db/blob/master/java/yb-loadtester/src/main/java/com/yugabyte/sample/apps/CassandraKeyValue.java) of the app, it uses only SELECT statements for reads and INSERT statements for writes (aside from the initial CREATE TABLE). This means you can measure throughput and latency by using the metrics corresponding to the SELECT and INSERT statements.
-
-Paste the following expressions into the **Expression** box and click **Execute** followed by **Add Graph**.
-
-### Throughput
-
-> Read IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-iops.png)
-
-> Write IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Write IOPS](/images/ce/prom-write-iops.png)
-
-### Latency
-
-> Read Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read Latency](/images/ce/prom-read-latency.png)
-
-> Write Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Write Latency](/images/ce/prom-write-latency.png)
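Both latency expressions divide the rate of `rpc_latency_sum` (total microseconds spent serving requests) by the rate of `rpc_latency_count` (number of requests), which yields the average latency per request over the interval. A minimal sketch of the arithmetic, with made-up scrape values:

```python
# Two scrapes of the latency counters, 5 seconds apart (made-up values).
t0 = {"rpc_latency_sum": 1_000_000, "rpc_latency_count": 500}
t1 = {"rpc_latency_sum": 1_600_000, "rpc_latency_count": 800}
interval = 5.0

rate_sum = (t1["rpc_latency_sum"] - t0["rpc_latency_sum"]) / interval        # µs per second
rate_count = (t1["rpc_latency_count"] - t0["rpc_latency_count"]) / interval  # requests per second

avg_latency_us = rate_sum / rate_count
print(avg_latency_us)  # 2000.0 µs per request
```

Note that because both numerator and denominator are divided by the same interval, the result is independent of the interval length and depends only on the deltas between scrapes.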
-
-## 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ docker stop yugabyte-node1 yugabyte-node2 yugabyte-node3
-$ docker rm yugabyte-node1 yugabyte-node2 yugabyte-node3
-$ docker network remove yb-net
-```
-
-## What's next?
-
-Set up [Grafana dashboards](../../grafana-dashboard/grafana/) for better visualization of the metrics being collected by Prometheus.
diff --git a/docs/content/v2.12/explore/observability/prometheus-integration/kubernetes.md b/docs/content/v2.12/explore/observability/prometheus-integration/kubernetes.md
deleted file mode 100644
index 18e3c454e4cf..000000000000
--- a/docs/content/v2.12/explore/observability/prometheus-integration/kubernetes.md
+++ /dev/null
@@ -1,79 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- v2.12:
- identifier: observability-4-kubernetes
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint.
-
-For details on the metrics targets for YugabyteDB, see [Monitoring with Prometheus](../../../reference/configuration/default-ports/#monitoring-with-prometheus).
-
-If you haven't installed YugabyteDB yet, do so first by following the [Quick Start](../../../quick-start/install/) guide.
-
-## 1. Create universe
-
-If you have a previously running local universe, destroy it using the following command:
-
-```sh
-$ kubectl delete -f yugabyte-statefulset.yaml
-```
-
-Start a new local cluster. By default, this creates a three-node universe with a replication factor of `3`.
-
-```sh
-$ kubectl apply -f yugabyte-statefulset.yaml
-```
-
-## 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ kubectl delete -f yugabyte-statefulset.yaml
-```
-
-Further, to destroy the persistent volume claims (**you will lose all the data if you do this**), run:
-
-```sh
-kubectl delete pvc -l app=yb-master
-kubectl delete pvc -l app=yb-tserver
-```
diff --git a/docs/content/v2.12/explore/observability/prometheus-integration/linux.md b/docs/content/v2.12/explore/observability/prometheus-integration/linux.md
deleted file mode 100644
index 4d036a9bcb1f..000000000000
--- a/docs/content/v2.12/explore/observability/prometheus-integration/linux.md
+++ /dev/null
@@ -1,242 +0,0 @@
----
-title: Prometheus integration
-headerTitle: Prometheus integration
-linkTitle: Prometheus integration
-description: Learn about exporting YugabyteDB metrics and monitoring the cluster with Prometheus.
-menu:
- v2.12:
- identifier: observability-2-linux
- parent: explore-observability
- weight: 235
-type: docs
----
-
-
-
-You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
-
-This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted) cluster management utility.
-
-## Prerequisite
-
-Prometheus must be installed on your local machine. If you have not yet installed it, follow these links:
-
-- [Download Prometheus](https://prometheus.io/download/)
-- [Get Started with Prometheus](https://prometheus.io/docs/prometheus/latest/getting_started/)
-
-## 1. Create universe
-
-Start a new local three-node universe with a replication factor of `3`.
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-1 \
- --listen=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-2 \
- --listen=127.0.0.2 \
- --join=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-```sh
-$ ./bin/yugabyted start \
- --base_dir=node-3 \
- --listen=127.0.0.3 \
- --join=127.0.0.1 \
- --tserver_flags="start_redis_proxy=true"
-```
-
-## 2. Run the YugabyteDB workload generator
-
-Download the [YugabyteDB workload generator](https://github.com/yugabyte/yb-sample-apps) JAR file (`yb-sample-apps.jar`) by running the following command.
-
-```sh
-$ wget https://github.com/yugabyte/yb-sample-apps/releases/download/1.3.9/yb-sample-apps.jar?raw=true -O yb-sample-apps.jar
-```
-
-Run the `CassandraKeyValue` workload application in a separate shell.
-
-```sh
-$ java -jar ./yb-sample-apps.jar \
- --workload CassandraKeyValue \
- --nodes 127.0.0.1:9042 \
- --num_threads_read 1 \
- --num_threads_write 1
-```
-
-## 3. Prepare Prometheus configuration file
-
-Copy the following into a file called `yugabytedb.yml`.
-
-```yaml
-global:
- scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
- evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
- # scrape_timeout is set to the global default (10s).
-
-# YugabyteDB configuration to scrape Prometheus time-series metrics
-scrape_configs:
- - job_name: "yugabytedb"
- metrics_path: /prometheus-metrics
- relabel_configs:
- - target_label: "node_prefix"
- replacement: "cluster-1"
- metric_relabel_configs:
- # Save the metric name in a label so it can be used in group_by, since grouping by __name__ directly is not supported.
- - source_labels: ["__name__"]
- regex: "(.*)"
- target_label: "saved_name"
- replacement: "$1"
- # The following rules rewrite the handler_latency_* metric names into the label format used below.
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "server_type"
- replacement: "$1"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(.*)"
- target_label: "service_type"
- replacement: "$2"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "service_method"
- replacement: "$3"
- - source_labels: ["__name__"]
- regex: "handler_latency_(yb_[^_]*)_([^_]*)_([^_]*)(_sum|_count)?"
- target_label: "__name__"
- replacement: "rpc_latency$4"
-
- static_configs:
- - targets: ["127.0.0.1:7000", "127.0.0.2:7000", "127.0.0.3:7000"]
- labels:
- export_type: "master_export"
-
- - targets: ["127.0.0.1:9000", "127.0.0.2:9000", "127.0.0.3:9000"]
- labels:
- export_type: "tserver_export"
-
- - targets: ["127.0.0.1:12000", "127.0.0.2:12000", "127.0.0.3:12000"]
- labels:
- export_type: "cql_export"
-
- - targets: ["127.0.0.1:13000", "127.0.0.2:13000", "127.0.0.3:13000"]
- labels:
- export_type: "ysql_export"
-
- - targets: ["127.0.0.1:11000", "127.0.0.2:11000", "127.0.0.3:11000"]
- labels:
- export_type: "redis_export"
-```
-
-## 4. Start Prometheus server
-
-Go to the directory where Prometheus is installed and start the Prometheus server:
-
-```sh
-$ ./prometheus --config.file=yugabytedb.yml
-```
-
-Open the Prometheus UI at http://localhost:9090 and navigate to the Targets page under Status.
-
-![Prometheus Targets](/images/ce/prom-targets.png)
-
-## 5. Analyze key metrics
-
-On the Prometheus Graph UI, you can now plot the read/write throughput and latency for the `CassandraKeyValue` sample app. As you can see from the [source code](https://github.com/yugabyte/yugabyte-db/blob/master/java/yb-loadtester/src/main/java/com/yugabyte/sample/apps/CassandraKeyValue.java) of the app, it uses only SELECT statements for reads and INSERT statements for writes (aside from the initial CREATE TABLE). This means you can measure throughput and latency by using the metrics corresponding to the SELECT and INSERT statements.
-
-Paste the following expressions into the **Expression** box and click **Execute** followed by **Add Graph**.
-
-### Throughput
-
-> Read IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read IOPS](/images/ce/prom-read-iops.png)
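The outer `sum()` aggregates the per-node rates reported by each YCQL endpoint into one cluster-wide throughput figure. Conceptually (the node addresses and rates below are illustrative only):

```python
# Per-node read rates as irate() would report them (made-up values).
per_node_read_rate = {
    "127.0.0.1:12000": 1200.0,
    "127.0.0.2:12000": 1180.0,
    "127.0.0.3:12000": 1220.0,
}

cluster_read_iops = sum(per_node_read_rate.values())
print(cluster_read_iops)  # 3600.0
```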
-
-> Write IOPS
-
-```sh
-sum(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Write IOPS](/images/ce/prom-write-iops.png)
-
-### Latency
-
-> Read Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="SelectStmt"}[1m]))
-```
-
-![Prometheus Read Latency](/images/ce/prom-read-latency.png)
-
-> Write Latency (in microseconds)
-
-```sh
-avg(irate(rpc_latency_sum{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m])) /
-avg(irate(rpc_latency_count{server_type="yb_cqlserver", service_type="SQLProcessor", service_method="InsertStmt"}[1m]))
-```
-
-![Prometheus Write Latency](/images/ce/prom-write-latency.png)
-
-## 6. Clean up (optional)
-
-Optionally, you can shut down the local cluster created in Step 1.
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-1
-```
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-2
-```
-
-```sh
-$ ./bin/yugabyted destroy \
- --base_dir=node-3
-```
-
-## What's next?
-
-Set up [Grafana dashboards](../../grafana-dashboard/grafana/) for better visualization of the metrics being collected by Prometheus.
diff --git a/docs/content/v2.12/explore/observability/prometheus-integration/macos.md b/docs/content/v2.12/explore/observability/prometheus-integration/macos.md
index 6213a2ddf889..914a18d2955f 100644
--- a/docs/content/v2.12/explore/observability/prometheus-integration/macos.md
+++ b/docs/content/v2.12/explore/observability/prometheus-integration/macos.md
@@ -11,38 +11,6 @@ menu:
type: docs
---
-
-
You can monitor your local YugabyteDB cluster with a local instance of [Prometheus](https://prometheus.io/), a popular standard for time-series monitoring of cloud native infrastructure. YugabyteDB services and APIs expose metrics in the Prometheus format at the `/prometheus-metrics` endpoint. For details on the metrics targets for YugabyteDB, see [Prometheus monitoring](../../../../reference/configuration/default-ports/#prometheus-monitoring).
This tutorial uses the [yugabyted](../../../../reference/configuration/yugabyted) cluster management utility.
diff --git a/docs/content/v2.6/explore/observability/prometheus-integration/docker.md b/docs/content/v2.6/explore/observability/prometheus-integration/docker.md
index 3fd1366494f6..e332942d46ac 100644
--- a/docs/content/v2.6/explore/observability/prometheus-integration/docker.md
+++ b/docs/content/v2.6/explore/observability/prometheus-integration/docker.md
@@ -14,21 +14,21 @@ type: docs
-
-
+
macOS
-
-
+
Linux
-
-
+
Docker
diff --git a/docs/content/v2.6/explore/observability/prometheus-integration/kubernetes.md b/docs/content/v2.6/explore/observability/prometheus-integration/kubernetes.md
index 2db2cf6b6445..d72c3c0eabe3 100644
--- a/docs/content/v2.6/explore/observability/prometheus-integration/kubernetes.md
+++ b/docs/content/v2.6/explore/observability/prometheus-integration/kubernetes.md
@@ -14,21 +14,21 @@ type: docs
-
-
+
macOS
-
-
+
Linux
-
-
+
Docker
diff --git a/docs/content/v2.6/explore/observability/prometheus-integration/linux.md b/docs/content/v2.6/explore/observability/prometheus-integration/linux.md
index b4b3c7654421..52eeada09700 100644
--- a/docs/content/v2.6/explore/observability/prometheus-integration/linux.md
+++ b/docs/content/v2.6/explore/observability/prometheus-integration/linux.md
@@ -14,21 +14,21 @@ type: docs
-
-
+
macOS
-
-
+
Linux
-
-
+
Docker
diff --git a/docs/content/v2.6/explore/observability/prometheus-integration/macos.md b/docs/content/v2.6/explore/observability/prometheus-integration/macos.md
index 85d671a6ae88..1f40e52ad8e0 100644
--- a/docs/content/v2.6/explore/observability/prometheus-integration/macos.md
+++ b/docs/content/v2.6/explore/observability/prometheus-integration/macos.md
@@ -14,21 +14,21 @@ type: docs
-
-
+
macOS
-
-
+
Linux
-
-
+
Docker
diff --git a/docs/content/v2.8/explore/observability/prometheus-integration/kubernetes.md b/docs/content/v2.8/explore/observability/prometheus-integration/kubernetes.md
index 1871b73483ff..1d76827b4123 100644
--- a/docs/content/v2.8/explore/observability/prometheus-integration/kubernetes.md
+++ b/docs/content/v2.8/explore/observability/prometheus-integration/kubernetes.md
@@ -14,21 +14,21 @@ type: docs
-
-
+
macOS
-
-
+
Linux
-
-
+
Docker
diff --git a/docs/layouts/shortcodes/explore-setup-single.html b/docs/layouts/shortcodes/explore-setup-single.html
new file mode 100644
index 000000000000..96dfce8a4b33
--- /dev/null
+++ b/docs/layouts/shortcodes/explore-setup-single.html
@@ -0,0 +1,4 @@
+
\ No newline at end of file
diff --git a/docs/netlify.toml b/docs/netlify.toml
index 2e6731332399..a605a300cbfb 100644
--- a/docs/netlify.toml
+++ b/docs/netlify.toml
@@ -459,6 +459,12 @@
from = "/preview/explore/fault-tolerance/*"
to = "/preview/explore/fault-tolerance/macos/"
+# Prometheus integration redirects
+
+[[redirects]]
+ from = "/preview/explore/observability/prometheus-integration/*"
+ to = "/preview/explore/observability/prometheus-integration/macos/"
+
# Hugo resource caching plugin configuration
# https://github.com/cdeleeuwe/netlify-plugin-hugo-cache-resources#readme
diff --git a/docs/static/images/section_icons/architecture/Reusing-PostgreSQL-query-layer.png b/docs/static/images/section_icons/architecture/Reusing-PostgreSQL-query-layer.png
index 068000533b05..fd6519586b6c 100644
Binary files a/docs/static/images/section_icons/architecture/Reusing-PostgreSQL-query-layer.png and b/docs/static/images/section_icons/architecture/Reusing-PostgreSQL-query-layer.png differ