Merge branch 'master' into docs/dapper-orm
aishwarya24 authored Nov 14, 2022
2 parents 1b3d271 + febc5da commit 81c64d7
Showing 116 changed files with 1,333 additions and 3,032 deletions.
Original file line number Diff line number Diff line change
@@ -76,7 +76,7 @@ The recheck steps are as follows:
1. On commit of any conflicting transaction, traverse the chain of updates as described above and re-evaluate the latest version of the row for any conflict. If there is no conflict, `insert` the original row. Else, perform the `do update` part on the latest version of the row.
1. ON CONFLICT DO NOTHING: do nothing if a conflict occurs.
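As a runnable illustration of the two clauses, the following sketch uses SQLite's UPSERT support purely because it runs locally with no server; the PostgreSQL and YSQL statements are analogous:

```python
import sqlite3

# In-memory database purely for illustration; PostgreSQL/YSQL syntax is analogous.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (k INTEGER PRIMARY KEY, v INTEGER)")
conn.execute("INSERT INTO test VALUES (1, 10)")

# ON CONFLICT DO UPDATE: the conflicting row is re-evaluated and updated.
conn.execute(
    "INSERT INTO test VALUES (1, 20) "
    "ON CONFLICT (k) DO UPDATE SET v = excluded.v"
)

# ON CONFLICT DO NOTHING: the conflicting insert is silently dropped.
conn.execute("INSERT INTO test VALUES (1, 99) ON CONFLICT (k) DO NOTHING")

print(conn.execute("SELECT k, v FROM test").fetchall())  # [(1, 20)]
```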

Note that the above methodology in PostgreSQL can lead to two different user visible semantics, one which is the common case and another which is a degenerate situation which can never be seen in practice, but is nevertheless possible and still upholds the semantics of Read Commited isolation. The common case is as follows:
Note that the preceding methodology in PostgreSQL can lead to two different user visible semantics. One is the common case, and the other is a degenerate situation that can never be seen in practice, but is nevertheless possible and still upholds the semantics of Read Committed isolation. The common case is as follows:

```sql
create table test (k int primary key, v int);
@@ -17,10 +17,11 @@ There are a number of display widgets and shortcodes available. All the shortcod
## Admonition boxes

Use the note, tip, and warning shortcodes to create admonition boxes.

### tip

{{< tip title="Tip" >}}
A tip box gives a hint or other useful but optional piece of information.
A tip box gives a hint or other helpful but optional piece of information.
{{< /tip >}}

#### tip source
@@ -63,7 +64,7 @@ An inline section switcher lets you switch between content sections **without a

![Inline section switcher](https://raw.githubusercontent.com/yugabyte/docs/master/contributing/inline-section-switcher.png)

The corresponding code for this widget is shown below. Note that the actual content must be placed in a file with the `.md` extension inside a subdirectory whose name is easy to associate with the switcher title.
The corresponding code for this widget is as follows. Note that the actual content must be placed in a file with the `.md` extension inside a subdirectory; name the subdirectory such that it can be associated with the switcher title.

```html
<ul class="nav nav-tabs-alt nav-tabs-yb">
5 changes: 0 additions & 5 deletions docs/content/preview/deploy/_index.md
@@ -29,7 +29,6 @@ type: indexpage
<a class="section-link icon-offset" href="manual-deployment/">
<div class="head">
<img class="icon" src="/images/section_icons/deploy/manual-deployment.png" aria-hidden="true" />
<div class="articles">5 articles</div>
<div class="title">Manual deployment</div>
</div>
<div class="body">
@@ -42,7 +41,6 @@ type: indexpage
<a class="section-link icon-offset" href="public-clouds/">
<div class="head">
<img class="icon" src="/images/section_icons/deploy/public-clouds.png" aria-hidden="true" />
<div class="articles">3 articles</div>
<div class="title">Public clouds</div>
</div>
<div class="body">
@@ -55,7 +53,6 @@ type: indexpage
<a class="section-link icon-offset" href="kubernetes/">
<div class="head">
<img class="icon" src="/images/section_icons/deploy/kubernetes.png" aria-hidden="true" />
<div class="articles">5 chapters</div>
<div class="title">Kubernetes</div>
</div>
<div class="body">
@@ -83,7 +80,6 @@ type: indexpage
<a class="section-link icon-offset" href="multi-dc/">
<div class="head">
<img class="icon" src="/images/section_icons/explore/planet_scale.png" aria-hidden="true" />
<div class="articles">4 chapters</div>
<div class="title">Multi-DC deployments</div>
</div>
<div class="body">
@@ -92,5 +88,4 @@ type: indexpage
</a>
</div>


</div>
2 changes: 1 addition & 1 deletion docs/content/preview/deploy/kubernetes/_index.md
@@ -3,7 +3,7 @@ title: Deploy YugabyteDB clusters on Kubernetes
headerTitle: Deploy on Kubernetes
linkTitle: Kubernetes
description: Deploy YugabyteDB clusters natively on Kubernetes with various providers
headcontent: This section describes how to deploy YugabyteDB natively on Kubernetes.
headcontent: Deploy YugabyteDB natively on Kubernetes
image: /images/section_icons/deploy/kubernetes.png
aliases:
- /deploy/kubernetes/
6 changes: 3 additions & 3 deletions docs/content/preview/deploy/manual-deployment/_index.md
@@ -3,7 +3,7 @@ title: Manual deployment of YugabyteDB clusters
headerTitle: Manual deployment
linkTitle: Manual deployment
description: Deploy a YugabyteDB cluster in a single region or data center with a multi-zone/multi-rack configuration.
headcontent: Instructions for manually deploying YugabyteDB.
headcontent: Deploy a YugabyteDB cluster in a single region or data center
image: /images/section_icons/deploy/manual-deployment.png
menu:
preview:
@@ -15,7 +15,7 @@ type: indexpage

This section covers the generic manual deployment of a YugabyteDB cluster in a single region or data center with a multi-zone/multi-rack configuration. Note that a single-zone configuration is a special case of multi-zone where all placement-related flags are set to the same value across every node.

For AWS deployments specifically, a <a href="../public-clouds/aws/manual-deployment/">step-by-step guide</a> to deploying a YugabyteDB cluster is also available. These steps can be easily adopted for on-premises deployments or deployments in other clouds.
For AWS deployments specifically, a <a href="../public-clouds/aws/manual-deployment/">step-by-step guide</a> to deploying a YugabyteDB cluster is also available. These steps can be adopted for on-premises deployments or deployments in other clouds.

<div class="row">
<div class="col-12 col-md-6 col-lg-12 col-xl-6">
@@ -25,7 +25,7 @@ For AWS deployments specifically, a <a href="../public-clouds/aws/manual-deploym
<div class="title">1. System configuration</div>
</div>
<div class="body">
Configure various system parameters such as ulimits correctly in order to run YugabyteDB.
Configure various system parameters such as ulimits correctly to run YugabyteDB.
</div>
</a>
</div>
@@ -25,7 +25,7 @@ $ tar xvfz yugabyte-<version>-<os>.tar.gz && cd yugabyte-<version>/

## Configure

- Run the **post_install.sh** script to make some final updates to the installed software.
Run the **post_install.sh** script to make some final updates to the installed software.

```sh
$ ./bin/post_install.sh
@@ -91,4 +91,6 @@ Remember to add the command with which you launched `yb-master` to a cron to res

{{< /tip >}}

Now you are ready to start the YB-TServers.
## Next step

Now you are ready to [start the YB-TServers](../start-tservers/).
24 changes: 14 additions & 10 deletions docs/content/preview/deploy/manual-deployment/start-tservers.md
@@ -24,13 +24,13 @@ This section covers deployment for a single region or data center in a multi-zon

- Create a 6-node cluster with replication factor of 3.
- The YB-TServer server should run on all six nodes, and the YB-Master server should run on only three of these nodes.
- Assume the three YB-Master private IP addresses are `172.151.17.130`, `172.151.17.220` and `172.151.17.140`.
- Cloud will be `aws`, region will be `us-west`, and the 3 AZs will be `us-west-2a`, `us-west-2b`, and `us-west-2c`. Two nodes will be placed in each AZ in such a way that 1 replica for each tablet (aka shard) gets placed in any 1 node for each AZ.
- Multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`
- Assume the three YB-Master private IP addresses are `172.151.17.130`, `172.151.17.220`, and `172.151.17.140`.
- The cloud is AWS, the region is us-west, and the three availability zones are us-west-2a, us-west-2b, and us-west-2c. Two nodes are placed in each AZ so that one replica of each tablet (also known as a shard) lands on a node in each AZ.
- Multiple data drives mounted on `/home/centos/disk1`, `/home/centos/disk2`.
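As a sanity check on the layout described in the preceding list, the node-to-zone assignment can be sketched as follows (a hypothetical helper for illustration only, not part of YugabyteDB):

```python
# Hypothetical sketch of the 6-node, 3-AZ layout described above.
nodes = [
    "172.151.17.130", "172.151.17.220", "172.151.17.140",
    "172.151.17.150", "172.151.17.160", "172.151.17.170",
]
zones = ["us-west-2a", "us-west-2b", "us-west-2c"]

# Round-robin assignment places two nodes in each AZ.
placement = {zone: [] for zone in zones}
for i, node in enumerate(nodes):
    placement[zones[i % len(zones)]].append(node)

for zone, members in placement.items():
    print(zone, members)
```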

## Run YB-TServer with command line flags

Run the `yb-tserver` server on each of the six nodes as shown below. Note that all of the master addresses have to be provided using the `--tserver_master_addrs` flag. Replace the [`--rpc_bind_addresses`](../../../reference/configuration/yb-tserver/#rpc-bind-addresses) value with the private IP address of the host as well as the set the `placement_cloud`,`placement_region` and `placement_zone` values appropriately. For single zone deployment, use the same value for the `--placement_zone` flag.
Run the `yb-tserver` server on each of the six nodes as follows. Note that all of the master addresses have to be provided using the `--tserver_master_addrs` flag. Replace the [`--rpc_bind_addresses`](../../../reference/configuration/yb-tserver/#rpc-bind-addresses) value with the private IP address of the host, and set the `placement_cloud`, `placement_region`, and `placement_zone` values appropriately. For single zone deployment, use the same value for the `--placement_zone` flag.

```sh
$ ./bin/yb-tserver \
@@ -56,7 +56,7 @@ The number of comma-separated values in the [`--tserver_master_addrs`](../../../

## Run YB-TServer with configuration file

Alternatively, you can also create a `tserver.conf` file with the following flags and then run the `yb-tserver` with the [`--flagfile`](../../../reference/configuration/yb-tserver/#flagfile)) flag as shown here. For each YB-TServer server, replace the RPC bind address flags with the private IP address of the host running the YB-TServer server.
Alternatively, you can also create a `tserver.conf` file with the following flags and then run the `yb-tserver` with the [`--flagfile`](../../../reference/configuration/yb-tserver/#flagfile) flag. For each YB-TServer server, replace the RPC bind address flags with the private IP address of the host running the YB-TServer server.

```sh
--tserver_master_addrs=172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100
@@ -99,7 +99,7 @@ Verify by running the following.
$ curl -s http://<any-master-ip>:7000/cluster-config
```

And confirm that the output looks similar to what is shown below with `min_num_replicas` set to `1` for each AZ.
Confirm that the output looks similar to the following, with `min_num_replicas` set to `1` for each AZ.

```output.json
replication_info {
@@ -135,15 +135,15 @@ replication_info {

## Verify health

Make sure all YB-TServer servers are now working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the [`--fs_data_dirs`](../../../reference/configuration/yb-tserver/#fs-data-dirs) flag.
Make sure all YB-TServer servers are working as expected by inspecting the INFO log. The default logs directory is always inside the first directory specified in the [`--fs_data_dirs`](../../../reference/configuration/yb-tserver/#fs-data-dirs) flag.

You can do this as shown below.
You can do this as follows:

```sh
$ cat /home/centos/disk1/yb-data/tserver/logs/yb-tserver.INFO
```

In each of the four YB-TServer logs, you should see log messages similar to the following.
In each of the four YB-TServer logs, you should see log messages similar to the following:

```output
I0912 16:27:18.296516 8168 heartbeater.cc:305] Connected to a leader master server at 172.151.17.140:7100
@@ -158,7 +158,7 @@ I0912 16:27:18.311748 8142 rpc_server.cc:158] RPC server started. Bound to: 0.0
I0912 16:27:18.311828 8142 tablet_server_main.cc:128] CQL server successfully started
```

In the current YB-Master leader log, you should see log messages similar to the following.
In the current YB-Master leader log, you should see log messages similar to the following:

```output
I0912 22:26:32.832296 3162 ts_manager.cc:97] Registered new tablet server { permanent_uuid: "766ec935738f4ae89e5ff3ae26c66651" instance_seqno: 1505255192814357 } with Master
@@ -171,3 +171,7 @@ I0912 22:26:41.055996 3162 ts_manager.cc:97] Registered new tablet server { per
Remember to add the command you used to start the YB-TServer to a `cron` job to restart it if it goes down.

{{< /tip >}}

## Grow the cluster

To grow the cluster, add additional YB-TServer nodes just as you do when creating the cluster.
50 changes: 26 additions & 24 deletions docs/content/preview/deploy/manual-deployment/verify-deployment.md
@@ -11,69 +11,71 @@ menu:
type: docs
---

We now have a cluster/universe on six nodes with a replication factor of `3`. Assume their IP addresses are `172.151.17.130`, `172.151.17.220`, `172.151.17.140`, `172.151.17.150`, `172.151.17.160` and `172.151.17.170`. YB-Master servers are running on only the first three of these nodes.
You now have a cluster/universe on six nodes with a replication factor of `3`. Assume their IP addresses are `172.151.17.130`, `172.151.17.220`, `172.151.17.140`, `172.151.17.150`, `172.151.17.160`, and `172.151.17.170`. YB-Master servers are running on only the first three of these nodes.

## [Optional] Setup YEDIS API
## [Optional] Set up YEDIS API

While the YCQL and YSQL APIs are turned on by default after all of the YB-TServers start, the Redis-compatible YEDIS API is off by default. If you want this cluster to be able to support Redis clients, run the following command from any of the instances. The command adds the special Redis table into the DB and starts the YEDIS server on port 6379 on all instances.

```sh
$ ./bin/yb-admin --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 setup_redis_table
```

{{< note title="Note" >}}

If you want this cluster to be able to support Redis clients, you **must** perform this step.

{{< /note >}}

While the YCQL and YSQL APIs are turned on by default after all of the YB-TServers start, the Redis-compatible YEDIS API is off by default. If you want this cluster to be able to support Redis clients, run the following command from any of the 4 instances. The command below will add the special Redis table into the DB and also start the YEDIS server on port 6379 on all instances.

```sh
$ ./bin/yb-admin --master_addresses 172.151.17.130:7100,172.151.17.220:7100,172.151.17.140:7100 setup_redis_table
```

## View the master UI dashboard

You should now be able to view the master dashboard on the ip address of any master. In our example, this is one of the following URLs:
You should now be able to view the master dashboard on the IP address of any master. In this example, this is one of the following URLs:

- `http://172.151.17.130:7000`
- `http://172.151.17.220:7000`
- `http://172.151.17.140:7000`

{{< tip title="Tip" >}}

If this is a public cloud deployment, remember to use the public ip for the nodes, or a http proxy to view these pages.
If this is a public cloud deployment, remember to use the public IP for the nodes, or an HTTP proxy to view these pages.

{{< /tip >}}

## Connect clients

- Clients can connect to YSQL API at
Clients can connect to YSQL API at the following addresses:

```sh
172.151.17.130:5433,172.151.17.220:5433,172.151.17.140:5433,172.151.17.150:5433,172.151.17.160:5433,172.151.17.170:5433
```

- Clients can connect to YCQL API at
Clients can connect to YCQL API at the following addresses:

```sh
172.151.17.130:9042,172.151.17.220:9042,172.151.17.140:9042,172.151.17.150:9042,172.151.17.160:9042,172.151.17.170:9042
```

- Clients can connect to YEDIS API at
Clients can connect to YEDIS API at the following addresses:

```sh
172.151.17.130:6379,172.151.17.220:6379,172.151.17.140:6379,172.151.17.150:6379,172.151.17.160:6379,172.151.17.170:6379
```
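When wiring these endpoints into scripts, the comma-separated lists can be split into host/port pairs as follows (an illustrative helper only; the hosts are this example cluster's private IPs):

```python
# Illustrative parsing of the comma-separated YSQL endpoint list shown above.
ysql_endpoints = (
    "172.151.17.130:5433,172.151.17.220:5433,172.151.17.140:5433,"
    "172.151.17.150:5433,172.151.17.160:5433,172.151.17.170:5433"
)

def split_endpoints(endpoints: str) -> list:
    """Split 'host:port,host:port,...' into (host, port) pairs."""
    pairs = []
    for entry in endpoints.split(","):
        host, port = entry.rsplit(":", 1)
        pairs.append((host, int(port)))
    return pairs

print(split_endpoints(ysql_endpoints))
```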

## Default ports reference

The above deployment uses the various default ports listed below.
The preceding deployment uses the following default ports:

Service | Type | Port
--------|------| -------
`yb-master` | rpc | 7100
`yb-master` | admin web server | 7000
`yb-tserver` | rpc | 9100
`yb-tserver` | admin web server | 9000
`ycql` | rpc | 9042
`ycql` | admin web server | 12000
`yedis` | rpc | 6379
`yedis` | admin web server | 11000
`ysql` | rpc | 5433
`ysql` | admin web server | 13000
`yb-master` | RPC | 7100
`yb-master` | Admin web server | 7000
`yb-tserver` | RPC | 9100
`yb-tserver` | Admin web server | 9000
`ycql` | RPC | 9042
`ycql` | Admin web server | 12000
`yedis` | RPC | 6379
`yedis` | Admin web server | 11000
`ysql` | RPC | 5433
`ysql` | Admin web server | 13000

For more information on ports used by YugabyteDB, refer to [Default ports](../../../reference/configuration/default-ports/).
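For scripting against these defaults, the table can be captured as a small lookup (illustrative only; values are copied from the table above):

```python
# Default ports from the table above, keyed by (service, interface).
DEFAULT_PORTS = {
    ("yb-master", "rpc"): 7100,
    ("yb-master", "admin"): 7000,
    ("yb-tserver", "rpc"): 9100,
    ("yb-tserver", "admin"): 9000,
    ("ycql", "rpc"): 9042,
    ("ycql", "admin"): 12000,
    ("yedis", "rpc"): 6379,
    ("yedis", "admin"): 11000,
    ("ysql", "rpc"): 5433,
    ("ysql", "admin"): 13000,
}

print(DEFAULT_PORTS[("ysql", "rpc")])  # 5433
```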
2 changes: 1 addition & 1 deletion docs/content/preview/deploy/multi-dc/_index.md
@@ -12,7 +12,7 @@ menu:
weight: 631
type: indexpage
---
YugabyteDB is a geo-distributed SQL database that can be easily deployed across multiple data centers (DCs) or cloud regions. There are two primary configurations for such multi-DC deployments.
YugabyteDB is a geo-distributed SQL database that can be deployed across multiple data centers (DCs) or cloud regions. There are two primary configurations for such multi-DC deployments.

The first configuration uses a single cluster stretched across 3 or more data centers with data getting automatically sharded across all data centers. This configuration is default for [Spanner-inspired databases](../../architecture/docdb/) like YugabyteDB. Data replication across data centers is synchronous and is based on the Raft consensus protocol. This means writes are globally consistent and reads are either globally consistent or timeline consistent (when application clients use follower reads). Additionally, resilience against data center failures is fully automatic. This configuration has the potential to incur Wide Area Network (WAN) latency in the write path if the data centers are geographically located far apart from each other and are connected through the shared/unreliable Internet.

2 changes: 1 addition & 1 deletion docs/content/preview/deploy/multi-dc/async-replication.md
@@ -135,7 +135,7 @@ Replication lag is computed at the tablet level as follows:

*hybrid_clock_time* is the hybrid clock timestamp on the source's tablet-server, and *last_read_hybrid_time* is the hybrid clock timestamp of the latest record pulled from the source.
To obtain information about the overall maximum lag, you should check `/metrics` or `/prometheus-metrics` for `async_replication_sent_lag_micros` or `async_replication_committed_lag_micros` and take the maximum of these values across each source's T-Server. For information on how to set up the node exporter and Prometheus manually, see [Prometheus integration](../../../explore/observability/prometheus-integration/linux/).
To obtain information about the overall maximum lag, you should check `/metrics` or `/prometheus-metrics` for `async_replication_sent_lag_micros` or `async_replication_committed_lag_micros` and take the maximum of these values across each source's T-Server. For information on how to set up the node exporter and Prometheus manually, see [Prometheus integration](../../../explore/observability/prometheus-integration/macos/).
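Numerically, this computation can be sketched as follows, assuming the standard definition lag = *hybrid_clock_time* − *last_read_hybrid_time* implied by the terms defined above (the timestamp values here are made up; the metric names quoted above are the real ones):

```python
# Sketch of the tablet-level lag formula:
#   replication_lag = hybrid_clock_time - last_read_hybrid_time
# Hypothetical hybrid clock timestamps, in microseconds.
def replication_lag_micros(hybrid_clock_time: int, last_read_hybrid_time: int) -> int:
    return hybrid_clock_time - last_read_hybrid_time

# The overall lag is the maximum across each source's tablet servers.
per_tserver_lags = [
    replication_lag_micros(1_000_500, 1_000_000),  # 500 us behind
    replication_lag_micros(1_000_500, 999_250),    # 1250 us behind
]
print(max(per_tserver_lags))  # 1250
```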

## Set up replication with TLS

2 changes: 1 addition & 1 deletion docs/content/preview/develop/_index.md
@@ -10,7 +10,7 @@ type: indexpage
<div class="row">

<div class="col-12 col-md-6 col-lg-12 col-xl-6">
<a class="section-link icon-offset" href="learn/">
<a class="section-link icon-offset" href="build-apps/">
<div class="head">
<img class="icon" src="/images/section_icons/quick_start/sample_apps.png" aria-hidden="true" />
<div class="title">Build an application</div>
