op-guide: update tikv rolling update policy #592

Merged · 1 commit · Aug 31, 2018
op-guide/ansible-deployment-rolling-update.md (2 changes: 1 addition & 1 deletion)

@@ -65,7 +65,7 @@ wget http://download.pingcap.org/tidb-v2.0.3-linux-amd64-unportable.tar.gz
$ ansible-playbook rolling_update.yml --tags=tikv
```

-When you apply a rolling update to the TiKV instance, Ansible migrates the Region leaders to other nodes. The concrete logic is as follows: call the PD API to add the `evict leader scheduler` -> inspect the `leader_count` of this TiKV instance every 10 seconds -> wait for the `leader_count` to drop below 10, or until it has been inspected more than 12 times -> proceed to stop this TiKV instance once the two-minute timeout is reached -> delete the `evict leader scheduler` after the instance starts successfully. The operations are executed serially.
+When you apply a rolling update to the TiKV instance, Ansible migrates the Region leaders to other nodes. The concrete logic is as follows: call the PD API to add the `evict leader scheduler` -> inspect the `leader_count` of this TiKV instance every 10 seconds -> wait for the `leader_count` to drop below 1, or until it has been inspected more than 18 times -> proceed to stop this TiKV instance once the three-minute timeout is reached -> delete the `evict leader scheduler` after the instance starts successfully. The operations are executed serially.

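For reference, the check that the playbook performs can be reproduced by hand against the PD API (a sketch; `{PD_IP}` and `{STORE_ID}` are placeholders for your deployment):

```
# Sketch: query PD for this store's status; the returned JSON includes the
# "leader_count" field that the rolling update polls every 10 seconds.
# {PD_IP} and {STORE_ID} are placeholders for your deployment.
$ curl -s "http://{PD_IP}:2379/pd/api/v1/store/{STORE_ID}" | grep leader_count
```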
If the rolling update fails in the process, launch `pd-ctl`, execute `scheduler show`, and check whether `evict-leader-scheduler` exists. If it does, delete it manually. Replace `{PD_IP}` and `{STORE_ID}` with your PD IP and the `store_id` of the TiKV instance.

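A manual cleanup session might look like the following (a sketch; the `pd-ctl` path assumes the default TiDB-Ansible layout, and the scheduler name embeds the store ID):

```
$ /home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://{PD_IP}:2379"
» scheduler show
[
  "evict-leader-scheduler-{STORE_ID}"
]
» scheduler remove evict-leader-scheduler-{STORE_ID}
```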
tispark/tispark-quick-start-guide.md (8 changes: 3 additions & 5 deletions)

@@ -6,17 +6,17 @@ category: User Guide

# TiSpark Quick Start Guide

-To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster integrates Spark, the TiSpark jar package, and TiSpark sample data by default, in both the Pre-GA and master versions installed using TiDB-Ansible.
+To make it easy to [try TiSpark](tispark-user-guide.md), the TiDB cluster installed using TiDB-Ansible integrates Spark, the TiSpark jar package, and TiSpark sample data by default.

## Deployment information

- Spark is deployed by default in the `spark` folder in the TiDB instance deployment directory.
- The TiSpark jar package is deployed by default in the `jars` folder in the Spark deployment directory; a quick check is sketched after this list.

```
-spark/jars/tispark-0.1.0-beta-SNAPSHOT-jar-with-dependencies.jar
+spark/jars/tispark-SNAPSHOT-jar-with-dependencies.jar
```

- TiSpark sample data and import scripts are deployed by default in the TiDB-Ansible directory.

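To confirm the jar is where the list above says, you can list the Spark jars directory (a sketch; `{deploy_dir}` is a hypothetical stand-in for the TiDB instance deployment directory):

```
# {deploy_dir} is a hypothetical placeholder for your deployment directory.
$ ls {deploy_dir}/spark/jars/ | grep tispark
tispark-SNAPSHOT-jar-with-dependencies.jar
```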
@@ -108,8 +108,6 @@ MySQL [TPCH_001]> show tables;

## Use example

-Assume that the IP of your PD node is `192.168.0.2`, and the port is `2379`.

First, start the spark-shell in the Spark deployment directory:

```
$ cd spark
$ bin/spark-shell
```
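Once the shell is up, a first query against the sample data might look like this (a sketch following the `TiContext` API described in the [TiSpark user guide](tispark-user-guide.md); the `TPCH_001` database and `lineitem` table come from the sample data imported earlier):

```
scala> import org.apache.spark.sql.TiContext
scala> val ti = new TiContext(spark)
scala> ti.tidbMapDatabase("TPCH_001")
scala> spark.sql("select count(*) from lineitem").show
```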