English | 中文
This document briefly describes how to integrate TiDB with ProxySQL, using CentOS 7 as an example. If you need to integrate on other systems, check out the Try Out section, which introduces how to deploy a test integration environment using Docker and Docker Compose. You can also refer to the official TiDB and ProxySQL documentation for more information.
- Download the TiDB code, enter the `tidb-server` folder, and run `go build`.

  ```sh
  git clone git@github.com:pingcap/tidb.git
  cd tidb/tidb-server
  go build
  ```

- Use the configuration file `tidb-config.toml` to start TiDB. Note that:

  - `unistore` is the test storage engine for TiDB, so please use it for testing only.
  - `TIDB_SERVER_PATH`: The location of the binary compiled with `go build` in the previous step. For example, if you did the previous step under `/usr/local`, then `TIDB_SERVER_PATH` should be `/usr/local/tidb/tidb-server/tidb-server`.
  - `LOCAL_TIDB_LOG`: The location to export TiDB logs.

  ```sh
  ${TIDB_SERVER_PATH} -config ./tidb-config.toml -store unistore -path "" -lease 0s > ${LOCAL_TIDB_LOG} 2>&1 &
  ```
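Once TiDB is up, you can verify that it accepts connections. A minimal check, assuming the default listening port `4000` and the default `root` user with an empty password:

```sh
mysql -u root -h 127.0.0.1 -P 4000 -e "SELECT VERSION()"
```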
TiUP, as the package manager of the TiDB ecosystem, makes it far easier to manage the different cluster components.

- Install TiUP:

  ```sh
  curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  ```

- Start TiDB in a test environment:

  ```sh
  tiup playground
  ```
If you do not have a TiDB Cloud account, click TiDB Cloud to sign up for an account.
-
Sign in with your TiDB Cloud account.
-
To create a Developer Tier cluster for one year free, you can either select the Developer Tier plan on the plan page or click Create a Cluster (Dev Tier).
-
On the Create a Cluster (Dev Tier) page, set up your cluster name, password, cloud provider (for now, only AWS is available for Developer Tier), and region (a nearby region is recommended). Then click Create to create your cluster.
-
Your TiDB Cloud cluster will be created in approximately 5 to 15 minutes. You can check the creation progress at Active Clusters.
-
After creating a cluster, on the Active Clusters page, click the name of your newly created cluster to navigate to the cluster control panel.
-
Click Connect to create a traffic filter (a list of client IPs allowed for TiDB connection).
-
In the popup window, click Add Your Current IP Address to fill in your current IP address, and then click Create Filter to create a traffic filter.
-
Copy the string to connect with a SQL client for later use.
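The copied string can be used with any MySQL-compatible client. It looks roughly like the following sketch; the user and host here are placeholders for whatever your cluster's Connect dialog shows, and additional TLS options may be included:

```sh
mysql -u '<user>' -h '<host>' -P 4000 -p
```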
We recommend using TiDB Cloud when you need a hosted TiDB service (for example, when you cannot operate the cluster yourself, or you need a cloud-native environment). You can check out TiDB Cloud - Create a TiDB Cluster to get a TiDB cluster in TiDB Cloud for a production environment.
A production environment is much more complex than a test environment, so we recommend reading Deploy a TiDB Cluster Using TiUP and then deploying according to your hardware conditions.
- Add the repository:

  ```sh
  cat > /etc/yum.repos.d/proxysql.repo << EOF
  [proxysql]
  name=ProxySQL YUM repository
  baseurl=https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/centos/\$releasever
  gpgcheck=1
  gpgkey=https://repo.proxysql.com/ProxySQL/proxysql-2.4.x/repo_pub_key
  EOF
  ```

- Install:

  ```sh
  yum install proxysql
  ```

- Start up:

  ```sh
  systemctl start proxysql
  ```
For installation on other platforms, please read the ProxySQL GitHub page or the official documentation.
To use ProxySQL as a proxy for TiDB, we need to write the TiDB host into the ProxySQL configuration. The required configuration items are listed below; the rest of the configuration items can be found in the ProxySQL official documentation.
ProxySQL uses one port for configuration management and another for proxying. We call the entry point for configuration management the ProxySQL Admin interface, and the entry point for proxying the ProxySQL MySQL interface.
- ProxySQL Admin interface: You can connect to the admin interface either with a user that has `admin` privileges, which can read and write configuration, or with a user that has `stats` privileges, which can only read certain statistics (no reading or writing configuration). The default credentials are `admin:admin` and `stats:stats`, but for security reasons they can only be used for local connections. To connect remotely, you need to configure a new user, which is often named `radmin`.
- ProxySQL MySQL interface: Used as a proxy to forward SQL to the configured backend services.
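For example, on the machine running ProxySQL you can open the Admin interface with the default credentials. This sketch assumes ProxySQL's default admin port `6032` has not been changed:

```sh
mysql -u admin -padmin -h 127.0.0.1 -P 6032 --prompt 'ProxySQL Admin> '
```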
ProxySQL has three layers of configuration: `runtime`, `memory`, and `disk`. You can change the configuration of the `memory` layer only. After changing the configuration, you can run `LOAD xxx TO RUNTIME` to make it effective, and/or run `SAVE xxx TO DISK` to save it to disk and prevent configuration loss.
Add the TiDB backend in ProxySQL; you can add multiple TiDB backends if you have more than one. Run this on the ProxySQL Admin interface:
```sql
insert into mysql_servers(hostgroup_id,hostname,port) values(0,'127.0.0.1',4000);
load mysql servers to runtime;
save mysql servers to disk;
```
Field explanation:

- `hostgroup_id`: ProxySQL manages backend services by hostgroup. You can configure several services that need load balancing as the same hostgroup, so that ProxySQL distributes SQL to them evenly; when you need to distinguish different backend services (such as in a read/write splitting scenario), you can configure them as different hostgroups.
- `hostname`: The IP or domain of the backend service.
- `port`: The port of the backend service.
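To check what was configured, you can read it back on the Admin interface. A quick check; the columns selected here are a subset of the `mysql_servers` table, and `runtime_mysql_servers` shows what is currently in effect:

```sql
SELECT hostgroup_id, hostname, port, status FROM mysql_servers;
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers;
```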
Add a TiDB backend login user to ProxySQL. ProxySQL will allow this account to log in to the ProxySQL MySQL interface, and will use it to create connections to TiDB, so make sure this account has the appropriate privileges in TiDB. Run this on the ProxySQL Admin interface:
```sql
insert into mysql_users(username,password,active,default_hostgroup,transaction_persistent) values('root','',1,0,1);
load mysql users to runtime;
save mysql users to disk;
```
Field explanation:

- `username`: The username.
- `password`: The password.
- `active`: `1` is active, `0` is inactive; only a user with `active = 1` can log in.
- `default_hostgroup`: The user's default hostgroup, where its traffic is sent unless query rules route the traffic to a different hostgroup.
- `transaction_persistent`: A value of `1` indicates transaction persistence: when a connection opens a transaction using this user, all statements are routed to the same hostgroup until the transaction is committed or rolled back.
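As with the servers, the user configuration can be read back on the Admin interface (`runtime_mysql_users` reflects what is currently in effect):

```sql
SELECT username, active, default_hostgroup, transaction_persistent FROM mysql_users;
SELECT username, active, default_hostgroup, transaction_persistent FROM runtime_mysql_users;
```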
In addition to configuring ProxySQL through the Admin interface, you can also use a configuration file. According to the official explanation, the configuration file should only be considered a secondary way of initialization, not the primary way of configuration: it is only read when the SQLite database has not been created, and it is not read again afterwards. Therefore, when using the config file, you should delete the SQLite database. This will LOSE the changes you made to the configuration through the ProxySQL Admin interface:

```sh
rm /var/lib/proxysql/proxysql.db
```
Alternatively, you can run `LOAD xxx FROM CONFIG` to override the current in-memory configuration with the configuration from the config file.
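For example, on the Admin interface, the following loads the servers and users from the config file into memory and then applies them. A sketch; corresponding commands exist for the other configuration tables as well:

```sql
LOAD MYSQL SERVERS FROM CONFIG;  -- config file -> memory
LOAD MYSQL USERS FROM CONFIG;
LOAD MYSQL SERVERS TO RUNTIME;   -- memory -> runtime
LOAD MYSQL USERS TO RUNTIME;
```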
The config file is located at `/etc/proxysql.cnf`. Below, we translate the required configuration above into the config file format, changing only the `mysql_servers` and `mysql_users` nodes; for the rest of the configuration items, check `/etc/proxysql.cnf`:
```
mysql_servers =
(
    {
        address="127.0.0.1"
        port=4000
        hostgroup=0
        max_connections=2000
    }
)

mysql_users:
(
    {
        username = "root"
        password = ""
        default_hostgroup = 0
        max_connections = 1000
        default_schema = "test"
        active = 1
        transaction_persistent = 1
    }
)
```
Then restart the service with `systemctl restart proxysql` and the configuration will take effect. The SQLite database is created automatically once the config file takes effect, and the config file will not be read again.
The above config items are required. You can find all the config items' names and their roles in the Global Variables article in the ProxySQL documentation.
You can use Docker and Docker Compose for a quick start. Make sure ports `4000` and `6033` are free.
```sh
docker-compose up -d
```
This starts an integrated TiDB and ProxySQL environment consisting of two containers. DO NOT use it in a production environment. You can connect to port `6033` (ProxySQL) using the username `root` and an empty password. The container-specific configuration can be found in docker-compose.yaml, and the ProxySQL-specific configuration in proxysql-docker.cnf.
Run:
```sh
mysql -u root -h 127.0.0.1 -P 6033 -e "SELECT VERSION()"
```
Result:
```
+--------------------+
| VERSION()          |
+--------------------+
| 5.7.25-TiDB-v6.1.0 |
+--------------------+
```
If you satisfy the following dependencies:

- Permissions for the tidb-test code repository
- A machine connected to the network
- Golang SDK
- Git
- One of the following:
  - Local startup:
    - A CentOS 7 machine (physical or virtual)
    - Yum
  - Docker startup:
    - Docker
    - Docker Compose

then you can run the integration tests locally:

```sh
./proxysql-initial.sh && ./test-local.sh
```

or with Docker:

```sh
./test-docker.sh
```
There is more information available in the integration test documentation.
Use the ProxySQL Admin Interface to configure load balancing as an example. The example will:

- Start 3 TiDB containers through Docker Compose. Each container listens on port `4000` internally, mapped to host ports `4001`, `4002`, and `4003`.
- Start one ProxySQL container through Docker Compose. Port `6033` in the container is the ProxySQL MySQL Interface, mapped to host port `6034`. The ProxySQL Admin Interface port is not exposed because it only allows local login (i.e., inside the container).
- In the 3 TiDB instances, create the same table structure but write different data (`'tidb-0'`, `'tidb-1'`, `'tidb-2'`) to distinguish between the database instances.
- Use the `docker-compose exec` command to run a prepared SQL file that configures ProxySQL through the ProxySQL Admin Interface (a sketch of such a file appears after this list). This SQL file will:
  - Add 3 TiDB backend hosts with `hostgroup_id` of `0`.
  - Load the TiDB backend configuration to runtime and save it to disk.
  - Add the user `root` with an empty password and `default_hostgroup` of `0`, corresponding to the TiDB backend `hostgroup_id` above.
  - Load the user configuration to runtime and save it to disk.
- Log in to the ProxySQL MySQL Interface with the `root` user and query 5 times, expecting three different returns: `'tidb-0'`, `'tidb-1'`, and `'tidb-2'`.
- Stop and clear the resources started by Docker Compose, such as containers and network topologies.
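A minimal sketch of what that configuration SQL might look like, based on the steps above (the hostnames `tidb-0`, `tidb-1`, `tidb-2` are the illustrative container names; the actual file ships with the example):

```sql
-- add the three TiDB backends to hostgroup 0
INSERT INTO mysql_servers(hostgroup_id, hostname, port) VALUES
  (0, 'tidb-0', 4000),
  (0, 'tidb-1', 4000),
  (0, 'tidb-2', 4000);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

-- add the login user, defaulting to hostgroup 0
INSERT INTO mysql_users(username, password, active, default_hostgroup, transaction_persistent)
VALUES ('root', '', 1, 0, 1);
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```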
Dependencies:
- Docker
- Docker Compose
- MySQL Client
```sh
cd example/load-balance-admin-interface/
./test-load-balance.sh
```
Because of load balancing, the output is expected to contain three different results, `'tidb-0'`, `'tidb-1'`, and `'tidb-2'`, but their exact order cannot be predicted. One possible output is:
```
# ./test-load-balance.sh
Creating network "load-balance-admin-interface_default" with the default driver
Creating load-balance-admin-interface_tidb-1_1 ... done
Creating load-balance-admin-interface_tidb-2_1 ... done
Creating load-balance-admin-interface_tidb-0_1 ... done
Creating load-balance-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-2 |
+--------+
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
Stopping load-balance-admin-interface_proxysql_1 ... done
Stopping load-balance-admin-interface_tidb-0_1 ... done
Stopping load-balance-admin-interface_tidb-2_1 ... done
Stopping load-balance-admin-interface_tidb-1_1 ... done
Removing load-balance-admin-interface_proxysql_1 ... done
Removing load-balance-admin-interface_tidb-0_1 ... done
Removing load-balance-admin-interface_tidb-2_1 ... done
Removing load-balance-admin-interface_tidb-1_1 ... done
Removing network load-balance-admin-interface_default
```
Use the ProxySQL Admin Interface to configure user-split traffic as an example: different users use their own TiDB backend. The example will:

- Start 2 TiDB containers through Docker Compose. Each container listens on port `4000` internally, mapped to host ports `4001` and `4002`.
- Start one ProxySQL container through Docker Compose. Port `6033` in the container is the ProxySQL MySQL Interface, mapped to host port `6034`. The ProxySQL Admin Interface port is not exposed because it only allows local login (i.e., inside the container).
- In the 2 TiDB instances, create the same table structure but write different data (`'tidb-0'`, `'tidb-1'`) to distinguish between the database instances.
- Use the `docker-compose exec` command to run a prepared SQL file that configures ProxySQL through the ProxySQL Admin Interface (see the sketch after this list). This SQL file will:
  - Add 2 TiDB backend hosts: the `hostgroup_id` of `tidb-0` is `0`, and the `hostgroup_id` of `tidb-1` is `1`.
  - Load the TiDB backend configuration to runtime and save it to disk.
  - Add the user `root` with an empty password and `default_hostgroup` of `0`, which means this user's SQL is routed to `tidb-0` by default.
  - Add the user `root1` with an empty password and `default_hostgroup` of `1`, which means this user's SQL is routed to `tidb-1` by default.
  - Load the user configuration to runtime and save it to disk.
- Log in to the ProxySQL MySQL Interface with the `root` user and the `root1` user. The expected returns are `'tidb-0'` and `'tidb-1'` respectively.
- Stop and clear the resources started by Docker Compose, such as containers and network topologies.
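The user-related part of that SQL might look like this sketch (hostgroups as described above; the actual file ships with the example):

```sql
-- two users, each defaulting to a different hostgroup
INSERT INTO mysql_users(username, password, active, default_hostgroup, transaction_persistent) VALUES
  ('root',  '', 1, 0, 1),  -- routed to tidb-0 (hostgroup 0) by default
  ('root1', '', 1, 1, 1);  -- routed to tidb-1 (hostgroup 1) by default
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```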
Dependencies:
- Docker
- Docker Compose
- MySQL Client
```sh
cd example/user-split-admin-interface/
./test-user-split.sh
```
```
# ./test-user-split.sh
Creating network "user-split-admin-interface_default" with the default driver
Creating user-split-admin-interface_tidb-1_1 ... done
Creating user-split-admin-interface_tidb-0_1 ... done
Creating user-split-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
Stopping user-split-admin-interface_proxysql_1 ... done
Stopping user-split-admin-interface_tidb-0_1 ... done
Stopping user-split-admin-interface_tidb-1_1 ... done
Removing user-split-admin-interface_proxysql_1 ... done
Removing user-split-admin-interface_tidb-0_1 ... done
Removing user-split-admin-interface_tidb-1_1 ... done
Removing network user-split-admin-interface_default
```
Use the ProxySQL Admin Interface to configure common read/write separation as an example. Rules are used to match the SQL statements being run, forwarding read and write SQL to different TiDB backends (if no rule matches, the user's `default_hostgroup` is used). The example will:
- Start 2 TiDB containers through Docker Compose. Each container listens on port `4000` internally, mapped to host ports `4001` and `4002`.
- Start one ProxySQL container through Docker Compose. Port `6033` in the container is the ProxySQL MySQL Interface, mapped to host port `6034`. The ProxySQL Admin Interface port is not exposed because it only allows local login (i.e., inside the container).
- In the 2 TiDB instances, create the same table structure but write different data (`'tidb-0'`, `'tidb-1'`) to distinguish between the database instances.
- Use the `docker-compose exec` command to run a prepared SQL file that configures ProxySQL through the ProxySQL Admin Interface (see the sketch after this list). This SQL file will:
  - Add 2 TiDB backend hosts: the `hostgroup_id` of `tidb-0` is `0`, and the `hostgroup_id` of `tidb-1` is `1`.
  - Load the TiDB backend configuration to runtime and save it to disk.
  - Add the user `root` with an empty password and `default_hostgroup` of `0`, which means this user's SQL is routed to `tidb-0` by default.
  - Load the user configuration to runtime and save it to disk.
  - Add the rule `^SELECT.*FOR UPDATE$` with `rule_id` of `1` and `destination_hostgroup` of `0`. SQL statements matching this rule use the TiDB backend with `hostgroup` `0` (this rule forwards `SELECT ... FOR UPDATE` statements to the database that handles writes).
  - Add the rule `^SELECT` with `rule_id` of `2` and `destination_hostgroup` of `1`. SQL statements matching this rule use the TiDB backend with `hostgroup` `1`.
  - Load the rule configuration to runtime and save it to disk.
Note:

About the matching rules:

- ProxySQL tries to match the rules one by one in order of `rule_id`, from smallest to largest.
- `^` matches the beginning of the SQL statement and `$` matches the end.
- `match_digest` matches the parameterized SQL statement; see query_processor_regex.

Important parameters:

- `digest`: Match the parameterized hash value.
- `match_pattern`: Match the raw SQL statements.
- `negate_match_pattern`: When the value is `1`, invert the match for `match_digest` or `match_pattern`.
- `log`: Whether to log the query.
- `replace_pattern`: If not empty, the matched part of the SQL is replaced with the value of this field.

See mysql_query_rules for the full list of parameters.
- Log in to the ProxySQL MySQL Interface with the `root` user, and run:
  - `select * from test.test;`: Expected to match the rule with `rule_id` of `2` and be forwarded to the TiDB backend `tidb-1` (`hostgroup` `1`).
  - `select * from test.test for update;`: Expected to match the rule with `rule_id` of `1` and be forwarded to the TiDB backend `tidb-0` (`hostgroup` `0`).
  - `begin;insert into test.test (db) values ('insert this and rollback later'); select * from test.test; rollback;`: The `insert` statement is expected to match no rule, so it uses the user's `default_hostgroup` (`0`) and is forwarded to the TiDB backend `tidb-0` (`hostgroup` `0`). ProxySQL enables `transaction_persistent` for users by default, which makes all statements within the same transaction run in the same `hostgroup`, so `select * from test.test;` is also forwarded to the TiDB backend `tidb-0` (`hostgroup` `0`).
- Stop and clear the resources started by Docker Compose, such as containers and network topologies.
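The rule-related part of that SQL might look like this sketch (illustrative; the actual file ships with the example):

```sql
-- rule 1: SELECT ... FOR UPDATE goes to the write backend (hostgroup 0)
-- rule 2: all other SELECTs go to the read backend (hostgroup 1)
INSERT INTO mysql_query_rules(rule_id, active, match_digest, destination_hostgroup, apply) VALUES
  (1, 1, '^SELECT.*FOR UPDATE$', 0, 1),
  (2, 1, '^SELECT', 1, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```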
Dependencies:
- Docker
- Docker Compose
- MySQL Client
```sh
cd example/proxy-rule-admin-interface/
./proxy-rule-split.sh
```
```
# ./proxy-rule-split.sh
Creating network "proxy-rule-admin-interface_default" with the default driver
Creating proxy-rule-admin-interface_tidb-1_1 ... done
Creating proxy-rule-admin-interface_tidb-0_1 ... done
Creating proxy-rule-admin-interface_proxysql_1 ... done
+--------+
| db     |
+--------+
| tidb-1 |
+--------+
+--------+
| db     |
+--------+
| tidb-0 |
+--------+
+--------------------------------+
| db                             |
+--------------------------------+
| tidb-0                         |
| insert this and rollback later |
+--------------------------------+
Stopping proxy-rule-admin-interface_proxysql_1 ... done
Stopping proxy-rule-admin-interface_tidb-0_1 ... done
Stopping proxy-rule-admin-interface_tidb-1_1 ... done
Removing proxy-rule-admin-interface_proxysql_1 ... done
Removing proxy-rule-admin-interface_tidb-0_1 ... done
Removing proxy-rule-admin-interface_tidb-1_1 ... done
Removing network proxy-rule-admin-interface_default
```
Use a config file to configure load balancing as an example. It achieves the same result as 5.3 Example of Load Balancing - Admin Interface, except that it uses a config file to initialize the ProxySQL configuration.
Note:

- The configuration of ProxySQL is stored in SQLite. The config file is only read when the SQLite database does not exist.
- ProxySQL does NOT recommend using the config file for configuration changes. Use it only for initial configuration, and do not rely on it too heavily.
Run:

```sh
cd example/load-balance-config-file/
./test-load-balance.sh
```
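Based on the Admin Interface example above, the `mysql_servers` node of that config file presumably contains three entries along these lines (a sketch; the hostnames are the illustrative container names, and the actual file ships with the example):

```
mysql_servers =
(
    {
        address="tidb-0"
        port=4000
        hostgroup=0
    },
    {
        address="tidb-1"
        port=4000
        hostgroup=0
    },
    {
        address="tidb-2"
        port=4000
        hostgroup=0
    }
)
```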