TenDB Cluster's data distribution, reads, and writes are all based on sharding rules, so maintaining the routing correctly is critical in scenarios such as cluster deployment, data management, and cluster failover. At present, both TSpider and Tdbctl nodes hold a routing table; the routing tables of the TSpider nodes are synchronized by Tdbctl, which ensures global consistency.
At present, routing information is stored in the mysql.servers table. Its fields are described below:

| Field | Description |
| --- | --- |
| Server_name | Shard name; valid values: TDBCTL0~N, SPIDER0~N, SPT0~N |
| Host | Host address used to access the shard |
| Db | Database name on the shard; defaults to NULL |
| Username | Username used to access the shard |
| Password | Password used to access the shard |
| Port | Port used to access the shard |
| Socket | Socket used to access the shard; not used (N/A) |
| Wrapper | Shard type; valid values: TDBCTL, mysql, SPIDER, SPIDER_SLAVE |
The numeric suffix of Server_name starts from 0 and increments.
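To inspect the current routing, the mysql.servers table can be queried directly on a Tdbctl or TSpider node:

```sql
-- View the routing table; these are standard mysql.servers columns.
SELECT Server_name, Host, Port, Wrapper FROM mysql.servers ORDER BY Server_name;
```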
Routing Table Example
Below is a local TenDB Cluster with 4 TenDB nodes, 2 TSpider nodes, and 1 Tdbctl node.
At present, routing is configured with INSERT SQL and maintained on the Tdbctl node, which synchronizes it to the TSpider nodes.
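As a minimal sketch for the topology above, the configuration might look as follows; all hosts, ports, and credentials are placeholders:

```sql
-- Executed on the Tdbctl node. Hosts, ports, and credentials are
-- placeholders for illustration only.
INSERT INTO mysql.servers
  (Server_name, Host, Db, Username, Password, Port, Wrapper)
VALUES
  ('TDBCTL0', '127.0.0.1', '', 'cluster_user', 'cluster_pass', 26000, 'TDBCTL'),
  ('SPIDER0', '127.0.0.1', '', 'cluster_user', 'cluster_pass', 25000, 'SPIDER'),
  ('SPIDER1', '127.0.0.1', '', 'cluster_user', 'cluster_pass', 25001, 'SPIDER'),
  ('SPT0',    '127.0.0.1', '', 'cluster_user', 'cluster_pass', 20000, 'mysql'),
  ('SPT1',    '127.0.0.1', '', 'cluster_user', 'cluster_pass', 20001, 'mysql'),
  ('SPT2',    '127.0.0.1', '', 'cluster_user', 'cluster_pass', 20002, 'mysql'),
  ('SPT3',    '127.0.0.1', '', 'cluster_user', 'cluster_pass', 20003, 'mysql');
```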
After Tdbctl reads the SPIDER entries in mysql.servers, it establishes a connection to each TSpider node and overwrites that node's mysql.servers table.
After the routing configuration is completed, we need to execute a FLUSH operation on the Tdbctl node to trigger synchronization. The syntax is as follows:

```sql
tdbctl flush routing;
```
The cluster communicates internally over the MySQL protocol, so after the routing configuration is completed, GRANT statements must be executed to ensure each node has the required privileges.
Execute the GRANT statements before running `tdbctl flush routing` (an illustrative sketch is shown below):
- SELECT, DELETE, INSERT, DROP privileges
- ALL PRIVILEGES on the TSpider and TenDB nodes
For detailed privilege information, refer to [xxxxxxxxxxxxxTODO].
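Purely as a sketch, the grants could look like this; the account name, host pattern, and password are hypothetical placeholders:

```sql
-- Hypothetical account name, host pattern, and password; adjust to your deployment.
-- Minimal privileges, enough for routing synchronization to rewrite mysql.servers:
GRANT SELECT, DELETE, INSERT, DROP ON *.* TO 'cluster_user'@'%' IDENTIFIED BY 'cluster_pass';
-- Full privileges for the cluster account on the TSpider and TenDB nodes:
GRANT ALL PRIVILEGES ON *.* TO 'cluster_user'@'%' IDENTIFIED BY 'cluster_pass';
```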
The Tdbctl node regularly checks for routing differences between the TSpider nodes and itself, periodically synchronizes its own configuration to the TSpider nodes, and performs a FLUSH operation to make the routing take effect. Two settings control this behavior (see the sketch after this list):
- The interval of the periodic routing-difference check between TSpider and Tdbctl; default 300 (seconds).
- Whether to repair a TSpider node's routing when it differs from the Tdbctl node; default on.
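This section does not name the corresponding system variables, so the following uses hypothetical variable names purely for illustration:

```sql
-- Hypothetical variable names; the real names are not stated in this section.
SET GLOBAL tc_check_repair_routing_interval = 300;  -- check interval, in seconds
SET GLOBAL tc_check_repair_routing = ON;            -- repair divergent TSpider routing
```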
Cluster Slave Routing
TenDB Cluster can be configured as a read-write-splitting cluster solution.
When master-slave replication is configured for the TenDB nodes, a slave cluster can be set up so that read requests only access the TenDB slave nodes, reducing the read pressure on the TenDB master nodes. The slave cluster's routing is also maintained on the Tdbctl node; its wrapper type is SPIDER_SLAVE, and all of its data-layer routing entries point to the TenDB slave nodes.
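As a sketch, a slave-cluster routing entry might look like the following; the server name, host, port, and credentials are hypothetical placeholders:

```sql
-- Hypothetical slave TSpider entry; names and addresses are placeholders.
INSERT INTO mysql.servers
  (Server_name, Host, Db, Username, Password, Port, Wrapper)
VALUES
  ('SPIDER_SLAVE0', '127.0.0.1', '', 'cluster_user', 'cluster_pass', 25100, 'SPIDER_SLAVE');
```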
When Tdbctl synchronizes routing to the TSpider nodes, it skips nodes whose Wrapper is SPIDER_SLAVE. For DDL SQL, however, Tdbctl still synchronizes to SPIDER_SLAVE nodes, so that schema changes stay consistent across the slave cluster.