
MySQL High Availability: Building and Using a PXC Cluster

Date: 2018-09-09 04:48:21


Table of Contents

I. Installing Percona Server
  1. Offline Percona installation
  2. Online Percona installation
  3. Open firewall ports
  4. Edit the MySQL configuration file
  5. Disable MySQL start on boot
  6. Initialize the MySQL database
II. Creating the PXC Cluster
  1. Remove the MariaDB packages
  2. Open firewall ports
  3. Disable SELinux
  4. Offline PXC installation
  5. Create the PXC cluster
  6. Starting and stopping PXC nodes
III. Installing MyCat
  1. Install and configure the JDK
  2. Create the data table
  3. Install and configure MyCat
  4. Configure parent-child tables
  5. Build a hot-standby MyCat cluster
IV. Sysbench Benchmarking
  1. Install Sysbench
  2. Run the tests
V. tpcc-mysql Stress Testing
  1. Preparation
  2. Install tpcc-mysql
VI. Importing Data
  1. Generate 10 million rows
  2. Split the file
  3. Prepare the database
  4. Run the Java program to import data with multiple threads
VII. Archiving Big Data
  1. Install TokuDB
  2. Configure the Replication cluster
  3. Create the archive table
  4. Configure Haproxy + Keepalived hot standby
  5. Prepare the data to archive
  6. Run the archive job

I. Installing Percona Server

1. Offline Percona installation

Enter the directory containing the RPM files and run:

yum localinstall *.rpm

Manage the MySQL service:

systemctl start mysqld
systemctl stop mysqld
systemctl restart mysqld

2. Online Percona installation

Install with yum:

yum install /downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
yum install Percona-Server-server-57

Manage the MySQL service:

service mysql start
service mysql stop
service mysql restart

3. Open firewall ports

firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload

4. Edit the MySQL configuration file

vi /etc/my.cnf

[mysqld]
character_set_server = utf8
bind-address = 0.0.0.0
# Skip DNS resolution
skip-name-resolve

service mysql restart

5. Disable MySQL start on boot

chkconfig mysqld off

6. Initialize the MySQL database

Look up the initial MySQL password:

cat /var/log/mysqld.log | grep "A temporary password"
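If you need that password programmatically, the line printed by the grep can be parsed; a minimal Python sketch (the sample log line below is invented for illustration):

```python
import re

# Hypothetical sample of the line mysqld.log prints for the initial password
line = ("2018-09-09T04:48:21.000000Z 1 [Note] A temporary password is "
        "generated for root@localhost: Xy%3kQ!p2abc")

# The password is the last token after the account name and ": "
match = re.search(r"A temporary password is generated for \S+: (\S+)", line)
password = match.group(1) if match else None
print(password)  # Xy%3kQ!p2abc
```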

Change the MySQL password:

mysql_secure_installation

Create a remote administrator account:

mysql -u root -p

CREATE USER 'admin'@'%' IDENTIFIED BY 'Abc_123456';
GRANT all privileges ON *.* TO 'admin'@'%';
FLUSH PRIVILEGES;

II. Creating the PXC Cluster

1. Remove the MariaDB packages

yum -y remove mari*

2. Open firewall ports

firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent

3. Disable SELinux

vi /etc/selinux/config

Set the SELINUX value to disabled, then reboot:

reboot

4. Offline PXC installation

Enter the directory containing the RPM files and run the install command:

yum localinstall *.rpm

Following Part I, edit the MySQL configuration file, create the accounts, and so on.

5. Create the PXC cluster

Stop the MySQL service.

Edit /etc/my.cnf on every PXC node (adjust the values per node):

server-id=1                        # Unique numeric ID of this MySQL instance within the PXC cluster
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
wsrep_cluster_name=pxc-cluster     # Name of the PXC cluster
wsrep_cluster_address=gcomm://192.168.99.151,192.168.99.159,192.168.99.215
wsrep_node_name=pxc1               # Name of this node
wsrep_node_address=192.168.99.151  # IP of this node
wsrep_sst_method=xtrabackup-v2     # State transfer method (mysqldump, rsync, or xtrabackup)
wsrep_sst_auth=admin:Abc_123456    # Account used for state transfer
pxc_strict_mode=ENFORCING          # Strict replication mode
binlog_format=ROW                  # Row-based replication (safe and reliable)
default_storage_engine=InnoDB      # Default storage engine
innodb_autoinc_lock_mode=2         # Auto-increment without table locks

Commands for the primary node (the first PXC node to start):

systemctl start mysql@bootstrap.service
systemctl stop mysql@bootstrap.service
systemctl restart mysql@bootstrap.service

Commands for the remaining nodes (every node after the first):

service mysql start
service mysql stop
service mysql restart

Check the PXC cluster status:

show status like 'wsrep_cluster%';

Repeat the configuration above to create two PXC clusters.

6. Starting and stopping PXC nodes

If the last PXC node to shut down exited cleanly, it must be the first node started next time, and it must be started as the primary node. If the last node to shut down did not exit cleanly, first edit /var/lib/mysql/grastate.dat and set safe_to_bootstrap to 1, then start that node as the primary.
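The safe_to_bootstrap check described above can be sketched as follows (the grastate.dat contents here are an assumed example; the real file lives at /var/lib/mysql/grastate.dat):

```python
# Decide whether a node may bootstrap the cluster by reading
# safe_to_bootstrap from a grastate.dat-style text.
GRASTATE = """\
# GALERA saved state
version: 2.1
uuid: 6e6db4f9-7b3e-11e8-9f3a-0242ac110002
seqno: -1
safe_to_bootstrap: 0
"""

def safe_to_bootstrap(text):
    # Scan for the safe_to_bootstrap line and compare its value to "1"
    for line in text.splitlines():
        if line.startswith("safe_to_bootstrap:"):
            return line.split(":", 1)[1].strip() == "1"
    return False

print(safe_to_bootstrap(GRASTATE))  # False -> edit the file before bootstrapping
```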

III. Installing MyCat

1. Install and configure the JDK

Install the JDK:

# Search available JDK versions
yum search jdk
# Install the JDK 1.8 development package
yum install java-1.8.0-openjdk-devel.x86_64

Configure the environment variables:

# Find the JDK installation path
ls -lrt /etc/alternatives/java
vi /etc/profile
# Append the JDK path at the end of the file, for example:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.171-8.b10.el7_5.x86_64/
source /etc/profile

2. Create the data table

Create the t_user table on both PXC clusters:

CREATE TABLE t_user(
    id INT UNSIGNED PRIMARY KEY,
    username VARCHAR(200) NOT NULL,
    password VARCHAR(2000) NOT NULL,
    tel CHAR(11) NOT NULL,
    locked TINYINT(1) UNSIGNED NOT NULL DEFAULT 0,
    INDEX idx_username(username) USING BTREE,
    UNIQUE INDEX unq_username(username) USING BTREE
);

3. Install and configure MyCat

Download MyCat:

http://dl.mycat.io/1.6.5/Mycat-server-1.6.5-release-012223-linux.tar.gz

Upload the MyCat archive to the virtual machine.

The download is a .tar.gz archive, so extract it with tar:

tar -xzf Mycat-server-1.6.5-release-012223-linux.tar.gz

Open firewall ports 8066 and 9066, and disable SELinux.

Make all the .sh files in MyCat's bin directory executable:

chmod -R 777 ./*.sh

Start and stop MyCat:

# cd into MyCat's bin directory
./startup_nowrap.sh   # Start MyCat
ps -aux               # Check the process list
kill -9 <MyCat PID>   # Stop MyCat

Edit server.xml to define the MyCat account and the virtual schema:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mycat:server SYSTEM "server.dtd">
<mycat:server xmlns:mycat="http://io.mycat/">
    <system>
        <property name="nonePasswordLogin">0</property>
        <property name="useHandshakeV10">1</property>
        <property name="useSqlStat">0</property>
        <property name="useGlobleTableCheck">0</property>
        <property name="sequnceHandlerType">2</property>
        <property name="subqueryRelationshipCheck">false</property>
        <property name="processorBufferPoolType">0</property>
        <property name="handleDistributedTransactions">0</property>
        <property name="useOffHeapForMerge">1</property>
        <property name="memoryPageSize">64k</property>
        <property name="spillsFileBufferSize">1k</property>
        <property name="useStreamOutput">0</property>
        <property name="systemReserveMemorySize">384m</property>
        <property name="useZKSwitch">false</property>
    </system>
    <!-- The admin user and its virtual schema -->
    <user name="admin" defaultAccount="true">
        <property name="password">Abc_123456</property>
        <property name="schemas">test</property>
    </user>
</mycat:server>

Edit schema.xml to define the database connections and virtual tables:

<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- Virtual tables -->
    <schema name="test" checkSQLschema="false" sqlMaxLimit="100">
        <table name="t_user" dataNode="dn1,dn2" rule="mod-long" />
    </schema>
    <!-- Shard nodes -->
    <dataNode name="dn1" dataHost="cluster1" database="test" />
    <dataNode name="dn2" dataHost="cluster2" database="test" />
    <!-- Connection info -->
    <dataHost name="cluster1" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="W1" url="192.168.99.151:3306" user="admin" password="Abc_123456">
            <readHost host="W1R1" url="192.168.99.159:3306" user="admin" password="Abc_123456" />
            <readHost host="W1R2" url="192.168.99.215:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="W2" url="192.168.99.159:3306" user="admin" password="Abc_123456">
            <readHost host="W2R1" url="192.168.99.151:3306" user="admin" password="Abc_123456" />
            <readHost host="W2R2" url="192.168.99.215:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
    <dataHost name="cluster2" maxCon="1000" minCon="10" balance="2" writeType="1"
              dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>
        <writeHost host="W1" url="192.168.99.121:3306" user="admin" password="Abc_123456">
            <readHost host="W1R1" url="192.168.99.122:3306" user="admin" password="Abc_123456" />
            <readHost host="W1R2" url="192.168.99.123:3306" user="admin" password="Abc_123456" />
        </writeHost>
        <writeHost host="W2" url="192.168.99.122:3306" user="admin" password="Abc_123456">
            <readHost host="W2R1" url="192.168.99.121:3306" user="admin" password="Abc_123456" />
            <readHost host="W2R2" url="192.168.99.123:3306" user="admin" password="Abc_123456" />
        </writeHost>
    </dataHost>
</mycat:schema>

Edit rule.xml and change the count of the mod-long rule to 2:

<function name="mod-long" class="io.mycat.route.function.PartitionByMod">
    <property name="count">2</property>
</function>

Restart MyCat.

Insert rows into t_user to see how the data is sharded (the tel and locked values here are placeholders added so the VALUES lists match the column lists):

USE test;
# The first record is placed on the second shard
INSERT INTO t_user(id,username,password,tel,locked) VALUES(1,'A',HEX(AES_ENCRYPT('123456','HelloWorld')),'13800000001',0);
# The second record is placed on the first shard
INSERT INTO t_user(id,username,password,tel,locked) VALUES(2,'B',HEX(AES_ENCRYPT('123456','HelloWorld')),'13800000002',0);
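The mod-long rule with count=2 simply routes each row by id modulo the shard count, which is why id 1 lands on the second shard and id 2 on the first; a minimal sketch:

```python
# Sketch of MyCat's mod-long rule with count=2: shard index = id % count.
# The mapping of index 0 -> dn1 and 1 -> dn2 is an assumption matching schema.xml.
def shard_for(record_id, count=2):
    return record_id % count

print(shard_for(1))  # 1 -> dn2 (second shard)
print(shard_for(2))  # 0 -> dn1 (first shard)
```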

4. Configure parent-child tables

Create a customer-hash-int.txt file in the conf directory with the following content:

101=0
102=0
103=0
104=1
105=1
106=1

Add the custom <function> and <tableRule> to rule.xml:

<function name="customer-hash-int" class="io.mycat.route.function.PartitionByFileMap">
    <property name="mapFile">customer-hash-int.txt</property>
</function>

<tableRule name="sharding-customer">
    <rule>
        <columns>sharding_id</columns>
        <algorithm>customer-hash-int</algorithm>
    </rule>
</tableRule>

Edit schema.xml and add the parent-child table definition:

<table name="t_customer" dataNode="dn1,dn2" rule="sharding-customer">
    <childTable name="t_orders" primaryKey="ID" joinKey="customer_id" parentKey="id"/>
</table>

Run the following SQL through MyCat:

USE test;
CREATE TABLE t_customer(
    id INT UNSIGNED PRIMARY KEY,
    username VARCHAR(200) NOT NULL,
    sharding_id INT NOT NULL
);
CREATE TABLE t_orders(
    id INT UNSIGNED PRIMARY KEY,
    customer_id INT NOT NULL,
    datetime TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

Insert rows into t_customer and t_orders, and verify that each child row is placed on the same shard as its parent row.
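The customer-hash-int mapping file above is just a key-to-partition table; a sketch of how sharding_id values resolve to data nodes (the association of partition 0 with dn1 and 1 with dn2 is an assumption matching schema.xml):

```python
# Parse a customer-hash-int-style mapping file: each "key=partition" line
# routes that sharding_id to one data node.
MAP_FILE = """101=0
102=0
103=0
104=1
105=1
106=1"""

mapping = {
    int(k): int(v)
    for k, v in (line.split("=") for line in MAP_FILE.splitlines())
}

print(mapping[103])  # 0 -> dn1
print(mapping[105])  # 1 -> dn2
```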

5. Build a hot-standby MyCat cluster

Deploy MyCat on each of two virtual machine instances.

Deploy Haproxy on another virtual machine instance.

Install Haproxy:

yum install -y haproxy

Edit the configuration file:

vi /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen admin_stats
    bind 0.0.0.0:4001
    mode http
    stats uri /dbs
    stats realm Global\ statistics
    stats auth admin:abc123456

listen proxy-mysql
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcplog  # Log format
    # MyCat listens on 8066, so both forwarding and health checks target that port
    server mycat_1 192.168.99.131:8066 check port 8066 maxconn 2000
    server mycat_2 192.168.99.132:8066 check port 8066 maxconn 2000
    option tcpka  # Use keepalive to detect dead connections

Start Haproxy:

service haproxy start

Open the Haproxy monitoring page:

http://192.168.99.131:4001/dbs

Install Haproxy on the other virtual machine the same way.

Deploy Keepalived on one of the Haproxy virtual machines.

Allow the VRRP protocol through the firewall:

# Allow VRRP
firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT
# Apply the change
firewall-cmd --reload

Install Keepalived:

yum install -y keepalived

Edit the configuration file:

vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.133
    }
}

Start Keepalived:

service keepalived start

ping 192.168.99.133

Deploy Keepalived on the other Haproxy virtual machine the same way.

Connect a MySQL client to 192.168.99.133 and run some inserts, deletes, updates, and queries.

IV. Sysbench Benchmarking

1. Install Sysbench

Online installation:

curl -s https://packagecloud.io/install/repositories/akopytov/sysbench/script.rpm.sh | sudo bash

yum -y install sysbench

Local installation:

Download the archive:

/akopytov/sysbench/zip/1.0

Install the dependencies:

yum install -y automake libtool
yum install -y mysql-devel

Build and install:

# cd into the sysbench directory
./autogen.sh
./configure
make
make install
sysbench --version

2. Run the tests

Prepare the test database:

sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-tables-count=10 --oltp-table-size=100000 prepare

Run the test:

sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-test-mode=complex --threads=10 --time=300 --report-interval=10 run >> /home/mysysbench.log

Clean up the test data:

sysbench /usr/share/sysbench/tests/include/oltp_legacy/oltp.lua --mysql-host=192.168.99.131 --mysql-port=3306 --mysql-user=admin --mysql-password=Abc_123456 --oltp-tables-count=10 cleanup

V. tpcc-mysql Stress Testing

1. Preparation

Edit the my.cnf configuration file:

vi /etc/my.cnf

pxc_strict_mode=DISABLED

Edit the configuration of one of the Haproxy instances:

server mysql_1 192.168.99.151:3306 check port 3306 weight 1 maxconn 2000
server mysql_2 192.168.99.159:3306 check port 3306 weight 1 maxconn 2000
server mysql_3 192.168.99.215:3306 check port 3306 weight 1 maxconn 2000

Restart Haproxy.

Install the dependencies:

yum install -y gcc
yum install -y mysql-devel

2. Install tpcc-mysql

Download the archive:

/Percona-Lab/tpcc-mysql/zip/master

Build:

# cd into tpcc-mysql's src directory
make

Run the create_table.sql and add_fkey_idx.sql scripts.

Initialize the test data:

./tpcc_load -h 192.168.99.131 -d tpcc -u admin -p Abc_123456 -w

Run the stress test:

./tpcc_start -h 192.168.99.131 -d tpcc -u admin -p Abc_123456 -w 1 -c 5 -r 300 -l 600 ->tpcc-output-log

VI. Importing Data

1. Generate 10 million rows

import java.io.FileWriter
import java.io.BufferedWriter

class Test {
    def static void main(String[] args) {
        var writer = new FileWriter("D:/data.txt")
        var buff = new BufferedWriter(writer)
        for (i : 1 .. 10000000) {
            buff.write(i + ",测试数据\n")
        }
        buff.close
        writer.close
    }
}

2. Split the file

Upload data.txt to the Linux machine.

Split the file into 1,000,000-line chunks:

split -l 1000000 -d data.txt
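With 10 million lines and 1,000,000 lines per chunk, split -d produces ten files named x00 through x09; a quick sanity check of that arithmetic:

```python
import math

# 10,000,000 lines split into 1,000,000-line chunks
total_lines = 10_000_000
chunk = 1_000_000
n_files = math.ceil(total_lines / chunk)

# split -d names output files x00, x01, ... by default
names = [f"x{i:02d}" for i in range(n_files)]
print(n_files)               # 10
print(names[0], names[-1])   # x00 x09
```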

3. Prepare the database

Open only one node in each PXC shard.

Edit each PXC node's my.cnf file, then restart the PXC service:

innodb_flush_log_at_trx_commit = 0
innodb_flush_method = O_DIRECT
innodb_buffer_pool_size = 200M

Create the t_test table:

CREATE TABLE t_test(
    id INT UNSIGNED PRIMARY KEY,
    name VARCHAR(200) NOT NULL
);

Configure MyCat:

<table name="t_test" dataNode="dn1,dn2" rule="mod-long" />

<dataHost name="cluster1" maxCon="1000" minCon="10" balance="0" writeType="1"
          dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="W1" url="192.168.99.151:3306" user="admin" password="Abc_123456"/>
</dataHost>
<dataHost name="cluster2" maxCon="1000" minCon="10" balance="0" writeType="1"
          dbType="mysql" dbDriver="native" switchType="1" slaveThreshold="100">
    <heartbeat>select user()</heartbeat>
    <writeHost host="W1" url="192.168.99.121:3306" user="admin" password="Abc_123456"/>
</dataHost>

4. Run the Java program to import data with multiple threads

import org.eclipse.xtend.lib.annotations.Accessors
import java.io.File
import java.sql.DriverManager

class Task implements Runnable {
    @Accessors File file;

    override run() {
        var url = "jdbc:mysql://192.168.99.131:8066/test"
        var username = "admin"
        var password = "Abc_123456"
        var con = DriverManager.getConnection(url, username, password)
        var sql = '''load data local infile '/home/data/«file.name»' ignore into table t_test character set 'utf8' fields terminated by ',' optionally enclosed by '\"' lines terminated by '\n' (id,name);'''
        var pst = con.prepareStatement(sql);
        pst.execute
        con.close
        LoadData.updateNum();
    }
}

import com.mysql.jdbc.Driver
import java.sql.DriverManager
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.ThreadPoolExecutor
import java.util.concurrent.TimeUnit
import java.io.File

class LoadData {
    var static int num = 0;
    var static int end = 0;
    var static pool = new ThreadPoolExecutor(1, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue(200))

    def static void main(String[] args) {
        DriverManager.registerDriver(new Driver)
        var folder = new File("/home/data")
        var files = folder.listFiles
        end = files.length // Shutdown condition for the thread pool
        files.forEach [ one |
            var task = new Task();
            task.file = one;
            pool.execute(task)
        ]
    }

    synchronized def static updateNum() {
        num++;
        if (num == end) {
            pool.shutdown();
            println("Done")
        }
    }
}
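The LoadData class above shuts its pool down once a synchronized counter reaches the number of files; the same completion-counting pattern, minus the database work, can be sketched in Python:

```python
# Each worker "imports" one file chunk; a lock-protected counter records
# completion. No database is involved in this sketch.
from concurrent.futures import ThreadPoolExecutor
import threading

files = [f"x{i:02d}" for i in range(10)]  # the chunk names produced by split
done = 0
lock = threading.Lock()

def import_file(name):
    global done
    # a real worker would run LOAD DATA LOCAL INFILE for this chunk here
    with lock:
        done += 1

with ThreadPoolExecutor(max_workers=5) as pool:
    for name in files:
        pool.submit(import_file, name)
# leaving the with-block waits for all submitted tasks to finish

print(done)  # 10 -> every chunk was handled
```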

VII. Archiving Big Data

1. Install TokuDB

Install jemalloc:

yum install -y jemalloc

Edit the configuration file:

vi /etc/my.cnf

...
[mysqld_safe]
malloc-lib=/usr/lib64/libjemalloc.so.1
...

Restart MySQL.

Disable Linux transparent huge pages (TokuDB requires this):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

Install TokuDB:

yum install -y Percona-Server-tokudb-57.x86_64
ps-admin --enable -uroot -p
service mysql restart
ps-admin --enable -uroot -p

Check the result:

show engines;

2. Configure the Replication cluster

Create the replication user on both TokuDB databases:

CREATE USER 'backup'@'%' IDENTIFIED BY 'Abc_123456';

GRANT super, reload, replication slave ON *.* TO 'backup'@'%';

FLUSH PRIVILEGES;

Edit the configuration files of the two TokuDB nodes as follows:

[mysqld]
server_id = 101
log_bin = mysql_bin
relay_log = relay_bin
...

[mysqld]
server_id = 102
log_bin = mysql_bin
relay_log = relay_bin

Restart both TokuDB nodes.

Run the following four SQL statements on each of the two TokuDB nodes, pointing each node at the other as its master:

# Stop replication
stop slave;
# Point this node at its master
change master to master_host="192.168.99.155",master_port=3306,master_user="backup",master_password="Abc_123456";
# Start replication
start slave;
# Check replication status
show slave status;

# Stop replication
stop slave;
# Point this node at its master
change master to master_host="192.168.99.102",master_port=3306,master_user="backup",master_password="Abc_123456";
# Start replication
start slave;
# Check replication status
show slave status;

3. Create the archive table

CREATE TABLE t_purchase (
    id INT UNSIGNED PRIMARY KEY,
    purchase_price DECIMAL(10,2) NOT NULL,
    purchase_num INT UNSIGNED NOT NULL,
    purchase_sum DECIMAL(10,2) NOT NULL,
    purchase_buyer INT UNSIGNED NOT NULL,
    purchase_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    company_id INT UNSIGNED NOT NULL,
    goods_id INT UNSIGNED NOT NULL,
    KEY idx_company_id(company_id),
    KEY idx_goods_id(goods_id)
) engine=TokuDB;

4. Configure Haproxy + Keepalived hot standby

Install Haproxy on both nodes:

yum install -y haproxy

Edit the configuration file:

vi /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 3000

listen admin_stats
    bind 0.0.0.0:4001
    mode http
    stats uri /dbs
    stats realm Global\ statistics
    stats auth admin:abc123456

listen proxy-mysql
    bind 0.0.0.0:4002
    mode tcp
    balance roundrobin
    option tcplog  # Log format
    server backup_1 192.168.99.102:3306 check port 3306 maxconn 2000
    server backup_2 192.168.99.155:3306 check port 3306 maxconn 2000
    option tcpka  # Use keepalive to detect dead connections

Restart Haproxy.

Allow the VRRP protocol through the firewall:

firewall-cmd --direct --permanent --add-rule ipv4 filter INPUT 0 --protocol vrrp -j ACCEPT

firewall-cmd --reload

Install Keepalived on both nodes:

yum install -y keepalived

Edit the Keepalived configuration file:

vim /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.211
    }
}

Restart Keepalived.

5. Prepare the data to archive

Create the purchase table on both PXC shards:

CREATE TABLE t_purchase (
    id INT UNSIGNED PRIMARY KEY,
    purchase_price DECIMAL(10,2) NOT NULL,
    purchase_num INT UNSIGNED NOT NULL,
    purchase_sum DECIMAL(10,2) NOT NULL,
    purchase_buyer INT UNSIGNED NOT NULL,
    purchase_date TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    company_id INT UNSIGNED NOT NULL,
    goods_id INT UNSIGNED NOT NULL,
    KEY idx_company_id(company_id),
    KEY idx_goods_id(goods_id)
);

Configure MyCat's schema.xml and restart MyCat:

<table name="t_purchase" dataNode="dn1,dn2" rule="mod-long" />

6. Run the archive job

Install pt-archiver:

yum install percona-toolkit
pt-archiver --version
pt-archiver --help

Run the data archive:

pt-archiver --source h=192.168.99.102,P=8066,u=admin,p=Abc_123456,D=test,t=t_purchase \
  --dest h=192.168.99.102,P=3306,u=admin,p=Abc_123456,D=test,t=t_purchase \
  --no-check-charset --where 'purchase_date<"-09"' \
  --progress 5000 --bulk-delete --bulk-insert --limit=10000 --statistics
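Conceptually, pt-archiver with --bulk-insert and --bulk-delete moves matching rows from source to destination in batches of --limit rows; a pure-Python sketch of that loop (the tables are plain lists, and the cutoff date is an assumed placeholder since the --where value above is truncated):

```python
# Simulate pt-archiver's batched move: select up to LIMIT matching rows,
# copy them to the archive, delete them from the source, repeat until empty.
source = [{"id": i, "purchase_date": "2018-08-01"} for i in range(25_000)]
dest = []
LIMIT = 10_000  # corresponds to --limit=10000

def matches(row):
    # Stand-in for the --where clause; the date is an assumed placeholder
    return row["purchase_date"] < "2018-09-01"

while True:
    batch = [r for r in source if matches(r)][:LIMIT]
    if not batch:
        break
    dest.extend(batch)                                   # --bulk-insert
    ids = {r["id"] for r in batch}
    source = [r for r in source if r["id"] not in ids]   # --bulk-delete

print(len(dest), len(source))  # 25000 0
```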
