Install Percona XtraDB Cluster on CentOS 7 - CentLinux


Sunday, 22 September 2019

Install Percona XtraDB Cluster on CentOS 7


Percona XtraDB Cluster (PXC) is a free and open source high availability solution for MySQL database servers. It integrates Percona Server and Percona XtraBackup with the Galera library to enable synchronous multi-master replication. PXC is supported by Percona experts and is distributed under the GNU General Public License (GPL).

A Percona XtraDB Cluster consists of two or more nodes, each holding the same data, kept synchronized across the cluster. Each node is a fully working MySQL-compatible database server in its own right, and new nodes can easily be added to the cluster.

In this article, we are installing Percona XtraDB Cluster on CentOS 7. We are using just two nodes here, but the configurations do not vary, even if you are scaling to hundreds of nodes.

If you are new to MySQL databases, we recommend reading Murach's MySQL (3rd Edition) by Mike Murach & Associates. This book is a very good starting point for beginners.

 


    Features in Percona XtraDB Cluster:

    PXC provides the following advantages over a standalone MySQL server:

    • Cost-effective HA and scalability for MySQL
    • Higher availability
    • Multi-master replication
    • Automatic node provisioning
    • Percona XtraDB Cluster "strict-mode"
    • Zero data Loss
    • Increased read/write scalability
    • ProxySQL load balancer
    • ProxySQL-assisted Percona XtraDB Cluster maintenance mode
    • Percona Monitoring and Management compatibility

    For more details, please visit the PXC documentation.

     

    Environment Specification:

    We are using two CentOS 7 virtual machines with the following specifications.

    Node 1

    • CPU - 3.4 GHz (2 Cores)
    • Memory - 1 GB
    • Storage - 60 GB
    • Hostname - percona-01.example.com
    • IP Address - 192.168.116.204 /24
    • Operating System - CentOS 7.6

    Node 2

    • CPU - 3.4 GHz (2 Cores)
    • Memory - 1 GB
    • Storage - 60 GB
    • Hostname - percona-02.example.com
    • IP Address - 192.168.116.205 /24
    • Operating System - CentOS 7.6

     

    Allow Percona XtraDB Cluster Service Ports in CentOS 7 Firewall:

    Connect to percona-01.example.com via SSH as the root user.

    Percona XtraDB Cluster requires the following service ports for communication: 3306 (MySQL clients), 4567 (Galera replication traffic), 4568 (Incremental State Transfer) and 4444 (State Snapshot Transfer). Therefore, we allow these ports in the CentOS 7 firewall.

    [root@percona-01 ~]# firewall-cmd --permanent --add-port={3306,4444,4567,4568}/tcp
    success
    [root@percona-01 ~]# firewall-cmd --reload
    success
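As an aside, the `{3306,4444,4567,4568}` part is bash brace expansion: the shell expands it before firewall-cmd ever runs, so firewall-cmd receives four separate --add-port arguments. A quick way to see what the shell actually passes to the command:

```shell
# Brace expansion is done by bash, not by firewall-cmd.
# echo makes the expanded argument list visible:
echo --add-port={3306,4444,4567,4568}/tcp
# --add-port=3306/tcp --add-port=4444/tcp --add-port=4567/tcp --add-port=4568/tcp
```

This is why the single command above is equivalent to running firewall-cmd once per port.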

    Percona XtraDB Cluster is not fully compatible with SELinux, and the official PXC documentation recommends putting SELinux in permissive mode prior to installation.

    [root@percona-01 ~]# setenforce 0
    [root@percona-01 ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config && cat /etc/selinux/config

    # This file controls the state of SELinux on the system.
    # SELINUX= can take one of these three values:
    #     enforcing - SELinux security policy is enforced.
    #     permissive - SELinux prints warnings instead of enforcing.
    #     disabled - No SELinux policy is loaded.
    SELINUX=permissive
    # SELINUXTYPE= can take one of three values:
    #     targeted - Targeted processes are protected,
    #     minimum - Modification of targeted policy. Only selected processes are protected.
    #     mls - Multi Level Security protection.
    SELINUXTYPE=targeted
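The sed one-liner above replaces the whole SELINUX= line regardless of its current value. You can preview the substitution harmlessly by piping a sample line through the same sed expression instead of editing the real config file:

```shell
# Preview the substitution on a sample line; the real command above
# edits /etc/selinux/config in place (the -i flag).
echo "SELINUX=enforcing" | sed 's/^SELINUX=.*/SELINUX=permissive/'
# SELINUX=permissive
```

Note that setenforce 0 only changes the running mode; the edit to /etc/selinux/config is what makes permissive mode survive a reboot.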

     

    Install Percona Yum Repository on CentOS 7:

    Download and install the RPM package for the Percona yum repository as follows.

    [root@percona-01 ~]# yum install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm
    Loaded plugins: fastestmirror
    percona-release-latest.noarch.rpm                          |  17 kB  00:00
    Examining /var/tmp/yum-root-umE9yV/percona-release-latest.noarch.rpm: percona-release-1.0-13.noarch
    Marking /var/tmp/yum-root-umE9yV/percona-release-latest.noarch.rpm to be installed
    Resolving Dependencies
    --> Running transaction check
    ---> Package percona-release.noarch 0:1.0-13 will be installed
    --> Finished Dependency Resolution

    Dependencies Resolved

    ================================================================================
     Package          Arch    Version   Repository                            Size
    ================================================================================
    Installing:
     percona-release  noarch  1.0-13    /percona-release-latest.noarch        20 k

    Transaction Summary
    ================================================================================
    Install  1 Package

    Total size: 20 k
    Installed size: 20 k
    Downloading packages:
    Running transaction check
    Running transaction test
    Transaction test succeeded
    Running transaction
      Installing : percona-release-1.0-13.noarch                              1/1
    * Enabling the Percona Original repository
    <*> All done!
    The percona-release package now contains a percona-release script that
    can enable additional repositories for our newer products.

    For example, to enable the Percona Server 8.0 repository use:

      percona-release setup ps80

    Note: To avoid conflicts with older product versions, the percona-release setup
    command may disable our original repository for some products.

    For more information, please visit:
      https://www.percona.com/doc/percona-repo-config/percona-release.html

      Verifying  : percona-release-1.0-13.noarch                              1/1

    Installed:
      percona-release.noarch 0:1.0-13

    Complete!

    Build the metadata cache for all yum repositories.

    [root@percona-01 ~]# yum makecache fast
    Loaded plugins: fastestmirror
    Determining fastest mirrors
     * base: mirrors.ges.net.pk
     * extras: mirrors.ges.net.pk
     * updates: mirrors.ges.net.pk
    base                                                       | 3.6 kB  00:00
    extras                                                     | 2.9 kB  00:00
    percona-release-noarch                                     | 2.9 kB  00:00
    percona-release-x86_64                                     | 2.9 kB  00:00
    updates                                                    | 2.9 kB  00:00
    (1/6): base/7/x86_64/group_gz                              | 165 kB  00:00
    (2/6): extras/7/x86_64/primary_db                          | 152 kB  00:01
    (3/6): percona-release-noarch/7/primary_db                 |  22 kB  00:01
    (4/6): updates/7/x86_64/primary_db                         | 1.1 MB  00:05
    (5/6): percona-release-x86_64/7/primary_db                 | 991 kB  00:07
    (6/6): base/7/x86_64/primary_db                            | 6.0 MB  01:11
    Metadata Cache Created

    We have installed the Percona yum repository. Now we can install Percona XtraDB Cluster with a single yum command.

    [root@percona-01 ~]# yum install -y Percona-XtraDB-Cluster-57
    ...
    Installed:
      Percona-XtraDB-Cluster-57.x86_64 0:5.7.27-31.39.1.el7
      Percona-XtraDB-Cluster-shared-57.x86_64 0:5.7.27-31.39.1.el7
      Percona-XtraDB-Cluster-shared-compat-57.x86_64 0:5.7.27-31.39.1.el7

    Dependency Installed:
      Percona-XtraDB-Cluster-client-57.x86_64 0:5.7.27-31.39.1.el7
      Percona-XtraDB-Cluster-server-57.x86_64 0:5.7.27-31.39.1.el7
      libev.x86_64 0:4.15-7.el7
      lsof.x86_64 0:4.87-6.el7
      percona-xtrabackup-24.x86_64 0:2.4.15-1.el7
      perl.x86_64 4:5.16.3-294.el7_6
      perl-Carp.noarch 0:1.26-244.el7
      perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7
      perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7
      perl-DBD-MySQL.x86_64 0:4.023-6.el7
      perl-DBI.x86_64 0:1.627-4.el7
      perl-Data-Dumper.x86_64 0:2.145-3.el7
      perl-Digest.noarch 0:1.17-245.el7
      perl-Digest-MD5.x86_64 0:2.52-3.el7
      perl-Encode.x86_64 0:2.51-7.el7
      perl-Exporter.noarch 0:5.68-3.el7
      perl-File-Path.noarch 0:2.09-2.el7
      perl-File-Temp.noarch 0:0.23.01-3.el7
      perl-Filter.x86_64 0:1.49-3.el7
      perl-Getopt-Long.noarch 0:2.40-3.el7
      perl-HTTP-Tiny.noarch 0:0.033-3.el7
      perl-IO-Compress.noarch 0:2.061-2.el7
      perl-Net-Daemon.noarch 0:0.48-5.el7
      perl-PathTools.x86_64 0:3.40-5.el7
      perl-PlRPC.noarch 0:0.2020-14.el7
      perl-Pod-Escapes.noarch 1:1.04-294.el7_6
      perl-Pod-Perldoc.noarch 0:3.20-4.el7
      perl-Pod-Simple.noarch 1:3.28-4.el7
      perl-Pod-Usage.noarch 0:1.63-3.el7
      perl-Scalar-List-Utils.x86_64 0:1.27-248.el7
      perl-Socket.x86_64 0:2.010-4.el7
      perl-Storable.x86_64 0:2.45-3.el7
      perl-Text-ParseWords.noarch 0:3.29-4.el7
      perl-Time-HiRes.x86_64 4:1.9725-3.el7
      perl-Time-Local.noarch 0:1.2300-2.el7
      perl-constant.noarch 0:1.27-2.el7
      perl-libs.x86_64 4:5.16.3-294.el7_6
      perl-macros.x86_64 4:5.16.3-294.el7_6
      perl-parent.noarch 1:0.225-244.el7
      perl-podlators.noarch 0:2.5.1-3.el7
      perl-threads.x86_64 0:1.87-4.el7
      perl-threads-shared.x86_64 0:1.43-6.el7
      qpress.x86_64 0:11-1.el7
      rsync.x86_64 0:3.1.2-6.el7_6.1
      socat.x86_64 0:1.7.3.2-2.el7

    Replaced:
      mariadb-libs.x86_64 1:5.5.60-1.el7_5

    Complete!

    Enable and start the Percona database service.

    [root@percona-01 ~]# systemctl enable --now mysql.service

    The Percona installer generates a temporary password for the root user and writes it to /var/log/mysqld.log.

    We can obtain this password using the grep command.

    [root@percona-01 ~]# grep 'temporary password' /var/log/mysqld.log
    2019-09-18T17:12:24.404976Z 1 [Note] A temporary password is generated for root@localhost: EAFR7kje,_HR
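If you only want the password itself (for example, in a script), the last field of that log line can be peeled off with shell parameter expansion. Here we demonstrate it on the sample line from the output above:

```shell
# Sample log line copied from the grep output above; on a real server
# you would capture it with:
#   line=$(grep 'temporary password' /var/log/mysqld.log | tail -1)
line='2019-09-18T17:12:24.404976Z 1 [Note] A temporary password is generated for root@localhost: EAFR7kje,_HR'
pass="${line##*: }"   # strip everything up to and including the last ': '
echo "$pass"
# EAFR7kje,_HR
```

This sketch is just a convenience; copying the password by hand from the grep output works equally well.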

    Log in to the Percona instance using this temporary password.

    [root@percona-01 ~]# mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 11
    Server version: 5.7.27-30-57-log

    Copyright (c) 2009-2019 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql>

    Set a new password for the root user.

    mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123';
    Query OK, 0 rows affected (0.00 sec)

    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)

    mysql> exit
    Bye

    Stop the Percona database service.

    [root@percona-01 ~]# systemctl stop mysql.service

    Repeat all of the above steps on percona-02.example.com.

     

    Configure PXC nodes for Write-set Replication:

    Configure Percona XtraDB Cluster settings on percona-01.example.com.

    [root@percona-01 ~]# vi /etc/percona-xtradb-cluster.conf.d/wsrep.cnf

    Find and set the following directives in that file.

    wsrep_cluster_address=gcomm://192.168.116.204,192.168.116.205
    wsrep_node_address=192.168.116.204
    wsrep_node_name=percona-01
    wsrep_sst_auth="sstuser:Ahm3r"

    Configure Percona XtraDB Cluster settings on percona-02.example.com.

    [root@percona-02 ~]# vi /etc/percona-xtradb-cluster.conf.d/wsrep.cnf

    Find and set the following directives in that file.

    wsrep_cluster_address=gcomm://192.168.116.204,192.168.116.205
    wsrep_node_address=192.168.116.205
    wsrep_node_name=percona-02
    wsrep_sst_auth="sstuser:Ahm3r"
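Since only wsrep_node_address and wsrep_node_name differ between the two nodes, the per-node settings can be generated from a few variables instead of being typed twice. A minimal sketch (the CLUSTER_IPS / NODE_IP / NODE_NAME variable names are our own, not PXC settings):

```shell
#!/bin/bash
# Generate the node-specific wsrep directives shown above.
# CLUSTER_IPS is identical on every node; adjust NODE_IP and
# NODE_NAME per host (values shown are for percona-02).
CLUSTER_IPS="192.168.116.204,192.168.116.205"
NODE_IP="192.168.116.205"
NODE_NAME="percona-02"

cat <<EOF
wsrep_cluster_address=gcomm://${CLUSTER_IPS}
wsrep_node_address=${NODE_IP}
wsrep_node_name=${NODE_NAME}
wsrep_sst_auth="sstuser:Ahm3r"
EOF
```

On a real node you would merge this output into /etc/percona-xtradb-cluster.conf.d/wsrep.cnf rather than just printing it.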

     

    Bootstrap the first node:

    We have configured both PXC nodes on CentOS 7.

    Now it is time to bootstrap (initialize) the first node of the Percona XtraDB Cluster.

    The first node should be the one that contains the data you want to replicate to the other nodes.

    Bootstrap the percona-01.example.com node using the following command.

    (Note: Please ensure that mysql.service is stopped on all nodes before bootstrapping.)

    [root@percona-01 ~]# systemctl start mysql@bootstrap.service

    Connect to the Percona database instance and check the cluster status.

    [root@percona-01 ~]# mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 11
    Server version: 5.7.27-30-57-log Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39

    Copyright (c) 2009-2019 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show status like 'wsrep%';
    +----------------------------------+--------------------------------------+
    | Variable_name                    | Value                                |
    +----------------------------------+--------------------------------------+
    | wsrep_local_state_uuid           | 83b4278c-da37-11e9-82ae-1e52dc50d51c |
    | wsrep_protocol_version           | 9                                    |
    | wsrep_last_applied               | 2                                    |
    | wsrep_last_committed             | 2                                    |
    | wsrep_replicated                 | 0                                    |
    | wsrep_replicated_bytes           | 0                                    |
    | wsrep_repl_keys                  | 0                                    |
    | wsrep_repl_keys_bytes            | 0                                    |
    | wsrep_repl_data_bytes            | 0                                    |
    | wsrep_repl_other_bytes           | 0                                    |
    | wsrep_received                   | 2                                    |
    | wsrep_received_bytes             | 149                                  |
    | wsrep_local_commits              | 0                                    |
    | wsrep_local_cert_failures        | 0                                    |
    | wsrep_local_replays              | 0                                    |
    | wsrep_local_send_queue           | 0                                    |
    | wsrep_local_send_queue_max       | 1                                    |
    | wsrep_local_send_queue_min       | 0                                    |
    | wsrep_local_send_queue_avg       | 0.000000                             |
    | wsrep_local_recv_queue           | 0                                    |
    | wsrep_local_recv_queue_max       | 2                                    |
    | wsrep_local_recv_queue_min       | 0                                    |
    | wsrep_local_recv_queue_avg       | 0.500000                             |
    | wsrep_local_cached_downto        | 0                                    |
    | wsrep_flow_control_paused_ns     | 0                                    |
    | wsrep_flow_control_paused        | 0.000000                             |
    | wsrep_flow_control_sent          | 0                                    |
    | wsrep_flow_control_recv          | 0                                    |
    | wsrep_flow_control_interval      | [ 100, 100 ]                         |
    | wsrep_flow_control_interval_low  | 100                                  |
    | wsrep_flow_control_interval_high | 100                                  |
    | wsrep_flow_control_status        | OFF                                  |
    | wsrep_cert_deps_distance         | 0.000000                             |
    | wsrep_apply_oooe                 | 0.000000                             |
    | wsrep_apply_oool                 | 0.000000                             |
    | wsrep_apply_window               | 0.000000                             |
    | wsrep_commit_oooe                | 0.000000                             |
    | wsrep_commit_oool                | 0.000000                             |
    | wsrep_commit_window              | 0.000000                             |
    | wsrep_local_state                | 4                                    |
    | wsrep_local_state_comment        | Synced                               |
    | wsrep_cert_index_size            | 0                                    |
    | wsrep_cert_bucket_count          | 22                                   |
    | wsrep_gcache_pool_size           | 1320                                 |
    | wsrep_causal_reads               | 0                                    |
    | wsrep_cert_interval              | 0.000000                             |
    | wsrep_open_transactions          | 0                                    |
    | wsrep_open_connections           | 0                                    |
    | wsrep_ist_receive_status         |                                      |
    | wsrep_ist_receive_seqno_start    | 0                                    |
    | wsrep_ist_receive_seqno_current  | 0                                    |
    | wsrep_ist_receive_seqno_end      | 0                                    |
    | wsrep_incoming_addresses         | 192.168.116.204:3306                 |
    | wsrep_cluster_weight             | 1                                    |
    | wsrep_desync_count               | 0                                    |
    | wsrep_evs_delayed                |                                      |
    | wsrep_evs_evict_list             |                                      |
    | wsrep_evs_repl_latency           | 0/0/0/0/0                            |
    | wsrep_evs_state                  | OPERATIONAL                          |
    | wsrep_gcomm_uuid                 | f4826ade-dba8-11e9-a126-13c9944ecef9 |
    | wsrep_cluster_conf_id            | 1                                    |
    | wsrep_cluster_size               | 1                                    |
    | wsrep_cluster_state_uuid         | 83b4278c-da37-11e9-82ae-1e52dc50d51c |
    | wsrep_cluster_status             | Primary                              |
    | wsrep_connected                  | ON                                   |
    | wsrep_local_bf_aborts            | 0                                    |
    | wsrep_local_index                | 0                                    |
    | wsrep_provider_name              | Galera                               |
    | wsrep_provider_vendor            | Codership Oy <info@codership.com>    |
    | wsrep_provider_version           | 3.39(rb3295e6)                       |
    | wsrep_ready                      | ON                                   |
    +----------------------------------+--------------------------------------+
    71 rows in set (0.00 sec)

    Before adding any other node to our cluster, we must create a user for SST (State Snapshot Transfer), which is used to fully synchronize new nodes with the cluster.

    mysql> CREATE USER 'sstuser'@'%' IDENTIFIED BY 'Ahm3r';
    Query OK, 0 rows affected (0.06 sec)

    mysql> GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'%';
    Query OK, 0 rows affected (0.01 sec)

    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.00 sec)

    mysql> EXIT
    Bye

     

    Adding Nodes to Percona XtraDB Cluster on CentOS 7:

    Connect to percona-02.example.com via SSH as the root user.

    Start the Percona service using the systemctl command.

    [root@percona-02 ~]# systemctl start mysql.service

    If our configuration is correct, the percona-02 node receives the SST automatically and joins the cluster.

    [root@percona-02 ~]# mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 11
    Server version: 5.7.27-30-57-log Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39

    Copyright (c) 2009-2019 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> show status like 'wsrep%';
    +----------------------------------+-------------------------------------------+
    | Variable_name                    | Value                                     |
    +----------------------------------+-------------------------------------------+
    | wsrep_local_state_uuid           | 83b4278c-da37-11e9-82ae-1e52dc50d51c      |
    | wsrep_protocol_version           | 9                                         |
    | wsrep_last_applied               | 5                                         |
    | wsrep_last_committed             | 5                                         |
    | wsrep_replicated                 | 0                                         |
    | wsrep_replicated_bytes           | 0                                         |
    | wsrep_repl_keys                  | 0                                         |
    | wsrep_repl_keys_bytes            | 0                                         |
    | wsrep_repl_data_bytes            | 0                                         |
    | wsrep_repl_other_bytes           | 0                                         |
    | wsrep_received                   | 3                                         |
    | wsrep_received_bytes             | 234                                       |
    | wsrep_local_commits              | 0                                         |
    | wsrep_local_cert_failures        | 0                                         |
    | wsrep_local_replays              | 0                                         |
    | wsrep_local_send_queue           | 0                                         |
    | wsrep_local_send_queue_max       | 1                                         |
    | wsrep_local_send_queue_min       | 0                                         |
    | wsrep_local_send_queue_avg       | 0.000000                                  |
    | wsrep_local_recv_queue           | 0                                         |
    | wsrep_local_recv_queue_max       | 1                                         |
    | wsrep_local_recv_queue_min       | 0                                         |
    | wsrep_local_recv_queue_avg       | 0.000000                                  |
    | wsrep_local_cached_downto        | 0                                         |
    | wsrep_flow_control_paused_ns     | 0                                         |
    | wsrep_flow_control_paused        | 0.000000                                  |
    | wsrep_flow_control_sent          | 0                                         |
    | wsrep_flow_control_recv          | 0                                         |
    | wsrep_flow_control_interval      | [ 141, 141 ]                              |
    | wsrep_flow_control_interval_low  | 141                                       |
    | wsrep_flow_control_interval_high | 141                                       |
    | wsrep_flow_control_status        | OFF                                       |
    | wsrep_cert_deps_distance         | 0.000000                                  |
    | wsrep_apply_oooe                 | 0.000000                                  |
    | wsrep_apply_oool                 | 0.000000                                  |
    | wsrep_apply_window               | 0.000000                                  |
    | wsrep_commit_oooe                | 0.000000                                  |
    | wsrep_commit_oool                | 0.000000                                  |
    | wsrep_commit_window              | 0.000000                                  |
    | wsrep_local_state                | 4                                         |
    | wsrep_local_state_comment        | Synced                                    |
    | wsrep_cert_index_size            | 0                                         |
    | wsrep_cert_bucket_count          | 22                                        |
    | wsrep_gcache_pool_size           | 1456                                      |
    | wsrep_causal_reads               | 0                                         |
    | wsrep_cert_interval              | 0.000000                                  |
    | wsrep_open_transactions          | 0                                         |
    | wsrep_open_connections           | 0                                         |
    | wsrep_ist_receive_status         |                                           |
    | wsrep_ist_receive_seqno_start    | 0                                         |
    | wsrep_ist_receive_seqno_current  | 0                                         |
    | wsrep_ist_receive_seqno_end      | 0                                         |
    | wsrep_incoming_addresses         | 192.168.116.205:3306,192.168.116.204:3306 |
    | wsrep_cluster_weight             | 2                                         |
    | wsrep_desync_count               | 0                                         |
    | wsrep_evs_delayed                |                                           |
    | wsrep_evs_evict_list             |                                           |
    | wsrep_evs_repl_latency           | 0/0/0/0/0                                 |
    | wsrep_evs_state                  | OPERATIONAL                               |
    | wsrep_gcomm_uuid                 | b979106d-dbaa-11e9-b19a-ce0cf7b20e5d      |
    | wsrep_cluster_conf_id            | 2                                         |
    | wsrep_cluster_size               | 2                                         |
    | wsrep_cluster_state_uuid         | 83b4278c-da37-11e9-82ae-1e52dc50d51c      |
    | wsrep_cluster_status             | Primary                                   |
    | wsrep_connected                  | ON                                        |
    | wsrep_local_bf_aborts            | 0                                         |
    | wsrep_local_index                | 0                                         |
    | wsrep_provider_name              | Galera                                    |
    | wsrep_provider_vendor            | Codership Oy <info@codership.com>         |
    | wsrep_provider_version           | 3.39(rb3295e6)                            |
    | wsrep_ready                      | ON                                        |
    +----------------------------------+-------------------------------------------+
    71 rows in set (0.07 sec)

    You can see that wsrep_cluster_size is now 2, which shows that the percona-02 node has joined our Percona XtraDB Cluster.
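Instead of scanning the full 71-row listing, the cluster size can be fetched non-interactively with the standard mysql client options (-N suppresses the column header, -e runs a statement and exits). Since that requires a live cluster, the printf line below simulates the tab-separated row the client produces, so the awk extraction can be seen in isolation:

```shell
# On a live node (prompts for the root password):
#   mysql -u root -p -N -e "SHOW STATUS LIKE 'wsrep_cluster_size'" | awk '{print $2}'
# Simulated here with the tab-separated row that mysql -N prints:
printf 'wsrep_cluster_size\t2\n' | awk '{print $2}'
# 2
```

A one-liner like this is handy in monitoring scripts that alert when the cluster size drops below the expected node count.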

     

    Verify Replication in our Percona XtraDB Cluster:

    We can verify replication by manipulating data on one node and checking whether the change is replicated to the other node.

    Connect to percona-02.example.com via SSH as the root user.

    Connect to the Percona database instance and execute the following commands.

    mysql> CREATE DATABASE RECIPES;
    Query OK, 1 row affected (0.06 sec)

    mysql> USE RECIPES;
    Database changed

    mysql> CREATE TABLE TAB1 (CONTACT_ID INT PRIMARY KEY, CONTACT_NAME VARCHAR(20));
    Query OK, 0 rows affected (0.05 sec)

    mysql> INSERT INTO TAB1 VALUES (1,'Ahmer');
    Query OK, 1 row affected (0.04 sec)

    mysql> INSERT INTO TAB1 VALUES (2,'Mansoor');
    Query OK, 1 row affected (0.00 sec)

    mysql> INSERT INTO TAB1 VALUES (3,'Salman');
    Query OK, 1 row affected (0.00 sec)

    Connect to percona-01.example.com via SSH as the root user.

    Connect to the Percona database instance and query the data that we inserted on the percona-02 node.

    [root@percona-01 ~]# mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 14
    Server version: 5.7.27-30-57-log Percona XtraDB Cluster (GPL), Release rel30, Revision 64987d4, WSREP version 31.39, wsrep_31.39

    Copyright (c) 2009-2019 Percona LLC and/or its affiliates
    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> USE RECIPES;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> SELECT * FROM TAB1;
    +------------+--------------+
    | CONTACT_ID | CONTACT_NAME |
    +------------+--------------+
    |          1 | Ahmer        |
    |          2 | Mansoor      |
    |          3 | Salman       |
    +------------+--------------+
    3 rows in set (0.00 sec)

    This shows that our Percona XtraDB Cluster is working correctly.

    We have successfully installed and configured a two-node Percona XtraDB Cluster on CentOS 7.
