Monday, 16 February 2015

Getting Started with Percona XtraDB Cluster


As a DevOps practitioner I have been exploring Percona XtraDB Cluster, and in a series of blog posts I will share what I learn. This post captures, step by step, the installation of Percona XtraDB in cluster mode.

Introduction: Why Cluster Mode?

Percona XtraDB Cluster is a high-availability and scalability solution for MySQL users which provides:
          Synchronous replication : a transaction is committed either on all nodes or on none.
          Multi-master replication : you can write to any node.
          Parallel applying of events : events are applied in parallel on all slave nodes.
          Automatic node provisioning
          Data consistency

    Straight into the Act: Installing Percona XtraDB Cluster

    Pre-requisites/Assumptions
    1. OS - Ubuntu (these steps use Percona's precise repository, i.e. Ubuntu 12.04)
    2. Three Ubuntu nodes are available
    For the sake of this discussion let's name the nodes as
    node 1
    hostname: percona_xtradb_cluster1
    IP: 192.168.1.2

    node 2
    hostname: percona_xtradb_cluster2
    IP: 192.168.1.3

    node 3
    hostname: percona_xtradb_cluster3
    IP: 192.168.1.4
    Repeat the steps below on all three nodes.

    STEP 1 : Add the Percona apt repository (run the following commands as root) :
    $ echo "deb http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
    $ echo "deb-src http://repo.percona.com/apt precise main" >> /etc/apt/sources.list.d/percona.list
    $ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
    STEP 2 : After adding the Percona repository, update the apt cache so that the new packages become visible :
    $ apt-get update
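    To confirm the repository was added correctly, you can ask apt where it would install the cluster package from (a quick sanity check; the version shown will vary):
    $ apt-cache policy percona-xtradb-cluster-56
    The Candidate line should point at a version coming from repo.percona.com.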
    STEP 3 : Install Percona XtraDB Cluster :
    $ apt-get install -y percona-xtradb-cluster-56 qpress xtrabackup
    STEP 4 : Install additional packages for editing files, downloading, etc. :
    $ apt-get install -y python-software-properties vim wget curl netcat
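    Before moving on, you can verify the installation by listing the installed Percona packages and checking the server binary (output will differ depending on the packaged release):
    $ dpkg -l | grep -i percona
    $ mysqld --version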

    With the above steps we have installed Percona XtraDB Cluster on every node. Now we'll configure each node so that a cluster of three nodes can be formed.

    Node Configuration:

    Add/modify the file /etc/mysql/my.cnf on the first node :
    [mysqld] # This section is for the MySQL server configuration
    user = mysql
    default_storage_engine = InnoDB
    basedir = /usr
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    innodb_autoinc_lock_mode = 2
    log_queries_not_using_indexes = 1
    max_allowed_packet = 128M
    binlog_format = ROW
    wsrep_provider = /usr/lib/libgalera_smm.so
    wsrep_node_address = 192.168.1.2
    wsrep_cluster_name="newcluster"
    wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
    wsrep_node_name = cluster1
    wsrep_slave_threads = 4
    wsrep_sst_method = xtrabackup-v2
    wsrep_sst_auth = sst:secret

    [sst] # This section is for SST (state snapshot transfer) configuration
    streamfmt = xbstream

    [xtrabackup] # This section defines tuning configuration for xtrabackup
    compress
    compact
    parallel = 2
    compress_threads = 2
    rebuild_threads = 2
    Note :
             wsrep_node_address = {IP of the current node}
             wsrep_cluster_name = {name of the cluster}
             wsrep_cluster_address = gcomm://{comma-separated IP addresses of the nodes in the cluster}
             wsrep_node_name = {name of the current node, used to identify it in the cluster}

    Now that the first node is configured, bootstrap the cluster from it.
    Start the node :
    $ service mysql bootstrap-pxc
    Create the sst user used for authentication of cluster nodes during state transfer :
    $ mysql -e "GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst'@'localhost' IDENTIFIED BY 'secret';"
    Check cluster status :
    $ mysql -e "show global status like 'wsrep%';"

    Configuration file for the second node (note the changed wsrep_node_address and wsrep_node_name):
    [mysqld]
    user = mysql
    default_storage_engine = InnoDB
    basedir = /usr
    datadir = /var/lib/mysql
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    innodb_autoinc_lock_mode = 2
    log_queries_not_using_indexes = 1
    max_allowed_packet = 128M
    binlog_format = ROW
    wsrep_provider = /usr/lib/libgalera_smm.so
    wsrep_node_address = 192.168.1.3
    wsrep_cluster_name="newcluster"
    wsrep_cluster_address = gcomm://192.168.1.2,192.168.1.3,192.168.1.4
    wsrep_node_name = cluster2
    wsrep_slave_threads = 4
    wsrep_sst_method = xtrabackup-v2
    wsrep_sst_auth = sst:secret

    [sst]
    streamfmt = xbstream

    [xtrabackup]
    compress
    compact
    parallel = 2
    After completing the configuration, start the services of node 2; it will join the cluster and sync its data via SST.
    Start node 2 :
    $ service mysql start
    Check cluster status :
    $ mysql -e "show global status like 'wsrep%';"
    Node 3 is configured in the same way; only the values listed below change.
    Changes in configuration for node 3 :
    wsrep_node_address = 192.168.1.4

    wsrep_node_name = cluster3
    Start node 3 :
    $ service mysql start
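    Once node 3 is up, a final status check on any node should show all three members (again, a sketch of the expected values; the address order may differ):
    $ mysql -e "show global status where Variable_name in ('wsrep_cluster_size','wsrep_incoming_addresses');"
    wsrep_cluster_size        3
    wsrep_incoming_addresses  192.168.1.2:3306,192.168.1.3:3306,192.168.1.4:3306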

    Test the Percona XtraDB Cluster:

    Log in with the mysql client on any node and create some test data:
    mysql>create database opstree;
    mysql>use opstree;
    mysql>create table nu113r(name varchar(50));
    mysql>insert into nu113r values("zukin");
    mysql>select * from nu113r;
    Check the database on another node with the mysql client:
    mysql>show databases;
    Note : There should be a database named “opstree”.
    mysql>use opstree;
    mysql>select * from nu113r; 
    Note : The data will be the same as on the previous node.
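    Because the cluster is multi-master, writes work from any node as well. As a quick extra check, insert a row with an example value on this second node:
    mysql>insert into nu113r values("percona");
    Then query the table again on the first node; both rows should be visible there too:
    mysql>select * from nu113r;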

    Let me know if you have any suggestions. You can also contact me at belwal.mohit@gmail.com.
