Installation and Configuration of MySQL Cluster in Linux
Having set up a DHCP server, I am planning to deploy a DNS server in our network. It is generally better to have DNS and DHCP on the same server, so that an IP address allocated by the DHCP server to a particular host can be updated in the DNS database the moment it is assigned. However, if this combined DNS-DHCP server goes down, it adversely impacts the whole production environment. As a preventive measure, we have to introduce a secondary DNS-DHCP server, configured in high availability (HA) mode, so that if the primary server goes down, the secondary server takes over and serves the incoming requests.
PowerDNS is our prime choice for the authoritative DNS server, with a MySQL database as the backend, as this combination has its own merits. This setup receives incoming queries, looks up the DNS records in the MySQL database and returns the appropriate responses. Since the DNS servers run in HA, the databases on both servers must always be kept in sync. Moreover, both DHCP servers work in active-active mode: they divide the IP address pool between themselves and serve incoming DHCP requests together. As a result, reads and writes happen on both servers against their own MySQL databases. To keep the two databases in sync, there has to be a mechanism whereby any change one server makes to its database is reflected in the database of the other, so that both always hold the same DNS records.
To create such a high availability environment, MySQL provides two solutions – MySQL Replication and MySQL Cluster. Master-slave replication, wherein we have one read/write master and one or more read-only slaves, is not useful in this scenario, as the replication is one-way (from master to slave). Master-master replication is an alternative, but it is not a good choice when multiple masters receive write requests simultaneously. Circular replication (A to B, B to C, C to D, and D to A) has the big disadvantage that if any node fails, replication halts for the subsequent nodes in the chain.
On the other hand, MySQL Cluster is:
- An in-memory (or main-memory) database system – it relies on main memory for data storage, management and manipulation, to achieve better performance when querying the data.
- A shared-nothing architecture – data is stored on multiple independent data nodes in the cluster, instead of on shared storage, with no single point of failure (SPOF).
- A distributed database system – portions of the database are stored on multiple nodes in the cluster, managed by a central distributed database management system, so that data integrity is maintained even if a node goes down. Moreover, scaling becomes an easy task, with up to 99.999% availability.
- Synchronously replicated – database updates are synchronously replicated between all the data nodes in the cluster, guaranteeing data availability upon node failures.
Cluster Components
Management Node/Server:
It maintains the cluster's global configuration file and provides cluster information whenever required. It also keeps logs of events happening in the cluster. The management client on the management node does all the administrative work, like starting/stopping nodes, starting/stopping backups and checking the cluster status.
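For illustration, a few typical operations run from the management client (these are standard ndb_mgm commands, shown here only as a sketch):
ndb_mgm -e "SHOW"          # print the current status of all cluster nodes
ndb_mgm -e "START BACKUP"  # trigger an online backup on the data nodes
ndb_mgm -e "SHUTDOWN"      # stop all data and management nodes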
MySQL Nodes/Servers:
These servers contain local configuration files and run the mysqld daemon; they group together to form a cluster, thus achieving high performance (due to parallelism) and high availability. These nodes serve all incoming queries, communicate with the data nodes and give applications access to the cluster.
Data Nodes:
These nodes run the ndbd daemon and are responsible for data storage and retrieval. Multiple data nodes come together to provide the storage for the entire cluster, so that clients see them as a single database. Besides storing data, they monitor the other data nodes in the cluster and inform the management server in case of failures.
How does it work?
At the heart of MySQL Cluster lies the NDB (Network Database) storage engine, which is actually responsible for the high-availability environment and the data redundancy. In the basic scenario, we have an application that sends a query, usually an INSERT/UPDATE/DELETE-like SQL statement, to a MySQL server. In MySQL Cluster, the MySQL servers run the NDB storage engine (NDBCLUSTER); one of them receives the incoming SQL query and communicates with the data nodes to store the data. After confirming that the data has been successfully written to the data nodes, the MySQL server acknowledges the application with an OK status.
To keep data available even after a node failure, the whole data set is divided into a number of chunks called 'partitions', and the number of partitions equals the number of data nodes present in the cluster. Each node then stores one partition along with a copy of another partition, called a 'replica'. The number of replicas is set in the configuration file on the management node (the NoOfReplicas parameter). MySQL Cluster boasts 99.999% availability, and replicas are the key element behind it.
Handling Failures
When a MySQL node fails, thanks to the shared-nothing architecture, no other node (MySQL, data or management) in the cluster is affected; they keep doing their tasks. It is up to the application to reconnect to another MySQL node in the cluster. On the other hand, if a data node fails, another data node in the cluster takes over its responsibility, and thanks to data redundancy (replicas), the data remains available. While MySQL Cluster takes care of node failures, you need to make sure a failed data node comes back up as early as possible, as you never know when other nodes will stop working. Failure of a management node does not hamper the setup much, as it only deals with monitoring and backup tasks, but you may not be able to start/stop other cluster nodes; having two management nodes is definitely a solution and will certainly help.
Implementation
Considering that I have three subnets and no budget constraints, I will deploy 4 DNS-DHCP servers, of which 3 will be primary for their respective networks and the fourth will be secondary for all of them. I will have MySQL databases on all 4 of these nodes (MySQL + data nodes), and they are to be clustered so that they stay in sync. For setting up the MySQL cluster, I will need another two nodes to be configured as management nodes. So, the scenario is as below:
Management Nodes:
- mgmtsrv01 172.22.100.10
- mgmtsrv02 172.22.96.10
MySQL and Data Nodes:
- dhcpsrv01 172.22.11.100 (Primary DHCP for Region 1)
- dhcpsrv02 172.22.22.100 (Primary DHCP for Region 2)
- dhcpsrv03 172.22.33.100 (Primary DHCP for Region 3)
- dhcpsrv04 172.22.44.100 (Secondary DHCP for all)
System – Linux based virtual machines
Operating System – CentOS release 6.7
RAM – 8GB
CPU Cores – 4
Package Dependencies –
libaio.x86_64 0:0.3.107-10.el6.rpm
libaio-devel.x86_64 0:0.3.107-10.el6.rpm
numactl-2.0.9-2.el6.x86_64.rpm
cryptopp-5.6.1-8.1.x86_64.rpm (required for PDNS)
php-pear-MDB2-Driver-mysql-1.5.0-0.8.b4.el6.noarch.rpm (required for PDNS)
php-pear-MDB2-2.5.0-0.7.b4.el6.remi.noarch.rpm (required for PDNS)
php-mysql-5.3.3-46.el6_6.x86_64 (required for PDNS)
php-pear-1.9.4-4.el6.noarch (required for PDNS)
php-pdo-5.3.3-46.el6_6.x86_64 (required for PDNS)
perl-DBD-MySQL-4.013-3.el6.x86_64 (required for PDNS)
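These dependencies can be installed with yum (a sketch; the PDNS-related PHP packages may require extra repositories such as EPEL or remi to be enabled):
yum install libaio libaio-devel numactl
yum install cryptopp php-pear php-pdo php-mysql php-pear-MDB2 php-pear-MDB2-Driver-mysql perl-DBD-MySQL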
Packages Installed –
MySQL-Cluster-client-gpl-7.3.11-1.el6.x86_64.rpm
MySQL-Cluster-server-gpl-7.3.11-1.el6.x86_64.rpm
MySQL-Cluster-shared-compat-gpl-7.3.11-1.el6.x86_64.rpm (required for PDNS)
Packages to be removed (if any) –
mysql-server
mysql
mysql-libs
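A possible sequence, assuming the cluster RPMs have been downloaded to the working directory, is to remove the conflicting stock MySQL packages first and then install the cluster packages:
yum remove mysql-server mysql mysql-libs
rpm -Uvh MySQL-Cluster-client-gpl-7.3.11-1.el6.x86_64.rpm \
    MySQL-Cluster-server-gpl-7.3.11-1.el6.x86_64.rpm \
    MySQL-Cluster-shared-compat-gpl-7.3.11-1.el6.x86_64.rpm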
Configuration for MySQL and Data Nodes:
Edit the /etc/my.cnf file as below:
[mysqld]
ndbcluster
# IP addresses of the cluster management nodes
ndb-connectstring=172.22.100.10,172.22.96.10
socket=/var/lib/mysql/mysql.sock
user=mysql
innodb_data_file_path = ibdata1:10M:autoextend
pid-file=/var/run/mysqld/mysqld.pid
explicit_defaults_for_timestamp = 1
[mysql_cluster]
# IP addresses of the cluster management nodes
ndb-connectstring=172.22.100.10,172.22.96.10
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
Configuration for Management Nodes:
Edit the /var/lib/mysql-cluster/config.ini file as shown below:
[ndbd default]
NoOfReplicas=4
DataDir=/opt/mysql-cluster/data
MaxNoOfAttributes=10000
MaxNoOfConcurrentOperations=150000
MaxNoOfOrderedIndexes=512
# MANAGEMENT NODES
[ndb_mgmd]
NodeId=1
hostname=172.22.100.10
datadir= /var/lib/mysql-cluster
[ndb_mgmd]
NodeId=2
hostname=172.22.96.10
datadir= /var/lib/mysql-cluster
# DATA NODES
[ndbd]
NodeId=3
hostname=172.22.11.100
datadir=/usr/local/mysql/data
[ndbd]
NodeId=4
hostname=172.22.44.100
datadir=/usr/local/mysql/data
[ndbd]
NodeId=5
hostname=172.22.33.100
datadir=/usr/local/mysql/data
[ndbd]
NodeId=6
hostname=172.22.22.100
datadir=/usr/local/mysql/data
#MYSQL NODES
[mysqld]
NodeId=7
hostname=172.22.11.100
[mysqld]
NodeId=8
hostname=172.22.44.100
[mysqld]
NodeId=9
hostname=172.22.33.100
[mysqld]
NodeId=10
hostname=172.22.22.100
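Make sure the data directories referenced in the configuration files exist on the respective machines before starting the daemons (paths taken from the snippets above; adjust them if yours differ):
# on the management nodes
mkdir -p /var/lib/mysql-cluster
# on the MySQL/data nodes
mkdir -p /usr/local/mysql/data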
Note: Before proceeding, it is recommended to set the MySQL root password.
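One way to do this, once the MySQL service is running on the MySQL nodes (the password below is only a placeholder):
mysqladmin -u root password 'YourStrongPassword'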
MySQL Cluster Setup
On Management Nodes-
To start the cluster management server, run:
ndb_mgmd -f /var/lib/mysql-cluster/config.ini
It should show output as below:
MySQL Cluster Management Server mysql-5.6.27 ndb-7.3.11
2016-01-13 11:21:54 [MgmtSrvr] INFO -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2016-01-13 11:21:54 [MgmtSrvr] INFO -- Sucessfully created config directory
On Data Nodes –
To start the NDB cluster engine (the data node daemon), run:
ndbd
It should show output as shown below:
2016-01-13 11:22:40 [ndbd] INFO -- Angel connected to '172.22.100.10:1186'
2016-01-13 11:22:40 [ndbd] INFO -- Angel allocated nodeid: 3
On MySQL Nodes –
Start the MySQL service:
service mysql start
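Optionally, confirm from any MySQL node that the ndbcluster storage engine has been registered (it should be listed with Support = YES):
mysql -u root -p -e "SHOW ENGINES;"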
Check the cluster status from the management node:
ndb_mgm
show
If the cluster is healthy, it should display:
mgmtsrv02 root [mysql-cluster] > ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: 172.22.100.10:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 4 node(s)
id=3 @172.22.11.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0, *)
id=4 @172.22.44.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0)
id=5 @172.22.33.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0)
id=6 @172.22.22.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @172.22.100.10 (mysql-5.6.27 ndb-7.3.11)
id=2 @172.22.96.10 (mysql-5.6.27 ndb-7.3.11)
[mysqld(API)] 4 node(s)
id=7 @172.22.11.100 (mysql-5.6.27 ndb-7.3.11)
id=8 @172.22.44.100 (mysql-5.6.27 ndb-7.3.11)
id=9 @172.22.33.100 (mysql-5.6.27 ndb-7.3.11)
id=10 @172.22.22.100 (mysql-5.6.27 ndb-7.3.11)
Otherwise, if any node goes down, it would show something like:
[ndbd(NDB)] 4 node(s)
id=3 @172.22.11.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0, *)
id=4 @172.22.44.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0)
id=5 (not connected, accepting connect from 172.22.33.100)
id=6 @172.22.22.100 (mysql-5.6.27 ndb-7.3.11, Nodegroup: 0)
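To bring the failed data node back, start the ndbd daemon again on the failed host (172.22.33.100 in this example); it will reconnect to the management servers and resynchronize its data from the surviving replicas:
ndbd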
Any DBMS uses storage engines (database engines) to write, read, update or delete data in a database. InnoDB has been the default storage engine in MySQL since version 5.5, so whenever a table is created without an ENGINE clause, an InnoDB table is created by default. With InnoDB, data is read from and written to the hard disk of the machine where the MySQL server runs, which makes it necessary to configure the disks in RAID to achieve data redundancy.
On the other hand, MySQL cluster uses NDBCluster engine, that uses network connectivity in order to access data spread across different data nodes (not on MySQL servers like InnoDB). Hence, while creating tables, one must explicitly mention NDBCluster storage engine in order to instruct MySQL servers that data has to be stored on the data nodes.
On the other hand, MySQL cluster uses NDBCluster engine, that uses network connectivity in order to access data spread across different data nodes (not on MySQL servers like InnoDB). Hence, while creating tables, one must explicitly mention NDBCluster storage engine in order to instruct MySQL servers that data has to be stored on the data nodes.