When you are using a load balancer with two or more backend nodes (web servers), you will probably need some data to be mirrored between them. GlusterFS offers a high-availability solution for this.
In this article, I am going to show how you can set up volume replication between two CentOS 7 servers.
Let’s assume this:
- node1.domain.com – 172.31.0.201
- node2.domain.com – 172.31.0.202
First, we edit /etc/hosts on each of the servers and append this:
172.31.0.201 node1.domain.com node1
172.31.0.202 node2.domain.com node2
We should now be able to ping one node from the other:
PING node2.domain.com (172.31.0.202) 56(84) bytes of data.
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=1 ttl=64 time=0.482 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=2 ttl=64 time=0.261 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=3 ttl=64 time=0.395 ms
--- node2.domain.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.261/0.379/0.482/0.092 ms
Installation:
Run these on both nodes:
yum -y install epel-release yum-priorities
Add priority=10 to the [epel] section in /etc/yum.repos.d/epel.repo:
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
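If you prefer not to edit the file by hand, a one-line edit like the following should also work (a sketch, assuming the [epel] section does not already define a priority):

# Append priority=10 right after the [epel] section header
sed -i '/^\[epel\]$/a priority=10' /etc/yum.repos.d/epel.repo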
Update packages and install:
yum -y update
yum -y install centos-release-gluster
yum -y install glusterfs-server
Start the glusterd service and enable it to start at boot:
service glusterd start
systemctl enable glusterd
You can use service glusterd status and glusterfsd --version to check that everything is working properly.
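For example (the is-enabled check is an extra one on my part, not strictly required):

# Confirm the daemon is running
service glusterd status
# Optional: confirm it will start at boot
systemctl is-enabled glusterd
# Print the installed GlusterFS version
glusterfsd --version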
Remember, all the installation steps should be executed on both servers!
Setup:
On node1 server run:
[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 1

Hostname: node2
Uuid: 42ee3ddb-e3e3-4f3d-a3b6-5c809e589b76
State: Peer in Cluster (Connected)
On node2 server run:
[root@node2 ~]# gluster peer probe node1
peer probe: success.
[root@node2 ~]# gluster peer status
Number of Peers: 1

Hostname: node1.domain.com
Uuid: 68209420-3f9f-4c1a-8ce6-811070616dd4
State: Peer in Cluster (Connected)
Other names:
node1
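Optionally, gluster pool list gives a compact view of the same information; both nodes should show up as Connected:

# Quick overview of all peers, including the local node
gluster pool list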
Now we need to create the shared volume, which can be done from either of the two servers.
[root@node1 ~]# gluster volume create shareddata replica 2 transport tcp node1:/shared-folder node2:/shared-folder force
volume create: shareddata: success: please start the volume to access data
[root@node1 ~]# gluster volume start shareddata
volume start: shareddata: success
[root@node1 ~]# gluster volume info

Volume Name: shareddata
Type: Replicate
Volume ID: 30a97b23-3f8d-44d6-88db-09c61f00cd90
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/shared-folder
Brick2: node2:/shared-folder
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
This creates a shared volume named shareddata, with two replicas kept on the node1 and node2 servers under the /shared-folder path. It will also silently create the shared-folder directory if it doesn't exist. If there are more servers in the cluster, adjust the replica number in the above command accordingly. The force parameter was needed because we placed the brick on the root partition; it is not needed when the brick is created on another partition.
Mount:
For the replication to work, the volume needs to be mounted on each node. Create a mount point on both servers:
mkdir /mnt/glusterfs
On node1 run:
[root@node1 ~]# echo "node1:/shareddata /mnt/glusterfs/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@node1 ~]# mount -a
[root@node1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node1:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs
On node2 run:
[root@node2 ~]# echo "node2:/shareddata /mnt/glusterfs/ glusterfs defaults,_netdev 0 0" >> /etc/fstab
[root@node2 ~]# mount -a
[root@node2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node2:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs
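If you prefer to test before touching /etc/fstab, the volume can also be mounted manually; this is a one-off mount and will not survive a reboot:

# Temporary mount, equivalent to the fstab entry above
mount -t glusterfs node1:/shareddata /mnt/glusterfs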
Testing:
On node1:
touch /mnt/glusterfs/file01
touch /mnt/glusterfs/file02
On node2:
[root@node2 ~]# ls /mnt/glusterfs/ -l
total 0
-rw-r--r--. 1 root root 0 Sep 24 19:35 file01
-rw-r--r--. 1 root root 0 Sep 24 19:35 file02
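As an extra check (not part of the original walkthrough), the self-heal report should show zero pending entries on each brick when the replicas are in sync:

# Should report 0 entries per brick on a healthy replica
gluster volume heal shareddata info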
This is how you mirror a folder between two servers. Just keep in mind that your projects must read and write through the mount point /mnt/glusterfs (not the /shared-folder brick directly) for the replication to work.
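For example, a web application could keep its shared uploads on the replicated volume; the paths below are purely illustrative, so adjust them to your own project:

# Hypothetical layout: store uploads on the replicated volume and link them into the web root
mkdir -p /mnt/glusterfs/uploads
ln -s /mnt/glusterfs/uploads /var/www/html/uploads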