{"id":3022,"date":"2017-09-25T06:32:35","date_gmt":"2017-09-25T06:32:35","guid":{"rendered":"https:\/\/intelligentbee.com\/blog\/?p=3022"},"modified":"2024-12-19T09:43:10","modified_gmt":"2024-12-19T09:43:10","slug":"glusterfs-replicate-volume-two-nodes","status":"publish","type":"post","link":"https:\/\/intelligentbee.com\/blog\/glusterfs-replicate-volume-two-nodes\/","title":{"rendered":"GlusterFS &#8211; Replicate a volume over two nodes"},"content":{"rendered":"<p>When you are using a load balancer with two or more backend nodes (web servers), you will probably need some data to be mirrored between the nodes. GlusterFS offers a high-availability solution for this.<\/p>\n<p>In this article, I will show how to set up volume replication between two CentOS 7 servers.<\/p>\n<p>Let&#8217;s assume this:<\/p>\n<ul>\n<li>node1.domain.com &#8211;\u00a0172.31.0.201<\/li>\n<li>node2.domain.com &#8211;\u00a0172.31.0.202<\/li>\n<\/ul>\n<p>First, edit <code>\/etc\/hosts<\/code> on each server and append this:<\/p>\n<pre class=\"toolbar:2 nums:false lang:vim decode:true\">172.31.0.201     node1.domain.com     node1\r\n172.31.0.202     node2.domain.com     node2<\/pre>\n<p>We should now be able to ping between the nodes:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node1 ~]# ping -c 3 node2\r\nPING node2.domain.com (172.31.0.202) 56(84) bytes of data.\r\n64 bytes from node2.domain.com (172.31.0.202): icmp_seq=1 ttl=64 time=0.482 ms\r\n64 bytes from node2.domain.com (172.31.0.202): icmp_seq=2 ttl=64 time=0.261 ms\r\n64 bytes from node2.domain.com (172.31.0.202): icmp_seq=3 ttl=64 time=0.395 ms\r\n\r\n--- node2.domain.com ping statistics ---\r\n3 packets transmitted, 3 received, 0% packet loss, time 2001ms\r\nrtt min\/avg\/max\/mdev = 0.261\/0.379\/0.482\/0.092 ms<\/pre>\n<h4>Installation:<\/h4>\n<p>Run these commands on both nodes:<\/p>\n<pre class=\"toolbar:2 nums:false lang:default decode:true \">yum -y install epel-release yum-priorities<\/pre>\n<p>Add 
<code>priority=10<\/code> to the <code>[epel]<\/code> section in <code>\/etc\/yum.repos.d\/epel.repo<\/code>:<\/p>\n<pre class=\"toolbar:2 nums:false lang:vim decode:true\">[epel]\r\nname=Extra Packages for Enterprise Linux 7 - $basearch\r\n#baseurl=http:\/\/download.fedoraproject.org\/pub\/epel\/7\/$basearch\r\nmirrorlist=https:\/\/mirrors.fedoraproject.org\/metalink?repo=epel-7&amp;arch=$basearch\r\nfailovermethod=priority\r\nenabled=1\r\npriority=10\r\ngpgcheck=1\r\ngpgkey=file:\/\/\/etc\/pki\/rpm-gpg\/RPM-GPG-KEY-EPEL-7<\/pre>\n<p>Update the packages and install GlusterFS:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true \">yum -y update\r\nyum -y install centos-release-gluster\r\nyum -y install glusterfs-server<\/pre>\n<p>Start the glusterd service and enable it to start at boot:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true \">systemctl start glusterd\r\nsystemctl enable glusterd<\/pre>\n<p>You can use <code>systemctl status glusterd<\/code> and <code>glusterfsd --version<\/code> to check that everything is working properly.<\/p>\n<p><span style=\"color: #ff0000;\">Remember, all the installation steps should be executed on both servers!<\/span><\/p>\n<h4>Setup:<\/h4>\n<p>On the node1 server run:<\/p>\n<pre class=\"toolbar:2 nums:false lang:default decode:true \">[root@node1 ~]# gluster peer probe node2\r\npeer probe: success.\r\n[root@node1 ~]# gluster peer status\r\nNumber of Peers: 1\r\n\r\nHostname: node2\r\nUuid: 42ee3ddb-e3e3-4f3d-a3b6-5c809e589b76\r\nState: Peer in Cluster (Connected)<\/pre>\n<p>On the node2 server run:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node2 ~]# gluster peer probe node1\r\npeer probe: success.\r\n[root@node2 ~]# gluster peer status\r\nNumber of Peers: 1\r\n\r\nHostname: node1.domain.com\r\nUuid: 68209420-3f9f-4c1a-8ce6-811070616dd4\r\nState: Peer in Cluster (Connected)\r\nOther names:\r\nnode1<\/pre>\n<p>Now we can create the replicated volume; this can be done from either of the two servers.<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node1 ~]# gluster volume create shareddata replica 2 transport tcp node1:\/shared-folder node2:\/shared-folder force\r\nvolume create: shareddata: success: please start the volume to access data\r\n[root@node1 ~]# gluster volume start shareddata\r\nvolume start: shareddata: success\r\n[root@node1 ~]# gluster volume info\r\n\r\nVolume Name: shareddata\r\nType: Replicate\r\nVolume ID: 30a97b23-3f8d-44d6-88db-09c61f00cd90\r\nStatus: Started\r\nSnapshot Count: 0\r\nNumber of Bricks: 1 x 2 = 2\r\nTransport-type: tcp\r\nBricks:\r\nBrick1: node1:\/shared-folder\r\nBrick2: node2:\/shared-folder\r\nOptions Reconfigured:\r\ntransport.address-family: inet\r\nnfs.disable: on<\/pre>\n<p>This creates a replicated volume named <code>shareddata<\/code>, with one brick on each of the node1 and node2 servers, under the <code>\/shared-folder<\/code> path. It will also silently create the <code>shared-folder<\/code> directory if it doesn&#8217;t exist. If there are more servers in the cluster, adjust the replica count in the above command accordingly. The &#8220;force&#8221; parameter was needed because we placed the bricks on the root partition; it is not needed when the bricks are created on a separate partition.<\/p>\n<h4>Mount:<\/h4>\n<p>For the replication to work, the volume needs to be mounted on each node.
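<\/p>\n<p>Before mounting, you can optionally verify that the volume is started and that both bricks are online (a quick sanity check; the exact output will vary with your setup):<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">gluster volume status shareddata<\/pre>\n<p>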
\u00a0Create a mount point on both nodes:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">mkdir \/mnt\/glusterfs\r\n<\/pre>\n<p>On node1 run:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node1 ~]# echo \"node1:\/shareddata    \/mnt\/glusterfs\/  glusterfs       defaults,_netdev        0 0\" &gt;&gt; \/etc\/fstab\r\n[root@node1 ~]# mount -a\r\n[root@node1 ~]# df -h\r\nFilesystem                       Size  Used Avail Use% Mounted on\r\n\/dev\/mapper\/VolGroup00-LogVol00   38G  1.1G   37G   3% \/\r\ndevtmpfs                         236M     0  236M   0% \/dev\r\ntmpfs                            245M     0  245M   0% \/dev\/shm\r\ntmpfs                            245M  4.4M  240M   2% \/run\r\ntmpfs                            245M     0  245M   0% \/sys\/fs\/cgroup\r\n\/dev\/sda2                       1014M   88M  927M   9% \/boot\r\ntmpfs                             49M     0   49M   0% \/run\/user\/1000\r\nnode1:\/shareddata                 38G  1.1G   37G   3% \/mnt\/glusterfs<\/pre>\n<p>On node2 run:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node2 ~]# echo \"node2:\/shareddata    \/mnt\/glusterfs\/  glusterfs       defaults,_netdev        0 0\" &gt;&gt; \/etc\/fstab\r\n[root@node2 ~]# mount -a\r\n[root@node2 ~]# df -h\r\nFilesystem                       Size  Used Avail Use% Mounted on\r\n\/dev\/mapper\/VolGroup00-LogVol00   38G  1.1G   37G   3% \/\r\ndevtmpfs                         236M     0  236M   0% \/dev\r\ntmpfs                            245M     0  245M   0% \/dev\/shm\r\ntmpfs                            245M  4.4M  240M   2% \/run\r\ntmpfs                            245M     0  245M   0% \/sys\/fs\/cgroup\r\n\/dev\/sda2                       1014M   88M  927M   9% \/boot\r\ntmpfs                             49M     0   49M   0% \/run\/user\/1000\r\nnode2:\/shareddata                 38G  1.1G   37G   3% \/mnt\/glusterfs\r\n<\/pre>\n<h4>Testing:<\/h4>\n<p>On node1:<\/p>\n<pre class=\"toolbar:2 nums:false lang:default decode:true\">touch \/mnt\/glusterfs\/file01\r\ntouch \/mnt\/glusterfs\/file02<\/pre>\n<p>On node2:<\/p>\n<pre class=\"toolbar:2 nums:false lang:sh decode:true\">[root@node2 ~]# ls \/mnt\/glusterfs\/ -l\r\ntotal 0\r\n-rw-r--r--. 1 root root 0 Sep 24 19:35 file01\r\n-rw-r--r--. 1 root root 0 Sep 24 19:35 file02<\/pre>\n<p>This is how you mirror a folder between two servers. Just keep in mind that your projects need to read and write through the mount point <code>\/mnt\/glusterfs<\/code>, not the brick directory <code>\/shared-folder<\/code>, for the replication to work.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>When you are using a load balancer with two or more backend nodes (web servers) you will probably need some data [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":3025,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[86],"tags":[],"yst_prominent_words":[969,977,976,975,974,973,972,971,970,275,968,967,966,798,729,432,408],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/3022"}],"collection":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/comments?post=3022"}],"version-history":[{"count":3,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/3022\/revisions"}],"predecessor-version":[{"id":75270,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/3022\/revisions\/75270"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media\/3025"}],"wp:attachment":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media?parent=3022"}],"wp:term":[{"taxonomy":
"category","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/categories?post=3022"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/tags?post=3022"},{"taxonomy":"yst_prominent_words","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/yst_prominent_words?post=3022"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}