GlusterFS - Replicate a volume over two nodes

When you are using a load balancer with two or more backend nodes (web servers), you will probably need some data mirrored between them. GlusterFS offers a high availability solution for that.

In this article, I am going to show how you can set up volume replication between two CentOS 7 servers.

Let's assume this:

  • node1.domain.com - 172.31.0.201
  • node2.domain.com - 172.31.0.202

First, we edit /etc/hosts on each of the servers and append this:
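    172.31.0.201    node1.domain.com
    172.31.0.202    node2.domain.com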
We should now be able to ping between the nodes.
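For example, from node1:

    ping -c 3 node2.domain.com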

Installation:

Run these on both nodes:
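In my case that meant adding the EPEL repository and a GlusterFS repository. The exact repository packages can differ depending on the GlusterFS version you want, but on CentOS 7 something like this works (yum-plugin-priorities is needed for the priority= setting used in the next step):

    yum install -y epel-release yum-plugin-priorities
    yum install -y centos-release-gluster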
Add priority=10 to the [epel] section in /etc/yum.repos.d/epel.repo:
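So the section ends up looking something like this (keep the existing lines as they are and just add the priority line at the end):

    [epel]
    name=Extra Packages for Enterprise Linux 7 - $basearch
    ...
    priority=10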
Update packages and install:
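    yum -y update
    yum -y install glusterfs-server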
Start the glusterd service and enable it to start at boot:
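    systemctl start glusterd
    systemctl enable glusterd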
You can use service glusterd status and glusterfsd --version to check that everything is working properly.

Remember, all the installation steps should be executed on both servers!

Setup:

On node1 server run:
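    gluster peer probe node2.domain.com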
On node2 server run:
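    gluster peer probe node1.domain.com

You can check the peering with gluster peer status.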
Now we need to create the shared volume, which can be done from either of the two servers.
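For example (the volume also has to be started before it can be mounted):

    gluster volume create shareddata replica 2 node1.domain.com:/shared-folder node2.domain.com:/shared-folder force
    gluster volume start shareddata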
This creates a shared volume named shareddata, with two replicas kept on node1 and node2 under the /shared-folder path. It will also silently create the shared-folder directory if it doesn't exist. If there are more servers in the cluster, adjust the replica number in the above command. The "force" parameter was needed because the bricks are on the root partition; it is not needed when they are created on a separate partition.

Mount:

In order for the replication to work, the volume needs to be mounted on both nodes. Create a mount point on each of them:
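    mkdir -p /mnt/glusterfs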
On node1 run:
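    mount -t glusterfs node1.domain.com:/shareddata /mnt/glusterfs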
On node2 run:
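    mount -t glusterfs node2.domain.com:/shareddata /mnt/glusterfs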

Testing:

On node1:
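    # create a test file inside the mounted volume
    touch /mnt/glusterfs/test-file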
On node2:
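    # the file created on node1 should show up here as well
    ls -l /mnt/glusterfs/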
This is how you mirror a folder between two servers. Just keep in mind that you need to use the mount point /mnt/glusterfs in your projects for the replication to work.


How to install a .pfx Windows SSL certificate on your Linux web server

Windows uses .pfx for a PKCS #12 file. PFX stands for Personal Information Exchange. It is like a container holding multiple pieces of cryptographic information: it can store private keys, certificate chains, certificates and root authority certificates, and it is password protected to preserve the integrity of the contained data.
In order to install it on our Apache/Nginx web server, we need to convert it to PEM.

First, upload the .pfx file to your Linux server. You will need OpenSSL installed.

On CentOS run:
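    yum install openssl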
On Ubuntu run:
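    apt-get install openssl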
To decrypt the .pfx use:
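    # certificate.pfx is whatever name your uploaded file has; the output goes to certificate.pem
    openssl pkcs12 -in certificate.pfx -out certificate.pem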
You will be prompted for the password that was used to encrypt the certificate. After providing it, you will need to enter a new password that will encrypt the private key.
The resulting .pem file will contain the encrypted private key, the certificate and some other information we will not use. Copy the key from inside it and paste it into a new .key file.
Also copy the certificate from the .pem and put it in a new .cert file.

Remember to copy the whole blocks, including the dashed lines.
The private key file is still encrypted, so we have to decrypt it with:
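    # certificate.key is the key file created earlier; it gets decrypted in place
    openssl rsa -in certificate.key -out certificate.key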
You will now be prompted for the password you set to encrypt the key. This will decrypt the private key file to itself.

To install the certificate in Nginx, you will need to reference your .key and .cert files in the Nginx configuration like this:
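A minimal example, assuming you placed the files under /etc/ssl (adjust the paths and server_name to your setup):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/ssl/certificate.cert;
        ssl_certificate_key /etc/ssl/certificate.key;

        # ... the rest of your server block
    }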
For Apache use:
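Again with example paths, inside the SSL virtual host:

    <VirtualHost *:443>
        ServerName example.com

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certificate.cert
        SSLCertificateKeyFile /etc/ssl/certificate.key

        # ... the rest of your virtual host
    </VirtualHost>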


Mysqldump Through an HTTP Request with Golang

So, in a previous post I explained how one can back up all databases on a server, each in its own dump file. Let's take it to the next level and make a Golang program that will let us run the dump process with an HTTP request.

Assuming you already have Go installed on the backup server, first create a project directory, in your home folder for example. Copy the mysql dump script from here and save it as dump.sh in your project folder. Modify ROOTDIR="/backup/mysql/" inside dump.sh to reflect your current project directory.

We will create a Golang program with two functions. One will launch the backup script when a specific HTTP request is made. The other one will put the HTTP call behind authentication, so only people with credentials will be able to make the backup request.
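Here is a minimal sketch of such a program. The handler names are my own choices, and it assumes dump.sh sits in the directory the program is started from; the /backup path and port 8080 are the ones used below:

    package main

    import (
        "crypto/subtle"
        "log"
        "net/http"
        "os"
        "os/exec"
    )

    // basicAuth wraps a handler and only lets the request through when the
    // HTTP basic auth credentials match DB_BACKUP_USER and DB_BACKUP_PASSWORD.
    func basicAuth(handler http.HandlerFunc) http.HandlerFunc {
        user := os.Getenv("DB_BACKUP_USER")
        pass := os.Getenv("DB_BACKUP_PASSWORD")
        return func(w http.ResponseWriter, r *http.Request) {
            u, p, ok := r.BasicAuth()
            if !ok ||
                subtle.ConstantTimeCompare([]byte(u), []byte(user)) != 1 ||
                subtle.ConstantTimeCompare([]byte(p), []byte(pass)) != 1 {
                w.Header().Set("WWW-Authenticate", `Basic realm="backup"`)
                http.Error(w, "Unauthorized", http.StatusUnauthorized)
                return
            }
            handler(w, r)
        }
    }

    // backup runs dump.sh and returns its output to the client.
    func backup(w http.ResponseWriter, r *http.Request) {
        out, err := exec.Command("/bin/sh", "dump.sh").CombinedOutput()
        if err != nil {
            http.Error(w, err.Error()+"\n"+string(out), http.StatusInternalServerError)
            return
        }
        w.Write(out)
    }

    func main() {
        http.HandleFunc("/backup", basicAuth(backup))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }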
This uses DB_BACKUP_USER and DB_BACKUP_PASSWORD, which you will have to set as environment variables. Just append this to your ~/.bashrc file:
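    # use your own credentials here
    export DB_BACKUP_USER="backupuser"
    export DB_BACKUP_PASSWORD="yourpassword"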
Now run source ~/.bashrc to load them.

Build the executable with go build http-db-backup.go where http-db-backup.go is the name of your Go file. Now you need to run the executable with sudo, but while preserving the environment: sudo -E ./http-db-backup

Now if you open http://111.222.333.444:8080/backup in your browser (where 111.222.333.444 is your backup machine's IP), the backup process will start, and you will get the output of dump.sh in your browser when the backup finishes.

We can furthermore add another function to list the backup directory in the browser, so you can download the backup or backups you need.
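A minimal version of such a function, assuming the dumps end up under /backup/mysql/ (use whatever ROOTDIR you set in dump.sh):

    // lister serves a browsable listing of the backup directory,
    // so the dump files can be downloaded from the browser.
    func lister(w http.ResponseWriter, r *http.Request) {
        http.FileServer(http.Dir("/backup/mysql/")).ServeHTTP(w, r)
    }

You will probably want to wrap it in basicAuth as well.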
All you need to do is add http.HandleFunc("/", lister) to your main() and navigate to http://111.222.333.444:8080/. You will be able to browse the backup directory and download the dump files.


Mysqldump Command - Useful Usage Examples

One of the tasks a sysadmin will always have on their list is backing up databases. These backups are also called dump files because, usually, they are generated with the mysqldump command.

I am going to share a few tricks on mysqldump that will help when handling servers with many relatively small databases.

 

The simplest way to back up databases would be to use the mysqldump command with the --all-databases option. But I find having each database saved in its own file more convenient to work with.

Let's first suppose that you need to run a script that alters databases, and that you just need a simple way to have a rollback point, just in case. I used to run something like this:
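    # dump everything listed under /var/lib/mysql, each into its own .sql file
    ls /var/lib/mysql | while read db; do
        mysqldump -u root -p*** --skip-lock-tables --skip-add-locks \
            --quick --single-transaction "$db" > "$db.sql"
    done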
where *** is your root password. The additional parameters --skip-lock-tables --skip-add-locks --quick --single-transaction ensure the availability and consistency of the dump files for InnoDB databases (the default storage engine as of MySQL 5.5.5).

MySQL stores each database in /var/lib/mysql, in a folder with the same name as the database. The command picks the database names from the listing of the /var/lib/mysql folder and exports each one to a file with the same name plus the .sql extension.

There are 2 issues with the above command:

  1. It will try to execute a dump for every file or folder listed in /var/lib/mysql. So if you have error logs or any other files in there, it will create .sql dumps for them too. You could restrict the loop to directory names only, but I find that version hard to type and prefer the one I explain in point 2, since it also covers this case.
  2. When database names have characters like "-", the folder name will contain "@002d" instead. If that is the case, you can use something like the sketch below, which picks the database names to export from the output of the mysql show databases command.
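A sketch of that variant, with *** again standing for the root password:

    # skip the header line and MySQL's internal schemas
    mysql -u root -p*** -e 'show databases' \
        | grep -Ev '^(Database|information_schema|performance_schema|sys)$' \
        | while read db; do
            mysqldump -u root -p*** --skip-lock-tables --skip-add-locks \
                --quick --single-transaction "$db" > "$db.sql"
        done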

But one time I had to export databases with "/" in their names. There is no way to export them as shown above, since "/" can't be used in file names: it is the directory separator. So I did this:
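    # same loop, but replace / with _ when building the dump file name
    mysql -u root -p*** -e 'show databases' \
        | grep -Ev '^(Database|information_schema|performance_schema|sys)$' \
        | while read db; do
            mysqldump -u root -p*** --skip-lock-tables --skip-add-locks \
                --quick --single-transaction "$db" > "$(echo "$db" | sed 's,/,_,g').sql"
        done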
This will replace / with _ in the dump file names.

For all of the above, we could (for obvious reasons) avoid using the root MySQL user. We could also run the backup from a different machine. To do this, we need to create a MySQL user with the right privileges on the machine we want to back up.
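For example, on the MySQL server (the user name and password are placeholders, and the privilege list is roughly what mysqldump needs):

    CREATE USER 'backupuser'@'111.222.333.444' IDENTIFIED BY 'yourpassword';
    GRANT SELECT, SHOW VIEW, LOCK TABLES, EVENT, TRIGGER ON *.* TO 'backupuser'@'111.222.333.444';
    FLUSH PRIVILEGES;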
where 111.222.333.444 is the IP of the remote backup machine.

Now you can issue the mysqldump command from the backup machine like this:
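    # databasename is a placeholder; the credentials are those of the backup user created above
    mysqldump -h 444.333.222.111 -u backupuser -p --skip-lock-tables --skip-add-locks \
        --quick --single-transaction databasename > databasename.sql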
where 444.333.222.111 is the IP of the machine we want to back up.

Let's take it to the next step and put all our knowledge into a shell script.
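A sketch of what backup-mysql.sh could look like; the host, the credentials and ROOTDIR are placeholders you should adjust:

    #!/bin/bash
    # backup-mysql.sh - dump every database from a remote MySQL server, one gzipped file each

    ROOTDIR="/backup/mysql/"        # where the dumps are stored
    DBHOST="444.333.222.111"        # the server we want to back up
    DBUSER="backupuser"
    DBPASS="yourpassword"
    DATE=$(date +%F)

    mkdir -p "$ROOTDIR/$DATE"

    mysql -h "$DBHOST" -u "$DBUSER" -p"$DBPASS" -e 'show databases' \
        | grep -Ev '^(Database|information_schema|performance_schema|sys)$' \
        | while read db; do
            mysqldump -h "$DBHOST" -u "$DBUSER" -p"$DBPASS" \
                --skip-lock-tables --skip-add-locks --quick --single-transaction \
                "$db" | gzip > "$ROOTDIR/$DATE/$(echo "$db" | sed 's,/,_,g').sql.gz"
        done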
This will also compress the dump files to save storage.

Save the script as backup-mysql.sh somewhere on the machine where you want the backups saved, and make sure the MySQL user with the right privileges exists on the server hosting MySQL. You will also need the mysql client installed on the backup server. Execute sudo chmod 700 backup-mysql.sh, then run the script with sudo sh backup-mysql.sh. After making sure it works properly, you can also add it to your crontab, so that it runs on a regular schedule.


How to save a file in Vim after forgetting to use sudo

Many of you have probably experienced this. You edit a file on Linux with Vim, Nano or whatever other text editor you might be using, and when you try to save you realise you forgot to launch the editor with sudo privileges, so the file can't be written. Exiting without saving and editing again with sudo is not always an option, especially if you would lose a lot of work.

Solutions

There are some workarounds. You can open a new terminal and grant the right permissions on the file with:
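    # make the file world-writable for a moment (the path is just an example)
    sudo chmod 666 /path/to/the/file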
Now you can go back to your editor and saving will be possible. Don't forget to change the permissions back afterwards.

Also, you could save to a new file and then copy the newly created file over the old one.

But these are workarounds. Vim editor actually offers a solution.

In Vim's normal (command-line) mode you can issue:
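    :w !sudo tee %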
And that works like magic. For the geeks, here is how the "magic" works:

  • :w - writes the contents of the buffer
  • !sudo - pipes it to the stdin of sudo
  • tee % - sudo runs tee, which writes its input to the "%" file
  • % - Vim expands "%" to the current file name

So Vim actually makes magic happen.