GlusterFS - Replicate a volume over two nodes

When you are using a load balancer with two or more backend nodes (web servers), you will probably need some data to be mirrored between them. GlusterFS offers a high-availability solution for this.

In this article, I am going to show how you can set up volume replication between two CentOS 7 servers.

Let's assume this:

  • node1.domain.com - 172.31.0.201
  • node2.domain.com - 172.31.0.202

First, we edit /etc/hosts of each of the servers and append this:

172.31.0.201     node1.domain.com     node1
172.31.0.202     node2.domain.com     node2

We should now be able to ping between the nodes (here, pinging node2 from node1):

PING node2.domain.com (172.31.0.202) 56(84) bytes of data.
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=1 ttl=64 time=0.482 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=2 ttl=64 time=0.261 ms
64 bytes from node2.domain.com (172.31.0.202): icmp_seq=3 ttl=64 time=0.395 ms

--- node2.domain.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.261/0.379/0.482/0.092 ms

Installation:

Run these on both nodes:

yum -y install epel-release yum-priorities

Add priority=10 to the [epel] section in /etc/yum.repos.d/epel.repo:

[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
mirrorlist=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
failovermethod=priority
enabled=1
priority=10
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

Update packages and install:

yum -y update
yum -y install centos-release-gluster
yum -y install glusterfs-server

Start the glusterd service and enable it to start at boot:

service glusterd start
systemctl enable glusterd

You can use service glusterd status and glusterfsd --version to check that everything is working properly.

Remember, all the installation steps should be executed on both servers!

Setup:

On node1 server run:

[root@node1 ~]# gluster peer probe node2
peer probe: success.
[root@node1 ~]# gluster peer status
Number of Peers: 1

Hostname: node2
Uuid: 42ee3ddb-e3e3-4f3d-a3b6-5c809e589b76
State: Peer in Cluster (Connected)

On node2 server run:

[root@node2 ~]# gluster peer probe node1
peer probe: success.
[root@node2 ~]# gluster peer status
Number of Peers: 1

Hostname: node1.domain.com
Uuid: 68209420-3f9f-4c1a-8ce6-811070616dd4
State: Peer in Cluster (Connected)
Other names:
node1

We now need to create the shared volume, which can be done from either of the two servers.

[root@node1 ~]# gluster volume create shareddata replica 2 transport tcp node1:/shared-folder node2:/shared-folder force
volume create: shareddata: success: please start the volume to access data
[root@node1 ~]# gluster volume start shareddata
volume start: shareddata: success
[root@node1 ~]# gluster volume info

Volume Name: shareddata
Type: Replicate
Volume ID: 30a97b23-3f8d-44d6-88db-09c61f00cd90
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node1:/shared-folder
Brick2: node2:/shared-folder
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

This creates a shared volume named shareddata, with two replicas on the node1 and node2 servers, under the /shared-folder path. It will also silently create the shared-folder directory if it doesn't exist. If there are more servers in the cluster, adjust the replica number in the above command accordingly. The "force" parameter was needed because we placed the bricks on the root partition; it is not needed when the bricks live on a separate partition.
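
For example, with a third server (a hypothetical node3, already probed into the cluster), creating a three-way replica would instead look something like this:

gluster volume create shareddata replica 3 transport tcp node1:/shared-folder node2:/shared-folder node3:/shared-folder force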

Mount:

In order for the replication to work, the volume needs to be mounted. Create a mount point on both nodes:

mkdir /mnt/glusterfs

On node1 run:

[root@node1 ~]# echo "node1:/shareddata    /mnt/glusterfs/  glusterfs       defaults,_netdev        0 0" >> /etc/fstab
[root@node1 ~]# mount -a
[root@node1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node1:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs

On node2 run:

[root@node2 ~]# echo "node2:/shareddata    /mnt/glusterfs/  glusterfs       defaults,_netdev        0 0" >> /etc/fstab
[root@node2 ~]# mount -a
[root@node2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   38G  1.1G   37G   3% /
devtmpfs                         236M     0  236M   0% /dev
tmpfs                            245M     0  245M   0% /dev/shm
tmpfs                            245M  4.4M  240M   2% /run
tmpfs                            245M     0  245M   0% /sys/fs/cgroup
/dev/sda2                       1014M   88M  927M   9% /boot
tmpfs                             49M     0   49M   0% /run/user/1000
node2:/shareddata                 38G  1.1G   37G   3% /mnt/glusterfs

Testing:

On node1:

touch /mnt/glusterfs/file01
touch /mnt/glusterfs/file02

on node2:

[root@node2 ~]# ls /mnt/glusterfs/ -l
total 0
-rw-r--r--. 1 root root 0 Sep 24 19:35 file01
-rw-r--r--. 1 root root 0 Sep 24 19:35 file02

This is how you mirror a folder between two servers. Just keep in mind that your projects need to read and write through the /mnt/glusterfs mount point for the replication to work.
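
For example, if the nodes serve a website, you could keep shared assets on the mounted volume and point the web server at them; the uploads path below is purely illustrative:

mkdir /mnt/glusterfs/uploads
ln -s /mnt/glusterfs/uploads /var/www/html/uploads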


Mysqldump Command - Useful Usage Examples

One of the tasks a sysadmin will always have on their list is backing up databases. These backups are also called dump files because, usually, they are generated with the mysqldump command.

I am going to share a few tricks on mysqldump that will help when handling servers with many relatively small databases.

 

The simplest way to back up databases is to use the mysqldump command with the --all-databases option. But I find having each database saved in its own file more convenient to work with.
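
For reference, the single-file approach looks like this; everything ends up in one large dump (*** again stands for the root password):

mysqldump -u root -p*** --all-databases > all-databases.sql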

Let's first suppose that you need to run a script that alters data in the databases, and that you just need a simple way to have a rollback point, just in case. I used to run something like this:

for i in \
`ls /var/lib/mysql/`; \
do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql; done

where *** is your root password. The additional parameters --skip-lock-tables --skip-add-locks --quick --single-transaction ensure the availability and consistency of the dump files for InnoDB databases (the default storage engine as of MySQL 5.5.5).

MySQL stores each database in a folder with the same name as the database, under /var/lib/mysql. The command picks database names from the listing of the /var/lib/mysql folder and exports each one to a file with the same name, adding the .sql extension.

There are two issues with the above command:

  1. It will try to execute a dump for every file/folder listed in /var/lib/mysql. So if you have error logs or any other files in there, it will create .sql dumps for them too. The following variant sends only directory names as database names to export:
    for i in \
    `find /var/lib/mysql/ -type d | sed 's/\/var\/lib\/mysql\///g'`;\
    do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql; done

    I find this hard to type and prefer the one I will explain in point 2, since it also covers this case.

  2. When a database name contains characters like -, the folder name will contain @002d instead. If that is the case, you can use something like:
    for i in \
    `mysql -u root -p*** -e 'show databases'`;\
    do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > $i.sql;done

    This picks the database names to export from the output of MySQL's show databases command.

But one time I had to export databases with / in their names. There is no way to export them as shown above, since / can't be used in file names (it is the directory separator). So I did this:

for i in \
`mysql -u root -p*** -e 'show databases'`;\
do mysqldump -u root -p*** --skip-lock-tables --skip-add-locks --quick --single-transaction $i > `echo $i | sed "s/\//_/g"`.sql;done

This will replace / with _ in the dump file names.

For all of the above, we could (for obvious reasons) avoid using the root MySQL user. We could also run the backup from a different machine. In order to do this, we would need to create a MySQL user with the right privileges on the machine we want to back up.

create user 'backupuser'@'111.222.333.444' identified by 'backuppassword';

grant select, show view, trigger, lock tables, reload, show databases on *.* to 'backupuser'@'111.222.333.444';
flush privileges;

where 111.222.333.444 is the IP of the remote machine.

Now you can issue the mysqldump command from the other machine like this:

for i in \
`mysql -u backupuser -pbackuppassword -e 'show databases'`;\
do mysqldump -u backupuser -pbackuppassword -h 444.333.222.111 --skip-lock-tables --skip-add-locks --quick --single-transaction $i > `echo $i | sed "s/\//_/g"`.sql;done

where 444.333.222.111 is the IP of the machine we want to back up.

Let's take it to the next step and put all our knowledge into a shell script.

#!/bin/bash
echo "Starting the backup script..."
ROOTDIR="/backup/mysql/"
YEAR=`date +%Y`
MONTH=`date +%m`
DAY=`date +%d`
HOUR=`date +%H`
SERVER="444.333.222.111"
BLACKLIST="information_schema performance_schema"
ADDITIONAL_MYSQLDUMP_PARAMS="--skip-lock-tables --skip-add-locks --quick --single-transaction"
MYSQL_USER="backupuser"
MYSQL_PASSWORD="backuppassword"

# Read MySQL password from stdin if empty
if [ -z "${MYSQL_PASSWORD}" ]; then
 echo -n "Enter MySQL ${MYSQL_USER} password: "
 read -s MYSQL_PASSWORD
 echo
fi

# Check MySQL credentials
echo exit | mysql --user=${MYSQL_USER} --password=${MYSQL_PASSWORD} --host=${SERVER} -B 2>/dev/null
if [ "$?" -gt 0 ]; then
 echo "MySQL ${MYSQL_USER} - wrong credentials"
 exit 1
else
 echo "MySQL ${MYSQL_USER} - was able to connect."
fi

#creating backup path
if [ ! -d "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR" ]; then
    mkdir -p "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR"
    chmod -R 700 $ROOTDIR
fi

echo "running mysqldump"
dblist=`mysql -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -h $SERVER -e "show databases" | sed -n '2,$ p'`
for db in $dblist; do
    echo "Backuping $db"
    isBl=`echo $BLACKLIST |grep $db`
    if [ $? == 1 ]; then
        mysqldump ${ADDITIONAL_MYSQLDUMP_PARAMS} -u ${MYSQL_USER} -p${MYSQL_PASSWORD} -h $SERVER $db | gzip --best > "$ROOTDIR/$YEAR/$MONTH/$DAY/$HOUR/`echo $db | sed 's/\//_/g'`.sql.gz"
        echo "Backup of $db ends with $? exit code"
    else
        echo "Database $db is blacklisted, skipped"
    fi
done
echo 
echo "dump completed"

This will also compress the dump files to save storage.

Save the script as backup-mysql.sh somewhere on the machine where you want the backups saved, and make sure the MySQL user with the right privileges exists on the server hosting the databases. You will also need the mysql client installed on the backup server. Execute sudo chmod 700 backup-mysql.sh, then run the script with sudo sh backup-mysql.sh. After making sure it works properly, you can also add it to your crontab so that it runs on a regular schedule (see the example below).
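
A crontab entry along these lines (the schedule and path are just examples) would run the backup every day at 3 AM and log its output:

0 3 * * * /bin/bash /path/to/backup-mysql.sh >> /var/log/backup-mysql.log 2>&1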


OOP in Golang vs C++

Before I started to learn Go, every online opinion I read about the language complained about the lack of generics, how OOP was dumbed down, and so on.

It made me put off learning it for quite some time, more than I would like to admit. Coming from a C++ background, OOP, generics and meta-programming were my daily bread.

It wasn't until I actually had to learn Go that I saw what it offered me in terms of OOP, and it was just enough. As such, I wanted to put together a side-by-side comparison of typical C++ code that deals with classes and its corresponding implementation in Go that does more or less the same thing.

This is by no means an exhaustive list of examples, but I thought it might prove useful for someone trying to figure out Go.

To run all the Go examples, copy them to a file and run go run filename.go.

To run all the C++ examples, copy them to a file and run g++ -o filename filename.cpp -std=c++14 && ./filename.

Class declaration

In C++:

#include <iostream>
#include <memory>
#include <string>

class MyClass {
    private:
        std::string property1;

        void Method1(std::string param1, int param2);

    public:
        std::string property2;

        MyClass(std::string constructor_argument);
        int Method2(int param);
};

MyClass::MyClass(std::string constructor_argument) {
    this->property1 = constructor_argument;
}

void MyClass::Method1(std::string param1, int param2) {
    std::cout << param1 << std::endl << param2 << std::endl;
    std::cout << this->property2 << std::endl;
}

int MyClass::Method2(int param) {
    this->Method1(this->property1, param);

    return param + 1;
}

int main(int argc, char *argv[]) {
    auto obj = std::make_unique<MyClass>("property 1 value");

    obj->property2 = "property 2 value";

    std::cout << obj->Method2(4) << std::endl;

    return 0;
}

Go equivalent:

package main

import "fmt"

type MyClass struct {
	// properties that start with a lowercase character are private
	property1 string
	// properties that start with an uppercase character are public
	Property2 string

	/*
		Keep in mind that public and private in Golang actually
		means exported by package. Code from the same package
		can access a structure's private properties and methods.
	*/
}

func NewMyClass(constructor_argument string) *MyClass {
	return &MyClass{property1: constructor_argument}
}

func (mc *MyClass) method1(param1 string, param2 int) {
	fmt.Printf("%s\n%d\n", param1, param2)
	fmt.Printf("%s\n", mc.property1)
}

func (mc *MyClass) Method2(param int) int {
	mc.method1(mc.property1, param)

	return param + 1
}

func main() {
	obj := NewMyClass("property 1 value")

	obj.Property2 = "property 2 value"

	fmt.Printf("%d\n", obj.Method2(4))

	// No return needed
}

Inheritance (sort of)

In C++:

#include <iostream>
#include <memory>
#include <string>

class BaseClass {
    public:
        std::string property1;
        void method1();
};

void BaseClass::method1() {
    std::cout << this->property1 << std::endl;
}

class DerivedClass : public BaseClass {
    public:
        std::string property2;
        void method2();
};

void DerivedClass::method2() {
    std::cout << this->property2 << std::endl;
}

int main(int argc, char *argv[]) {
    auto obj = std::make_unique<DerivedClass>();
    obj->property1 = "property 1 value";
    obj->property2 = "property 2 value";

    obj->method1();
    obj->method2();

    return 0;
}

Go equivalent:

package main

import "fmt"

type BaseClass struct {
	Property1 string

	// no need for method declaration here
}

func (bc *BaseClass) Method1() {
	fmt.Printf("%s\n", bc.Property1)
}

type DerivedClass struct {
	BaseClass // this is actually composition

	Property2 string
}

func (dc *DerivedClass) Method2() {
	fmt.Printf("%s\n", dc.Property2)
}

func main() {
	obj := &DerivedClass{}
	obj.Property1 = "property 1 value"
	obj.Property2 = "property 2 value"

	obj.Method1()
	obj.Method2()

	// no need to return
}

Interfaces

In C++:

#include <iostream>
#include <memory>
#include <string>

class MyInterface {
    public:
        virtual void method() = 0;
};

class Class1 : public MyInterface {
    public:
        void method() override;
};

void Class1::method() {
    std::cout << "Class 1" << std::endl;
}

class Class2 : public MyInterface {
    public:
        void method() override;
};

void Class2::method() {
    std::cout << "Class 2" << std::endl;
}

std::shared_ptr<MyInterface> NewClass1() {
    return std::make_shared<Class1>();
}

std::shared_ptr<MyInterface> NewClass2() {
    return std::make_shared<Class2>();
}

int main(int argc, char *argv[]) {
    auto obj1 = NewClass1();
    auto obj2 = NewClass2();

    obj1->method();
    obj2->method();

    return 0;
}

Go equivalent:

package main

import "fmt"

type MyInterface interface {
	Method()
}

type Class1 struct {
}

func (c1 *Class1) Method() {
	fmt.Println("Class 1")
}

type Class2 struct {
}

func (c2 *Class2) Method() {
	fmt.Println("Class 2")
}

func NewClass1() MyInterface {
	return &Class1{}
}

func NewClass2() MyInterface {
	return &Class2{}
}

func main() {
	obj1 := NewClass1()
	obj2 := NewClass2()

	obj1.Method()
	obj2.Method()
}

Conclusion

There are basic equivalences between traditional OOP languages like C++ and the syntax and functionality that Golang provides.

In its own simple way, Golang provides ways to implement encapsulation, inheritance and polymorphism. In my humble opinion, these mechanisms are enough for most object-oriented projects.


How to Save a File in Vim After Forgetting to Use Sudo

Many of you have probably experienced this. You edit a file in Linux with Vim, Nano or whatever other text editor you might be using, and when you try to save, you realise you forgot to launch the editor with sudo privileges and the file can't be written. Exiting without saving and editing again with sudo is not always an option, especially if you would lose a lot of work.

Solutions

There are some workarounds. You can open a new terminal and grant the right permissions on the file with:

sudo chmod 666 <filename>

Now you can go back to your editor and saving will be possible. Don't forget to restore the original permissions on the file afterwards.

Also, you could save in a new file and then copy the newly created file over the old one.

But these are workarounds. Vim editor actually offers a solution.

From Vim's normal mode, you can issue:

:w !sudo tee %

And that works like magic. For the geeks, here is how the "magic" works:

  • :w - writes the contents of the buffer
  • !sudo - pipes it to the stdin of sudo
  • tee % - sudo executes tee, which writes its input to the "%" file
  • % - Vim expands "%" to the current file name

So Vim actually makes magic happen.
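
If tee echoing the buffer back to the screen bothers you, the same trick works with the output discarded:

:w !sudo tee % > /dev/null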


Emulate a Redis Failover with Docker

Reading the Redis documentation can be a bit confusing without hands-on experience. You could, in theory, start multiple Redis server processes on your machine and configure each of them individually, but what if you could do it in just a few commands? Not only that, but emulate the network they’re connected to as well.

I’ve been looking into this and there are a few examples out there on the Web; the best one I could find was this one: https://github.com/AliyunContainerService/redis-cluster

So, starting from that example, I’ve tried to do the next best thing: create a single docker-compose.yml file for all of it, removing the need to build each image, so you can just do a docker-compose up and scale as needed.

Here’s what I got:

master:
  image: redis
slave:
  image: redis
  command: redis-server --slaveof master 6379
  links:
    - master
sentinel:
  image: redis
  command: >
    bash -c "echo 'port 26379' > sentinel.conf &&
    echo 'dir /tmp' >> sentinel.conf &&
    echo 'sentinel monitor master master 6379 2' >> sentinel.conf &&
    echo 'sentinel down-after-milliseconds master 5000' >> sentinel.conf &&
    echo 'sentinel parallel-syncs master 1' >> sentinel.conf &&
    echo 'sentinel failover-timeout master 5000' >> sentinel.conf &&
    cat sentinel.conf &&
    redis-server sentinel.conf --sentinel"
  links:
    - master
    - slave

Basically, after saving this into a docker-compose.yml file and running docker-compose up in that folder, you’ll get three containers: a master, a slave and a sentinel.

You can now scale as needed. For example, by running:

docker-compose scale slave=2 sentinel=3

You’ll end up with two slaves and three sentinels alongside the master.

To initiate a failover, you’ll need to take the master out of the picture. You can do that with:

docker-compose pause master

You can now observe the communication between the sentinels and the slaves. After the down-after-milliseconds and failover timeouts pass, one of the slaves will be selected for promotion.

After the sentinels agree on the selection, the slave will become the new master.
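
If you want to check which instance the sentinels currently consider the master, you can ask one of them. The command below is only a sketch: it assumes a docker-compose version that supports exec (otherwise, docker exec against one of the sentinel containers works just as well):

docker-compose exec sentinel redis-cli -p 26379 sentinel get-master-addr-by-name master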

You can now unpause the old master by doing this:

docker-compose unpause master

The old master will now become a slave of the new master and perform a sync.

That’s about it. As an exercise, you could try setting up a cluster starting from this and observing failovers there.



Why Should Your Online Business Offer an API

There are several ways to extend a business model, but APIs are a hot topic right now, as the online world is expanding very fast. If you’re a developer, or have at least interacted with APIs before, you probably know why public APIs are so important; but there’s a big chance you hadn’t heard of or cared about them before, and now you’re wondering why everyone is talking about them.

What is an API

In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building application software. In general terms, it's a set of clearly defined methods of communication between various software components. (Wikipedia)

There is a simpler way of saying this: an API is like a contract between two pieces of software, agreeing to share information in a more or less standardised way.

Now it’s pretty clear what we are talking about, but why are they so important? How can APIs help us? In the following paragraphs I will try to lay out some good reasons.


Grow your business

You can grow your online business by integrating with additional tools or apps and engaging with others. This can be done using public APIs.

Let’s take Uber and Google Maps: every time you search for directions in Google Maps (from home to work, for example), you can automatically request an Uber, see your driver on the map or even contact them, all without having to leave the Maps app, thanks to Uber’s API.

Or, if you have an online store, you might want to offer public APIs so other apps can request price offers and display your products on their platforms.

Get ready for scaling

It’s all sweet and fun to start a new business, and you probably want to do it fast and cost-effectively. Usually this means a monolithic application.

Success means scaling, and this can be done by breaking the app into microservices. This opens up multiple options for you.

Let’s say you have a microservice that is used very often and puts a heavy load on your server. That microservice can be moved to a new server with resources dedicated only to it, and it will remain accessible to the rest of the app via an API.

Or there is the case where you want to rewrite a single microservice (some languages are more efficient than others). This is the beauty of a standardised API: you only have to make sure you accept the same API calls as before and return the answers in the same format, so other dependent services won’t be affected.

Time saving

UX/UI is very important and we strongly advise you to continue to invest in that area, but there are cases when having to navigate a UI for some actions is time-consuming for some (more technical) users.

Let’s take SendGrid and their Marketing Campaigns app. You can create a new campaign by going through the UI process, or you can simply make a call to their API. The second option is usually faster (though aimed at more technical people, or at least you need to develop an integration first), and the flow will always stay the same, while the UI can change over time.

Mobile app

At some point you will probably want to add a dedicated mobile app to your business. Having APIs makes it possible. You are free to develop a new design or a new template without any changes on the API side.

Providing APIs must be a concern for every company, whether they focus on internal operations, partner integrations or public access. Join the revolution! Add API support to your product!


Counting lines and words using Go

For those who need to count words and lines in text files, an easy approach is to use bufio.ScanWords and bufio.ScanLines with a bufio.Scanner to quickly solve the problem.

To count words:

input := "Spicy jalapeno pastrami ut ham turducken.\n Lorem sed ullamco, leberkas sint short loin strip steak ut shoulder shankle porchetta venison prosciutto turducken swine.\n Deserunt kevin frankfurter tongue aliqua incididunt tri-tip shank nostrud.\n"
scanner := bufio.NewScanner(strings.NewReader(input))

// Set the split function for the scanning operation.
scanner.Split(bufio.ScanWords)

// Count the words.
count := 0
for scanner.Scan() {
    count++
}

if err := scanner.Err(); err != nil {
    fmt.Fprintln(os.Stderr, "reading input:", err)
}

fmt.Printf("%d\n", count)

ScanWords is a split function for a Scanner that returns each space-separated word of text (it uses unicode.IsSpace to decide what counts as a space), with surrounding whitespace trimmed.

To count lines:

input := "Spicy jalapeno pastrami ut ham turducken.\n Lorem sed ullamco, leberkas sint short loin strip steak ut shoulder shankle porchetta venison prosciutto turducken swine.\n Deserunt kevin frankfurter tongue aliqua incididunt tri-tip shank nostrud.\n"
scanner := bufio.NewScanner(strings.NewReader(input))

// Set the split function for the scanning operation.
scanner.Split(bufio. ScanLines)

// Count the lines.
count := 0
for scanner.Scan() {
    count++
}

if err := scanner.Err(); err != nil {
    fmt.Fprintln(os.Stderr, "reading input:", err)
}

fmt.Printf("%d\n", count)

ScanLines is a split function for a Scanner that returns each line of text, stripping any trailing end-of-line marker ("\r?\n"). Empty lines are returned as well, and the last line is returned even if it has no newline at the end.



Debugging Golang apps in Docker with Visual Studio Code

Context

We’ve recently had some problems with a Go application that was running inside a Docker container in a very big Docker Compose setup.

After getting fed up with writing console prints and rebuilding the Docker image for that container and spinning up all the containers to debug things, we started investigating how we could speed up our debugging process.

Enter Visual Studio Code and its wonderful Go extension which supports Delve.

Now, if you read through the pages linked above, you will find out how to install and set up all these things. It’s pretty straightforward. The Docker part, however, is not. As such, I will show you a basic Go application which mimics what we had to deal with, and how to set up debugging for it.

The application

The following is the main.go of our app. It will connect to a Redis server, set and get a value.

package main

import (
    "fmt"

    "github.com/garyburd/redigo/redis"
)

func main() {
    client, err := redis.Dial("tcp", "redis:6379")
    if err != nil {
        panic(err)
    }
    defer client.Close()

    result, err := client.Do("SET", "key1", "value1")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%v\n", result)

    result, err = client.Do("GET", "key1")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%v\n", result)
}

As you can see, it relies on the Redigo package, so make sure you get it and place it in your vendor folder.

To make sure you have everything set up the right way, go ahead and build it locally by running:

go build -o main main.go

If you run the application built this way, it will of course fail, because it needs to connect to Redis. I’ve set the hostname for the Redis server to redis, which will resolve to the right container when we docker-compose up.

The Dockerfile

Now we have to build the image for this application.

FROM golang

ENV GOPATH /opt/go:$GOPATH
ENV PATH /opt/go/bin:$PATH

ADD . /opt/go/src/local/myorg/myapp
WORKDIR /opt/go/src/local/myorg/myapp

RUN go get github.com/derekparker/delve/cmd/dlv
RUN go build -o main main.go

CMD ["./main"]

When this image is built, it will basically copy the application code, set up the environment and build the Go application. The application’s entrypoint will be the main executable that was built. We also install the Delve command line tool, but we won’t use it if we run a container from this image directly (i.e. with docker run).

Note the GOPATH variable and the path to which we copy our code. This path is very important for Delve and our debug configuration.

The Docker Compose file

Now that we have the Dockerfile to build the image, we have to define the docker-compose.yml file. Here, however, we will override the entrypoint for the container to launch Delve. The code that we copied will also be replaced by a volume pointing to the code on the host machine, and we will remove some security constraints that prevent Delve from forking the process.

Essentially, for the context I mentioned above, we try not to touch the application’s base image, since it might get accidentally pushed to the Docker Hub with debugging parameters. So, in order to avoid that, we have our Docker Compose setup override the image with what we need for debugging.

Here’s the docker-compose.yml file:

version: '2'
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    expose:
      - "6379"
  myapp:
    build: .
    security_opt:
      - seccomp:unconfined
    entrypoint: dlv debug local/myorg/myapp -l 0.0.0.0:2345 --headless=true --log=true -- server
    volumes:
      - .:/opt/go/src/local/myorg/myapp
    ports:
      - "2345:2345"
    expose:
      - "2345"

 

It's here that we introduce the Redis server dependency we have. Note that for the myapp container we’ve exposed the port that the Delve command line tool listens on.

So, to see that everything is working, you can now run:

docker-compose up --build

This will build the image and start up the redis and myapp containers.

You should see the following output coming from the myapp container:

myapp_1  | 2016/12/15 08:50:39 server.go:71: Using API v1
myapp_1  | 2016/12/15 08:50:39 debugger.go:65: launching process with args: [/opt/go/src/local/myorg/myapp/debug server]
myapp_1  | API server listening at: [::]:2345

This means that the Delve command line tool compiled our Go code into a debug executable, started it, and is listening for remote connections to the debugger on port 2345.
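
If you want to double-check from a terminal that the headless Delve server is reachable (this assumes you also have the dlv command line tool installed on your host), you can connect to it directly:

dlv connect 192.168.99.100:2345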

Now we just have to set up our launch.json config in the .vscode folder of our project.

The launch configuration

Here’s what our launch.json should look like:

{
    "version": "0.2.0",
    "configurations": [
       {
           "name": "Remote Docker",
           "type": "go",
           "request": "launch",
           "mode": "remote",
           "remotePath": "/opt/go/src/local/myorg/myapp",
           "port": 2345,
           "host": "192.168.99.100",
           "program": "${workspaceRoot}",
           "env": {},
           "args": []
       }
    ]
}

You might have to change the host IP to match your docker-machine ip output.
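
If you’re using Docker Machine, the following prints the IP to put in the host field (assuming your machine is named default):

docker-machine ip default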

Now all we have to do is set up a few breakpoints and start the debugger using the Remote Docker configuration.

Our docker-compose terminal should print something like this from the myapp container:

myapp_1  | 2016/12/15 08:50:45 debugger.go:242: created breakpoint: &api.Breakpoint{ID:1, Name:"", Addr:0x4010af, File:"/opt/go/src/local/myorg/myapp/main.go", Line:11, FunctionName:"main.main", Cond:"", Tracepoint:false, Goroutine:false, Stacktrace:0, Variables:[]string(nil), LoadArgs:(*api.LoadConfig)(nil), LoadLocals:(*api.LoadConfig)(nil), HitCount:map[string]uint64{}, TotalHitCount:0x0}
myapp_1  | 2016/12/15 08:50:45 debugger.go:242: created breakpoint: &api.Breakpoint{ID:2, Name:"", Addr:0x401116, File:"/opt/go/src/local/myorg/myapp/main.go", Line:16, FunctionName:"main.main", Cond:"", Tracepoint:false, Goroutine:false, Stacktrace:0, Variables:[]string(nil), LoadArgs:(*api.LoadConfig)(nil), LoadLocals:(*api.LoadConfig)(nil), HitCount:map[string]uint64{}, TotalHitCount:0x0}
myapp_1  | 2016/12/15 08:50:45 debugger.go:242: created breakpoint: &api.Breakpoint{ID:3, Name:"", Addr:0x4013d1, File:"/opt/go/src/local/myorg/myapp/main.go", Line:22, FunctionName:"main.main", Cond:"", Tracepoint:false, Goroutine:false, Stacktrace:0, Variables:[]string(nil), LoadArgs:(*api.LoadConfig)(nil), LoadLocals:(*api.LoadConfig)(nil), HitCount:map[string]uint64{}, TotalHitCount:0x0}
myapp_1  | 2016/12/15 08:50:45 debugger.go:397: continuing

You can Next and Continue, look at the callstack, see the locals, view contents of specific variables, etc.

Final thoughts

I hope this proves to be as useful to you as it did for us. The tools mentioned in this post really saved us a heap of trouble.

We really have to thank the open source community that brought us these tools. They are the real heroes.

Happy debugging!



Start writing tests in Ruby: useful gems

Being a QA engineer is a continuous struggle to find the right resources in order to get the job done more easily and efficiently. If you are planning to write automated tests in RSpec (Ruby's testing framework), then you should take a look at these gems. Please note that most of the time I am automating backend tests only, so the libraries I use are mainly for this purpose.

 

1. RestClient

 

gem install rest-client

If you want to make API calls against RESTful endpoints, this should definitely be your choice. The library is easy to use, and the response includes the status code, cookies, headers and body.

Let me show you how to make some calls (GET, PUT, POST, DELETE):

response = RestClient.get(url, header){|response, request, result | response}
response = RestClient.put(url, payload, header){|response, request, result | response}
response = RestClient.post(url, payload, header){|response, request, result | response}
response = RestClient.delete(url, header){|response, request, result | response}

Now you simply use this response for your purposes (response.code, response.body, etc.).

 

2. JSON

 

gem install json

Since I told you about RestClient, the next one has to be json. RESTful services return JSON in the response body most of the time, so you should parse that response to make it easier to work with.

response = RestClient.post(url, payload, header){|response, request, result | response}
parsed_response = JSON.parse(response.body)
expect(parsed_response['errors'][0]['message']).to eq "Not Found"

See how simple this is? You just JSON.parse the response and that's all!

Since we are talking about JSON, let me show you how to build one:

payload_hash = {
            :key1 => :value1,
            :key2 => :value2
        }
payload_json = payload_hash.to_json

 

3. Nokogiri

 

JSON and XML are the most used formats in web development. So you probably guessed that now I will show you some tricks on how to use XML in your awesome tests.

gem install nokogiri

When I installed this gem on my Ubuntu (14.04) virtual machine, I got the following error:

ERROR: Error installing nokogiri:
ERROR: Failed to build gem native extension.

/usr/bin/ruby1.9.1 extconf.rb
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- mkmf (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from extconf.rb:4:in `'

But this was quickly fixed after installing ruby-dev and ruby1.9.1-dev:

sudo apt-get install ruby-dev
sudo apt-get install ruby1.9.1-dev

Now let's say you have the following XML:

<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
    <Body>
        <Login>
            <username>username</username>
            <password>secret_password</password>
        </Login>
    </Body>
</Envelope>

If you want to access the values for username and password, simply do this:

your_file = Nokogiri::XML(your_XML_file)
your_file.remove_namespaces!
puts your_file.css('username').text
puts your_file.css('password').text

Also, you can use xpath instead of css.

Let me show you how to build the previous XML file using Nokogiri:

builder = Nokogiri::XML::Builder.new do |xml|
  xml.Envelope {
    xml.Body {
      xml.Login {
        xml.username "username"
        xml.password "secret_password"
      }
    }
  }
end
puts builder.to_xml

 

4. Sinatra

 

This gem is used to mock endpoints. See more about it here.

 

5. Dotenv

 

gem install dotenv

It is recommended to keep environment variables and things like usernames, passwords and URLs in a .env file. In order to load those variables in your tests, you can use this gem.

Dotenv.load

login_url = ENV['VAR_NAME']
signup_url = ENV['VAR_NAME']

First you load the .env file, then use those variables in your tests.

 

6. Mysql

 

gem install mysql

The name itself says what this is used for. See below how to open a connection to a MySQL database and perform a simple operation:

con = Mysql.new(db_host, db_user, db_pass, db_schema, db_port)

rs = con.query("UPDATE table_references SET col_name1= ... WHERE where_condition")

con.close

 

I will update this post whenever I start using a new awesome Ruby library. What gems are you using?



Build a Face Detector on OS X Using OpenCV and C++

Building and using C++ libraries can be a daunting task, even more so for big libraries like OpenCV. This article should get you started with a minimal build of OpenCV and a sample application written in C++.

This application will get images from the webcam, draw rectangles around the faces in the images and show them to you on screen.

    Requirements

I've built this on a MacBook Pro running OS X El Capitan Version 10.11.1.

We'll be using the GNU C++ compiler (g++) from the command line. Note that you should still have Xcode installed (I have Xcode 7.1 installed).

Here's what you need to do:

  1. Get "OpenCV for Linux/Mac" from the OpenCV Downloads Page I got version 3.0.
  2. Extract the contents of the zip file from step 1 to a folder of your choosing (I chose ~/opencv-3.0.0).
  3. Get a binary distribution of Cmake from the Cmake Downloads Page I got cmake-3.4.0-Darwin-x86_64.dmg.
  4. Install Cmake.

 

    Building OpenCV

OpenCV uses CMake files to describe how the project needs to be built. CMake can transform these files into actual project settings (e.g. an Xcode project, Unix makefiles, a Visual Studio project, etc.) depending on the generator you choose.

First open CMake and a small window will pop up that lets you choose your build options, based on the CMakeLists.txt files in the OpenCV source directory. Click on the Browse Source... button and choose the path to the OpenCV source folder (the folder you extracted the zip file to at step 2). Then click on the Browse Build... button and choose a path to a build folder; I'm going to create a new folder called build in the previously mentioned source folder.

If at any point you are prompted to choose a generator, pick Unix Makefiles. If the paths you chose were correct, after you click the Configure button you should be presented with the list of CMake build options.

For a somewhat minimal OpenCV build, make sure you only have the following options enabled:

  • BUILD_JASPER
  • BUILD_JPEG
  • BUILD_OPENEXR
  • BUILD_PACKAGE
  • BUILD_PNG
  • BUILD_TIFF
  • BUILD_WITH_DEBUG_INFO
  • BUILD_ZLIB
  • BUILD_opencv_apps
  • BUILD_opencv_calib3d
  • BUILD_opencv_core
  • BUILD_opencv_features2d
  • BUILD_opencv_flann
  • BUILD_opencv_hal
  • BUILD_opencv_highgui
  • BUILD_opencv_imgcodecs
  • BUILD_opencv_imgproc
  • BUILD_opencv_ml
  • BUILD_opencv_objdetect
  • BUILD_opencv_photo
  • BUILD_opencv_python2
  • BUILD_opencv_shape
  • BUILD_opencv_stitching
  • BUILD_opencv_superres
  • BUILD_opencv_ts
  • BUILD_opencv_video
  • BUILD_opencv_videoio
  • BUILD_opencv_videostab
  • ENABLE_SSE
  • ENABLE_SSE2
  • WITH_1394
  • WITH_JASPER
  • WITH_JPEG
  • WITH_LIBV4L
  • WITH_OPENEXR
  • WITH_PNG
  • WITH_TIFF
  • WITH_V4L
  • WITH_WEBP

You should disable the options that are not in the list, especially the BUILD_SHARED_LIBS one. Don't touch the options that are text fields unless you know what you're doing.

You don't need most of these options for this particular exercise, but enabling them will save you from having to rebuild OpenCV should you decide to try something else later.

Once you have selected the settings above, click Generate. Now you can navigate to the build folder (I'll do so with cd ~/opencv-3.0.0/build/) and run make to build OpenCV.

    Installing OpenCV

If everything goes well, after the build finishes, run make install to add the OpenCV includes to the /usr/local/include folder and the libraries to the /usr/local/lib and /usr/local/share/OpenCV/3rdparty/lib folders.

After that's done, you should be able to build your own C++ applications that link against OpenCV.

    The Face Detector Application

Now let's try to build our first application with OpenCV.

Here's the code, with comments that explain how to do just that:

#include <iostream>

//Include OpenCV
#include <opencv2/opencv.hpp>

int main(void )
{
   //Capture stream from webcam.
   cv::VideoCapture capture(0);

   //Check if we can get the webcam stream.
   if(!capture.isOpened())
   {
      std::cout << "Could not open camera" << std::endl;
      return -1;
   }

   //OpenCV saves detection rules as something called a CascadeClassifier which
   //    can be used to detect objects in images.
   cv::CascadeClassifier faceCascade;

   //We'll load the lbpcascade_frontalface.xml containing the rules to detect faces.
   //The file should be right next to the binary.
   if(!faceCascade.load("lbpcascade_frontalface.xml"))
   {
      std::cout << "Failed to load cascade classifier" << std::endl;
      return -1;
   }

   while (true)
   {
      //This variable will hold the image from the camera.
      cv::Mat cameraFrame;

      //Read an image from the camera.
      capture.read(cameraFrame);

      //This vector will hold the rectangle coordinates to a detection inside the image.
      std::vector<cv::Rect> faces;

      //This function detects the faces in the image and
      // places the rectangles of the faces in the vector.
      //See the detectMultiScale() documentation for more details
      // about the rest of the parameters.
      faceCascade.detectMultiScale(
        cameraFrame, 
        faces, 
        1.09, 
        3,
        0 | CV_HAAR_SCALE_IMAGE,
        cv::Size(30, 30));

      //Here we draw the rectangles onto the image with a red border of thickness 2.
      for( size_t i = 0; i < faces.size(); i++ )
            cv::rectangle(cameraFrame, faces[i], cv::Scalar(0, 0, 255), 2);

      //Here we show the drawn image in a named window called "output".
      cv::imshow("output", cameraFrame);

      //Waits 50 milliseconds for a key press; returns -1 if no key is pressed during that time.
      if (cv::waitKey(50) >= 0)
          break;
   }

   return 0;
}

 

I saved this as main.cpp. To build it, I used the following command:

g++ -o main main.cpp -I/usr/local/include \
-L/usr/local/lib -lopencv_core -lopencv_imgproc -lopencv_objdetect \
-lopencv_imgcodecs -lopencv_highgui -lopencv_hal -lopencv_videoio \
-L/usr/local/share/OpenCV/3rdparty/lib -llibpng -llibjpeg -llibwebp \
-llibtiff -lzlib -lIlmImf -llibjasper -framework AVFoundation -framework QuartzCore \
-framework CoreMedia -framework Cocoa -framework QTKit

Hopefully, no errors should occur.

    Conclusions

Before running the application, you have to copy lbpcascade_frontalface.xml next to the main executable. You can find this file in the OpenCV source folder under /data/lbpcascades/. You can also find some other cascades there, to detect eyes, cat faces, etc.
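
Assuming you extracted the sources to ~/opencv-3.0.0 as in step 2 and you are in the folder containing main, copying the cascade would look like this:

cp ~/opencv-3.0.0/data/lbpcascades/lbpcascade_frontalface.xml .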

Now just run ./main and enjoy!