
Set Up Testing and Fixtures in Symfony2: The Easy Way

Setting up testing and fixtures in Symfony2 is vital if you plan to adopt Test-Driven Development, or if you simply want to start covering your code with properly written tests that can work against mock data.

 

1. Install PHPUnit and PHP 5.6

The first thing you need to do is to install PHPUnit on your machine:
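If you manage your project with Composer (which most Symfony2 projects do), pulling PHPUnit in as a dev dependency is the simplest route; a globally installed PHAR works just as well:

```bash
composer require --dev phpunit/phpunit
```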
and then, if needed, also upgrade your PHP version to 5.6:
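The exact steps depend on your OS and package manager. On Ubuntu, for example, Ondřej Surý's PPA is a common route:

```bash
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php5.6
```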
and make sure everything is OK by running phpunit -c app/.

Please note that this is not a testing tutorial, so if you'd like to learn more about how to actually test your Symfony2 app, please read the official documentation.

 

2. Set up and create a test database

To configure testing and fixtures in a Symfony2 app, you need a separate, independent database so that your dev environment is not affected.

In the config_test.yml file you simply need to add:
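Something along these lines works; the exact connection parameters depend on your own parameters.yml, the key point being the _test suffix on the database name:

```yaml
doctrine:
    dbal:
        driver:   pdo_mysql
        host:     "%database_host%"
        dbname:   "%database_name%_test"
        user:     "%database_user%"
        password: "%database_password%"
```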
then simply run php app/console doctrine:database:create --env=test

 


 

3. Build the BaseTestSetup class

Once our test database is ready, we can start building our BaseTestSetup class, which will serve as the parent for all of our tests.
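A bare-bones starting point might look like this; it's just a sketch that extends Symfony's WebTestCase and exposes the client and entity manager to child tests (the AppBundle namespace is assumed here):

```php
<?php
// src/AppBundle/Tests/BaseTestSetup.php

namespace AppBundle\Tests;

use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

abstract class BaseTestSetup extends WebTestCase
{
    protected $client;
    protected $em;

    protected function setUp()
    {
        $this->client = static::createClient();
        $this->em = $this->client->getContainer()->get('doctrine')->getManager();
    }
}
```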

4. Install the LiipFunctionalTestBundle

Even though Symfony has an out-of-the-box solution for setting up and testing your app, the LiipFunctionalTestBundle provides base classes for functional tests that assist with setting up testing, fixtures, and HTML5 validation.

After you install and configure the bundle, go back to your BaseTestSetup class and make the necessary modifications:
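One possible version extends the bundle's own WebTestCase, rebuilds the schema, and then loads the fixtures; the LoadUserData class referenced below is the fixture we create in the next step:

```php
<?php
// src/AppBundle/Tests/BaseTestSetup.php -- a sketch using LiipFunctionalTestBundle

namespace AppBundle\Tests;

use Liip\FunctionalTestBundle\Test\WebTestCase;
use Doctrine\ORM\Tools\SchemaTool;

abstract class BaseTestSetup extends WebTestCase
{
    protected $client;
    protected $em;

    protected function setUp()
    {
        $this->client = static::createClient();
        $this->em = $this->client->getContainer()->get('doctrine')->getManager();

        // Drop and re-create the schema so every run starts from a clean slate
        $schemaTool = new SchemaTool($this->em);
        $metadata = $this->em->getMetadataFactory()->getAllMetadata();
        $schemaTool->dropSchema($metadata);
        $schemaTool->createSchema($metadata);

        // Load the fixtures the tests rely on
        $this->loadFixtures(array(
            'AppBundle\DataFixtures\ORM\LoadUserData',
        ));
    }
}
```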
The new code above simply drops and creates the database each time the tests run. Then it loads the fixtures that you'll need. The LoadUserData class does not exist yet, so we'll go right ahead and add it by following the handy tutorial found on Symfony's website.

 

5. Write the very first test

Now that you have your fixtures ready, you can go ahead and write your first test. Create a new file in your AppBundle/Tests folder called UserTest.php, next to BaseTestSetup.php:
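A very first test might simply assert that the fixture user made it into the test database; the username "testuser" below is just an example and should match whatever your LoadUserData fixture actually creates:

```php
<?php
// src/AppBundle/Tests/UserTest.php

namespace AppBundle\Tests;

class UserTest extends BaseTestSetup
{
    public function testUserFixtureWasLoaded()
    {
        $user = $this->em
            ->getRepository('AppBundle:User')
            ->findOneBy(array('username' => 'testuser'));

        $this->assertNotNull($user);
        $this->assertEquals('testuser', $user->getUsername());
    }
}
```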

And that's about it! You now have everything you need to properly test your awesome app.

 


Building a Face Detection Web API in Node.js

Introduction

As a follow-up to my previous article on how to use your webcam for face detection with OpenCV, I’d like to show you how you can create your own web API for that.

There are a few Node.js modules out there that do just that. Some of them even provide bindings for OpenCV so you can use it directly from JavaScript.

The catch is that most of these modules either rely directly on binaries or need to be built for your machine from a makefile, a Visual Studio project, and so on. That's why some of them work on Windows, for example, but not on Mac, or vice versa.

The objective of this article is to show you the steps needed to create such a module for yourself so that you can customize it for your machine specifically. What we’re going to do is create a native Node.js add-on and a web server that will use that add-on to detect faces and show them to you.

Prerequisites

I’ve built this on a MacBook Pro running OS X El Capitan Version 10.11.1.

Since we're going to use OpenCV, you'll need to set it up for your machine; I've described how to do this in this article.

Next, we'll need Node.js, which you can get from here. This will also install npm (the package manager for Node), which we need in order to install some extra node modules.

The next thing we need is node-gyp, which you can install using npm. But before you do that, make sure you have all the required dependencies, which are described here. For Mac they are Python 2.7, Xcode, GCC, and Make. So basically, if you followed the OpenCV installation guide, you should be good on everything except Python, which you should install. After that you can install node-gyp like this:
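Installing it globally makes the node-gyp command available everywhere:

```bash
npm install -g node-gyp
```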
Node-gyp is used to generate the appropriate files needed to build a native Node.js add-on.

That’s pretty much it. Next up, we’ll generate a simple native add-on.

Setting up

First, we need to create a folder for the node project; I'm doing this in my home directory:
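The folder name is entirely up to you; face-detect-api is just the name used in this sketch:

```bash
cd ~
mkdir face-detect-api
cd face-detect-api
```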
Now we need a folder to hold the native module, and we need to navigate into it:
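Again, the name is arbitrary; here it's face-detect, to match the source and target names used below:

```bash
mkdir face-detect
cd face-detect
```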
Node-gyp uses a file which specifies the target module name, source files, includes, libraries, and other cflags to use when building the module. We need to create that file and call it binding.gyp. Its contents should look like this:
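One version that works for this kind of setup pulls the OpenCV flags in through pkg-config and adds the include path for the nan helper module; using nan is an extra assumption of this sketch (install it inside the project with npm install nan), made to keep the add-on code independent of the Node version:

```json
{
  "targets": [
    {
      "target_name": "face-detect",
      "sources": [ "src/face-detect.cpp" ],
      "include_dirs": [
        "<!(node -e \"require('nan')\")"
      ],
      "cflags": [ "<!@(pkg-config --cflags opencv)" ],
      "cflags!": [ "-fno-exceptions" ],
      "cflags_cc!": [ "-fno-exceptions" ],
      "libraries": [ "<!@(pkg-config --libs opencv)" ],
      "xcode_settings": {
        "OTHER_CPLUSPLUSFLAGS": [ "<!@(pkg-config --cflags opencv)" ]
      }
    }
  ]
}
```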
Node-gyp still has some hiccups on Mac OS X and will use only either cc or c++ by default when building (instead of gcc/g++ or whatever you have configured).

Now we use node-gyp to generate the project files:
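Run this from inside the native module folder, where binding.gyp lives:

```bash
node-gyp configure
```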

The native module

As specified in the binding.gyp file, we now need to create the source file of the native module i.e. src/face-detect.cpp.

Here is the source code for that:
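The version below is a minimal sketch built on the nan helper module assumed in binding.gyp above (the same thing can be written against the raw V8 API) and the OpenCV C++ API; the exported method name, detectFaces, is a choice made for this example:

```cpp
// src/face-detect.cpp -- a sketch of the native face detection add-on
#include <nan.h>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <sstream>
#include <vector>

NAN_METHOD(DetectFaces) {
  // The first argument is expected to be a Node.js Buffer holding the image bytes
  if (info.Length() < 1 || !node::Buffer::HasInstance(info[0])) {
    return Nan::ThrowTypeError("Expected an image buffer as the first argument");
  }

  char *data = node::Buffer::Data(info[0]);
  size_t length = node::Buffer::Length(info[0]);

  // Decode the raw bytes into a grayscale OpenCV image
  std::vector<unsigned char> bytes(data, data + length);
  cv::Mat image = cv::imdecode(bytes, CV_LOAD_IMAGE_GRAYSCALE);
  if (image.empty()) {
    return Nan::ThrowError("Could not decode the image buffer");
  }

  // The cascade file is resolved relative to the directory the process is
  // started from; adjust the path if you keep it next to the compiled add-on
  cv::CascadeClassifier classifier("lbpcascade_frontalface.xml");
  if (classifier.empty()) {
    return Nan::ThrowError("Could not load lbpcascade_frontalface.xml");
  }

  std::vector<cv::Rect> faces;
  classifier.detectMultiScale(image, faces);

  // Serialize the detected rectangles as a JSON array string
  std::ostringstream json;
  json << "[";
  for (size_t i = 0; i < faces.size(); ++i) {
    if (i > 0) json << ",";
    json << "{\"x\":" << faces[i].x << ",\"y\":" << faces[i].y
         << ",\"width\":" << faces[i].width
         << ",\"height\":" << faces[i].height << "}";
  }
  json << "]";

  info.GetReturnValue().Set(Nan::New(json.str()).ToLocalChecked());
}

NAN_MODULE_INIT(Init) {
  Nan::Set(target, Nan::New("detectFaces").ToLocalChecked(),
           Nan::GetFunction(Nan::New<v8::FunctionTemplate>(DetectFaces)).ToLocalChecked());
}

NODE_MODULE(face_detect, Init)
```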

Basically, what this code does is register a method on our module. The method takes the first parameter as a buffer, decodes it into an OpenCV Mat image, detects the faces within the image using the classifier (which should be placed next to the binary), and returns a JSON string containing the coordinates of the faces found in the image.

Now that we have all the pieces in place for the native module, we can build it using:
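Again, from the native module folder:

```bash
node-gyp build
```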
If everything goes well, in the folder ./build/Release you should find a file called face-detect.node. This file represents our native module, and we should now be able to require it in our JavaScript files. Also, next to this file, we need to copy lbpcascade_frontalface.xml from the OpenCV source folder under /data/lbpcascades/.

The Server

Now we have to create the server.js file for the node server. We should load the native add-on for face detection, create a server that listens for PUT requests, and call the native add-on on the contents of those requests. The code for that should look like this:
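Here is a minimal sketch; the port (8080) and the require path are choices made for this example and assume the folder layout described above:

```javascript
// server.js -- forwards the raw body of PUT requests to the native add-on
var http = require('http');
var faceDetect = require('./face-detect/build/Release/face-detect.node');

var server = http.createServer(function (req, res) {
  if (req.method !== 'PUT') {
    res.writeHead(405, { 'Content-Type': 'text/plain' });
    return res.end('Only PUT is supported\n');
  }

  // Collect the request body (the raw image bytes) into a single Buffer
  var chunks = [];
  req.on('data', function (chunk) {
    chunks.push(chunk);
  });

  req.on('end', function () {
    try {
      var faces = faceDetect.detectFaces(Buffer.concat(chunks));
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(faces + '\n');
    } catch (err) {
      res.writeHead(500, { 'Content-Type': 'text/plain' });
      res.end(err.message + '\n');
    }
  });
});

server.listen(8080, function () {
  console.log('Face detection server listening on port 8080');
});
```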
To start the server, just run:
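From the project folder:

```bash
node server.js
```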

Test it out

Save an image containing human faces as image.jpg. Then, using curl from the command line, send the image via a PUT request to the node server like this:
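Assuming the server sketched above is listening on port 8080:

```bash
curl -X PUT --data-binary @image.jpg http://localhost:8080/
```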
Depending on the image you send, you should see something like this:
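With the add-on sketched above, that means a JSON array with one rectangle per detected face; the numbers below are purely illustrative:

```json
[{"x":458,"y":212,"width":128,"height":128}]
```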

Conclusion

Sometimes Node.js libraries might not meet your application's needs, or they might not build or run on your machine, resulting in errors during npm install. When that happens, you can write your own custom native Node.js add-on to address those needs, and hopefully this article showed you that it's possible.

As an exercise, you can try changing this application to return an image with rectangles drawn around the detected faces. If you're having trouble returning a new buffer from inside the native add-on, try returning the image as a Data URI string.