{"id":1912,"date":"2016-03-04T10:38:32","date_gmt":"2016-03-04T10:38:32","guid":{"rendered":"https:\/\/intelligentbee.com\/blog\/?p=1912"},"modified":"2024-11-28T13:56:46","modified_gmt":"2024-11-28T13:56:46","slug":"building-a-face-detection-web-api-in-node-js","status":"publish","type":"post","link":"https:\/\/intelligentbee.com\/blog\/building-a-face-detection-web-api-in-node-js\/","title":{"rendered":"Building a Face Detection Web API in Node.js"},"content":{"rendered":"<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_68_1 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title \" >Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-1\" 
href=\"https:\/\/intelligentbee.com\/blog\/building-a-face-detection-web-api-in-node-js\/#Introduction\" title=\"Introduction\">Introduction<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/intelligentbee.com\/blog\/building-a-face-detection-web-api-in-node-js\/#Prerequisites\" title=\"Prerequisites\">Prerequisites<\/a><\/li><\/ul><\/nav><\/div>\n<h2><span class=\"ez-toc-section\" id=\"Introduction\"><\/span>Introduction<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>As a follow-up to my <a href=\"https:\/\/intelligentbee.com\/blog\/build-a-face-detector-on-os-x-using-opencv-and-c\/\">previous article<\/a> on how to use your webcam for face detection with <i>OpenCV<\/i>, I\u2019d like to show you how you can create your own web API for that.<\/p>\n<p>There are a few Node.js modules out there that do just that. A few of them even provide bindings for <i>OpenCV<\/i> so you can use it directly from Javascript.<\/p>\n<p>The catch is that most of these modules either rely directly on binaries or they need to be built for your machine from a <i>makefile<\/i> or a <i>Visual Studio project<\/i>, etc. That\u2019s why some of them work on <i>Windows<\/i> for example, but not on <i>Mac<\/i>, or vice-versa.<\/p>\n<p>The objective of this article is to show you the steps needed to create such a module for yourself so that you can customize it for your machine specifically. 
What we\u2019re going to do is create a native Node.js add-on and a web server that will use that add-on to detect faces and show them to you.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Prerequisites\"><\/span>Prerequisites<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>I\u2019ve built this on a <i>MacBook Pro <\/i>running <i>OS X El Capitan Version 10.11.1<\/i>.<\/p>\n<p>Since we\u2019re going to use <i>OpenCV<\/i>, you\u2019ll need to set it up for your machine; I\u2019ve described how to do this in <a href=\"https:\/\/intelligentbee.com\/blog\/build-a-face-detector-on-os-x-using-opencv-and-c\/\">this article<\/a>.<\/p>\n<p>Next, we\u2019ll need Node.js, which you can get from <a href=\"https:\/\/nodejs.org\/en\/\">here<\/a>. This will also install NPM (the package manager for Node), which we need to install some extra node modules.<\/p>\n<p>The next thing we need is <i>node-gyp<\/i>, which you can install using <i>npm<\/i>. But before you do that, make sure you have all the required dependencies, which are described <a href=\"https:\/\/github.com\/nodejs\/node-gyp\">here<\/a>. For <i>Mac<\/i> they are <i>python 2.7<\/i>, <i>xcode<\/i>, <i>gcc<\/i> and <i>make<\/i>. So if you followed the <i>OpenCV<\/i> installation guide, you should be good on everything except <em>python<\/em>, which you should install. After that you can install <i>node-gyp<\/i> like this :<\/p>\n<pre class=\"lang:default decode:true\" title=\"Installing node-gyp\">npm install -g node-gyp<\/pre>\n<p>Node-gyp is used to generate the appropriate files needed to build a native Node.js add-on.<\/p>\n<p>That\u2019s pretty much it. 
Next up, we\u2019ll generate a simple native add-on.<\/p>\n<h3>Setting up<\/h3>\n<p>First, we need to create a folder for the node project; I\u2019m doing this in my home directory :<\/p>\n<pre class=\"lang:default decode:true\" title=\"Project folder\">mkdir ~\/node-face-detect &amp;&amp; cd ~\/node-face-detect<\/pre>\n<p>Now we need a folder to hold the native module and navigate to it :<\/p>\n<pre class=\"lang:default decode:true \" title=\"Add-on folder\">mkdir face-detect &amp;&amp; cd face-detect<\/pre>\n<p><i>Node-gyp<\/i> uses a file that specifies the target module name, source files, includes, libraries and other cflags to use when building the module. We need to create that file and call it <i>binding.gyp<\/i>. Its contents should look like this :<\/p>\n<pre class=\"lang:js decode:true\" title=\"binding.gyp\">{\r\n\u00a0 \"targets\": [\r\n\u00a0\u00a0\u00a0 {\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"target_name\": \"face-detect\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"cflags\" : [ \"-std=c++11\", \"-stdlib=libc++\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"conditions\": [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 [ 'OS!=\"win\"', {\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"cflags+\": [ \"-std=c++11\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"cflags_c+\": [ \"-std=c++11\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"cflags_cc+\": [ \"-std=c++11\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 }],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 [ 'OS==\"mac\"', {\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"xcode_settings\": {\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"OTHER_CPLUSPLUSFLAGS\" : [ \"-std=c++11\", \"-stdlib=libc++\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"OTHER_LDFLAGS\": [ \"-stdlib=libc++\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"MACOSX_DEPLOYMENT_TARGET\": 
\"10.11\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 },\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 }],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"sources\": [ \"src\/face-detect.cpp\" ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"include_dirs\": [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"include\", \"\/usr\/local\/include\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 ],\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 \"libraries\": [\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_core\", \r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_imgproc\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_objdetect\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_imgcodecs\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_highgui\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_hal\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lopencv_videoio\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-L\/usr\/local\/lib\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-llibpng\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-llibjpeg\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-llibwebp\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-llibtiff\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lzlib\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-lIlmImf\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-llibjasper\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-L\/usr\/local\/share\/OpenCV\/3rdparty\/lib\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-framework AVFoundation\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-framework QuartzCore\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-framework CoreMedia\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-framework Cocoa\",\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"-framework QTKit\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0 ]\r\n\u00a0\u00a0\u00a0 }\r\n\u00a0 
]\r\n}<\/pre>\n<p><i>Node-gyp<\/i> still has some hiccups on Mac OS X and will use only either <i>cc<\/i> or <i>c++<\/i> by default when building (instead of gcc\/g++ or whatever you have configured).<\/p>\n<p>Now we use <i>node-gyp<\/i> to generate the project files :<\/p>\n<pre class=\"lang:default decode:true\">node-gyp configure<\/pre>\n<h3>The native module<\/h3>\n<p>As specified in the <i>binding.gyp<\/i> file, we now need to create the source file of the native module i.e. <i>src\/face-detect.cpp<\/i>.<\/p>\n<p>Here is the source code for that :<\/p>\n<pre class=\"lang:c++ decode:true \" title=\"The native add-on\">\/\/Include native addon headers\r\n#include &lt;node.h&gt;\r\n#include &lt;node_buffer.h&gt;\r\n#include &lt;v8.h&gt;\r\n\r\n#include &lt;vector&gt;\r\n\r\n\/\/Include OpenCV\r\n#include &lt;opencv2\/opencv.hpp&gt;\r\n\r\nvoid faceDetect(const v8::FunctionCallbackInfo&lt;v8::Value&gt;&amp; args) {\r\n  v8::Isolate* isolate = args.GetIsolate();\r\n  v8::HandleScope scope(isolate);\r\n\r\n  \/\/Get the image from the first argument\r\n  v8::Local&lt;v8::Object&gt; bufferObj = args[0]-&gt;ToObject();\r\n  unsigned char* bufferData = reinterpret_cast&lt;unsigned char *&gt;(node::Buffer::Data(bufferObj));\r\n  size_t bufferLength = node::Buffer::Length(bufferObj);\r\n\r\n  \/\/The image decoding process into OpenCV's Mat format\r\n  std::vector&lt;unsigned char&gt; imageData(bufferData, bufferData + bufferLength);\r\n  cv::Mat image = cv::imdecode(imageData, CV_LOAD_IMAGE_COLOR);\r\n  if(image.empty())\r\n  {\r\n    \/\/Return null when the image can't be decoded.\r\n    args.GetReturnValue().Set(v8::Null(isolate));\r\n    return;\r\n  }\r\n\r\n  \/\/OpenCV saves detection rules as something called a CascadeClassifier which\r\n  \/\/  can be used to detect objects in images.\r\n  cv::CascadeClassifier faceCascade;\r\n\r\n  \/\/We'll load the lbpcascade_frontalface.xml containing the rules to detect faces.\r\n  \/\/The file should be right next to the 
binary of the native addon.\r\n  if(!faceCascade.load(\"lbpcascade_frontalface.xml\"))\r\n  {\r\n    \/\/Return null when no classifier is found.\r\n    args.GetReturnValue().Set(v8::Null(isolate));\r\n    return;\r\n  }\r\n\r\n  \/\/This vector will hold the rectangle coordinates to a detection inside the image.\r\n  std::vector&lt;cv::Rect&gt; faces;\r\n\r\n  \/\/This function detects the faces in the image and places the rectangles of the faces in the vector.\r\n  \/\/See the detectMultiScale() documentation for more details about the rest of the parameters.\r\n  faceCascade.detectMultiScale(\r\n    image,\r\n    faces, \r\n    1.09, \r\n    3, \r\n    0 | CV_HAAR_SCALE_IMAGE,\r\n    cv::Size(30, 30));\r\n\r\n  \/\/Here we'll build the json containing the coordinates to the detected faces\r\n  std::ostringstream facesJson;\r\n\r\n  facesJson &lt;&lt; \"{ \\\"faces\\\" : [ \";\r\n  for(auto it = faces.begin(); it != faces.end(); it++)\r\n  {\r\n    if(it != faces.begin())\r\n      facesJson &lt;&lt; \", \";\r\n\r\n    facesJson &lt;&lt; \"{ \";\r\n    facesJson &lt;&lt; \"\\\"x\\\" : \" &lt;&lt; it-&gt;x &lt;&lt; \", \";\r\n    facesJson &lt;&lt; \"\\\"y\\\" : \" &lt;&lt; it-&gt;y &lt;&lt; \", \";\r\n    facesJson &lt;&lt; \"\\\"width\\\" : \" &lt;&lt; it-&gt;width &lt;&lt; \", \";\r\n    facesJson &lt;&lt; \"\\\"height\\\" : \" &lt;&lt; it-&gt;height;\r\n    facesJson &lt;&lt; \" }\";\r\n  }\r\n\r\n  facesJson &lt;&lt; \"] }\";\r\n  \r\n  \/\/And return it to the node server as an utf-8 string\r\n  args.GetReturnValue().Set(v8::String::NewFromUtf8(isolate, facesJson.str().c_str()));\r\n}\r\n\r\nvoid init(v8::Local&lt;v8::Object&gt; target) {\r\n  NODE_SET_METHOD(target, \"detect\", faceDetect);\r\n}\r\n\r\nNODE_MODULE(binding, init);<\/pre>\n<p>&nbsp;<\/p>\n<p>Basically what this code does is register a method to our module. 
The method gets the first parameter as a buffer, decodes it into an OpenCV Mat image, detects the faces within the image using the classifier (which should be placed next to the binary), and returns a JSON string containing the coordinates of the faces found in the image.<\/p>\n<p>Now that we have all the pieces in place for the native module, we can build it using :<\/p>\n<pre class=\"lang:default decode:true\">node-gyp build<\/pre>\n<p>If everything goes well, in the folder <i>.\/build\/Release<\/i> you should find a file called <i>face-detect.node<\/i>. This file represents our native module, and we should now be able to <i>require<\/i> it in our JavaScript files. Also, next to this file, we need to copy <i>lbpcascade_frontalface.xml<\/i> from the <i>OpenCV<\/i> source folder under <i>\/data\/lbpcascades\/<\/i>.<\/p>\n<h3>The Server<\/h3>\n<p>Now we have to create the <i>server.js<\/i> file for the node server. We should load the native add-on for face detection, create a server that will listen to <b>PUT<\/b> requests, and call the native add-on on the contents of these requests. 
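As a side note, the server below collects the request body as a 'binary'-encoded string; on newer Node versions you can instead collect raw Buffer chunks and join them with Buffer.concat. A minimal sketch of that pattern (the handler names here are hypothetical stand-ins for the 'data' and 'end' event callbacks):

```javascript
// Sketch: accumulating a request body as raw Buffer chunks, an alternative
// to request.setEncoding('binary') plus string concatenation.
var chunks = [];
var totalLength = 0;

function onData(chunk) {
  // Would run on each 'data' event of the request stream.
  chunks.push(chunk);
  totalLength += chunk.length;
}

function onEnd() {
  // Would run on the 'end' event; Buffer.concat joins the chunks
  // into one Buffer without any re-encoding step.
  return Buffer.concat(chunks, totalLength);
}

// Simulated request chunks for illustration (JPEG marker bytes):
onData(Buffer.from([0xff, 0xd8]));
onData(Buffer.from([0xff, 0xe0]));
var body = onEnd();
console.log(body.length); // 4
```

Either approach ends with a single Buffer that can be handed to the native add-on.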
The code for that should look like this :<\/p>\n<pre class=\"lang:js decode:true\" title=\"The server\">\/\/The path to our built native add-on\r\nvar faceDetect = require('.\/face-detect\/build\/Release\/face-detect');\r\n\r\nvar http = require('http');\r\n\r\n\/\/Our web server\r\nvar server = http.createServer(function (request, response) {\r\n  \/\/Respond to PUT requests\r\n  if (request.method == 'PUT') \r\n  {\r\n    request.setEncoding('binary');\r\n\r\n    \/\/Collect body chunks\r\n    var body = null;\r\n    request.on('data', function (data) {\r\n      if(null == body)\r\n        body = data;\r\n      else\r\n        body += data;\r\n\r\n      \/\/Destroy the connection if the file is too big to handle\r\n      if (body.length &gt; 1e6) \r\n        request.connection.destroy();\r\n    });\r\n\r\n    \/\/All chunks have been sent\r\n    request.on('end', function () {\r\n      \/\/Create a node buffer from the body to send to the native add-on\r\n      var bodyBuffer = new Buffer(body, \"binary\");\r\n\r\n      \/\/Call the native add-on\r\n      var detectedFaces = faceDetect.detect(bodyBuffer);\r\n      if(null == detectedFaces) \r\n      {\r\n        \/\/Unsupported image format or classifier missing\r\n        response.writeHead(500, {'Content-Type': 'application\/json'});\r\n        response.end('{\"error\" : \"internal server error\"}');\r\n      } \r\n      else \r\n      {\r\n        \/\/Faces detected\r\n        response.writeHead(200, {'Content-Type': 'application\/json'});\r\n        response.end(detectedFaces);\r\n      }\r\n    });\r\n  }\r\n  else\r\n  {\r\n    \/\/Unsupported methods\r\n    response.writeHead(405, {'Content-Type': 'application\/json'});\r\n    response.end('{\"error\" : \"method not allowed\"}');\r\n  }\r\n});\r\n\r\n\/\/Start listening to requests\r\nserver.listen(7000, \"localhost\");<\/pre>\n<p>To start the server, just run :<\/p>\n<pre class=\"lang:default decode:true\">node server.js<\/pre>\n<h3>Test it out<\/h3>\n<p>Save 
an image containing human faces as <i>image.jpg<\/i>. Then, using curl from the command line, send the image via a <b>PUT<\/b> request to the node server like this :<\/p>\n<pre class=\"lang:default decode:true\">curl -i -X PUT http:\/\/localhost:7000\/ -H \"Content-Type: application\/octet-stream\" --data-binary \"@image.jpg\"<\/pre>\n<p>Depending on the image you send, you should see something like this :<\/p>\n<pre class=\"lang:default decode:true\" title=\"Response sample\">HTTP\/1.1 200 OK\r\nContent-Type: application\/json\r\nDate: Wed, 17 Feb 2016 07:19:44 GMT\r\nConnection: keep-alive\r\nTransfer-Encoding: chunked\r\n\r\n{ \"faces\" : [ { \"x\" : 39, \"y\" : 91, \"width\" : 240, \"height\" : 240 }] }<\/pre>\n<h3>Conclusion<\/h3>\n<p>Sometimes Node.js libraries might not meet your application needs, or they might not fit your machine, resulting in errors during <i>npm install<\/i>. When that happens, you can write your own custom native Node.js add-on to address those needs, and hopefully this article showed you that it\u2019s possible.<\/p>\n<p>As an exercise, you can try changing this application to return an image with rectangles surrounding the detected faces. 
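To draw those rectangles, a client first needs the face coordinates, which just means parsing the JSON response. A minimal sketch in Node, using a copy of the sample response shown above:

```javascript
// Parsing the face-detection response; the string below copies the
// sample response from the curl test earlier in this article.
var responseBody = '{ "faces" : [ { "x" : 39, "y" : 91, "width" : 240, "height" : 240 } ] }';
var result = JSON.parse(responseBody);

result.faces.forEach(function (face) {
  // Each entry is the bounding rectangle of one detected face.
  console.log('face at (' + face.x + ', ' + face.y + '), ' +
              face.width + 'x' + face.height);
});
// prints: face at (39, 91), 240x240
```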
If you\u2019re having trouble returning a new buffer from inside the native add-on, try returning the image as<i> Data URI<\/i> string.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction As a follow-up to my previous article on how to use your webcam for face detection with OpenCV, I\u2019d [&hellip;]<\/p>\n","protected":false},"author":28,"featured_media":1917,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[77],"tags":[136,180],"yst_prominent_words":[361,700,729,798,990,1013,1121,1192,1888],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/1912"}],"collection":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/users\/28"}],"replies":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/comments?post=1912"}],"version-history":[{"count":4,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/1912\/revisions"}],"predecessor-version":[{"id":133331,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/1912\/revisions\/133331"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media\/1917"}],"wp:attachment":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media?parent=1912"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/categories?post=1912"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/tags?post=1912"},{"taxonomy":"yst_prominent_words","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/yst_prominent_words?post=1912"}],"curies":[{"name":"wp","href":
"https:\/\/api.w.org\/{rel}","templated":true}]}}