{"id":2823,"date":"2017-08-07T07:28:55","date_gmt":"2017-08-07T07:28:55","guid":{"rendered":"https:\/\/intelligentbee.com\/blog\/?p=2823"},"modified":"2024-06-30T09:23:09","modified_gmt":"2024-06-30T09:23:09","slug":"face-detection-apples-ios-11-vision-framework","status":"publish","type":"post","link":"https:\/\/intelligentbee.com\/blog\/face-detection-apples-ios-11-vision-framework\/","title":{"rendered":"Face Detection with Apple\u2019s iOS 11 Vision Framework"},"content":{"rendered":"<p class=\"graf graf--p\">Great stuff is coming from Apple this autumn! Among the many new APIs is the Vision framework, which helps with the detection of faces, face features, object tracking, and more.<\/p>\n<p class=\"graf graf--p\">In this post we will take a look at how to put face detection to work. We will build a simple application that can take a photo (using the camera or from the library) and will draw some lines on the faces it detects to show you the power of Vision.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"Select_an_Image\"><\/span>Select an\u00a0Image<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p class=\"graf graf--p\"><em class=\"markup--em markup--p-em\">I will go through this quickly, so if you are a complete beginner and find it too hard to follow, please check my previous iOS post, <\/em><em class=\"markup--em markup--p-em\">Building a Travel Photo Sharing iOS App<\/em><em class=\"markup--em markup--p-em\">, first, as it has the same photo selection functionality explained in greater detail.<\/em><\/p>\n<p class=\"graf graf--p\">You will need Xcode 9 beta and a device running iOS 11 beta to test this. 
Let\u2019s start by creating a new <strong class=\"markup--strong markup--p-strong\">Single View App<\/strong> project named <strong class=\"markup--strong markup--p-strong\">FaceVision<\/strong>:<\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-2824 size-large\" src=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.19.16-1024x739.png\" alt=\"\" width=\"525\" height=\"379\" srcset=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.19.16-1024x739.png 1024w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.19.16-300x216.png 300w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.19.16-768x554.png 768w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.19.16.png 1458w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<p class=\"graf graf--p\">Open the <code class=\"markup--code markup--p-code\">Main.storyboard<\/code> and drag a <code class=\"markup--code markup--p-code\">Take Photo<\/code> button to the center of it. 
Use the constraints to make it stay there\u00a0\ud83d\ude42 Create a <code class=\"markup--code markup--p-code\">takePhoto<\/code> action for it:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">@IBAction func takePhoto(_ sender: UIButton) {\r\n    let picker = UIImagePickerController()\r\n    picker.delegate = self\r\n    let alert = UIAlertController(title: nil, message: nil, preferredStyle: .actionSheet)\r\n    if UIImagePickerController.isSourceTypeAvailable(.camera) {\r\n        alert.addAction(UIAlertAction(title: \"Camera\", style: .default, handler: {action in\r\n            picker.sourceType = .camera\r\n            self.present(picker, animated: true, completion: nil)\r\n        }))\r\n    }\r\n    alert.addAction(UIAlertAction(title: \"Photo Library\", style: .default, handler: { action in\r\n        picker.sourceType = .photoLibrary\r\n        \/\/ on iPad we are required to present this as a popover\r\n        if UIDevice.current.userInterfaceIdiom == .pad {\r\n            picker.modalPresentationStyle = .popover\r\n            picker.popoverPresentationController?.sourceView = self.view\r\n            picker.popoverPresentationController?.sourceRect = self.takePhotoButton.frame\r\n        }\r\n        self.present(picker, animated: true, completion: nil)\r\n    }))\r\n    alert.addAction(UIAlertAction(title: \"Cancel\", style: .cancel, handler: nil))\r\n    \/\/ on iPad this is a popover\r\n    alert.popoverPresentationController?.sourceView = self.view\r\n    alert.popoverPresentationController?.sourceRect = takePhotoButton.frame\r\n    self.present(alert, animated: true, completion: nil)\r\n}<\/pre>\n<p class=\"graf graf--p\">Here we used a <code class=\"markup--code markup--p-code\">UIImagePickerController<\/code> to get an image, so we have to make our <code class=\"markup--code markup--p-code\">ViewController<\/code> implement the <code class=\"markup--code markup--p-code\">UIImagePickerControllerDelegate<\/code> and <code 
class=\"markup--code markup--p-code\">UINavigationControllerDelegate<\/code> protocols:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {<\/pre>\n<p class=\"graf graf--p\">We also need an outlet for the button:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">@IBOutlet weak var takePhotoButton: UIButton!<\/pre>\n<p class=\"graf graf--p\">And an <code class=\"markup--code markup--p-code\">image<\/code> var:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">var image: UIImage!<\/pre>\n<p class=\"graf graf--p\">We also need to add the following in the Info.plist to be able to access the camera and the photo library:<\/p>\n<ul class=\"postList\">\n<li class=\"graf graf--li\"><code class=\"markup--code markup--li-code\">Privacy - Camera Usage Description<\/code>: <em class=\"markup--em markup--li-em\">Access to the camera is needed in order to be able to take a photo to be analyzed by the app<\/em><\/li>\n<li class=\"graf graf--li\"><code class=\"markup--code markup--li-code\">Privacy - Photo Library Usage Description<\/code>: <em class=\"markup--em markup--li-em\">Access to the photo library is needed in order to be able to choose a photo to be analyzed by the app<\/em><\/li>\n<\/ul>\n<p class=\"graf graf--p\">After the user chooses an image we will use another view controller to show it and to let the user start the processing or go back to the first screen. Add a new <strong class=\"markup--strong markup--p-strong\">View Controller<\/strong> in the <code class=\"markup--code markup--p-code\">Main.storyboard<\/code>. 
In it, add an <strong class=\"markup--strong markup--p-strong\">Image View<\/strong> with an <code class=\"markup--code markup--p-code\">Aspect Fit<\/code> <strong class=\"markup--strong markup--p-strong\">Content Mode<\/strong> and two <strong class=\"markup--strong markup--p-strong\">buttons<\/strong> like in the image below (don\u2019t forget to use the necessary constraints):<\/p>\n<p><img decoding=\"async\" class=\"size-large wp-image-2827 aligncenter\" src=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.34.59-1024x598.png\" alt=\"\" width=\"525\" height=\"307\" srcset=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.34.59-1024x598.png 1024w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.34.59-300x175.png 300w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.34.59-768x449.png 768w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-13.34.59.png 2784w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<p class=\"graf graf--p\">Now, create a new <strong>UIViewController<\/strong> class named <code class=\"markup--code markup--p-code\">ImageViewController.swift<\/code> and set it to be the class of the new View Controller you just added in the <code class=\"markup--code markup--p-code\">Main.storyboard<\/code>:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">import UIKit\r\n\r\nclass ImageViewController: UIViewController {\r\n\r\n    override func viewDidLoad() {\r\n        super.viewDidLoad()\r\n\r\n        \/\/ Do any additional setup after loading the view.\r\n    }\r\n}<\/pre>\n<p class=\"graf graf--p\">Still in the <code class=\"markup--code markup--p-code\">Main.storyboard<\/code>, create a <code class=\"markup--code markup--p-code\">Present Modally<\/code> segue between the two view controllers with the 
<code class=\"markup--code markup--p-code\">showImageSegue<\/code> identifier:<\/p>\n<p><img decoding=\"async\" class=\"size-large wp-image-2829 aligncenter\" src=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-14.45.55-1024x598.png\" alt=\"\" width=\"525\" height=\"307\" srcset=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-14.45.55-1024x598.png 1024w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-14.45.55-300x175.png 300w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-14.45.55-768x449.png 768w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-14.45.55.png 2784w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<p class=\"graf graf--p\">Also add an outlet for the <strong class=\"markup--strong markup--p-strong\">Image View<\/strong> and a new property to hold the image from the user:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">@IBOutlet weak var imageView: UIImageView!\r\n\r\nvar image: UIImage!<\/pre>\n<p class=\"graf graf--p\">Now, back to our initial <code class=\"markup--code markup--p-code\">ViewController<\/code> class, we need to present the new <code class=\"markup--code markup--p-code\">ImageViewController<\/code> and set the selected image:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {\r\n    dismiss(animated: true, completion: nil)\r\n    image = info[UIImagePickerControllerOriginalImage] as! UIImage\r\n    performSegue(withIdentifier: \"showImageSegue\", sender: self)\r\n}\r\n\r\noverride func prepare(for segue: UIStoryboardSegue, sender: Any?) {\r\n    if segue.identifier == \"showImageSegue\" {\r\n        if let imageViewController = segue.destination as? 
ImageViewController {\r\n            imageViewController.image = self.image\r\n        }\r\n    }\r\n}<\/pre>\n<p class=\"graf graf--p\">We also need an exit method to be called when we press the Close button from the Image View Controller:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">@IBAction func exit(unwindSegue: UIStoryboardSegue) {\r\n    image = nil\r\n}<\/pre>\n<p class=\"graf graf--p\">To make this work, head back to the <code class=\"markup--code markup--p-code\">Main.storyboard<\/code> and Ctrl+drag from the <strong class=\"markup--strong markup--p-strong\">Close<\/strong> button to the exit icon of the Image View Controller and select the exit method from the popup.<\/p>\n<p class=\"graf graf--p\">To actually show the selected image to the user we have to set it to the <code class=\"markup--code markup--p-code\">imageView<\/code>:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">override func viewDidLoad() {\r\n    super.viewDidLoad()\r\n\r\n    \/\/ Do any additional setup after loading the view.\r\n    imageView.image = image\r\n}<\/pre>\n<p class=\"graf graf--p\">If you run the app now you should be able to select a photo either from the camera or from the library and it will be presented to you in the second view controller with the <strong class=\"markup--strong markup--p-strong\">Close<\/strong> and <strong class=\"markup--strong markup--p-strong\">Process!<\/strong> buttons below it.<\/p>\n<p><img decoding=\"async\" class=\"size-large wp-image-2830 aligncenter\" src=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-15.18.05-601x1024.png\" alt=\"\" width=\"525\" height=\"895\" srcset=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-15.18.05-601x1024.png 601w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-15.18.05-176x300.png 176w, 
https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-15.18.05-768x1308.png 768w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/Screen-Shot-2017-08-05-at-15.18.05.png 802w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<h2><span class=\"ez-toc-section\" id=\"Detect_Face_Features\"><\/span>Detect Face\u00a0Features<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p class=\"graf graf--p\">It\u2019s time to get to the fun part: detecting the faces and face features in the image.<\/p>\n<p class=\"graf graf--p\">Create a new <code class=\"markup--code markup--p-code\">process<\/code> action for the <strong class=\"markup--strong markup--p-strong\">Process!<\/strong> button with the following content:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">@IBAction func process(_ sender: UIButton) {\r\n    var orientation: Int32 = 0\r\n\r\n    \/\/ detect image orientation, we need it to be accurate for the face detection to work\r\n    switch image.imageOrientation {\r\n    case .up:\r\n        orientation = 1\r\n    case .right:\r\n        orientation = 6\r\n    case .down:\r\n        orientation = 3\r\n    case .left:\r\n        orientation = 8\r\n    default:\r\n        orientation = 1\r\n    }\r\n\r\n    \/\/ vision\r\n    let faceLandmarksRequest = VNDetectFaceLandmarksRequest(completionHandler: self.handleFaceFeatures)\r\n    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!, orientation: orientation, options: [:])\r\n    do {\r\n        try requestHandler.perform([faceLandmarksRequest])\r\n    } catch {\r\n        print(error)\r\n    }\r\n}<\/pre>\n<p class=\"graf graf--p\">After translating the image orientation from UIImageOrientation values to kCGImagePropertyOrientation values (not sure why Apple didn\u2019t make them the same), the code will start the detection process from the Vision framework. 
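<\/p>\n<p class=\"graf graf--p\">One thing to keep in mind: <code class=\"markup--code markup--p-code\">perform(_:)<\/code> runs synchronously, so for large images you may prefer to run it off the main thread. Here is a minimal sketch of that variant (the queue choice is just a suggestion of mine, not part of the original project; it reuses the <code class=\"markup--code markup--p-code\">orientation<\/code> value computed in the switch above):<\/p>

```swift
// Hypothetical background variant: Vision work on a background queue,
// UI updates hopped back onto the main queue.
DispatchQueue.global(qos: .userInitiated).async {
    let request = VNDetectFaceLandmarksRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        DispatchQueue.main.async {
            for face in observations {
                self.addFaceLandmarksToImage(face)
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: self.image.cgImage!, orientation: orientation, options: [:])
    try? handler.perform([request])
}
```

<p class=\"graf graf--p\">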
Don\u2019t forget to <code class=\"markup--code markup--p-code\">import Vision<\/code> to have access to its API.<\/p>\n<p class=\"graf graf--p\">We\u2019ll now add the method that will be called when Vision\u2019s processing is done:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">func handleFaceFeatures(request: VNRequest, error: Error?) {\r\n    guard let observations = request.results as? [VNFaceObservation] else {\r\n        fatalError(\"unexpected result type!\")\r\n    }\r\n\r\n    for face in observations {\r\n        addFaceLandmarksToImage(face)\r\n    }\r\n}<\/pre>\n<p class=\"graf graf--p\">This also calls yet another method that does the actual drawing on the image based on the data received from the detect face landmarks request:<\/p>\n<pre class=\"theme:xcode lang:swift decode:true \">func addFaceLandmarksToImage(_ face: VNFaceObservation) {\r\n    UIGraphicsBeginImageContextWithOptions(image.size, true, 0.0)\r\n    let context = UIGraphicsGetCurrentContext()\r\n\r\n    \/\/ draw the image\r\n    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))\r\n\r\n    context?.translateBy(x: 0, y: image.size.height)\r\n    context?.scaleBy(x: 1.0, y: -1.0)\r\n\r\n    \/\/ draw the face rect\r\n    let w = face.boundingBox.size.width * image.size.width\r\n    let h = face.boundingBox.size.height * image.size.height\r\n    let x = face.boundingBox.origin.x * image.size.width\r\n    let y = face.boundingBox.origin.y * image.size.height\r\n    let faceRect = CGRect(x: x, y: y, width: w, height: h)\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.red.cgColor)\r\n    context?.setLineWidth(8.0)\r\n    context?.addRect(faceRect)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ face contour\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.faceContour {\r\n        for i in 
0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ outer lips\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.outerLips {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ inner lips\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.innerLips {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ left eye\r\n    
context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.leftEye {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ right eye\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.rightEye {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ left pupil\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.leftPupil {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    
context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ right pupil\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.rightPupil {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ left eyebrow\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.leftEyebrow {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ right eyebrow\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.rightEyebrow {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n         
       context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ nose\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.nose {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.closePath()\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ nose crest\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.noseCrest {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ median line\r\n    context?.saveGState()\r\n    context?.setStrokeColor(UIColor.yellow.cgColor)\r\n    if let landmark = face.landmarks?.medianLine {\r\n        for i in 0...landmark.pointCount - 1 { \/\/ last point is 0,0\r\n            let point = landmark.point(at: i)\r\n            if i == 0 {\r\n                context?.move(to: CGPoint(x: 
x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            } else {\r\n                context?.addLine(to: CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h))\r\n            }\r\n        }\r\n    }\r\n    context?.setLineWidth(8.0)\r\n    context?.drawPath(using: .stroke)\r\n    context?.restoreGState()\r\n\r\n    \/\/ get the final image\r\n    let finalImage = UIGraphicsGetImageFromCurrentImageContext()\r\n\r\n    \/\/ end drawing context\r\n    UIGraphicsEndImageContext()\r\n\r\n    \/\/ keep the annotated image around so the landmarks accumulate when more than one face is detected\r\n    image = finalImage\r\n    imageView.image = finalImage\r\n}<\/pre>\n<p class=\"graf graf--p\">As you can see, there are quite a lot of features that Vision is able to identify: the face contour, the mouth (both inner and outer lips), the eyes together with the pupils and eyebrows, the nose and the nose crest and, finally, the median line of the face.<\/p>\n<p class=\"graf graf--p\">You can now run the app and take some unusual selfies. Here\u2019s mine:<\/p>\n<p><img decoding=\"async\" class=\"alignnone size-large wp-image-2831\" src=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/IMG_3029-576x1024.png\" alt=\"\" width=\"525\" height=\"933\" srcset=\"https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/IMG_3029-576x1024.png 576w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/IMG_3029-169x300.png 169w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/IMG_3029-768x1365.png 768w, https:\/\/intelligentbee.com\/blog\/wp-content\/uploads\/2017\/08\/IMG_3029.png 1152w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<p class=\"graf graf--p\">I hope you enjoyed this, please let me know in the comments how it went and if there are things that can be improved. 
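<\/p>\n<p class=\"graf graf--p\">One improvement that jumps out: the drawing code repeats the same loop for every landmark region. It could be collapsed into a small helper along these lines (the <code class=\"markup--code markup--p-code\">draw(landmark:)<\/code> helper below is a sketch of mine, not part of the original project; it mirrors the beta-era <code class=\"markup--code markup--p-code\">point(at:)<\/code> API used above):<\/p>

```swift
// Hypothetical helper: strokes one landmark region inside the face bounding box.
// x, y, w and h are the face rect values already computed in addFaceLandmarksToImage.
func draw(landmark: VNFaceLandmarkRegion2D?, in context: CGContext?,
          x: CGFloat, y: CGFloat, w: CGFloat, h: CGFloat, closed: Bool) {
    guard let landmark = landmark else { return }
    context?.saveGState()
    context?.setStrokeColor(UIColor.yellow.cgColor)
    for i in 0..<landmark.pointCount {
        let point = landmark.point(at: i)
        let p = CGPoint(x: x + CGFloat(point.x) * w, y: y + CGFloat(point.y) * h)
        if i == 0 {
            context?.move(to: p)
        } else {
            context?.addLine(to: p)
        }
    }
    if closed {
        context?.closePath()
    }
    context?.setLineWidth(8.0)
    context?.drawPath(using: .stroke)
    context?.restoreGState()
}
```

<p class=\"graf graf--p\">With it, each feature becomes a one-liner, e.g. <code class=\"markup--code markup--p-code\">draw(landmark: face.landmarks?.outerLips, in: context, x: x, y: y, w: w, h: h, closed: true)<\/code>, and every <code class=\"markup--code markup--p-code\">saveGState()<\/code> is balanced by a <code class=\"markup--code markup--p-code\">restoreGState()<\/code>.<\/p>\n<p class=\"graf graf--p\">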
Also, some pictures taken with the app wouldn\u2019t hurt at all\u00a0\ud83d\ude42<\/p>\n<p class=\"graf graf--p\">You can get the code from here: <a class=\"markup--anchor markup--p-anchor\" href=\"https:\/\/github.com\/intelligentbee\/FaceVision\" target=\"_blank\" rel=\"nofollow noopener\" data-href=\"https:\/\/github.com\/intelligentbee\/FaceVision\">https:\/\/github.com\/intelligentbee\/FaceVision<\/a><\/p>\n<p class=\"graf graf--p\">Thanks!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Great stuff is coming from Apple this autumn! Among a lot of new APIs there is the Vision Framework which [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":2833,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[75,79,80],"tags":[134,158,248],"yst_prominent_words":[1124,1133,1132,1131,1130,1129,1128,1127,1126,1125,273,1123,1122,1121,1120,1119,1118,1117,390],"post_mailing_queue_ids":[],"_links":{"self":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/2823"}],"collection":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/comments?post=2823"}],"version-history":[{"count":4,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/2823\/revisions"}],"predecessor-version":[{"id":133071,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/posts\/2823\/revisions\/133071"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media\/2833"}],"wp:attachment":[{"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/media?parent=2823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\
/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/categories?post=2823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/tags?post=2823"},{"taxonomy":"yst_prominent_words","embeddable":true,"href":"https:\/\/intelligentbee.com\/blog\/wp-json\/wp\/v2\/yst_prominent_words?post=2823"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}