Can the Google Vision API detect the outline of a face in an image? - artificial-intelligence

I want to draw lines around a face (including the forehead) and cut that face out of the image. Can I use the Google Vision API to achieve this? I have tested the Google Vision API for face detection on some images, and it only returns the bounding poly (the rectangular area) around the face, the landmarks, and the facial expression. It does not return the coordinates of an outline around the face. How can I do that with the Vision API? If the Vision API cannot do it, then what library should I use?

The Vision API service offers a Face Detection feature that can detect multiple faces within an image along with the associated key facial attributes. Based on this, the Vision API feature that best fits your requirement is the fdBoundingPoly field. As mentioned in the official documentation:
The fdBoundingPoly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image.
I recommend checking the FACE_DETECTION response example, which you can use as a reference to learn more about this functionality.
In case this feature doesn't cover your current needs, you can use the Send Feedback button, located at the lower left and upper right corners of the service's public documentation, or file a Vision API feature request in the Issue Tracker tool to let Google know about the desired functionality.
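As a concrete illustration, here is a minimal sketch, assuming the google-cloud-vision Python client (v2+), Pillow, and a hypothetical local file face.jpg, that reads fdBoundingPoly for each detected face and cuts that region out of the image. Keep in mind that fdBoundingPoly is still a bounding polygon around the skin area (typically four vertices), not a per-pixel face contour.

```python
# Sketch: detect faces and cut out the fdBoundingPoly region of each one.
# Assumes: google-cloud-vision >= 2.0, Pillow, and a local file "face.jpg".
import io

from google.cloud import vision
from PIL import Image, ImageDraw

client = vision.ImageAnnotatorClient()

with open("face.jpg", "rb") as f:
    content = f.read()

response = client.face_detection(image=vision.Image(content=content))

original = Image.open(io.BytesIO(content)).convert("RGBA")
for i, face in enumerate(response.face_annotations):
    # fd_bounding_poly is the tighter, skin-only polygon mentioned above.
    polygon = [(v.x, v.y) for v in face.fd_bounding_poly.vertices]

    # Build a mask from the polygon and paste only that region onto a
    # transparent canvas, effectively cutting the face out of the image.
    mask = Image.new("L", original.size, 0)
    ImageDraw.Draw(mask).polygon(polygon, fill=255)
    cutout = Image.new("RGBA", original.size, (0, 0, 0, 0))
    cutout.paste(original, mask=mask)
    cutout.save(f"face_{i}.png")
```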

Related

What tools should I use to build a data visualization dashboard

I need help and guidance on something. I have developed a web form that requires users to submit information about their plantings: the crop planted, location, date of planting, planting technique, level of experience, etc. I am going to use this information to develop planting calendars that answer questions like "When should I plant this crop?" The information will be displayed as interactive charts, graphs, plots, maps, and a dashboard for filtering the data. For instance, if a user filters by crop planted and location in the dashboard, I should get charts/graphs/plots of the planting date, planting technique, and experience. I should also be able to select a crop and a specific year and get a line plot showing its extent throughout that year.

I was thinking of making it a combined web map and chart, but I am not sure of the best open-source tools to make this work. My idea is to connect the maps, charts, and graphs so that whatever I filter in the dashboard, say, selecting California as the location and a date range, the map zooms to that location and plots graduated symbols of the crops, while charts and graphs of the crops are drawn outside the map in a section of their own.
If anyone has an idea of the best tools I can use to make this work, kindly guide me.
Grafana could probably satisfy your needs. Besides its core data analytics functionality with support for building all kinds of tables/charts/graphs, it also has plugins for handling geographical data on maps.
That's an interesting use case. There are dozens of data visualization tools (DVTs) available, but I would recommend the following options:
Tableau
Grafana
Google Analytics
Take a look at some differences between commonly used DVTs.
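If you would rather build the dashboard yourself in Python instead of adopting one of the packaged DVTs above, here is a minimal sketch of the filter-driven map + chart interaction described in the question. It assumes Plotly Dash (2.x), plotly, and pandas, plus a hypothetical CSV export of the form submissions with columns crop, state, lat, lon, and technique; it illustrates the wiring only, not a recommendation over the tools listed.

```python
# Sketch: a Dash app where dropdown filters drive both a map and a chart.
# Assumptions: Dash 2.x, plotly, pandas, and a hypothetical "plantings.csv".
import pandas as pd
import plotly.express as px
from dash import Dash, Input, Output, dcc, html

df = pd.read_csv("plantings.csv")  # hypothetical export of the web-form data

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(sorted(df["crop"].unique()), id="crop", placeholder="Crop"),
    dcc.Dropdown(sorted(df["state"].unique()), id="state", placeholder="Location"),
    dcc.Graph(id="map"),
    dcc.Graph(id="technique-chart"),
])

@app.callback(
    Output("map", "figure"),
    Output("technique-chart", "figure"),
    Input("crop", "value"),
    Input("state", "value"),
)
def update(crop, state):
    # Apply whichever filters are set; the map and chart both use the result.
    sel = df
    if crop:
        sel = sel[sel["crop"] == crop]
    if state:
        sel = sel[sel["state"] == state]
    fig_map = px.scatter_mapbox(sel, lat="lat", lon="lon", zoom=5,
                                mapbox_style="open-street-map")
    fig_chart = px.histogram(sel, x="technique")
    return fig_map, fig_chart

if __name__ == "__main__":
    app.run(debug=True)
```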

Is it possible to create your own animoji within iOS 11

I've seen ARKit and I've seen the demo for Animoji in the keynote, but I'm wondering if there is a way to create your own Animoji (that will work within Messages) within Xcode.
You can use ARKit to provide facial movement data to animate your own 3D models. In conjunction with an iMessage app, you should be able to export videos of animated characters similar to animojis.
Take a look at ARBlendShapeLocation (documentation) which provides high-level facial feature detection. You could track these features and use them to animate your models.
I'd also recommend watching the recent Apple developer video called "Face Tracking with ARKit" (link), which gives a good overview of the APIs available.
When you're ready to jump right in, start with this face-tracking sample code (link) from Apple. (thanks #rickster)
Note that these features are only available on the new iPhone X.

Using the Google Maps API within Silverlight

I am developing a Silverlight application which uses a Bing Maps interface. The client has now changed their requirements and would like to use their existing Google Maps licences rather than pay for both Google and Bing (it's a private application and hence does not come under the free licences). Does anyone know if this is possible to do?
Cheers
Cap
Is it possible to do? Technically, very easy. But doing so in a legal way, pretty hard.
When you say you were "developing a Silverlight application using a Bing Maps interface", do you mean that you were using the Bing Maps Silverlight control provided by Microsoft? (http://www.microsoft.com/maps/isdk/silverlight/)
If so, unfortunately, you can't simply switch out the Bing tiles and use Google Map tiles instead - to do so would be a breach of the Bing Terms of Service (Section 2i. "You may not... integrate the Bing Maps Platform or any of its content with any other mapping platform; " - http://www.microsoft.com/maps/product/terms.html).
If you've coded your own Silverlight map control, then the terms above don't apply and it shouldn't be too hard to point at a Google Maps tile source rather than the Bing Maps tiles - they use exactly the same Spherical Mercator projection and tiling system, with only a few differences in the way that tiles are referenced that can easily be converted between the two systems. The problem I see here is that the Google terms of use state that you "may not... access or use the Products or any Content through any technology or means other than those provided in the Products" (http://www.google.com/help/terms_maps.html), and Google Maps don't provide a supported means of direct tile access.
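To illustrate the tile-referencing difference only (not as a way to fetch Google tiles directly, which, as noted above, their terms don't allow): Bing addresses tiles by quadkey, while Google/OSM-style systems use zoom/x/y, and the conversion is a few lines of bit manipulation. A minimal sketch in plain Python, with no map SDK assumed:

```python
def quadkey_to_tile(quadkey: str):
    """Convert a Bing Maps quadkey (e.g. "0231") to zoom/x/y tile coordinates."""
    x = y = 0
    zoom = len(quadkey)
    for i, digit in enumerate(quadkey):
        mask = 1 << (zoom - i - 1)
        d = int(digit)
        if d & 1:  # digits 1 and 3 set the x bit at this level
            x |= mask
        if d & 2:  # digits 2 and 3 set the y bit at this level
            y |= mask
    return zoom, x, y


def tile_to_quadkey(zoom: int, x: int, y: int) -> str:
    """Convert zoom/x/y tile coordinates back to a Bing Maps quadkey."""
    digits = []
    for i in range(zoom, 0, -1):
        mask = 1 << (i - 1)
        digits.append(str((1 if x & mask else 0) + (2 if y & mask else 0)))
    return "".join(digits)
```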
So, to comply with Google's ToS, you're going to have to access the Google Javascript Map control from your Silverlight application (either via the SL webbrowser control, or by overlaying an iframe on top of the SL application). Note that, by doing so, you've basically lost any advantage of having coded your application in Silverlight - you may as well have written the whole thing in HTML/Javascript....

The correct choice of tools for a new Deep Zoom application

I want to create a new application. It will basically be a Deep Zoom application that users can draw annotations on (that will save to a DB so other users can see those annotations.) At first it will just simply run in a browser. However, the app would be useful if it could be used by enthusiasts in the field, so ability to run on smartphones or other handheld devices would be massively beneficial. 3G/4G signal is likely to be practically non existent in those places, so having the ability to download all the images and info for an "area" would be good.
I can't decide on which technology to use. Silverlight Deep Zoom apps look really nice in browsers, but I have heard that it is not a widely supported technology, that MS might be ditching it anyway, and that the only smartphones capable of running Silverlight would be Windows Phones, a very small share of the smartphone market. Flash will probably never run on iPhones/Apple products in general. So should I use HTML5? HTML5 all seems a little confusing to me at the moment; would it even be possible to make an HTML5 Deep Zoom application that users could annotate?
Any thoughts and advice would be really handy, thanks for reading.
I wrote a Deep Zoom app that supported annotation for a proof of concept a couple of years ago.
I used Django for this; however, it is not the approach I would recommend. If I were doing the same job again, I would use CanvasZoom, which is based on HTML5. CanvasZoom can be embedded into a webpage through JavaScript. There is a guide on how to do this here:
a link
Unfortunately, you need to run Microsoft Deep Zoom Composer on the original image first in order to generate the deep zoom data that CanvasZoom will use. If you want your app to run in a browser, it is likely that you will have to go for the following approach (a sketch of the tiling step follows the list):
User selects an image.
Image gets uploaded to the server.
Server creates the deep zoom information (tile pyramid).
Use a PHP-based approach so you have a CanvasZoom page for the image.
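As a sketch of the "server creates the deep zoom information" step, the DZI pyramid can also be generated without Deep Zoom Composer. The following is a simplified illustration assuming Pillow, writing JPEG tiles with no overlap into a hypothetical {name}_files/{level}/{col}_{row}.jpg layout plus a minimal .dzi descriptor; production tools such as Deep Zoom Composer or libvips handle tile overlap and edge cases more carefully.

```python
# Sketch: generate a simplified Deep Zoom (DZI) pyramid with Pillow.
# Simplifications: no tile overlap, JPEG output only.
import math
import os

from PIL import Image

TILE_SIZE = 256  # Deep Zoom tools often use 254 plus 1px overlap; kept simple here


def make_dzi(source_path: str, out_dir: str) -> None:
    image = Image.open(source_path).convert("RGB")
    width, height = image.size
    max_level = int(math.ceil(math.log2(max(width, height))))

    name = os.path.splitext(os.path.basename(source_path))[0]
    files_dir = os.path.join(out_dir, f"{name}_files")

    level_image = image
    for level in range(max_level, -1, -1):
        level_dir = os.path.join(files_dir, str(level))
        os.makedirs(level_dir, exist_ok=True)

        # Cut the current level into fixed-size tiles.
        w, h = level_image.size
        cols = math.ceil(w / TILE_SIZE)
        rows = math.ceil(h / TILE_SIZE)
        for col in range(cols):
            for row in range(rows):
                box = (col * TILE_SIZE, row * TILE_SIZE,
                       min((col + 1) * TILE_SIZE, w),
                       min((row + 1) * TILE_SIZE, h))
                level_image.crop(box).save(
                    os.path.join(level_dir, f"{col}_{row}.jpg"))

        # Halve the image for the next (coarser) level, as the DZI format expects.
        if level > 0:
            level_image = level_image.resize(
                (max(1, math.ceil(w / 2)), max(1, math.ceil(h / 2))),
                Image.LANCZOS)

    # Minimal .dzi descriptor that deep zoom viewers read.
    with open(os.path.join(out_dir, f"{name}.dzi"), "w") as f:
        f.write(
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            f'<Image TileSize="{TILE_SIZE}" Overlap="0" Format="jpg"\n'
            '       xmlns="http://schemas.microsoft.com/deepzoom/2008">\n'
            f'  <Size Width="{width}" Height="{height}"/>\n'
            '</Image>\n')
```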
The annotations will probably complicate matters; I did this with JavaScript when I attempted it. The trick is to work out when the image has been zoomed (with CanvasZoom there are preset zoom levels) and redraw the annotation regions. I found this approach non-trivial but not overly complicated.
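For the redraw itself, the core is just re-projecting annotation geometry stored in full-resolution image coordinates into the current viewport whenever the zoom level or pan changes. A small sketch of that mapping (written in Python for brevity; the Viewport, scale, and offset names are illustrative, not part of the CanvasZoom API):

```python
# Sketch: map annotations stored in full-resolution image coordinates into
# screen coordinates for the current zoom level and pan offset.
from dataclasses import dataclass


@dataclass
class Viewport:
    scale: float      # e.g. 0.25 means the image is shown at quarter size
    offset_x: float   # screen-space x of the image's top-left corner
    offset_y: float   # screen-space y of the image's top-left corner


def annotation_to_screen(points, view: Viewport):
    """Convert a list of (x, y) image-space points to screen-space points."""
    return [(view.offset_x + x * view.scale,
             view.offset_y + y * view.scale) for x, y in points]

# On every zoom/pan event: recompute the screen-space polygon for each
# annotation and redraw it on the layer above the tiles.
```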
CanvasZoom is MIT-licensed, so you can do what you like with it.
Good luck with your project.

Manipulating GIS content on the web using WebGL

I have a task to create a program for manipulating 3D content on the web. By 3D content I mean a 3D map (which I have; it is something like a *.sdm file) that I should load into the browser and perform some basic operations on (rotate the view, change the camera, etc.).
Because I am a total n00b, I want to ask a couple of questions:
1. How do I load the map into the browser? Note that my map has an .sdm extension. Is this possible?
2. What should I use to render the 3D content? I am thinking of the GLGE framework for WebGL, if that is possible, of course.
What would be the most painless and most effective way to do this? Maybe I was totally wrong to choose WebGL?
Programs that use WebGL aren't mature enough to do what you want. Within the next few years, when GIS applications start popping up, it may be possible, but not now.
Also, keep in mind that WebGL is what gives you access to a low-level graphics library. It does not directly have anything to do with GIS data.
You may want to take a look at OpenLayers (2d, javascript based) or WorldWind-Java (3d, jogl/java based). Both of these programs can display map information in a browser.
http://openlayers.org/
http://worldwind.arc.nasa.gov/java/
