How does the Google ML Kit figure out someone is smiling? - face-detection

I was looking at the official explanation of the plugin. However, it lacks explanation on how it recognizes one is smiling. I am guessing it is using the contour detection to figure this out. If so, can a custom parameter be calculated based on the focused attributes? For example, a laughing parameter.

Your guess is likely wrong: you can still use ML Kit to determine whether someone is smiling without enabling contour detection mode, so smile detection comes from the classification feature rather than from contours. ML Kit does not currently support calculating a custom parameter (such as a laughing score) based on specific attributes.
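For reference, here is a minimal sketch of the classification-based route, assuming the current com.google.mlkit.vision.face Android API. Contour mode is deliberately left off, and the "laughing" check at the end is a purely hypothetical heuristic layered on top of the smiling probability, not a parameter ML Kit itself provides.

```java
import android.graphics.Bitmap;

import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.face.Face;
import com.google.mlkit.vision.face.FaceDetection;
import com.google.mlkit.vision.face.FaceDetector;
import com.google.mlkit.vision.face.FaceDetectorOptions;

public class SmileCheck {

    // Detects faces in a bitmap and logs each face's smiling probability.
    // Classification mode is enabled; contour detection is not needed for this.
    static void detectSmiles(Bitmap bitmap) {
        FaceDetectorOptions options = new FaceDetectorOptions.Builder()
                .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL)
                .build();

        FaceDetector detector = FaceDetection.getClient(options);
        InputImage image = InputImage.fromBitmap(bitmap, /* rotationDegrees= */ 0);

        detector.process(image)
                .addOnSuccessListener(faces -> {
                    for (Face face : faces) {
                        Float smileProb = face.getSmilingProbability();
                        if (smileProb != null) {
                            // Hypothetical "laughing" heuristic computed on top of what
                            // ML Kit exposes, not something the API provides itself.
                            boolean probablyLaughing = smileProb > 0.9f;
                            System.out.println("smile=" + smileProb
                                    + ", laughing? " + probablyLaughing);
                        }
                    }
                })
                .addOnFailureListener(e -> e.printStackTrace());
    }
}
```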

Related

Is it possible to get the location of classified content in Watson Visual Recognition?

I'm testing IBM's Watson Visual Recognition using Node-RED. I've trained it to identify some elements in the image, but I wonder if it's possible to get the exact position of these elements.
At this time it is not possible to get a location for anything other than a face, which you can do using the detect_faces endpoint. However, we recognize this is a valuable feature, so please "stay tuned".
Update: This is now possible with the v4 "Custom Object Detection" API:
https://cloud.ibm.com/docs/visual-recognition?topic=visual-recognition-getting-started-tutorial
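As a rough illustration of that v4 route, the sketch below posts a request to the /v4/analyze endpoint with the objects feature enabled; the service URL, API key, collection ID, image URL and version date are placeholders, and the exact form fields should be checked against the linked documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class WatsonV4AnalyzeSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder values: substitute your own service URL, API key and collection id.
        String serviceUrl = "https://api.us-south.visual-recognition.watson.cloud.ibm.com";
        String apiKey = "YOUR_API_KEY";
        String collectionId = "YOUR_COLLECTION_ID";

        // Minimal multipart body: the collection to analyze against, the "objects"
        // feature (custom object detection), and an image referenced by URL.
        String boundary = "----watson-v4-sketch";
        String body = "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"collection_ids\"\r\n\r\n"
                + collectionId + "\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"features\"\r\n\r\n"
                + "objects\r\n"
                + "--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"image_url\"\r\n\r\n"
                + "https://example.com/sample.jpg\r\n"
                + "--" + boundary + "--\r\n";

        String auth = Base64.getEncoder()
                .encodeToString(("apikey:" + apiKey).getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(serviceUrl + "/v4/analyze?version=2019-02-11"))
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Each detected object in the JSON response carries a "location"
        // object with left, top, width and height values.
        System.out.println(response.body());
    }
}
```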

Add badge for Android with Codename One

We have tried several times to get badges working on Android with Codename One, but I found some comments on Stack Overflow saying this only works on iOS and not on Android. If that is true, when will it become available on Android?
Thanks in advance
When we created the badge code, Android didn't support that UI paradigm. Arguably it still doesn't, but some vendors include that functionality with a special API.
Currently the main thing holding this back is that no one asked for it or implemented it. First of all I suggest filing an RFE which will give you a way to track the schedule for adding this. You can add an RFE on http://github.com/codenameone/CodenameOne/issues/
You can also just implement this yourself and submit a pull request as explained in this post: https://www.codenameone.com/blog/how-to-use-the-codename-one-sources.html
This should be relatively easy, although you will need to be careful to use the right API level options and might need to use reflection to avoid SDK dependencies, which we don't want.
Other than that, it's just a matter of how much interest there is in implementing it. If you have an enterprise account, make sure to let us know through the account that you are interested in this feature. We also take pro account requests more seriously when assigning features.
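To make the pull-request route a bit more concrete, here is a rough sketch of the platform-level call such an implementation would likely wrap in the Android port. This is plain Android API 26+ code rather than anything Codename One exposes, and the channel id, text and icon are placeholders.

```java
import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;
import android.os.Build;

public class BadgeSketch {

    // Posts a notification whose launcher badge carries the given count.
    // Badges only became a platform feature in API 26 (Android 8.0);
    // older devices need vendor-specific APIs instead.
    static void showBadge(Context context, int count) {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.O) {
            return; // no standard badge support below API 26
        }

        NotificationManager nm =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);

        NotificationChannel channel =
                new NotificationChannel("badge", "Badge", NotificationManager.IMPORTANCE_DEFAULT);
        channel.setShowBadge(true); // let the launcher show a dot/number for this channel
        nm.createNotificationChannel(channel);

        Notification notification = new Notification.Builder(context, "badge")
                .setContentTitle("You have new items")
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setNumber(count) // the number some launchers render on the badge
                .build();

        nm.notify(1, notification);
    }
}
```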

Google Earth API in Silverlight Application

I have been handed over a Silverlight 4 application that uses the Google Earth API. We have an issue with newer versions of Google Earth: In Internet Explorer, the map displays as a white background with the text "ATL 10.00". In other browsers, the background is just white (cannot see any text).
It works with Google Earth version 6.0.3.2197 but not in any version after that.
I have read this thread, but none of the suggestions there worked. I must note, though, that the JavaScript code for initializing GE in Silverlight is rather complex; as far as I can see, the initialization of GE is done in the google.setOnLoadCallback function.
It would be nice if someone knew what exactly the "ATL 10.00" message means.
Any help would be greatly appreciated!
EDIT
Please let me know if I should clarify in further detail.
UPDATE:
The problem was caused by 2 things and probably a combination of the two:
1. The container for the map was added dynamically with JavaScript into another div
2. The container's width and height were set to 0 in order to hide the map.
So, the solution for me was to render the containing div together with the rest of the DOM. In order to "hide" the map, I positioned it absolutely beyond the bounds of the screen.
Hope this can guide others to solve similar problems.
ATL is referring to the Active Template Library in Windows. ATL in Windows is a set of template-based C++ classes that let developers create COM objects (rather like MFC and ActiveX).
10.00 here simply refers to the version of ATL being used. Seeing it probably means that the COM object (GEPlugin in this case) has not been created or initialised properly in the browser. The blank screen with the version number in the centre is what the plugin looks like before it loads content.
So, it is not really an error message at all - indeed one could say it is really the failure of an error message to appear that you are seeing.
Anyhow, to answer your question in simple terms, it means that the version of ATL used to create the plugin was version 10.00.
In practical terms it means that the plugin failed to initialise properly for some reason.

How are you integrating help into your WPF application? Any recommendations?

The question says it all really. If you are writing a WPF application, how are you integrating the application help? What is the state of play in mid-2013?
It seems that there is no clear answer to this from an afternoon with a search engine, but there are several options:
Write your own fancy tooltip based help (but where are you getting your data from?)
Use .CHM files and the Windows Forms help system (seems archaic to me).
Use Microsoft Help Viewer 1.X or Microsoft Help 2.0.
There is some confusion as to which is more recent / approved of by MS. It appears Help Viewer 1.X might be the recommended option over Microsoft Help 2.0. It doesn't help that the names are so similar...
What is the status of 2.0? Should we use it? Was it ever fully deployed?
Use a third-party product to author your help files and link to them somehow - DocToHelp/NetHelp, NetAdvantage on-line help, etc...
Furthermore, what XAML based mark-up / attributes are you using to provide the necessary context? What is the recommended method?
It seems surprising there is no clear path for supporting application based help in WPF.
My current preference is to use a third-party help authoring system to generate HTML-based help.
We then use a WebBrowser to display this help as needed. The authoring system we use makes it fairly easy to extract out a single page from the main help (each "topic" is a single HTML file, and can be included with full contents or not as desired).
Granted, this definitely felt like a bit of a nasty hack at first - but once we wrote the basic plumbing (some attached properties for XAML to specify attributes for context location and add behavior to trigger help, etc.), it's fairly clean.
One very nice advantage to this approach, however, is a single help system build works perfectly in all contexts - we can include the documentation online, expose it locally for use in a browser, and use it with context from within our application directly.

How to get a picture from a Kinect with LabVIEW

My class and I are trying to make a robot, and we need to be able to take a picture using a Kinect in LabVIEW. We know how to get a skeleton from the Kinect, but we can't figure out how to take a picture/video. Is there a DLL we need to download? We can't find anything on the internet, but we know it's possible to do in LabVIEW. So do we need to write any code in C? I'm pretty good with C, so don't restrict your answer to just LabVIEW. All relevant answers are welcome.
We can't find anything on the internet, but we know it's possible to do in LabVIEW.
Personally, I have no experience with this, but a quick "LabVIEW Kinect" search on Google turns up any number of results, including some which show video (example). Based on the videos and images I see there, it looks like they're using the IMAQ control to show the video, so I'm assuming you're going to need the vision toolkit for that. If you have an academic license, you might already have that, but even if not, you could try asking your local NI office for one, as they generally tend to support schools.
I would go on ni.com and search for LabVIEW Kinect. There are links over there like this:
Kinect LabVIEW Interface Using Microsoft Kinect API