VXML: a common handler across all forms for a generalized DTMF input

I have just started doing some VXML, and I am looking to implement a simple feature where a DTMF input such as zero, received on any of the forms, leads to a common generalized action. A good example is the familiar IVR interaction where we press zero at any time to speak to an agent.
1) I thought of using throw/catch for this purpose. Is this a good design, or is there another VXML feature that is better suited to an action like this?
<choice dtmf="0" event="zeroEntered"/>
My root document would have:
<catch event="zeroEntered">
  <!-- Do something -->
</catch>
2) If throw/catch is the way to go, I see it's not possible to get that DTMF zero on all the VXML pages, because <menu> elements are not present on all the VXML pages I have... Is there another way? Or should I explicitly include a menu in all the pages to catch DTMF zero at any point during the call?

What you are referring to is often called a universal or global grammar. You can implement it using the <link> element, and you can set the scope of that grammar at the application, document, or form level. For more on universal grammars, look at this article, and you can find more about the <link> element here.
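For illustration, a minimal sketch of an application root document carrying such a link (the file layout, event name, and prompt wording are my own assumptions, not from the question); every leaf document that names this file in its application attribute gets the behaviour in all of its forms:

<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <!-- Application root: pressing 0 anywhere in the application throws zeroEntered. -->
  <link dtmf="0" event="zeroEntered"/>

  <catch event="zeroEntered">
    <prompt>Transferring you to an agent.</prompt>
    <!-- transfer or submit logic goes here -->
  </catch>
</vxml>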


How can I make a multi-language Rasa chatbot with at least two languages?

I want to make a Rasa-based chatbot with at least two languages, i.e. a multilingual chatbot. Can anybody tell me a possible way of making it?
Nice question. (And this is something I'm working on too.)
The good thing about the embedding intent classifier is that it doesn't make any assumptions about which language it works on. So, in theory, it should work with every language.
There are two approaches you can use to support multi-language intents:
1. Make separate intents per language (e.g. hello_en, hello_xx for hello spoken in two languages).
2. Create a language detector and handle all languages in a custom action.
If you're using the default utter_ actions, method 1 may make more sense because you can just use hello_en as the intent name and utter_hello_en to get back the response.
Method 2 makes more sense when you actually have multiple variables that you want to use to generate your response (and of course, you handle them in custom actions).
All in all, you can implement multi-language chatbots in Rasa!
Edit:
What you want is a custom language detector that finds out which language the user is writing in. You can include the language detector as a custom component at the beginning of your pipeline and make it fill a language slot. Then you can use this slot value as input in your custom actions and respond accordingly.
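A rough sketch of such a component, assuming the Rasa 1.x custom-component API and the langdetect package (both are assumptions on my part, not part of the original answer):

from langdetect import detect
from rasa.nlu.components import Component

class LanguageDetector(Component):
    """Adds a 'language' entity to every parsed message."""

    name = "language_detector"
    provides = ["entities"]

    def process(self, message, **kwargs):
        try:
            language = detect(message.text)   # e.g. "en", "de"
        except Exception:
            language = "en"                   # fall back to a default language
        # Expose it as an entity so a slot named "language" can auto-fill from it.
        entities = message.get("entities", [])
        entities.append({"entity": "language", "value": language,
                         "start": 0, "end": 0, "confidence": 1.0,
                         "extractor": self.name})
        message.set("entities", entities, add_to_output=True)

With a slot called language declared in the domain, it will auto-fill from that entity, and your custom actions can branch on tracker.get_slot("language") when building the response.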

Can I use OWL API to enforce specific subject-predicate-object relationships?

I am working on a project using RDF data and I am thinking about implementing a data cleanup method which will run against an RDF triples dataset and flag triples which do not match a certain pattern, based on a custom ontology.
For example, I would like to enforce that an instance of class http://myontology/A must denote an instance of http://myontology/B using the predicate http://myontology/denotes. Any instance of class A which does not denote an instance of class B should be flagged.
I am wondering if a tool such as the OWLReasoner from OWL-API would have the capability to accomplish something like this, if I designed a custom axiom for the Reasoner. I have reviewed the documentation here: http://owlcs.github.io/owlapi/apidocs_4/org/semanticweb/owlapi/reasoner/OWLReasoner.html
It seems to me that the methods available on the Reasoner might not be suited to the purpose I would like to use them for, but I'm wondering if anyone has experience using the OWL API for this purpose, or knows of another tool which could do the trick.
Generally speaking, OWL reasoning is not well suited to finding information that's missing from the input and flagging it up: for example, if you create a class that asserts that an instance of A has exactly one denotes relation to an instance of B, and you have an instance of A that does not, then under the Open World Assumption the reasoner will just assume the missing statement is unknown, not that you're in violation.
It would be possible to detect incorrect denote uses - if, instead of relating to an instance of B, the relation was to an instance of a class disjoint with B. But this seems a different use case than the one you're after.
You can implement code with the OWL API to do this check, but it likely wouldn't benefit from the ability to reason, and given that you're working at the RDF level I'd think an API like Apache Jena might actually work better for you (you won't need to worry if your input file is not OWL compliant, for example).
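To make that concrete, here is a minimal sketch of the RDF-level check as a single SPARQL query; I'm using rdflib purely as a stand-in, and the identical query can be run through Jena's ARQ (the file name and data are assumptions):

from rdflib import Graph

g = Graph()
g.parse("data.ttl", format="turtle")   # hypothetical input dataset

query = """
PREFIX my: <http://myontology/>
SELECT ?a WHERE {
  ?a a my:A .
  FILTER NOT EXISTS { ?a my:denotes ?b . ?b a my:B . }
}
"""

for row in g.query(query):
    print("Flagged (no my:denotes to an instance of B):", row.a)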

What is the difference between facelets's ui:include and custom tag?

ui:include and an XHTML-based tag (the one with the source element) seem much the same to me. Both allow reusing a piece of markup, but I believe there should be some reason for having each. Could somebody please briefly explain it? (I guess if I read the full Facelets tutorial I would learn it, but I don't have time to do that now, so no links to lengthy docs please.)
They are quite similar. The difference is mainly syntactical.
After observing their usage for some time it seems the convention is that fragments that you use only in a single situation are candidates for ui:include, while fragments that you re-use more often and have a more independent semantic are candidates for a custom tag.
E.g.
A single view might have a form with three sections: personal data, work history, and preferences. If the page becomes unwieldy, you can divide it into smaller parts. Each of the three sections could be moved to its own Facelets file and then be ui:include'd into the original file.
On the other hand, you might have a specific way to display an image on many views in your application. Maybe you draw a line around it, have some text beneath it, etc. Instead of repeating this over and over again, you can abstract it into its own Facelets file. Although you could ui:include it, most people seem to prefer to create a tag here, so you can use e.g. <my:image src="..." /> on your Facelets. This just looks nicer (more compact, more in line with other components).
In the Facelets version that's bundled with JSF 2.0, simple tags can be replaced by composite components. This is yet a third variant that at first glance looks a lot like custom tags, but these are technically different: they aren't merely an include but represent true components, with declared attributes, the ability to attach validators, etc.
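For illustration, a minimal sketch of both variants (the file names, namespace URI, and my: prefix are made-up examples):

<!-- ui:include: pull a one-off fragment into the larger view -->
<ui:include src="workHistory.xhtml"/>

<!-- my.taglib.xml: register a re-usable fragment as a tag -->
<facelet-taglib xmlns="http://java.sun.com/xml/ns/javaee" version="2.0">
    <namespace>http://example.com/my</namespace>
    <tag>
        <tag-name>image</tag-name>
        <source>tags/image.xhtml</source>
    </tag>
</facelet-taglib>

<!-- in any view that declares xmlns:my="http://example.com/my" -->
<my:image src="portrait.png"/>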

How to handle data from an external program on Mac OS X

I would like to make a program (preferably in the C language, but even in Cocoa) that can take data from an external program (such as iTunes or Adium) and use it. For example, I would like to take the data of a listbox or the text of a chat so as to manipulate it. I need a place to start. In Windows I think it is possible with some APIs that find the hWnd of a window and then get a pointer to the listbox or textbox. Please give me some info on how to start. Thank you in advance.
It's not clear exactly what you want to do. It's either impossible or severely restricted.
For one thing, different applications use different ways of constructing a “listbox”—Cocoa applications use NSTableView, Carbon applications use DataBrowser, and GTK, Qt, and Java applications use even more different APIs. These do not all go through some common kind of list box thingy; each is an independent implementation.
(You could hope that either NSTableView or DataBrowser would be based on the other, but don't count on it.)
For another, it is impossible to obtain a pointer to that control. You cannot access another application's NSTableView or DataBrowser view or GTK/Qt/Java equivalent unless (and this only works for NSTableView) that application deliberately serves it up to you. It doesn't sound like that's your situation.
The closest you can get to that is Accessibility, which may be pretty close, but is unlikely to work with most applications not based on Cocoa.
Even then, the view may not be showing you all the data. A table view may be lazily populated, and a table view designed in imitation of the iOS UITableView may even never have all the data (because it only has what it can show).
(All of the above applies to every kind of view, not just table views. Collection views, text fields, buttons—same deal for all of them.)
The only way to get at the true, complete copy of the data is to ask the controller that owns it. And, again, that's impossible if the application is not specifically offering it to you. Not to mention, the application might not even have a controller (not object-oriented, not MVC, or just sloppily made).
… so as to manipulate it.
Getting the data in the first place is the easy part. It is nigh-impossible to mess with data in another application—for good reason.
The closest you're going to get to either of these goals is the Accessibility interfaces.

How to automatically excerpt user-generated content?

I run a website that allows users to write blog posts. I would really like to summarize the written content and use it to fill the <meta name="description" .../> tag, for example.
What methods can I employ to automatically summarize/describe the contents of user generated content?
Are there any (preferably free) methods out there that have solved this problem?
(I've seen other websites just copy the first 100 or so words but this strikes me as a sub-optimal solution.)
Think of the task of summarization as a challenge to 'select the most important sentences' from the document.
The method described in The Automatic Creation of Literature Abstracts by H.P. Luhn (1958) describes a naive method that actually performs quite well. Try giving it a shot.
If your website is in Python, coding this algorithm using NLTK (the Natural Language Toolkit) is a fun task.
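Something along these lines, as a rough sketch (assumes NLTK is installed and the punkt and stopwords data have been downloaded):

from collections import Counter
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize

def summarize(text, max_sentences=2):
    """Luhn-style: pick the sentences richest in frequent, non-stopword words."""
    stop = set(stopwords.words("english"))
    sentences = sent_tokenize(text)
    words = [w.lower() for w in word_tokenize(text)
             if w.isalnum() and w.lower() not in stop]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w.lower()] for w in word_tokenize(sentence))

    best = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Keep the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in best)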
Make it predictable.
From a user's perspective, simply using the first paragraph is not bad at all.
Using any automation is bound to fall flat in some cases, so I suggest displaying the first paragraph (maybe truncated at some point) as the summary and offering the ability to override it with an optional field.
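Roughly something like this (a sketch; the function and field names are made up, and 160 characters is just a typical meta-description length):

import html

def meta_description(post_body, author_summary=None, limit=160):
    # Prefer the author's own summary if the optional override field was filled in.
    if author_summary:
        return html.escape(author_summary.strip())
    # Otherwise fall back to the first paragraph, truncated at a word boundary.
    first_paragraph = post_body.strip().split("\n\n", 1)[0]
    if len(first_paragraph) > limit:
        first_paragraph = first_paragraph[:limit].rsplit(" ", 1)[0] + "..."
    return html.escape(first_paragraph)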
I might try using Mechanical Turk or any number of other crowdsourcing options.
Another item to check out: a SourceForge project, the AutoSummary Semantic Analysis Engine.
Not a trivial task... You should look for articles or books on "extractive summarization"
A few starters could be:
Books:
Natural Language Processing with Python
Foundations of Statistical Natural Language Processing
Articles:
Language independent extractive summarization
Extractive summarization: how to identify the gist of a text
Extractive Summarization using Inter- and Intra- Event Relevance
Yahoo has a free API for this:
http://developer.yahoo.com/search/content/V1/termExtraction.html
Apple's patent 6424362 - Auto-summary of document content contains sample code which might be useful...
This borders on artificial intelligence so there's not going to be an "easy" solution out there, but there are products that target this problem.
Check out Copernic Summarizer, for one.
Noun phrases typically tend to be important elements of a sentence. Picking sentence(s) with a high density of noun phrases could yield a good summary. You could get noun phrases using a POS tagger.
For a good summary, it is desirable that it consists of meaningful, complete sentences; reading a broken sentence is slightly jarring.
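As a rough sketch of that idea (using plain noun density from NLTK's POS tagger as a stand-in for full noun-phrase chunking; assumes the punkt and tagger data are downloaded):

import nltk

def noun_density(sentence):
    tags = nltk.pos_tag(nltk.word_tokenize(sentence))
    nouns = sum(1 for _, tag in tags if tag.startswith("NN"))
    return nouns / max(len(tags), 1)

def noun_summary(text, max_sentences=2):
    sentences = nltk.sent_tokenize(text)
    best = set(sorted(sentences, key=noun_density, reverse=True)[:max_sentences])
    return " ".join(s for s in sentences if s in best)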
Alternatively, when the author posts the article, they can highlight the keywords to be used in the description, which can then be automatically put into the meta description tag.
