How can I make a multilingual Rasa chatbot with at least two languages? - multilingual

I want to make a Rasa-based chatbot with at least two languages, i.e. a multilingual chatbot. Can anybody tell me a possible way of making it?

Nice question. (And this is something I'm working on too.)
The good thing about the embedding intent classifier is that it doesn't make any assumptions about the language it works on. So, in theory, it should work with any language.
There are two approaches you can use to support multi-language intents.
1. Make separate intents per language (e.g. hello_en, hello_xx for "hello" spoken in two languages)
2. Create a language detector and handle all languages in a custom action
If you're using default utter_ responses, method 1 may make more sense because you can just use hello_en as the intent name and utter_hello_en to get back the response.
Method 2 makes more sense when you actually have multiple variables that you want to use to generate your response (and, of course, you handle them in custom actions; see the sketch below).
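For method 2, here is a minimal sketch of such a custom action using the Rasa SDK, assuming a language slot is already being filled somewhere (the action name, slot name and reply texts are made up for illustration):

# Custom action that picks the reply text based on a "language" slot.
from typing import Any, Dict, List, Text
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

RESPONSES = {
    "en": "Hello! How can I help you?",
    "de": "Hallo! Wie kann ich dir helfen?",
}

class ActionGreetMultilingual(Action):
    def name(self) -> Text:
        return "action_greet_multilingual"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # Fall back to English if the slot was never filled.
        lang = tracker.get_slot("language") or "en"
        dispatcher.utter_message(text=RESPONSES.get(lang, RESPONSES["en"]))
        return []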
All in all, you can implement multi-language chatbots in Rasa!
Edit:
What you want is a custom language detector that finds out which language the user is writing in. You can include the language detector as a custom component at the beginning of your pipeline and have it fill a language slot. Then you can use this slot value as input in your custom actions and respond accordingly.
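As a rough sketch of that idea (the custom component API changed in Rasa 3, so this follows the older rasa.nlu.components.Component style, and langdetect is just one third-party detector you could plug in - both are assumptions, not the only way to do it):

# Language-detecting custom component (Rasa 2.x-style API).
# It attaches a "language" entity to every parsed message; an entity-to-slot
# mapping in the domain can then fill a "language" slot for custom actions.
from typing import Any
from langdetect import detect
from rasa.nlu.components import Component
from rasa.shared.nlu.training_data.message import Message

class LanguageDetector(Component):
    name = "language_detector"

    def process(self, message: Message, **kwargs: Any) -> None:
        text = message.get("text")
        if not text:
            return
        try:
            lang = detect(text)   # e.g. "en", "de", "fr"
        except Exception:
            lang = "en"           # fall back to a default language
        entities = message.get("entities", [])
        entities.append({
            "entity": "language",
            "value": lang,
            "start": 0,
            "end": len(text),
            "confidence": 1.0,
            "extractor": self.name,
        })
        message.set("entities", entities, add_to_output=True)

You would then reference the component by its module path at the top of the pipeline in config.yml and declare a matching language entity and slot in the domain.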

Related

Terminology - one-time code generation directives

Is there such a thing as a preprocessor whose statements, once processed, disappear completely and get replaced by the target language syntax permanently?
I want to research it on the web but I don't know what term to search for. If I search for "code generator", "templating language", "preprocessor directives", "mixins", "annotations" I get generators whose input becomes the source of truth.
The closest thing I can think of is a macro.
What I'm trying to do
I often have to write code that is verbose and involves unnecessary manual labor, and I am looking for a smarter way to input at least the majority of it, have it automatically transformed, and only source-control the output (hand-editing it if necessary). For example:
Java code - instead of writing getters/setters and Javadoc by hand (perhaps the transformer can be a Maven plugin)
HTML - I just want to add URLs, and have my preprocessor automatically convert them to links, images, videos, audio etc. depending on the file extension with some regex substitution (currently I run a Perl script via a cron job)
I just want to use it as my own shorthand, not enforce it in my project, and keep the output editable, so that others don't have to learn a new framework or language (like Protobuf, StringTemplate, GWT, C #defines, PHP, JSP etc.).
There should be no direct clue that I used a template/preprocessor to generate it.
What you want is a "program transformation system". See https://en.wikipedia.org/wiki/Program_transformation. (This is a superset of "transpilers" [ugly term]).
A good source-to-source transformation system will let you apply rewrite rules of the form of:
if you see *this*, replace it by *that* if *this_condition*.
You can then take your source code, and run a set of rewrite rules across that code to change it.
The resulting code is "transformed"; the rewrite rules are not visible.
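A real transformation system matches on parse trees rather than raw text, but as a toy illustration of the "see this, replace it by that if this_condition" idea - applied to the URL example from the question - here is a sketch in Python (the patterns and tag choices are made up):

# Toy rewrite rule: bare URLs become <img>/<video>/<a> tags depending on the
# file extension. A real transformation system would work on a parse tree.
import re

URL = re.compile(r'(?<!")(https?://\S+)')

def rewrite(match):
    url = match.group(1)
    if url.lower().endswith((".png", ".jpg", ".gif")):
        return f'<img src="{url}">'
    if url.lower().endswith((".mp4", ".webm")):
        return f'<video src="{url}" controls></video>'
    return f'<a href="{url}">{url}</a>'

def transform(source):
    return URL.sub(rewrite, source)

print(transform("See https://example.com/cat.png and https://example.com"))

The output is plain HTML with no trace of the shorthand, which is the "no direct clue" property asked for above.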
It seems like Transpiler is one way to describe it.

VXML - a common handler for all forms for generalized DTMF input

I have just started doing some VXML, and I'm looking to build a simple feature where a DTMF input like zero, received on any of the forms, leads to a common generalized action. A good example of this is the familiar IVR interaction where we can press zero at any time to speak to an agent.
1) I thought of using throw/catch for this purpose. Is this a good design, or is there another VXML feature that is better suited for an action like this?
<choice dtmf="0" event="zeroEntered"/>
My root document would have:
<catch event="zeroEntered">
<!-- do something, e.g. transfer to an agent -->
</catch>
2) If throw/catch is the way to go, I see it's not possible to get that DTMF zero on all the VXML pages, because menu/choice elements are not present on all the VXML pages I have... Is there another way? Or should I explicitly include a menu in all the pages to catch DTMF zero at any point during the call?
What you are referring to are often called universal or global grammars. You can implement them using the link element. You can set the scope for this grammar at the application, document or form level. For more on universal grammars look at this article. And you can find more about the link element here.

Look up input fields of a website in C

This is in the C language.
I want to know how I can write a program to look up all the input fields of a website - any website - and then fill them in. I can write a simple web browser in VBS, but how can I analyse the input fields? Even better would be if I could click the lookup field and it put its name in a box... that would be ideal.
Can anyone help? Thanks :)
Are you sure you want to do this in C?
I ask because it is not easy. First of all, you need to be able to run the HTTP GET request against the webpage you wish to view. For this, you probably need libcurl; you definitely don't want to be writing from scratch at any rate.
Next, you need to process the HTML you get, finding all input fields. You do NOT want to do this using regular expressions, if only for the sake of bobince's blood pressure. HTML is not a regular language - that is the bit you need to take away - so you need a proper parser. Enter libxml. I'm sure there are other XML libraries out there, and even libraries specifically for parsing HTML.
Finally, having done that (got the fields etc.), you need to be able to populate them and submit the correct request as per the ACTION and METHOD attributes of the FORM.
This is of course assuming you know what the fields should be formatted with. And it also assumes nothing else is going on. If you have a javascript validated web form (I sincerely hope they're validating on the request too, but they might provide feedback via JS) you won't benefit from that (unless you're going to integrate JS, in which case you might as well write a browser).
This is not a trivial task and it is the reason there are accessibility standards for HTML, because otherwise it becomes tricky to interpret the form without human interaction.
Of course, this all assumes said html is well formed, which isn't always the case...
I might suggest another approach. BeautifulSoup is a well known Python web scraping library that works very well. Python as a language allows easier string manipulation too, which will dramatically cut down your development time. I'd suggest giving the need to use C some serious thought given the size and complexity of the task you want to undertake vs your need to get a result quickly. If you have a lot of time, by all means go for C.
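To make that suggestion concrete, here is a minimal sketch of the BeautifulSoup route (the URL is a placeholder, and this ignores JavaScript-generated forms):

# Fetch a page and list every form field with its name and type.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/login").text
soup = BeautifulSoup(html, "html.parser")

for form in soup.find_all("form"):
    print("form:", form.get("action"), form.get("method"))
    for field in form.find_all(["input", "textarea", "select"]):
        print("  field:", field.get("name"), "type:", field.get("type"))

Filling the fields in and submitting them is then a requests.post call with the names you just collected, as per the form's action and method.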

How to handle data from an external program on Mac OS X

I would like to make a program (preferably in C, but Cocoa would also do) that can take data from an external program (such as iTunes or Adium) and use it. For example, I would like to take the data of a listbox or the text of a chat so as to manipulate it. I need a place to start. On Windows I think it is possible with some APIs that find the hWnd of a window and then find a pointer to the listbox or textbox. Please give me some info on how to start. Thank you in advance.
It's not clear exactly what you want to do. It's either impossible or severely restricted.
For one thing, different applications use different ways of constructing a “listbox”—Cocoa applications use NSTableView, Carbon applications use DataBrowser, and GTK, Qt, and Java applications use even more different APIs. These do not all go through some common kind of list box thingy; each is an independent implementation.
(You could hope that either NSTableView or DataBrowser would be based on the other, but don't count on it.)
For another, it is impossible to obtain a pointer to that control. You cannot access another application's NSTableView or DataBrowser view or GTK/Qt/Java equivalent unless (and this only works for NSTableView) that application deliberately serves it up to you. It doesn't sound like that's your situation.
The closest you can get to that is Accessibility, which may be pretty close, but is unlikely to work with most applications not based on Cocoa.
Even then, the view may not be showing you all the data. A table view may be lazily populated, and a table view designed in imitation of the iOS UITableView may even never have all the data (because it only has what it can show).
(All of the above applies to every kind of view, not just table views. Collection views, text fields, buttons—same deal for all of them.)
The only way to get at the true, complete copy of the data is to ask the controller that owns it. And, again, that's impossible if the application is not specifically offering it to you. Not to mention, the application might not even have a controller (not object-oriented, not MVC, or just sloppily made).
… so as to manipulate it.
Getting the data in the first place is the easy part. It is nigh-impossible to mess with data in another application—for good reason.
The closest you're going to get to either of these goals is the Accessibility interfaces.

Internationalizing Desktop App within a couple years... What should we do now?

So we are sure that we will be taking our product internationally and will eventually need to internationalize it. How much internationalizing would you recommend we do as we go along?
I guess in other words, is there any internationalization that is easy now but can be much worse if we let the code base mature and that won't slow us down very much if we choose to start doing it now?
Tech used: C#, WPF, WinForms
Prepare it now, before you write all the strings in the codebase itself.
Everything after now will be too late. It's now or never!
It's true that it is a bit of extra effort to prepare well now, but not doing it will end up being a lot more expensive.
If you won't follow all the guidelines in the links below, at least heed points 1, 2 and 7 of the summary, which are very cheap to do now and which cause the most pain afterwards in my experience.
Check these guidelines and see for yourself why it's better to start now and get everything prepared.
Developing world ready applications
Best practices for developing world ready applications
Little extract:
Move all localizable resources to separate resource-only DLLs. Localizable resources include user interface elements such as strings, error messages, dialog boxes, menus, and embedded object resources. (Moving the resources to a DLL afterwards will be a pain)
Do not hardcode strings or user interface resources. (If you don't prepare, you know you will hardcode strings)
Do not put nonlocalizable resources into the resource-only DLLs. This causes confusion for translators.
Do not use composite strings that are built at run time from concatenated phrases. Composite strings are difficult to localize because they often assume an English grammatical order that does not apply to all languages. (After the interface design, changing phrases gets harder)
Avoid ambiguous constructs such as "Empty Folder" where the strings can be translated differently depending on the grammatical roles of the strings' components. For example, "empty" can be either a verb or an adjective, and this can lead to different translations in languages such as Italian or French. (Same issue)
Avoid using images and icons that contain text in your application. They are expensive to localize. (Use text rendered over the image)
Allow plenty of room for the length of strings to expand in the user interface. In some languages, phrases can require 50-75 percent more space. (Same issue, if you don't plan for it now, redesign is more expensive)
Use the System.Resources.ResourceManager class to retrieve resources based on culture.
Use Microsoft Visual Studio .NET to create Windows Forms dialog boxes, so they can be localized using the Windows Forms Resource Editor (Winres.exe). Do not code Windows Forms dialog boxes by hand.
IMHO, to claim something is going to happen "in a few years" literally translates to "we hope one day", which really means "never". Although I would still skim over various tutorials to make sure you don't make any horrendous mistakes. Doing correct internationalization support now will mean less work in the future, and once you get used to it, it won't have any real effect on today's productivity. But if you can measure the goal in years, maybe it's not worth doing at all right now.
I have worked on two projects that did internationalization: a C# ASP.NET app (which existed before I joined the project) and a PHP app (where I homebrewed my own method using a free internationalization control and my own management app).
You should store all the text (labels, button text, etc.) as data inside a database. Reference these with keys (I prefer to use the first four words, made uppercase, with spaces converted to underscores and non-alphanumerics stripped out), and when you have a duplicate, append a number to the end. The benefit of this key method is that the programmer has a pretty strong understanding of the content of the text just by looking at the key; a rough sketch of the scheme follows below.
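Here is that key scheme sketched in Python, just to pin down the rules (the helper name and example labels are made up):

# Build a lookup key from the first four words: uppercase, underscores for
# spaces, non-alphanumerics stripped, and a counter appended for duplicates.
import re
from collections import Counter

_seen = Counter()

def make_key(text):
    words = re.sub(r"[^A-Za-z0-9 ]", "", text).upper().split()[:4]
    key = "_".join(words)
    _seen[key] += 1
    return key if _seen[key] == 1 else f"{key}_{_seen[key]}"

print(make_key("Click here to continue"))    # CLICK_HERE_TO_CONTINUE
print(make_key("Click here, to continue!"))  # CLICK_HERE_TO_CONTINUE_2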
Write a utility to extract the data and build .NET resource files that you add into your project for compile. Create a separate resource file for each language. In your code, use the key to point to the proper entry.
I would skim over the MS documents on the subject:
http://www.microsoft.com/globaldev/getwr/dotneti18n.mspx
Some basic things to avoid:
never, ever, ever use translation software; hire a pro, or an intern taking that language at a local college
never try to create text by appending two existing entries, because grammar differs greatly in each language and this will never work. So if you have a string that says "Click" and want one that says "Click Now", do not try to create a setup that merges two entries, or, during translation, copy the word for "click" and translate the word "now". Treat every string as a totally new translation from scratch
I will add: store and manipulate string data as Unicode (NVARCHAR in MS SQL).
Some questions to think about…
How much can you afford to delay the shipment of the English version of your application to save a bit of the cost of internationalizing later?
Will you still be trading if you don’t get the cash flow from shipping the English version quickly?
How will you get the UI right, if you don’t get feedback quickly from some customers about it?
How often will you rewrite the UI before you have to internationalize it?
Do your English customers wish to be able to customize strings in the UI? E.g. not everyone calls a “shipping note” the same thing.
As a large part of the pain of internationalizing is making sure you don’t break the English version, is automated system testing of the UI a better investment?
The only thing I think I will always do is: “Do not use composite strings that are built at run time from concatenated phrases”, and if you do so, don’t spread the code that builds up a single string over lots of methods.
Having your UI automatically resize (and lay out) to cope with the length of labels etc. will save you lots of time over the years if you can do it cheaply. There are lots of third-party control sets for Windows Forms that let you label text boxes etc. without having to put the labels on as separate controls.
I am just starting to internationalize a WinForms application; we hope to mostly be able to use the “name” of each control as the lookup key, without having to move lots into resource files etc. It is not always as hard as you think at first…
You could use NGettext.Wpf (it can be installed from NuGet, and yes I am the author, but I made it out of the frustrations listed in the other answers).
It is hosted in this GitHub repository, and here is the getting-started section at the time of writing:
NGettext.Wpf is intended to work with dependency injection. You need to call the following at the entry point of your application:
NGettext.Wpf.CompositionRoot.Compose("ExampleDomainName");
The "ExampleDomainName" string is the domain name. This means that when the current culture is set to "da-DK" translations will be loaded from "Locale\da-DK\LC_MESSAGES\ExampleDomainName.mo" relative to where your WPF app is running (You must include the .mo files in your application and make sure they are copied to the output directory).
Now you can do something like this in XAML:
<Button CommandParameter="en-US"
Command="{StaticResource ChangeCultureCommand}"
Content="{wpf:Gettext English}" />
Which demonstrates two features of this library. The most important is the Gettext markup extension which will make sure the Content is set to the translation of "English" with respect to the current culture, and update it when the current culture is changed. The other feature it demonstrates is the ChangeCultureCommand which changes the current culture to the given culture, in this case "en-US".
I also highly recommend reading Preparing Strings from the gettext utilities manual.
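For what it's worth, the same domain/.mo layout can be sanity-checked outside the WPF app with Python's built-in gettext module (the domain and folder names below simply mirror the example above; note that Python expects the underscore form of the culture name):

# Load Locale/da_DK/LC_MESSAGES/ExampleDomainName.mo and translate a string.
import gettext

t = gettext.translation(
    "ExampleDomainName",   # domain -> ExampleDomainName.mo
    localedir="Locale",
    languages=["da_DK"],
    fallback=True,         # fall back to the untranslated string if missing
)
print(t.gettext("English"))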
Internationalization will let your product be usable in other countries. It's easy and should be done from the start (this way English-speaking people all over the world can use your software). These three rules will get you most of the way there:
Support international characters - use only Unicode data types in files and databases.
Support international date, time and number formats - use CultureInfo.InvariantCulture when storing data to a file or computer-readable storage, use CultureInfo.CurrentCulture when displaying data or parsing user input, never do your own parsing, and never use any other culture objects.
Textual data entered by the user should be considered a black box; don't try to break it up into words or letters, especially when displaying it to the user - different languages have different rules, and the OS knows how to display text correctly (including right-to-left scripts); you don't.
Localization is translating the software into different languages. This is difficult and expensive; a good start is to never hard-code strings and never build sentences out of smaller strings.
If you use test data, use non-English (e.g. Russian, Polish, Norwegian, etc.) strings.
Encoding rears its ugly little head at every corner. If not in your own libraries, then in external ones.
I personally favor Russian because, although I don't speak a word of Russian (despite my name's origin), it has foreign chars in it and it takes way more space than English, and therefore tests your spacing too.
I don't know if that is something language-specific, or just because our Russian translator likes verbose strings.
