AngularJS database data internationalization

I'm developing an i18n AngularJS application. I need to display the info in English and also in Spanish. I'm currently using ng-translate for static content and it's really good. But I also need to translate some info coming from the database, such as the typical dropdown (select field) with country names. Depending on the user's language settings, the values in the combo should be displayed in English or Spanish.
I don't know what the best architectural approach is. I can think of two initial approaches:
1.1 I have a table in the database with the countries in just one default language (English). When I fetch this list for display, the values are dynamically translated somehow.
1.2 I have a table in the database with as many columns as languages, holding the country names in each language. When I fetch the list, I use the language setting to select the right column.
I don't know if there is any other approach. I like 1.1, but I'm not sure whether and how I could implement it. I would also need to display these values in sorted order.
UPDATE
To enrich the final solution, have a look at the following post on how to design a multilingual database (design pattern):
http://cleancodedevelopment-qualityseal.blogspot.com.es/2013/06/translation-multilingualmultilanguage.html

I would tend to go with 1.2, as that way you are in control of the translations. Country names probably aren't going to be much of an issue, but if there is anything more complicated, like phrases and such, then I would definitely go with 1.2, as current translation software is likely to lose nuances in translation that could be important. 1.2 would probably also make sorting slightly easier.
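For illustration, here is a minimal client-side sketch of approach 1.2, assuming angular-translate (what the question calls ng-translate); the /api/countries endpoint, the column names, and the controller are all assumptions, and localeCompare gives locale-aware sorting per language:

```javascript
// Minimal sketch of approach 1.2 (endpoint and column names are assumptions).
// Each row carries one name column per language:
//   { code: 'DE', name_en: 'Germany', name_es: 'Alemania' }
angular.module('app').controller('CountryCtrl', function ($scope, $http, $translate) {
  $http.get('/api/countries').then(function (res) {
    var lang = $translate.use() || 'en';  // current language key, e.g. 'en' or 'es'
    var col = 'name_' + lang;             // pick the matching column
    $scope.countries = res.data
      .map(function (c) { return { code: c.code, label: c[col] }; })
      .sort(function (a, b) {             // locale-aware ordering per language
        return a.label.localeCompare(b.label, lang);
      });
  });
});
```

Sorting on the client with localeCompare sidesteps the question of per-language collation in SQL, which is one of the trickier parts of approach 1.2 on the server side.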

Related

A workaround to building dynamic multilingual Wix pages?

There's no out-of-the-box option to create multilingual dynamic pages on the Wix platform.
My idea is to have a single data table where I would store the same data in multiple languages (separate columns: Title EN, Title DE, etc.). I would then show one or the other data column based on the language a user chooses to view the page in.
Has anyone had the same issue on Wix and maybe managed to find a workaround using some JavaScript magic?
Thanks in advance for the help!
What you are suggesting should work fine. You can either store all the languages in a single collection or have a collection per language. I think a collection per language would probably be easier: if you keep the field names the same, you can run exactly the same queries, just against a different collection.
Then you can use the currentLanguage property to determine which collection to query.
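A minimal Velo sketch of the collection-per-language idea; the collection names (Articles_en, Articles_de) and the repeater ID are made up, and wixWindow.multilingual.currentLanguage assumes the Wix Multilingual app is installed:

```javascript
// Sketch: same query, different collection per language.
// 'Articles_en' / 'Articles_de' and '#repeater1' are hypothetical names.
import wixData from 'wix-data';
import wixWindow from 'wix-window';

$w.onReady(async () => {
  const lang = wixWindow.multilingual.currentLanguage; // e.g. 'en' or 'de'
  const results = await wixData.query(`Articles_${lang}`).find();
  $w('#repeater1').data = results.items; // identical code path for every language
});
```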

Generate a series of documents based on SQL table

I am trying to formulate a proposal for an application that allows a user to print a batch of documents based on data stored in a SQL table. The SQL table indicates which documents are due and also contains all the demographic information. This is outside of what I normally do, and I am trying to see if there is a platform/application that already exists for such a task.
For example
List of all documents: Document #1 - Document #10
Person 1 is due for document #: 1,5,7,8
Person 2 is due for document #: 2,6
Person 3 is due for document #: 7,8,10
etc
Ideally, what I would like is for the user to be able to push a button and get a printed stack of documents that have been customized for each person, including basic demographic info like name, DOB, etc.
Like I said at the top, I already have all of the needed information in a database; I am just trying to figure out the best approach to move that information onto a document.
I have done some research and found that some people have used mail merge in Word or used Access as a front end, but I don't know if this is the best way. I've also found this document. Any advice would be greatly appreciated.
If I understand your problem correctly, it is two-fold: firstly, you need to find a way to generate documents based on data (mail-merge), and secondly, you might need to print them too.
For document generation you have two basic approaches: template-based, and programmatically from scratch. I suppose that you will opt for a template-based approach, which basically means that you design (in MS Word) a template document (Word, RTF, ...) that contains placeholders and other tags designating the »dynamic« parts of the document. Then, at generation time, you pass this template and the data to a .NET library/processor, which populates the template with the data and returns the resulting document.
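To make the template-fill step concrete, here is a toy sketch (in JavaScript for brevity; a real .NET processor works on Word/RTF markup rather than plain strings, and the {{field}} placeholder syntax is made up):

```javascript
// Toy illustration only: real processors parse Word markup, not plain text.
function fillTemplate(template, data) {
  // Replace every {{field}} placeholder with the matching data value.
  return template.replace(/\{\{(\w+)\}\}/g, function (match, field) {
    return field in data ? data[field] : match; // leave unknown tags untouched
  });
}

var letter = fillTemplate(
  'Dear {{name}}, our records show your date of birth as {{dob}}.',
  { name: 'Person 1', dob: '1980-01-01' }
);
// -> 'Dear Person 1, our records show your date of birth as 1980-01-01.'
```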
One way to achieve this functionality would be to employ MS Word's native mail-merge, but you should know that this involves Office COM and Word Application Automation, which should almost always be avoided.
Another option is to build such a system on top of the Open XML SDK. This is a valid option, but it will be a pretty demanding task and will most probably cost you much more than buying a commercial .NET library that does mail-merge out of the box – been there, done that. But of course, the good side here is that you will be able to tailor the solution to your needs. If you go down this road, I recommend that you use Content Controls for tagging documents/templates. The solution with CCs will be much easier to implement than the solution with bookmarks.
I'm not very familiar with the open-source solutions and I'm not sure how many there are that can do mail-merge. One I know of is FlexDoc (on CodePlex), but its problem is that it uses a construct (XmlControl) for tagging that is deprecated in Word 2010+.
Then there are commercial solutions. Again, I don't know them in detail, but I know that the majority of them are general-purpose document processing libraries. Our company has been using this document generation toolkit for some time now, and I can say it covers all our »template-based document generation« needs. It doesn't require MS Word at doc generation time, it has a really helpful add-in for MS Word, and you only need several lines of code to integrate it into your project. Templating is very powerful and you can set up a template in a very short time. While templates are Word documents, you can generate PDF or XPS docs as well. XPS is useful because you can use the .NET/WPF printing framework, which works with XPS docs, to print documents. This is a very high-end solution, but of course, the downside is that it is not free.

Pulling out Popular terms from a Solr core

I have an Apache Solr core from which I need to pull out the popular terms. I am already aware of Luke, facets, and Solr stopwords, but I am not getting what I want. For example, when I try to use Luke to get the popular terms, after applying the stopwords to the result set I get a bunch of words like:
http, img, que ...etc
While what I really want is:
Obama, Metallica, Samsung ...etc
Is there a better way to implement this in Solr? Am I missing something that should be used to do this?
Thank you
Finding relevant words in a text is not easy. The first thing I would have a deeper look at is Natural Language Processing (NLP) with Solr. The article in Solr's wiki is a starting point for this. Reading the page you will stumble over the Full Example, which extracts nouns and verbs; that alone may already help you.
In the process of getting this running you will need to install additional software (Apache's OpenNLP project), so after the Solr wiki page, that project's home page may be the next step.
To get a feeling for what is possible with this, you should have a look at the demonstration by the searchbox guy. There you can paste a sample text and have relevant words and terms extracted from it.
There are several tutorials out there you may look at for further reading.
If you go down this path and the results are not as expected or not as good as required, you may go down that road even further and start thinking about text mining with Apache Mahout. There are, again, several tutorials out there on combining it with Solr.
In any case, you should then search Stack Overflow or the web for the tutorials and how-tos you will certainly need.
Update about Arabic
If you are going to use OpenNLP for unsupported languages, which Arabic unfortunately is out of the box as of version 1.5, you will need to train OpenNLP for the language. The reference for this is found in the developer docs of OpenNLP. Probably there is already something out there from the Arabic community, but my Arabic google-fu is not that good.
Should you decide to do the work and train it for the Arabic language, why not share your training with the project?
Update about integration in Solr/Lucene
There is work going on to integrate it as a module. In my humble opinion this is as far as it will, and should, get. If you compare this problem field to stemming, stemming appears rather easy. Yet even stemming got complex once different languages had to be supported. Analysing a language to the level that you can extract nouns, verbs and so forth is so complex that a whole project evolved around it.
Having a module/contrib at hand that you could simply copy to solr_home/lib would already be very handy, as there would be no need to run a separate installer and so forth.
Well, this is a bit open-ended.
First you will need to facet and find "popular terms" in your index, then add all the non-useful items such as http, img, time, what, when, etc. to your stopword list and re-index to get the cream of the data you care about. I do not think there is an easier way of finding popular names, unless you can bounce your data against a custom dictionary of nouns during indexing (that is an option, by the way): you can choose to index only names by writing a custom token filter (look at how the stopword filter works) with your own nouns.txt file, so that only words in your dictionary make it into the index. This approach is only possible if you have a finite, known list of nouns.
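As a starting point for the "facet and find popular terms" step, here is a sketch that queries Solr's TermsComponent over HTTP (the core name mycore and the field text are placeholders; a /terms handler is assumed to be configured, as it is in the default solrconfig):

```javascript
// Sketch: ask Solr's TermsComponent for the most frequent terms of a field.
// 'mycore' and 'text' are placeholders for your core and field names.
const params = new URLSearchParams({
  'terms.fl': 'text',    // field to pull terms from
  'terms.limit': '20',   // top 20 terms
  'terms.sort': 'count', // most frequent first
  wt: 'json'
});

fetch(`http://localhost:8983/solr/mycore/terms?${params}`)
  .then(res => res.json())
  .then(data => {
    // With the default flat layout the list interleaves term and count:
    // ["obama", 1234, "metallica", 987, ...]
    console.log(data.terms.text);
  });
```

Everything the stopword list has not filtered out will still show up here, which is exactly why the custom nouns.txt filter described above is needed if you only want names.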

Plomino multilingual site

I've got Zope 2.13.15, Plone 4.2.0.1, LinguaPlone 4.1.3 and CMFPlomino 1.17.2
I need to make a multilingual website on Plone and am using Plomino. I see that I can translate the Plomino database, forms and views with LinguaPlone, but not documents. I have seen the procedure at http://www.plomino.net/how-to/multilingual-applications (Multilingual applications - How to build a multilingual Plomino application) and, in more detail, at https://github.com/plomino/Plomino/issues/296. I'm not sure I can translate the content of documents using this procedure, because the tutorial states: "If the text does not match any msgid from the i18n domain, it remains unchanged".
Does this mean that all translations of document content should be in the .po files? Can anybody clarify this mechanism for me, please? Is this tutorial the right way to do document content translation? At this moment I'm not sure there is a document content translation solution for Plomino.
What is the procedure to translate document content in Plomino? The tutorials are not clear to me.
At the moment there is no document content translation solution for Plomino.
The mentioned solution can be used to translate pre-defined contents (like labels in forms, computed values, etc.) but obviously it is not applicable to content freely entered by users in documents.
Nevertheless, Plomino is already used in multilingual contexts.
A basic solution is to:
create a field to store the document language,
provide a Document ID formula which uses this lang field value, so we can derive the id of any translated version from the current doc id (see the sketch after this list),
implement the different actions you might need (like "Translate this doc", "Switch to language xx", etc.) as basic Plomino actions.
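To illustrate the second point: if the language code is encoded in the document ID, the ID of any translated version can be derived by simple string substitution. Note that Plomino formulas are actually written in Python; the sketch below only shows the naming idea, and the 'base-lang' ID layout is an assumption, not a Plomino default:

```javascript
// Naming-scheme sketch only (Plomino formulas are Python; the
// 'base-lang' id layout is an assumption, not a Plomino default).
function docIdFor(baseId, lang) {
  return baseId + '-' + lang;           // 'about-us' + 'fr' -> 'about-us-fr'
}

function translatedId(currentId, targetLang) {
  var base = currentId.replace(/-[a-z]{2}$/, ''); // strip the language suffix
  return docIdFor(base, targetLang);    // 'about-us-en' -> 'about-us-fr'
}
```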

Cakephp website with English and Arabic support for the same database

I'm building a website in CakePHP 1.3. My requirement is to have a website with Arabic and English support. I want that if a user enters information in Arabic, an English-speaking user sees the same information in English, and vice versa.
As far as localizing the labels goes, I've done that using .po files. It's pretty straightforward.
But for the database I'm using CakePHP's built-in Translate behavior, and it doesn't translate anything either; it just creates another copy of the data under the locale currently in use.
Please help me with the direction I should take.
I want to know the best practices that should be followed for this kind of scenario.
Maybe translating DB values is not the best solution, and I should save the values in whatever language they come in.
Any help and suggestions would be highly appreciated.
It isn't actually possible to have CakePHP automatically translate data that is entered.
The Translate Behavior allows you to enter the same content in multiple languages and then retrieve the appropriate language from the database, based on the language that you currently have set in your config. It doesn't actually translate anything for you.
Theoretically, you could add a function in the Model::beforeSave() callback that would submit the Arabic text to a service like Google Translate and then save both Arabic and English versions to their appropriate tables, but the results won't necessarily be very good. As @deceze said in his comment on your question, machine translation is a hard problem.
