Good morning everyone,
I have a [maybe not so] unique task: translating a simple phrase like "Hello" into every language available through the Google Translate API. I basically want to capture the results and store them in a SQL Server database.
In the past I have written bulk geocoding processes in ASP that worked well, and I am thinking I can do the same with the Translate API using a querystring. However, there are really no great examples of it.
I am about to load the languages and their codes into a table so that I can just loop through them, and then use a JSON parser since I am not on the latest version of SQL Server. I have done a crazy ASP SOAP implementation in the past that required authenticating twice, but I am thinking this can be done differently.
I am just figuring that someone else out here might have had to slay this dragon before, and any and all tips would be greatly appreciated.
Thanks
The Translation API only allows for one target language per request. Unfortunately, for your use case you would have to loop through the desired languages and send a separate request for each.
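A minimal sketch of that loop, in Java (11+) for illustration: it hits the v2 REST endpoint once per target language. YOUR_API_KEY and the hard-coded language codes are placeholders; in practice you would pull the codes from your languages table (or from the API's list-of-languages call) and INSERT each JSON response into SQL Server.

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.util.List;

    public class TranslateLoop {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String phrase = URLEncoder.encode("Hello", StandardCharsets.UTF_8);
            // Placeholder subset; the full list would come from your languages table.
            List<String> targets = List.of("fr", "de", "es", "ja");

            for (String target : targets) {
                // One request per target language -- the API accepts only one at a time.
                String url = "https://translation.googleapis.com/language/translate/v2"
                        + "?q=" + phrase + "&target=" + target + "&key=YOUR_API_KEY";
                HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // response.body() is JSON; parse it and INSERT it into your table here.
                System.out.println(target + ": " + response.body());
            }
        }
    }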
I've been trying my best not to ask unnecessary questions here on Stack Overflow, but it has been almost a week since I got stuck on this problem, and I couldn't find any solution.
I already have my working website built with CakePHP 3.2. What the website basically does is scrape Twitter for tweets containing a given search term, check whether each tweet is already in my database, and store it if it doesn't yet exist. Twitter's JSON response has a "tweet_id" property, and I've been using that value to check whether I should ignore or append a specific tweet to my DB. While this might be okay while my database is small, I suspect it's going to slow things down considerably when my tables grow bigger. Thus my need for ElasticSearch.
My ElasticSearch server is running on my Arch Linux install, and I've configured my app to point to that server. I have also named my "Type" object the same way as my "Tweets" table (I followed the documentation up to the overview part: http://book.cakephp.org/3.0/en/elasticsearch.html). This craps out with an 'Unknown method "alias"' error, and following Google searches led me to creating an alternate pagination class, since that was what some found to be the cause of the error (https://github.com/lorenzo/audit-stash/issues/4), but that still doesn't fix things.
I'm not sure I've got this right. I installed the ElasticSearch plugin with the assumption that all I have to do is give the Types the same names as my tables, since to me the documentation implies that this should be done on top of the Blog Tutorial to "improve query performance".
TL;DR: how is this supposed to work? Is my assumption above right? Do I name the Types differently and index everything myself? I'm not sure if there's just too much automagic, or I'm just poor at this sort of thing. And yes, I'm new to frameworks (but not to PHP, among other languages).
Thanks in advance!
I have an assignment in which I have to fetch data from a site (somewhat like a news site), save the articles into text files, and then list them using tags.
Could someone please provide me some information/knowledge/keywords/instructions that can help me finish this (using C only)? Thank you.
The basic concept is this: you'll need to connect to the HTTP server on port 80, fetch the HTML, parse it, and store the information you want in files. For the connecting and fetching, if you're using Windows you can use the WinInet API; otherwise go with cURL (libcurl), which is a very popular, efficient, cross-platform solution.
The other tasks require a relatively good command of C, but it's nothing special - you'll need to work with strings to parse your results, then use C's file I/O (fopen/fprintf/fwrite) to save your stuff to disk.
I'm currently looking at building a lightweight integration between PivotalTracker and Salesforce.com. Reviewing this bit of PT documentation, it looks like I can update Salesforce data based on PT activity. Awesome! However, I can't figure out how to access the XML data that is being posted.
I can't see anything in ApexPages.CurrentPage() that looks like it will let me get to the XML. Has anyone done anything like this, without the use of an intermediate server?
I think we chatted about this over Twitter last week.
AFAIK there is (somewhat annoyingly) no way to access raw (i.e. not form-posted key/values) POST data via SFDC. The Apex REST service support would be the closest thing, but it requires authentication and still may not do exactly what you want.
Fairly certain you'll need some sort of middle-man proxy that simply takes the XML data and posts it to VF as a form-encoded key/value pair. That is a fairly trivial thing to do, but it's an unnecessary additional moving part and will require some sort of server resource.
I would probably first investigate whether PT supports any other ping mechanism, or whether you can write a custom extension to convert the raw POST into a form POST.
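For what it's worth, here is a rough sketch of that middle-man idea in Java (11+), using the JDK's built-in com.sun.net.httpserver. The VF URL and the "payload" parameter name are made up for illustration; you'd point PT's webhook at this proxy and have your controller read the parameter out of ApexPages.currentPage().getParameters().

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class XmlToFormProxy {
        private static final HttpClient client = HttpClient.newHttpClient();
        // Hypothetical VF page that reads a "payload" form parameter.
        private static final String VF_URL =
                "https://example.my.salesforce.com/apex/PtWebhook";

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/pt-webhook", exchange -> {
                // Read the raw XML body that Pivotal Tracker posted.
                String xml = new String(exchange.getRequestBody().readAllBytes(),
                        StandardCharsets.UTF_8);
                // Re-post it as an ordinary form-encoded key/value pair.
                String form = "payload=" + URLEncoder.encode(xml, StandardCharsets.UTF_8);
                HttpRequest request = HttpRequest.newBuilder(URI.create(VF_URL))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(form))
                        .build();
                try {
                    client.send(request, HttpResponse.BodyHandlers.discarding());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                exchange.sendResponseHeaders(204, -1); // acknowledge the webhook
                exchange.close();
            });
            server.start();
        }
    }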
Preface: I have broad, college-level knowledge of a handful of languages (C++, VB, C#, Java, and many web languages), so go with whichever you like.
I want to make an Android app that compares numbers, but in order to do that I need a database. I'm a one-man team, and the numbers get updated biweekly, so I want to grab them off a wiki that is updated on the same schedule.
So my question is: how can I access information from a website using one of the languages above?
What I understand the problem to be: some entity generates a data set (i.e. numbers) every other week, and you need to download that data set for processing (e.g. sorting).
Ideally, the site maintaining the wiki would provide a service, such as a RESTful interface, to gather the data easily. If that were the case, I'd go with any language that provides easy handling of HTTP requests and responses and makes your data manipulation easy. As a previous poster said, Java would work well.
If you are stuck with the wiki page, you have a couple of options. You can parse the HTML your browser receives (Perl comes to mind as a decent language for that), or you can use tools built for that purpose, such as Jsoup, mentioned below.
Your question also mentions some implementation details such as needing a database. Evidently, there isn't enough contextual information for me to know whether that's optimal, so I won't address this aspect of the problem.
http://jsoup.org/ is a great Java tool for accessing content on HTML pages.
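A minimal sketch of what that looks like, assuming the jsoup jar is on your classpath; the URL and the table selector here are placeholders for wherever the wiki keeps your numbers:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class WikiScraper {
        public static void main(String[] args) throws Exception {
            // Fetch and parse the page in one call.
            Document doc = Jsoup.connect("https://example.org/wiki/Numbers").get();
            // Walk every table row and print its cell text; swap in a more
            // specific CSS selector once you know the page's structure.
            for (Element row : doc.select("table tr")) {
                System.out.println(row.select("td").text());
            }
        }
    }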
Consider https://scraperwiki.com/ - it's a site where users can contribute scrapers, and it's free as long as you let your scraper be public. The results of your scraper are exposed as CSV and JSON.
If you don't know what a "scraper" is, google "screen scraping" - it's a long and frustrating tradition for coders, who have dealt with the same problem you have since the beginning of networked computing.
You could also check out http://web-harvest.sourceforge.net/
For Python, BeautifulSoup is one of the most tolerant HTML parsers out there. The documentation also lists similar libraries in Ruby and Java, so you'll probably find something relevant there.
I'm building a website in CakePHP 1.3. My requirement is a website with Arabic and English support. I want it so that if a user enters information in Arabic, an English-speaking user sees the same information in English, and vice versa.
As far as localizing the labels goes, I've done that using .po files. It's pretty straightforward.
For the database, I'm using CakePHP's built-in Translate behavior. But again, it doesn't translate anything; it just creates another copy of the data under whichever locale is currently in use.
Please point me in the right direction.
I want to know the best practices that should be followed for this kind of scenario.
Maybe translating DB values is not the best solution, and I should just save the values in whatever language they come in.
Any help and suggestions would be highly appreciated.
It isn't actually possible to have CakePHP automatically translate data that is entered.
The Translate Behavior allows you to enter the same content in multiple languages and then retrieve the appropriate language from the database, based on the language that you currently have set in your config. It doesn't actually translate anything for you.
Theoretically, you could add a function in the Model::beforeSave() callback that submits the Arabic text to a service like Google Translate and then saves both the Arabic and English versions to their appropriate tables, but the results won't necessarily be very good. As @deceze said in his comment on your question, machine translation is a hard problem.