Is it possible using the GAE Logs API to retrieve front-end logs only? - google-app-engine

How can we send a query to the Log API such that it only retrieves logs from the front end and not the backends?

I don't know what runtime you're asking about, but looking at the Python source for SDK 1.8.8, you have the following keyword argument for the google.appengine.api.logservice.fetch function:
module_versions: A list of tuples of the form (module, version), that
indicate that the logs for the given module/version combination should be
fetched. Duplicate tuples will be ignored. This kwarg may not be used
in conjunction with the 'version_ids' kwarg.
(This isn't yet reflected on the Google Developers site.)
This does not mean you can directly access front-end logs, but if you convert your app from using backends to using two named modules, one for front-end requests and another for backend work, you'll be able to fetch the logs of each independently.
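For example, with the Python SDK you could fetch request logs for a single module like this (a minimal sketch; the module and version names are placeholders for your own):

    # Sketch: fetch request logs for one named module/version only.
    # 'frontend' and '1' are placeholder names; substitute your own.
    from google.appengine.api.logservice import logservice

    for req_log in logservice.fetch(module_versions=[('frontend', '1')],
                                    include_app_logs=True):
        print req_log.combined  # the request in Apache combined log format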

Related

REST API for SOLR analyzer

I'm going to test my Solr analyzer and I've found instructions on how to do it here: https://cwiki.apache.org/confluence/display/solr/Running+Your+Analyzer.
But I need to check several thousand words, so I'm going to do it programmatically, not manually. Does Solr have a REST API for running the analyzer?
Thank you!
The Solr Admin page is just a set of static HTML files that use the REST API offered by Solr behind the scenes. If you watch the Network tab in your browser's developer tools while navigating it, you'll see all the endpoints it talks to.
After doing this on the Analysis page, you can see that beyond the initial request for the HTML it makes two further requests: one to get the schema (for the field list) and one to perform the actual analysis:
http://localhost:8983/solr/corename/analysis/field?wt=json&analysis.showmatch=true&analysis.fieldvalue=asd&analysis.query=asd&analysis.fieldname=content
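Replaying that last request programmatically is then straightforward. A rough sketch in Python (the host, core name 'corename', and field name 'content' are taken from the example URL above; substitute your own):

    # Sketch: run Solr's field analysis endpoint for a list of words.
    import json
    import urllib
    import urllib2

    def analyze(word):
        params = urllib.urlencode({
            'wt': 'json',
            'analysis.showmatch': 'true',
            'analysis.fieldvalue': word,
            'analysis.fieldname': 'content',
        })
        url = 'http://localhost:8983/solr/corename/analysis/field?' + params
        return json.load(urllib2.urlopen(url))

    for word in ('asd', 'running', 'analyzer'):
        result = analyze(word)
        # 'analysis' is the top-level key in the JSON response (an assumption;
        # inspect the raw response for your Solr version).
        print word, json.dumps(result['analysis'])[:200]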

How to log per request / context - Golang

I'm trying to migrate a web app from Google App Engine to a dedicated server and I've gotten stuck on logging. Basically, I would like to organise the logs per request/context (like on GAE) so that I can easily review the errors/trace for each request. The most advanced logging library I could find is the glog package, but I still can't figure out how to log per request/context.
Each request gives you an http.Request object to work with.
If you're using sessions, then you'll have a sessions.Session object to work with.
You will want to use those objects to help log per request/context, as they identify the request/session.

Search support for Google App Engine Go runtime

There is experimental search support for Python and Java, and eventually Go may be supported too. Until then, how can I do a minimal search on my records?
Through the mailing list I got the idea of proxying search requests to a Python backend. I am still evaluating GAE and have not used backends yet. To set up search with a Python backend, do I have to send all requests (from Go) to the datastore through this backend? How practical is it, and what are the disadvantages? Is there a tutorial on this?
Thanks.
You could make a RESTful Python app with a few handlers, and your Go app would make urlfetches to it. You could then run the Python app as either a backend or a frontend (with a different version than your Go app). The first handler would receive a key as input, fetch that entity from the datastore, and store the relevant info in the search index. The second handler would receive a query, run a search against the index, and return the results. You would also need a handler for removing documents from the search index, plus any other operations you want.
Instead of having the first handler receive a key and fetch from the datastore, you could also just send it the entity data directly in the fetch.
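A minimal sketch of what those two Python handlers might look like, using webapp2 and the Python Search API (the URL paths, index name, and field names here are made up for illustration):

    import json
    import webapp2
    from google.appengine.api import search

    INDEX_NAME = 'records'  # hypothetical index name

    class IndexHandler(webapp2.RequestHandler):
        """Receives entity data from the Go app and stores it in the index."""
        def post(self):
            data = json.loads(self.request.body)
            doc = search.Document(
                doc_id=data['key'],
                fields=[search.TextField(name='title', value=data['title'])])
            search.Index(name=INDEX_NAME).put(doc)

    class SearchHandler(webapp2.RequestHandler):
        """Runs a query against the index and returns matching doc ids."""
        def get(self):
            results = search.Index(name=INDEX_NAME).search(self.request.get('q'))
            self.response.headers['Content-Type'] = 'application/json'
            self.response.write(json.dumps([doc.doc_id for doc in results]))

    app = webapp2.WSGIApplication([('/index', IndexHandler),
                                   ('/search', SearchHandler)])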
You could also use a service like IndexDen for now (especially if you don't have many entities to index):
http://indexden.com/
When making urlfetches, keep in mind that the quotas currently apply even when requesting URLs from your own app. There are two issues in the tracker requesting that these quotas be removed or increased when communicating with your own apps, but there is no guarantee that will happen. See here:
http://code.google.com/p/googleappengine/issues/detail?id=8051
http://code.google.com/p/googleappengine/issues/detail?id=8052
There is full-text search coming for the Go runtime very soon.

Google AJAX Search API call "quota exceeded" on Google App Engine

I tried to use the Web Search API (http://code.google.com/intl/de-DE/apis/websearch/docs) with Java. It works perfectly in Eclipse on my local machine.
When I try to do the same from Google App Engine, the reply is: {"responseData": null, "responseDetails": "Quota Exceeded. Please see http://code.google.com/apis/websearch", "responseStatus": 403}
I do not understand. Isn't it possible to call the search API from GAE apps?
If you look at the very top of that page you linked to, they note that the API has been deprecated and the number of search queries you can make is limited.
However, if you absolutely NEED to use that API instead of the Custom Search API as Google suggests, there are a few troubleshooting steps you can take:
1) Check that your API key is unique to the project, and that the limited number of queries you're allowed isn't being consumed by some other application.
2) Google does (did?) hostname filtering so that one computer doesn't use up all the API requests. You may be able to move the queries to JavaScript instead of Java -- essentially moving the request from the server to the client.
3) Try using a named backend (Java Backends)

Multiple data sources: data storage and retrieval approaches

I am building a website (probably in WordPress) which takes data from a number of different sources for display on various pages.
The sources:
A Twitter feed
A Flickr feed
A database on a remote server
A local database
From each source I will mainly retrieve
A short string, e.g. the tweet text from Twitter, or the title of a blog page from the local database
An associated image, if one exists
A link identifying the content at its source
My question is:
What is the best way to a) store the data and b) retrieve the data?
My thinking is:
i) Write a script that runs every two minutes or so on a cron job
ii) The script retrieves data from all sources and stores it in the local database
iii) Application code then retrieves all data from the one source, the local database
This should make application code easier to manage - we only ever draw data from one source in application code - and that's the main appeal. But is it overkill for a relatively small site?
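For illustration, the script I have in mind would look something like this (sketched in Python, though on a WordPress stack it would likely be PHP; every name and URL below is a placeholder):

    # Rough sketch of the cron-job aggregator: fetch from each source,
    # normalize everything to (source, text, image, link), and store it
    # in one local table. All names and URLs are placeholders.
    import json
    import sqlite3
    import urllib2

    def fetch_twitter():
        # Placeholder: in practice this would call the Twitter API and map
        # each tweet to the common (source, text, image, link) shape.
        data = json.load(urllib2.urlopen('https://example.com/twitter.json'))
        return [('twitter', t['text'], None, t['link']) for t in data]

    def store(items):
        conn = sqlite3.connect('feeds.db')
        conn.execute("""CREATE TABLE IF NOT EXISTS items
                        (source TEXT, text TEXT, image TEXT, link TEXT UNIQUE)""")
        conn.executemany("INSERT OR IGNORE INTO items VALUES (?, ?, ?, ?)", items)
        conn.commit()
        conn.close()

    if __name__ == '__main__':
        store(fetch_twitter())  # repeat for Flickr, the remote DB, etc.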
I would recommend putting the Twitter and Flickr feeds in JavaScript. Both Flickr and Twitter have REST APIs. By putting this on the client you free up resources on your server, reduce complexity, spare your users from waiting around for your server to fetch the data, and let Twitter and Flickr cache the data for you.
This assumes you know JavaScript. Once you get past its quirks, it's not a bad language. Give jQuery a try; there are plugins like the jQuery Twitter plugin and the Flickery jQuery plugin. There are others, those are just the first results from Google.
As for your data on the local server and the remote server, that will depend more on the data being fetched. I would go with whichever approach you can develop fastest that gives acceptable results. If that means making a REST call from server to server, then go for it. If the remote server is slow to respond, I would go with the AJAX REST API method.
And for the local database, you are going to have to write server-side code for that, so I would do it inside the WordPress "framework".
Hope that helps.
