using golang logging library, where does one see localhost logs? - google-app-engine

Following the golang library instructions: if you write logs with the client library, where can one see those logs when running the server locally during development (e.g. via go run main.go)?
In my case (not sure if it's relevant) I'm using the library as part of golang logic on App Engine, and even the relevant-looking instructions on "viewing logs" in those docs don't mention local development explicitly. Is that because it (running gcloud app logs tail and seeing local server logs) should "just work", or because there's no way to see logs from a local logging-SDK interaction?

It's a good question. The Cloud Logging libraries do appear bound to Google's Cloud Logging service, but for local development (your question), and because loose coupling is a generally good principle, these libraries really ought to be pluggable. Why shouldn't a service running on e.g. GCP route its logs to e.g. AWS?
With OpenTelemetry (née OpenCensus), Google (and others) promote the ability to decouple metric and trace production from the services that consume them, and logs aren't distinctly different.
Logrus, a popular logging library in Go, supports pluggable logging via Hooks, and an old (!) Stackdriver Logging implementation exists; it should be straightforward to upgrade it to the current API version.
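To make that pluggability concrete, here is a minimal sketch of such a Logrus hook forwarding entries to Cloud Logging. It's an outline of the Hooks mechanism, not a drop-in implementation: the "app" log name is made up, the severity mapping is collapsed to Info, and error handling is pared down.

    package main

    import (
        "context"
        "os"

        "cloud.google.com/go/logging"
        "github.com/sirupsen/logrus"
    )

    // CloudLoggingHook forwards Logrus entries to a Cloud Logging logger.
    type CloudLoggingHook struct {
        logger *logging.Logger
    }

    // Levels reports which severities this hook handles (all of them here).
    func (h *CloudLoggingHook) Levels() []logrus.Level {
        return logrus.AllLevels
    }

    // Fire is called by Logrus once per entry; it forwards the message and
    // structured fields as a Cloud Logging entry.
    func (h *CloudLoggingHook) Fire(e *logrus.Entry) error {
        h.logger.Log(logging.Entry{
            Payload:  map[string]interface{}{"message": e.Message, "fields": e.Data},
            Severity: logging.Info, // a real hook would map e.Level to a Severity
        })
        return nil
    }

    func main() {
        client, err := logging.NewClient(context.Background(), os.Getenv("GOOGLE_CLOUD_PROJECT"))
        if err != nil {
            logrus.Fatal(err)
        }
        defer client.Close()
        logrus.AddHook(&CloudLoggingHook{logger: client.Logger("app")})
        logrus.WithField("mode", "demo").Info("hello from Logrus via Cloud Logging")
    }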
Meantime, I think your question would benefit from being posted to Google's public issue tracker for Stackdriver (sic) Logging (link), and I'm going to ask someone who's very familiar with Cloud Logging, as she may have some insight into this for us.
Update
I emailed some former colleagues at Google and learned that OpenTelemetry will eventually encompass logging. This is mentioned briefly on the project's About page.

tl;dr Tentatively answering myself: that's not supported. Instead, one has to conditionally swap in calls to a regular logger when the environment (e.g. an empty GAE_INSTANCE environment variable) indicates you're on localhost; see the sketch below.
Walking through the code under the NewClient(...) call in the logging package, I ended up at the spot where the upstream API is actually called (note the RPC context used by the very last turtle). I never saw any logic along the way that seemed to switch to something for local development, so I suspect there really is no local emulation or capturing.
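A minimal sketch of that conditional swap, assuming the App Engine-provided GAE_INSTANCE and GOOGLE_CLOUD_PROJECT environment variables and a made-up "app" log name:

    package main

    import (
        "context"
        "log"
        "os"

        "cloud.google.com/go/logging"
    )

    // logf abstracts both backends behind one Printf-style signature.
    type logf func(format string, v ...interface{})

    // newLogger returns a Cloud Logging-backed logger in production and the
    // standard library logger locally. App Engine sets GAE_INSTANCE, so an
    // empty value implies local development (e.g. go run main.go).
    func newLogger(ctx context.Context) (logf, func() error, error) {
        if os.Getenv("GAE_INSTANCE") == "" {
            // Local development: plain stderr logging, nothing to flush.
            return log.Printf, func() error { return nil }, nil
        }
        client, err := logging.NewClient(ctx, os.Getenv("GOOGLE_CLOUD_PROJECT"))
        if err != nil {
            return nil, nil, err
        }
        lg := client.Logger("app").StandardLogger(logging.Info)
        return lg.Printf, client.Close, nil
    }

    func main() {
        lf, closeFn, err := newLogger(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        defer closeFn()
        lf("request handled in %s", "42ms")
    }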
EDIT: See DazWilkin's helpful answer below for more context

Related

Problems with Push Queues on Google App Engine Managed VM / Flexible Environment

I am having trouble using Push Queues on Google App Engine's Flexible Environment (formerly named the Managed VM Environment). I am receiving numerous 404 Instance Unavailable errors (see picture below).
After a bit of sleuthing, I believe these errors may occur because I add a task to a task queue and then deploy a new version of the Flexible VM instance. The task I previously pushed is locked to the older instance and can no longer run. Is this how task queues work with the Flexible VM? If so, how does one use push task queues with the Flexible VM?
I was 90% done migrating to the flexible env when I came across this same problem. After extensive research, I concluded there are three options:
REST API (experimental)
Use the beta REST API for task queues (this, like all other Google APIs called from the flexible env, is external, so you need to handle auth appropriately).
REST API reference: https://cloud.google.com/appengine/docs/python/taskqueue/rest/
Note: this is external and experimental. You can find e.g. a Java SDK without any meaningful documentation here: https://developers.google.com/api-client-library/java/apis/ (current version: https://developers.google.com/api-client-library/java/apis/taskqueue/v1beta2)
Compat runtime
Build your own flexible environment based off a -compat runtime. This offers the old App Engine API in a container suitable for the flexible env:
https://cloud.google.com/appengine/docs/flexible/custom-runtimes/build (look for images with "YES" in the last column)
e.g.: https://cloud.google.com/appengine/docs/flexible/java/dev-jetty9-and-apis
https://cloud.google.com/appengine/docs/flexible/java/migrating-an-existing-app
Note: I spent two weeks in blistered frustration, pleading with every god almighty to help me get this to work, following container rabbit holes into the depths of Lucifer's soul and across unexplored dimensions. I eventually had to give in. I just can't get this to work to a satisfying degree.
Proxy service
Kind of a hacky alternative, but it gets the job done: create a very thin standard environment wrapper service which proxies tasks onto / off your queue, then pass them to your own app however you want (a sketch follows below). ¯\_(ツ)_/¯
Downside is you are now spinning up extra instances and burning extra minutes.
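For illustration, a minimal sketch of such a wrapper in Go on the (old) standard environment, using the native taskqueue package; the /enqueue and /worker paths and the "default" queue name are hypothetical:

    package main

    import (
        "net/http"

        "google.golang.org/appengine"
        "google.golang.org/appengine/taskqueue"
    )

    func main() {
        http.HandleFunc("/enqueue", enqueueHandler)
        appengine.Main() // hands the serving loop to App Engine
    }

    // enqueueHandler re-posts the incoming form payload as a push task.
    func enqueueHandler(w http.ResponseWriter, r *http.Request) {
        ctx := appengine.NewContext(r)
        if err := r.ParseForm(); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        t := taskqueue.NewPOSTTask("/worker", r.PostForm)
        if _, err := taskqueue.Add(ctx, t, "default"); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusAccepted)
    }

The flexible-env app then POSTs work to /enqueue on this service, and the push queue delivers it to whatever handler the /worker route points at.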
I ended up with a variation of this, where I'm using a proxy service in the standard env but ported my eventual task handler to AWS Lambda (so it's completely off GAE). It's a different disaster, but a more manageable one.
Good luck!
I'm the tech lead and manager for this product.
There are two distinct answers to your question.
For the first: it seems like you have a version routing issue. As you say, tasks cannot run against a VM because you launched a new version. By default, tasks are assigned to run on the version from which they were enqueued, to avoid version mismatches. You should be able to override this by re-configuring the target in your queue.yaml (or queue.xml), as sketched below; documentation for that can be found here. You might also need to look at your ...
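For example, a sketch of what that target override can look like in queue.yaml; the queue and service names here are hypothetical:

    queue:
    - name: my-push-queue
      rate: 5/s
      # Route this queue's tasks to the "worker" service, independent of
      # which version is currently the default.
      target: worker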
From a broader perspective, building a migration path away from standard/MVM-only support for task queues is currently our highest priority.
The replacement is Cloud Tasks, which exposes the same interface but can be used fully independently of App Engine. It exists in the same universe as App Engine Task Queues, so you will be able to add tasks to existing queues (both push and pull). It is currently available in closed alpha. You can sign up to join the alpha here.
We strongly recommend against writing new code against the REST API. It is unsupported, and the Cloud Tasks alpha is already substantially more feature-complete.
I'm upvoting hraban's answer (he did wrestle with the devil after all) but providing an additional answer here.
Keep in mind that the Flexible Environment (Managed VMs) is still just a Compute Engine instance, with Google doing a good job of extracting features from App Engine to make them reachable in a transparent manner. Task queues didn't quite make it. Keep a sharp eye on the cloud library; that's the mechanism by which the Datastore becomes usable (for Java, go to http://googlecloudplatform.github.io/google-cloud-java/0.3.0/index.html). If you go to that link you can also select other languages. I have it on official word that task queues are still on the roadmap (but no ETA).
As of now you can't use the REST API to enqueue onto push queues. The way I decided to tackle this problem was to use the REST API with a pull queue to put tasks in. Then I poll that queue inside an App Engine service (i.e. module) and move each task into a push queue. Why go to all that trouble? Because I need scheduled execution, which is a feature that only task queues give you on App Engine. So I package my task in an envelope, then unpack and re-push it into a task queue (sketched below). Sounds crazy? It's been working reliably for me. Don't be scared off by the fact that the REST API is "alpha".
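A minimal sketch of that lease-and-repackage loop, using the native taskqueue package from inside the App Engine service; the queue names and /worker path are made up:

    package envelope

    import (
        "context"

        "google.golang.org/appengine/taskqueue"
    )

    // repackage leases envelopes from the pull queue that external clients
    // filled via the REST API, and re-pushes each payload onto a push queue.
    func repackage(ctx context.Context) error {
        // Lease up to 10 tasks for 60 seconds.
        tasks, err := taskqueue.Lease(ctx, 10, "inbox-pull", 60)
        if err != nil {
            return err
        }
        for _, t := range tasks {
            push := &taskqueue.Task{
                Path:    "/worker",
                Payload: t.Payload,
                Method:  "POST",
            }
            if _, err := taskqueue.Add(ctx, push, "work-push"); err != nil {
                return err
            }
            // Delete the envelope only after the push task is safely enqueued.
            if err := taskqueue.Delete(ctx, t, "inbox-pull"); err != nil {
                return err
            }
        }
        return nil
    }

Call repackage from a cron- or request-triggered handler so the polling and lease renewal happen on the App Engine side.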
I will say: if you're starting something new, take a good look at the Pub/Sub API.

A plea for a basic Notebook example getting data into and out of Google Cloud Datalab

I have started trying to use Google Cloud Datalab. While I understand it is a beta product, I find the docs very frustrating, to say the least.
The questions here and the lack of responses, as well as the lack of new revisions or docs over the several months the project has been available, make me wonder whether there is any commitment to the product.
A good beginning would be a notebook that shows data ingestion from external sources into both the Datastore system and the BigQuery system; that is a common use case. I'd like to use my own data, and it would be great to have a notebook to ingest it. It seems that should be doable without huge effort? It would also get me (and others) out of this mess of trying to link the various terse docs from various products and workspaces up and working together.
In addition, a better explanation of the GitHub connection process would help (prior question).
For BigQuery, see here: https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/tutorials/BigQuery/Importing%20and%20Exporting%20Data.ipynb
For GCS, see here: https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/tutorials/Storage/Storage%20Commands.ipynb
Those are the only two storage options currently supported in Datalab (which should not be used, in any event, for large-scale data transfers; these are for small-scale transfers that can fit in memory in the Datalab VM).
For Git support, see https://github.com/GoogleCloudPlatform/datalab/blob/master/content/datalab/intro/Using%20Datalab%20-%20Managing%20Notebooks%20with%20Git.ipynb. Note that this has nothing to do with GitHub, however.
As for the low level of activity recently, that is because we have been heads down getting ready for GCP Next (which happens this coming week). Once that is over we should be able to migrate a number of new features over to Datalab and get a new public release out soon.
Datalab isn't running on your local machine; just the presentation part is in your browser. So if you mean the browser client machine, that wouldn't be a good solution: you'd be moving data from the local machine to the VM running the Datalab Python code (which has limited storage space), and then moving it again to the real destination. Instead, you should use the cloud console or (preferably) the gcloud command-line tools on your local machine for this.
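For instance, a sketch using the Cloud SDK's gsutil and bq commands from your own machine; the bucket, dataset, and schema names are made up:

    # Copy a local file into Cloud Storage:
    gsutil cp ./data.csv gs://my-bucket/data.csv

    # Load it into a BigQuery table, supplying an inline schema:
    bq load mydataset.mytable gs://my-bucket/data.csv name:STRING,value:INTEGER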

Google Cloud Endpoints stability?

I am using this link to build a simple chat application using GCM, and I found this great feature "Google Cloud Endpoints" which makes things easier. But I am afraid to depend on it, as I noticed it is still experimental. Can I trust it, or should I use Java Servlets instead?
It is true that the tag 'experimental' is a bit scary. If you are concerned, you could consider holding back a bit until Google I/O 2013, in the middle of May. They often make announcements and introduce new technologies there.
They first announced Endpoints at last year's Google I/O (in July), and if there are any significant changes pending for Endpoints, they would likely announce them at this year's.
If you do start using Endpoints, just for Android and without user authentication, I don't think it would be too hard to revert to using a Servlet instead if you had to (i.e. due to a change in terms that was off-putting). The user authentication stuff would be harder to replace, IMO.
As far as I have used Google Cloud Endpoints, they work perfectly. Furthermore, many interesting features are already implemented, such as integration with the Google Eclipse Plugin and testing through the Google APIs Explorer, even on localhost using the Development Server.
I understand they're still experimental, maybe because they're a new technology that hasn't been thoroughly tested yet and is subject to updates. Anyway, I've not found significant bugs so far, and you should be able to reuse your endpoints with the successive versions that will exist. It doesn't seem to be something that will disappear in the near future...
This is an older question, but for future reference I want to say that my short experience was not so pleasant.
I tried "Mobile Backend App". In the beginning, everything worked fine, but after a few days (without changing anything) I received:
GoogleJsonResponseException 404 Not Found
I saw other posts on Stack Overflow and managed to solve it by creating another project. I changed the code and it still worked. But then I had problems again. I played a bit with the two projects, redeployed and changed the settings (tips found in other posts), and it worked. Now it is no longer working, no matter what I do.
I hope that the problem is specific to this project, but nevertheless it is frustrating.

How to interact with a locally running datastore service in appengine-magic?

I'm using appengine-magic to set up a web application, more or less as described at http://www.digitalbricklayers.com/2012/03/geotasklist-in-jquery-mobile-and.html. The example works on my local machine; locations and tasks are added to a local datastore, etc.
My question is whether it is possible to interact with the datastore from within a REPL, e.g. call (ds/save! ...) etc. during interactive development. I ask because when I try, I get:
NullPointerException No API environment is registered for this thread.
com.google.appengine.api.datastore.DatastoreApiHelper.getCurrentAppId (DatastoreApiHelper.java:108)
I get this error regardless of whether I use an Eclipse + Counterclockwise setup or an Emacs + SLIME setup.
Thanks,
Joachim
There's a bunch of ways to do this.
The easiest way is to go to the admin console (http://localhost:<port>/_ah/admin) and click on the "Interactive Console".
I use django-nonrel, which comes with a command to launch an interactive shell (manage.py shell). If you're not using django-nonrel, though, getting it set up is somewhat involved. I suspect most of what's necessary is found in the setup_env() function in django-nonrel: https://github.com/django-nonrel/djangoappengine/blob/develop/djangoappengine/boot.py
Getting it all to work is a pain; good luck.
The solution I use 99% of the time is to use pdb and force the interpreter to break at a certain point in my app where I need to do some debugging. See this for instructions: http://eatdev.tumblr.com/post/12076034867/using-pdb-on-app-engine
appengine-magic lets you use App Engine services (like the datastore) as long as the application is running; see https://github.com/gcv/appengine-magic#app-engine-services. As long as you ae/start your application, it should work.

Configuring logging for ActiveMQ 5.5 on Tomcat 6 with web app using SLF4j and logback

I would like my web app to log using SLF4J and Logback. However, I am using ActiveMQ, which requires that some of its jars go in /usr/share/tomcat6/lib (this is because the queues are defined outside of the web app, so the classes to support them must be at container level).
ActiveMQ 5.5+ requires the SLF4J API, so that jar has to go in too. Because SLF4J is now starting at container level, it needs a logging backend added or it will simply no-op. Thus, logback-core and logback-classic go in too.
After quite some frustration I got this working well enough that I can tidy it up shortly. I needed to configure Logback to use a JNDI lookup to get the context; it can then look up logback-kenobi.xml in my web app and keep a separate configuration there.
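Concretely, the wiring I ended up with looks roughly like the following (from memory, so treat it as a sketch of Logback's ContextJNDISelector mechanism with a made-up context name, not a tested config). The selector is enabled with a system property (e.g. -Dlogback.ContextSelector=JNDI), and each web app declares its context name in web.xml:

    <!-- web.xml: name this web app's logging context; with the JNDI -->
    <!-- selector active, Logback then searches the classpath for -->
    <!-- logback-kenobi.xml -->
    <env-entry>
        <env-entry-name>logback/context-name</env-entry-name>
        <env-entry-type>java.lang.String</env-entry-type>
        <env-entry-value>kenobi</env-entry-value>
    </env-entry>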
However, I'm wondering if this is the best way to do this. For one, the context handling appears not to support the Groovy format. I did have a logback.groovy in my web app that logged to console when developing locally (which means Eclipse WTP works nicely) but logged to file and to Splunk Storm everywhere else. I want to do something similar with this setup, but I'm not sure whether to do that by overwriting logback-kenobi.xml or by some other method.
Note that I don't currently need Tomcat itself to log with SLF4J, although I am planning to do that. Nor do I really need ActiveMQ to log with SLF4J, but I did need it to stop spewing debug messages every 30s as it was doing. I am aware of tomcat-slf4j-logback, but I don't believe it is directly useful, as it is ActiveMQ's logging that is the issue.
However, I'm wondering if this is the best way to do this.
Best is an opinion, working is a fact.
