My application generates many, many log messages. I would like to download only messages in a specific time interval. I looked around and LogQuery seems to be a good choice.
It seems that LogQuery, as specified here:
https://developers.google.com/appengine/docs/java/javadoc/com/google/appengine/api/log/LogQuery.Builder
lets one specify a start and end time in milliseconds.
However, there are no such methods withStartTimeMillis or withEndTimeMillis in the GAE library that I downloaded from the web. I'm using GAE SDK 1.7.1. There are only withStartTimeUsec and withEndTimeUsec, which deal in microseconds.
What's amiss here?
LogQuery.Builder has both methods: withStartTimeMillis(long startTimeMillis) and withStartTimeUsec(long startTimeUsec).
Also, the current SDK version is 1.8.1, so you should upgrade to that before you start troubleshooting.
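For reference, a minimal sketch of fetching logs over a millisecond time window, assuming an SDK version where the millisecond builder methods exist (the class name LogWindowDump is just illustrative):

    import com.google.appengine.api.log.AppLogLine;
    import com.google.appengine.api.log.LogQuery;
    import com.google.appengine.api.log.LogServiceFactory;
    import com.google.appengine.api.log.RequestLogs;

    public class LogWindowDump {
        /** Prints app log lines for requests in the given interval (bounds in milliseconds). */
        public static void dump(long startMillis, long endMillis) {
            LogQuery query = LogQuery.Builder
                    .withStartTimeMillis(startMillis)  // start of the window, ms since epoch
                    .endTimeMillis(endMillis)          // end of the window, ms since epoch
                    .includeAppLogs(true);             // also return application log lines

            for (RequestLogs record : LogServiceFactory.getLogService().fetch(query)) {
                for (AppLogLine line : record.getAppLogLines()) {
                    System.out.println(line.getTimeUsec() + " " + line.getLogMessage());
                }
            }
        }
    }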
When deploying using gcloud app deploy I get the following error:
Timed out waiting for the app infrastructure to become healthy
I contacted GCP Support and they told me the same thing I had read in other threads:
the error you are referring to may be related to the Compute Engine “In-Use IP Addresses” Quota limit. You can view your current quota limit information by accessing from your GCP menu “IAM & Admin > Quotas”.
I checked the "In-Use IP Addresses" and it doesn't seem like I have a problem with quotas:
Looking for the error, I found that I get an error in the Activity tab when deploying. Apparently, when App Engine tries to delete a VM, the process gets stuck in a loop trying to delete it. You can see the error:
(I intentionally covered the project ID)
Edit: It seems like the problem is only with southamerica-east1. I created a new project in southamerica-east1 but I kept getting the same error, so then I created a new project with App Engine in us-west2 and it worked like a charm (I used the same application and app.yaml). I wonder if the problem is GCP's southamerica-east1 or an unknown misconfiguration on my side.
This is probably related to this issue: https://issuetracker.google.com/u/2/issues/73583699. It does mention the "in-use IP Address" quota, but many people have posted in recent days (Nov 2018) indicating that they are seeing the error and have verified that they have not hit their quota.
Unfortunately, no solution has been posted and there hasn't been any recent comment from the devs.
First, our apologies that you’ve experienced this issue. Be assured that we are aware of the situation and that the team is working hard to resolve it.
Our goal is to make sure that there are available resources in all zones. This type of issue is rare. When a situation like this occurs, or is about to occur, our team is notified immediately and the issue is investigated.
We recommend deploying and balancing your workload across multiple zones or regions to reduce the likelihood of an outage. Please review our documentation which outlines how to build resilient and scalable architectures on Google Cloud Platform.
For the time being, you can try relaxing your requirements (e.g. requesting a smaller instance or one with fewer resources) or removing the external IP requirement.
If that proves not to be enough, you can try deploying your application to another region.
Again, we want to offer our sincerest apologies.
Thanks for understanding.
In the end we didn't find a real solution, so we moved all our services from Brazil (southamerica-east1) to us-west2. I'm not sure whether the region is the problem, but in us-west2 everything works like a charm.
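Since an App Engine app's region can't be changed once it has been set, the move boils down to creating a fresh project and deploying the same app.yaml there. Roughly what that looks like (the project ID is a placeholder):

    # Create a new project and put its App Engine app in us-west2
    gcloud projects create my-new-project-id
    gcloud app create --project=my-new-project-id --region=us-west2
    # Deploy the same application and app.yaml to the new project
    gcloud app deploy app.yaml --project=my-new-project-id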
I've been working on a small Google App Engine (standard environment) project that uses Cloud Endpoints v2. My code is largely based on the quickstart provided by Google.
Everything was working fine, but I re-deployed today after having not looked at it for a few weeks, and I'm getting the following error when I attempt to call the endpoint:
error: An error occured while connecting to the server: DNS lookup failed for URL: metadata.google.internal
This wasn't happening before. It seems to be happening when the endpoints package is being imported by Python.
My endpoint doesn't do anything fancy - I haven't changed the source from the sample EchoApi. The error ends up in the GCP Logging console whether I access the API through the API Explorer or via curl.
I don't get any errors during deployment.
Edit #1
Some further information:
The error originates from within Google's code that is included with the google-endpoints package which I've included in my lib folder, per
the documentation. Specifically, the error occurs on line 54 of google/api/control/wsgi.py.
Basically, it's making a request to metadata.google.internal using urllib2.
I'm guessing this address is only available from within the Google Cloud, and that for whatever reason, the instance that's hosting my app can't do a DNS lookup on it.
Edit #2
Dug a bit further.
It seems that the error originates in the google-endpoints-api-management package. Changes committed to that package on October 19th seem to have introduced additional platform reporting. metadata.google.internal is queried to check if the code is running within the Google Container Engine, then it blows up, because the metadata address doesn't resolve.
Here's the commit:
https://github.com/cloudendpoints/endpoints-management-python/commit/0a37d0e443091053ed03e455e06d3a0ae770999f
The google-endpoints package only requires google-endpoints-api-management >=1.0.0b1. On my end, things were working fine on version 1.0.0b2, but then I built a new lib folder, which brought down 1.0.0b5, and things went sideways. Required packages haven't changed between b2 and b5, so I'm thinking I may be able to just downgrade back to b2 for the time being. Haven't tried it yet.
Sent the Google Dev an email. Perhaps he'll chime in with further tips.
Edit: 2016-11-07
Tested downgrading the google-endpoints-api-management package to 1.0.0b2. Seems to be working, kludgy as the fix is. If you're using the lib folder, the following will scrub the newer error-prone wsgi.py file and put back the older one:
pip install -t lib google-endpoints-api-management==1.0.0b2 --upgrade
Not pretty, but it may just get you back in business.
On a side note, the Google engineer promptly replied saying that he would take a look at this issue soon. With luck, endpoints v2 will eventually come out of beta, 'cause I'm really liking it so far.
This will be fixed in an upcoming patch to the google-endpoints-api-management package (which will be 1.0.0b6). It will probably be released sometime on Monday, 11/6.
If you'd like to continue testing right away and this error is blocking you, you can go back to 1.0.0b4 until 1.0.0b6 comes out. Everything should still work as normal with that version.
Thanks for bringing this to our attention! We're doing our best to iron out all of these wrinkles now during beta in preparation for our first general release.
EDIT: 1.0.0b6 has been released and resolves this issue. Thanks for your patience during our beta phase!
(Posted solution on behalf of the OP).
Google has released version 1.0.0b6 of the google-endpoints-api-management package to address this issue. It solved the problem for me. For anyone who is encountering this problem, clean out your lib folder and re-install the google-endpoints package. This will bring down the new google-endpoints-api-management package with it.
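Concretely, "clean out your lib folder and re-install" amounts to something like the following from the project root (assuming the quickstart's vendored lib/ layout):

    # Rebuild the vendored dependencies from scratch;
    # this pulls in the fixed google-endpoints-api-management transitively
    rm -rf lib
    pip install -t lib google-endpoints --upgrade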
Thanks to Brad at Google for really quick action on this.
Is it possible to auto-deploy to App Engine when GitHub receives a new commit? I found a bunch of dead documentation links that suggest it is, but I still have no idea how to set it up. There are some mentions of creating a release pipeline, but I don't see any way to do that in the cloud console anymore.
I've got my code mirroring the GitHub repository already, however I can't figure out how to link this to the deployment pipeline or even how to create a new version. Am I missing something obvious? This seems like it should be incredibly straightforward to do...
The entry on the Google Cloud Platform blog that's supposed to explain how to achieve this is blank. It seems Google quietly killed this feature. Some people have complained that it suddenly stopped working. The only way to do this now is to use Continuous Delivery/Integration tools such as Travis CI.
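Whichever CI tool you pick, the deploy step it runs usually looks something like this (the key file and project ID are placeholders; the surrounding CI config depends on the tool):

    # Authenticate with a deploy-only service account, then deploy on each push
    gcloud auth activate-service-account --key-file=deploy-key.json
    gcloud app deploy app.yaml --project=my-project-id --quiet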
I have a couple of questions about the App Engine Map Reduce API. First of all there's a mapreduce package in the SDK, and there's a separate mapreduce bundle here:
https://developers.google.com/appengine/downloads
Which one should I be using? Should I be using the bundle, or is the documentation out of date and I should actually use the SDK version?
Second, I'd like to be able to run mapreduces on a non-default version to make sure that the requests from the mapreduce don't interfere with user requests.
What's the best way to do this? Can I start the pipeline with a task queue, and set the target version of that queue to be my non-default version?
We recommend using the open source version of Map Reduce for GAE at http://code.google.com/p/appengine-mapreduce/
The stale bundle link in the docs is a bug. That'll get cleaned up soon.
A few of our SDKs have bits of MapReduce (for historic reasons), but the open source version is the way to go for now.
As for using a separate version, this is kind of "it depends". If you're thinking of interference in terms of competition for the processor, that's not likely to be a noticeable issue. Depending on queue processing rates you've set up, more instances of your app will be spun up to handle mapping tasks as needed. I'd try some experiments first. Make sure you have a problem before you invest time and effort solving it.
A mapreduce can be started on a non-default version, and after it starts it will continue to run on that version automatically.
In my case I just deploy the code to a non-default version and trigger the mapreduce at version_id.app_id.appspot.com/path_to_start_a_job.
A cron job can also trigger the mapreduce on a non-default version without problems.
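As a sketch of the task-queue idea from the question: either give the queue a target: <version> entry in queue.yaml so every task in it runs on that version, or set the Host header on the task yourself, along these lines (queue name, version and app IDs are placeholders):

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;

    public class StartMapreduceTask {
        /** Enqueues the kick-off request, routing it to a non-default version via the Host header. */
        public static void enqueue() {
            Queue queue = QueueFactory.getQueue("mapreduce-start");      // hypothetical queue name
            queue.add(TaskOptions.Builder
                    .withUrl("/path_to_start_a_job")                     // the start handler mentioned above
                    .header("Host", "my-version.my-app-id.appspot.com")  // placeholder version and app IDs
                    .method(TaskOptions.Method.GET));
        }
    }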
When I try to run the GWT sample project it gives "RPC call failure. An error occurred while attempting to contact the server. Please check your network connection and try again". It was running fine before, but after I updated programs and libraries it gives that error. Which update causes this error, or is it something else?
App Engine version: 1.7.0
GWT version: 2.4.0
Eclipse version: 4.2 (Juno)
JDK version: 1.7.0_05
This may not be the problem you are facing, but it seems to be the most frequent one.
Let me predict - you last tried GWT four years ago and you hoarded the sample project hoping to pull it out one day.
And yesterday, you did pull it out. It kinda worked and then you decided to upgrade to the latest 2.4.0. (Actually the latest is 2.5.0-rc1).
Oops. The WEB-INF/lib of your project is still faithfully using a pre-2.2 gwt-servlet jar.
Nope, you can't do that. GWT-RPC data transfer format is version-unstable. Not guaranteed to be compatible from one version to the next.
Simple solution - recreate a new GWT project using the new Google plugin.
Then copy the src and web.xml of your project into the new project.
Or replace the gwt-servlet.jar with the latest. And if you use gwt-servlet-deps.jar, you would need to upgrade that too (but I doubt it, because if you did use gwt-servlet-deps.jar you wouldn't be asking this question).
But why would you keep the gwt sample from an old project?
The samples have remained much the same over the years. Why not use the samples from the new GWT 2.4.0 download? You don't have to keep your old copies; construct the projects for the samples afresh.
The GWT directory is found under Eclipse's plugins directory, under a long name like plugins/com.google.gwt.eclipse.sdkbundle_2.4.0.v201205091048-rel-r37/gwt-2.4.0, in which you will find the samples directory.