I have a request in App Engine that takes a while to complete (many seconds). Is there a way to detect whether the user or some network problem has already aborted the request? That would let me save the server load of continuing to generate a result that won't go anywhere anyway.
I tried the following in dev mode, but neither worked (I haven't checked yet whether it behaves differently in production):
Checking whether writing to resp.getOutputStream() completes without throwing an IOException
Checking whether an interrupt was sent to the servlet thread
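Concretely, the two checks I tried look roughly like this (a simplified servlet sketch; computeNextChunk() is just a stand-in for my actual result generation):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LongRunningServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            for (int chunk = 0; chunk < 100; chunk++) {
                // Check 1: does flushing the output stream throw an IOException
                // once the client has gone away?
                try {
                    resp.getOutputStream().flush();
                } catch (IOException e) {
                    return; // client presumably aborted, stop generating
                }
                // Check 2: was the servlet thread interrupted?
                if (Thread.currentThread().isInterrupted()) {
                    return;
                }
                computeNextChunk(chunk); // the expensive work
            }
        }

        private void computeNextChunk(int chunk) {
            // stand-in for the actual result generation
        }
    }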
Thanks, Markus
PS: I am really specifically interested in this question, not in ways to restructure my app to make the request faster or prevent aborts or other things.
I don't know if that is possible at all on App Engine; App Engine doesn't give you access to a request while it is still in progress. The response is only sent to the client after the handler/servlet has returned.
No, there is no way to detect this from inside the app. I wouldn't worry about it.
Way late but this may be useful. In Golang you can detect interrupts using the Context package.
Here is a useful video of Francesc Campoy explaining it:
https://www.youtube.com/watch?v=LSzR0VEraWw
I am working on an Apache Flink project in which I need to call multiple APIs to achieve my goal. The result of each API call is required for the next one to work. Also, since I am doing this on a KeyedStream, the same flow applies to multiple records at once.
The diagram below illustrates the scenario:
                /------API1---API2----
KeyedStream ----|------API1---API2----
                \------API1---API2----
While doing all this, I get an exception saying "Buffer pool destroyed" after the job has been running for some time. Is it related to the API calls? Do I need to use an asynchronous function? Please suggest. Thanks in advance.
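For reference, the asynchronous version I am asking about would look roughly like the following minimal sketch, using Flink's async I/O API (ResultFuture / AsyncDataStream, available from Flink 1.5 onwards). Here callApi1 and callApi2 are placeholders for my actual API calls:

    import java.util.Collections;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    public class ChainedApiAsyncFunction extends RichAsyncFunction<String, String> {

        @Override
        public void asyncInvoke(String input, ResultFuture<String> resultFuture) {
            // Call API1, feed its result into API2, then emit the final result,
            // without blocking the task thread while waiting on the APIs.
            CompletableFuture
                    .supplyAsync(() -> callApi1(input))   // placeholder for the first API call
                    .thenApply(this::callApi2)            // second call depends on the first result
                    .whenComplete((result, error) -> {
                        if (error != null) {
                            resultFuture.completeExceptionally(error);
                        } else {
                            resultFuture.complete(Collections.singleton(result));
                        }
                    });
        }

        private String callApi1(String input) { return input; }        // placeholder
        private String callApi2(String api1Result) { return api1Result; } // placeholder

        // Wiring, instead of a synchronous map on the keyed stream:
        public static DataStream<String> apply(DataStream<String> keyedStream) {
            return AsyncDataStream.unorderedWait(
                    keyedStream, new ChainedApiAsyncFunction(), 30_000, TimeUnit.MILLISECONDS, 100);
        }
    }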
A few things are typically needed to help answer questions about Flink:
What version are you running?
How are you running it (from IDE, YARN cluster, stand-alone, etc)?
What's the complete stack trace for the exception?
(often) Can you share your code?
But at a high level, the "buffer pool destroyed" message you mentioned is not the root cause of failover, it's just a byproduct of Flink trying to kill off the workflow after an error has happened. So you need to dig deeper in the logs (typically Task Manager logs are where you'd look first).
Some requests silently fail in my python app, intermittently and unpredictably. The hallmarks of the failure are:
Request returns a 200, so the client doesn't know there's a problem.
Request does NOT successfully execute on the server.
No logging statements are recorded for the request.
Below is an example from my logs of a bunch of requests which are each supposed to write an entity to the datastore. You can see for the lower, successful request, a blue 'i' is present, indicating that info level logs were recorded. When I examine the datastore, an entity was successfully written for this request.
However, for the failed request, you can see there is just a white box, and there are no logging statements present at all. While the server returned a 200, no entity was written to the datastore for this request.
Has anyone encountered something like this before on App Engine? Any ideas on how to debug it? I've seen it in multiple different apps myself, but I've never been able to figure it out.
EDIT
To clarify, the main problem here is that code doesn't execute, as measured by the failure to write an entity. The spurious 200 and the lack of logging are associated symptoms.
From a comment originally, but seems to be the resolution path for this issue:
Given that there are no log statements at all in that log line, and you appear to unpack the arguments and log them as soon as you enter the handler, this starts to look like an infrastructure/platform issue.
In such a case, it's best to open an issue in the public issue tracker, tagged "Type-Production", including your app's app id, a timeframe, and as much information as possible about your app and the request handler involved; platform support will pick up the issue in the course of triage.
That said, it's worth examining the handler to make absolutely sure there's no way you could be exiting from the handler and sending a 200 without logging anything or seeing an exception. It all depends on what the code handling the request is capable of, what stack of libraries it's built upon, etc.
I'm building a vCloud client application via the REST APIs; however, the documentation is inconsistent and in some cases just wrong and misleading.
All I really need is a solid debug tool or even a log file. Any recommendations?
You already mentioned you have access to the message stream, which is one of the first steps. Typically if I'm using the Apache HttpClient/HttpComponents I'll go increase the log level so it logs the full HTTP requests.
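For example, with HttpClient 4.x and the commons-logging SimpleLog backend, one rough way to turn on wire and header logging is the following sketch (if you route commons-logging to log4j or logback instead, set the equivalent logger levels in that backend's configuration):

    public class HttpWireLogging {
        // Call this once, before any HttpClient classes are used, so the
        // loggers pick up the SimpleLog configuration below.
        public static void enable() {
            System.setProperty("org.apache.commons.logging.Log",
                    "org.apache.commons.logging.impl.SimpleLog");
            System.setProperty("org.apache.commons.logging.simplelog.showdatetime", "true");
            // Full request/response bodies on the wire:
            System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.wire", "DEBUG");
            // Just the HTTP headers:
            System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http.headers", "DEBUG");
            // General client internals:
            System.setProperty("org.apache.commons.logging.simplelog.log.org.apache.http", "DEBUG");
        }
    }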
My next step is usually to cheat and to log into vCD as a system administrator and see what's going on. When vCD was designed there was a very deliberate decision to not reveal infrastructure level problems to tenants of the cloud (normal org users or org admins), as that would break the cloud abstraction. Sadly, that means as an org-level user you're often going to get "contact your cloud admin" error responses. We are aware that this isn't ideal and try to find ways to make it better when we can (IIRC the new 5.5 release that was announced last month does have some improvements in that area).
The last step is usually to cheat even more and to look at the server side logs (vcloud-container-debug.log, specifically). That usually gives me a better clue as to what went wrong. Of course, you may be unlucky and not have access to the vCD cell machine.
My workaround in the latter two cases is to try the operations via the vCD UI and see (1) if they work as expected and (2) if they do, to check the system state via the API and see if I'm sending the wrong request payloads, etc. because the doc or schema reference may not have been clear enough.
Regarding the documentation, please use the feedback links found on individual doc pages to let us know! Our technical writer reviews all the feedback and tries to address it.
My final suggestion is that you might want to post API questions to the vCloud API community forum VMware has. There are a number of experts (both users and VMware employees) that monitor it and respond to questions.
I'm developing an iOS app that uses GAE as a backend. The only sensitive data my app will transfer to GAE is login details; anything else that is transferred is not sensitive. I intend to use SSL for everything, just because that seems most sensible to me. Is there any reason not to? Also, I want some way of ensuring that my app is the only way my GAE system can be accessed (i.e. nobody accessing it from the web or spoofing a client to look like mine). How do I go about this? I read something about public and private keys but wasn't exactly sure whether it was relevant.
Any help is much appreciated
Thanks!
Short answer to your last question: you can't. There is no way to enforce that your application is only accessed through your iOS app.
You can make it as hard as possible, but you can't guarantee it. The correct way is not to rely on your iOS application to validate the data it sends, but to do this verification in your GAE app (if needed: again).
SSL is a good thing anyway - if done correctly (see: http://www2.dcsec.uni-hannover.de/files/android/p50-fahl.pdf )
But if the only sensitive data sent is a password, you could consider using something like SRP (start reading here: http://en.wikipedia.org/wiki/Secure_Remote_Password_protocol ).
Hopefully Moishe sees this: in development mode, the Channel API client (JavaScript) resorts to polling... and uses a very fast polling rate. After poking around I found that if I set
goog.appengine.Socket.POLLING_TIMEOUT_MS = interval;
I can control the polling rate. What I'm wondering is:
How do I know if/when the client is going to go into "poll mode" in production?
Is it possible to force the client into "poll mode"?
What happens if I reach the channel quota for my app? Will the /_ah/channel/ endpoint just stop working altogether, or will it resort to polling?
-Thank you
Answers:
The client will never go into polling mode in production. The implementation is completely different in prod.
See above
The call to create_channel() will fail and you won't be able to get any more tokens. Existing tokens (and hence channels) will work until they time out.
Hope that helps!
-Moishe