It seems that overnight the Google Drive API method files().patch( , ).execute() has stopped working and now throws an exception. The problem is also observable on Google's reference page https://developers.google.com/drive/v2/reference/files/patch if you "try it".
The exception response is:
500 Internal Server Error
cache-control: private, max-age=0
content-encoding: gzip
content-length: 162
content-type: application/json; charset=UTF-8
date: Thu, 22 Aug 2013 12:32:06 GMT
expires: Thu, 22 Aug 2013 12:32:06 GMT
server: GSE
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "conditionNotMet",
        "message": "Precondition Failed",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 500,
    "message": "Precondition Failed"
  }
}
This is really impacting our application.
We're experiencing this as well. A quick fix is to add this header: If-Match: * (ideally you should use the etag of the entity, but you might not have conflict-resolution logic in place right now).
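A minimal sketch of the workaround at the raw-HTTP level, using the JDK 11 java.net.http API (the file id and request body here are illustrative; in the official client libraries you would set the header on the request's headers object instead):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PatchWithIfMatch {
    // Build a Drive v2 PATCH request that bypasses the etag precondition
    // by sending If-Match: *. Ideally you would send the entity's real
    // etag here and treat a 412 response as an edit conflict.
    static HttpRequest buildRequest(String fileId, String jsonBody) {
        return HttpRequest.newBuilder(
                URI.create("https://www.googleapis.com/drive/v2/files/" + fileId))
            .header("If-Match", "*")                  // the quick fix
            .header("Content-Type", "application/json")
            .method("PATCH", HttpRequest.BodyPublishers.ofString(jsonBody))
            .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("FILE_ID", "{\"title\":\"new name\"}");
        System.out.println(req.headers().firstValue("If-Match").orElse(""));
        // prints: *
    }
}
```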
Google Developers, please give us a heads up if you're planning to deploy breaking changes.
Looks like sometime in the last 24 hours the Files.Patch issue has been put back to how it used to work prior to Aug 22.
We were also hitting this issue whenever we attempted to Patch the LastModified Timestamp of a file - see log file extract below:
20130826 13:30:45 - GoogleApiRequestException: retry number 0 for file patch of File/Folder Id 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
20130826 13:31:05 - ***** GoogleApiRequestException: Inner exception: 'System.Net.WebException: The remote server returned an error: (500) Internal Server Error.
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at Google.Apis.Requests.Request.InternalEndExecuteRequest(IAsyncResult asyncResult) in c:\code.google.com\google-api-dotnet-client\default_release\Tools\BuildRelease\bin\Debug\output\default\Src\GoogleApis\Apis\Requests\Request.cs:line 311', Exception: 'Google.Apis.Requests.RequestError
Precondition Failed [500]
Errors [
Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global]
]
'
20130826 13:31:07 - ***** Patch file request failed after 0 tries for File/Folder 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
Today's run of the same process is succeeding whenever it patches a file's timestamp, just as it did prior to Aug 22.
As a result of this four-to-five-day glitch, we now have hundreds (possibly thousands) of files with the wrong timestamps.
I know the API is Beta, but please, please, Google Developers: let us know in advance of any "trial fixes", and at least post in this forum to acknowledge the issue, to save us time trying to find the fault in our own programs.
Duplicated here: "Getting 500: Precondition Failed when Patching a folder. Why?"
I recall a comment from one of the dev videos saying "use Update instead of Patch as it has one less server round trip internally". I've inferred from this that Patch checks etags but Update doesn't. I've changed my code to use Update in place of Patch and the problem hasn't recurred since.
Gotta love developing against a moving target ;-)
Related
Monday: My OneDrive integration suddenly stopped working today, yielding an ETag error. Note that I am not uploading/writing, simply reading. Here is the error from reading a directory (there have been no changes in user permissions since it was working last night). Any idea what's causing it or how to fix it?
POST ERROR: HTTP/1.1 409 Conflict https://graph.microsoft.com/v1.0/me/drive/root/children
{
  "error": {
    "code": "resourceModified",
    "message": "ETag does not match current item's value",
    "innerError": {
      "date": "2020-06-22T20:29:07",
      "request-id": "{uuid}"
    }
  }
}
I am trying to access a website and then return whatever it outputs in the body, e.g. "Success" or "Failed".
When I try with my code, I get the following back.
<<< REQ >>>
HTTP/1.1 200 OK
Date: Sat, 30 Aug 2014 17:36:31 GMT
Content-Type: text/html
Connection: close
Set-Cookie: __cfduid=d8a4fc3c84849b6786c6ca890b92e2cc01409420191023; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.japseyz.com; HttpOnly
Vary: Accept-Encoding
X-Powered-By: PHP/5.3.28
Server.
My code is: http://pastebin.com/WwWbnLNn
If all you want to know is whether the HTTP transaction succeeded or failed, then you need to examine the HTTP response code, which is in the first line of the response. In your example it is "200"; the human-readable interpretation of it is "OK".
Here is a link to most of the HTTP 1.1 response codes: w3.org-rfc2616 RespCodes
Your question indicated you wanted to extract this information from the "body", but that information is not located in the body; it is in the first response header line, as described above.
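As a sketch of the idea (in plain Java rather than Arduino C, with illustrative names): the code is the second space-separated token on the status line.

```java
public class StatusLine {
    // Extract the numeric response code from the first line of an
    // HTTP response, e.g. "HTTP/1.1 200 OK" -> 200.
    static int responseCode(String firstLine) {
        String[] parts = firstLine.split(" ", 3); // version, code, reason phrase
        return Integer.parseInt(parts[1]);
    }

    public static void main(String[] args) {
        System.out.println(responseCode("HTTP/1.1 200 OK"));        // 200
        System.out.println(responseCode("HTTP/1.1 404 Not Found")); // 404
    }
}
```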
Have you tried the EtherCard samples? There is a WebClient sample with a procedure called CALLBACK; in that procedure you can process the data stored in the buf variable.
In your case you need to look for the first empty line, which tells you that the headers have ended and the page content (whatever PHP writes to the page) follows.
How familiar are you with pointers? How deeply do you need to process the page output? Is "OK" or "ERROR" enough, or do you need to pass some parameters back to the 'duino?
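The header/body split described above can be sketched like this (again in plain Java for illustration; on the 'duino you would scan buf for the same \r\n\r\n sequence):

```java
public class HttpBody {
    // The body begins after the first blank line (CRLF CRLF) that
    // terminates the response headers.
    static String body(String rawResponse) {
        int i = rawResponse.indexOf("\r\n\r\n");
        return (i < 0) ? "" : rawResponse.substring(i + 4);
    }

    public static void main(String[] args) {
        String raw = "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\nSuccess";
        System.out.println(body(raw)); // prints: Success
    }
}
```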
Please find below the code I ran (using eclipse-java-kepler-SR2-win32-x86_64 + IE 11):
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.ie.InternetExplorerDriver;

public class SampleTest {
    public static void main(String[] args) {
        System.setProperty("webdriver.ie.driver", "C:\\Program Files\\IEDriverServer\\IEDriverServer.exe");
        WebDriver d1 = new InternetExplorerDriver();
        d1.get("http://www.google.com/");
        WebElement element = d1.findElement(By.name("q"));
        element.sendKeys("selenium");
        System.out.println("Test Selenium");
    }
}
While running it, I got the logs below:
Started InternetExplorerDriver server (64-bit)
2.40.0.0
Listening on port 22795
Mar 26, 2014 7:04:27 PM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: I/O exception (java.net.SocketException) caught when processing request: Software caused connection abort: recv failed
Mar 26, 2014 7:04:27 PM org.apache.http.impl.client.DefaultRequestDirector tryExecute
INFO: Retrying request
Why am I getting these warning messages all the time, and only in IE?
Also, while sending keys into the "Search" text box, it takes more than 5 seconds for each character.
Would appreciate any helpful notes on these... :)
From a blog post that discusses this issue in great detail:
There are two answers to this question, a short one and a long one.
The short one is, "Read the log message. It's clearly tagged as
'INFO', as in an informational message, and not indicative of any
problem with the code." I find that this question often comes from
users of Eclipse, and that the Eclipse console has colored the message
red, and people are so conditioned to see "red == bad" that they react
to the format of the message rather than the content. The content of
the message is flagged at a level that means, "Hey, nothing is wrong,
we're just telling you about it."
For the longer, more detailed explanation, see the blog post, but it boils down to a race condition in bringing up an HTTP server, and using an HTTP client to poll for when that server is available to receive commands.
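If the red INFO lines in the Eclipse console bother you, you can raise the logging threshold for the Apache HttpClient package. A sketch using java.util.logging, assuming commons-logging is routing to JUL in your setup (if log4j is on the classpath you would configure that instead):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietHttpLogs {
    // Raise the threshold so INFO chatter from the driver's HTTP
    // polling is suppressed; warnings and errors still come through.
    static Logger quiet() {
        Logger httpLogger = Logger.getLogger("org.apache.http");
        httpLogger.setLevel(Level.WARNING);
        return httpLogger;
    }

    public static void main(String[] args) {
        System.out.println(quiet().getLevel()); // prints: WARNING
    }
}
```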
I'm running into the following error when running an export to CSV job on AppEngine using the new Google Cloud Storage library (appengine-gcs-client). I have about ~30mb of data I need to export on a nightly basis. Occasionally, I will need to rebuild the entire table. Today, I had to rebuild everything (~800mb total) and I only actually pushed across ~300mb of it. I checked the logs and found this exception:
/task/bigquery/ExportVisitListByDayTask
java.lang.RuntimeException: Unexpected response code 200 on non-final chunk: Request: PUT https://storage.googleapis.com/moose-sku-data/visit_day_1372392000000_1372898225040.csv?upload_id=AEnB2UrQ1cw0-Jbt7Kr-S4FD2fA3LkpYoUWrD3ZBkKdTjMq3ICGP4ajvDlo9V-PaKmdTym-zOKVrtVVTrFWp9np4Z7jrFbM-gQ
x-goog-api-version: 2
Content-Range: bytes 4718592-4980735/*
262144 bytes of content
Response: 200 with 0 bytes of content
ETag: "f87dbbaf3f7ac56c8b96088e4c1747f6"
x-goog-generation: 1372898591905000
x-goog-metageneration: 1
x-goog-hash: crc32c=72jksw==
x-goog-hash: md5=+H27rz96xWyLlgiOTBdH9g==
Vary: Origin
Date: Thu, 04 Jul 2013 00:43:17 GMT
Server: HTTP Upload Server Built on Jun 28 2013 13:27:54 (1372451274)
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Google-Cache-Control: remote-fetch
Via: HTTP/1.1 GWA
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.put(OauthRawGcsService.java:254)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.continueObjectCreation(OauthRawGcsService.java:206)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:147)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:78)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.writeOut(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.waitForOutstandingWrites(GcsOutputChannelImpl.java:186)
at com.moose.task.bigquery.ExportVisitListByDayTask.doPost(ExportVisitListByDayTask.java:196)
The task is pretty straightforward, but I'm wondering if there is something wrong with the way I'm using waitForOutstandingWrites() or the way I'm serializing my outputChannel for the next task run. One thing to note is that each task is broken into daily groups, each outputting its own individual file. The day tasks are scheduled to run 10 minutes apart, concurrently, to push out all 60 days.
In the task, I create a PrintWriter like so:
OutputStream outputStream = Channels.newOutputStream( outputChannel );
PrintWriter printWriter = new PrintWriter( outputStream );
and then write data out to it 50 lines at a time and call the waitForOutstandingWrites() function to push everything over to GCS. When I'm coming up to the open-file limit (~22 seconds) I put the outputChannel into Memcache and then reschedule the task with the data iterator's cursor.
printWriter.print( outputString.toString() );
printWriter.flush();
outputChannel.waitForOutstandingWrites();
This seems to work most of the time, but I'm getting these errors, which leave corrupted and incomplete files on GCS. Is there anything obvious I'm doing wrong in these calls? Can I only have one channel open to GCS at a time per application? Is there some other issue going on?
Appreciate any tips you could lend!
Thanks!
Evan
A 200 response indicates that the file has been finalized. If this occurs on any call other than close, the library throws an error, as this is not expected.
This is likely occurring due to the way you are rescheduling the task. When you reschedule it, the task queue may duplicate the delivery of the task for some reason (this can happen), and if there are no checks to prevent this, two instances could attempt to write to the same file at the same time. When one closes the file, the other sees an error. The net result is a corrupt file.
The simple solution is not to reschedule the task. There is no time limit on how long a file can be held open with the GCS client. (Unlike the deprecated Files API.)
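If you do keep some form of rescheduling, one way to guard against duplicate task delivery is an idempotency key recorded before any write. This in-memory sketch only illustrates the check; on App Engine the set would live in the datastore or memcache, and the key name here is made up for the example:

```java
import java.util.HashSet;
import java.util.Set;

public class TaskDedupe {
    private final Set<String> started = new HashSet<>();

    // Returns true the first time a key is seen; a duplicate delivery
    // of the same task gets false and should exit without writing.
    boolean tryStart(String taskKey) {
        return started.add(taskKey);
    }

    public static void main(String[] args) {
        TaskDedupe dedupe = new TaskDedupe();
        System.out.println(dedupe.tryStart("export-day-1372392000000")); // true
        System.out.println(dedupe.tryStart("export-day-1372392000000")); // false
    }
}
```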
Fellow Drupal developers,
I have a really strange issue. I have a small module with a menu item that outputs an image. In other words, it doesn't render any page or HTML; it simply sends header('Content-Type: image/png');, outputs the PNG, and stops with exit();.
BUT... and this is really strange... sometimes it runs twice and goes through the function twice even though I only load the URL once. If I add a watchdog call to the function and inspect the log afterwards, I can see that the function has been processed twice... sometimes. For no apparent reason it occasionally works as intended (one pass, one image output and then nothing), but at other times it runs twice.
If I add a counter that increments a number in the database, this number sometimes increments by 1 and sometimes by 2, in spite of me loading the image only once in the browser.
I have tested it on two servers (one Unix, one Windows)... same erratic behavior.
I have had my attention on headers and caching, but I can't see that anything is wrong. The headers for the image look like this when I output a 1x1 PNG:
Date: Thu, 04 Oct 2012 09:21:51 GMT
Server: Apache/2.2.22 (Win32) PHP/5.2.17
X-Powered-By: PHP/5.2.17
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 04 Oct 2012 09:21:51 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
Etag: "1349342511"
Content-Length: 95
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/png
200 OK
If I add watchdog calls here and there I can see that the module initializes more than once, which is no surprise, but it really surprises me that my custom function is called more than once, and only sometimes. I have tried all kinds of magic, like adding a session variable that counts the number of passes and breaks after the first, but to no avail. The function runs more than once... most of the time.
It's critical for the purpose of the function that it ALWAYS runs once and only once.
Does anybody know what's happening?
Here's my basic code:
function my_image_menu() {
  $items = array();
  $items['image_1x1'] = array(
    'title' => t('Create image'),
    'description' => t('Output 1x1 PNG.'),
    'page callback' => 'my_image_show',
    'access arguments' => array('access content'),
  );
  return $items;
}

function my_image_show() {
  watchdog('My Image', 'Image shown');
  if (!headers_sent()) {
    header('Content-Type: image/png');
    echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=');
    exit();
  }
}
If I load http://mysite/image_1x1 I get one nice little 1x1 dot on the screen as expected, but most of the time (though not every time...) I get two "Image shown" entries in the log! In spite of the exit(), which should halt the script as far as I know.
What voodoo might Drupal be doing on me?
Maybe this, or maybe not! Both watchdog() and exit() report to the log. Your watchdog() is not in a conditional, so it will always log. Your exit() is in a conditional, so it will log only if the condition is met. This could explain the voodoo.
Try die() instead of exit() for a cleaner log.
I have partly solved this problem by not outputting the image in the code, but instead sending the browser on to a physical image file with a header command.
This seems to break the flow in Drupal and renders the image once, as expected. The only disadvantage is that I can't generate arbitrary images on the fly; the file has to exist physically, but that's an obstacle I can overcome.
In other words replacing
if (!headers_sent()) {
  header('Content-Type: image/png');
  echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=');
  exit();
}
with
if (!headers_sent()) {
header('location:/sites/default/files/1px.png');
}
in the code in my question, and making sure that /sites/default/files/1px.png exists on the server.
This will work for me for now, but I'd still be glad to know what can stop what I guess is Drupal's handling of the exit() or die() commands.
Martin
Maybe you should try checking this in a "clean" browser. Try Firefox (not Chrome!) without any extensions.
Add
watchdog('My Image debug info', print_r($_SERVER, true));
and analyze this output.