Fellow Drupal developers,
I have a really strange issue. I have a small module that defines a menu item whose page callback outputs an image - in other words, it doesn't render a page or any HTML; it simply sends header('Content-Type: image/png');, outputs the PNG and stops with exit();.
BUT... and this is really strange... sometimes it runs twice, passing through the function twice even though I only load the URL once. If I add a watchdog call to the function and inspect the log afterwards, I can see that the function has been processed twice... sometimes. For no apparent reason it occasionally works as intended - one pass, one image output and then nothing - but at other times it runs twice.
If I add a counter that increments a number in the database, this number sometimes increments by 1 and sometimes by 2, in spite of me only loading the image once in the browser.
I have tested it on two servers (one Unix, one Windows)... same erratic behavior.
I have looked closely at headers and caching, but I can't see that anything is wrong. The headers for the image look like this when I output a 1x1 PNG:
Date: Thu, 04 Oct 2012 09:21:51 GMT
Server: Apache/2.2.22 (Win32) PHP/5.2.17
X-Powered-By: PHP/5.2.17
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Last-Modified: Thu, 04 Oct 2012 09:21:51 +0000
Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0
Etag: "1349342511"
Content-Length: 95
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: image/png
200 OK
If I add watchdog calls here and there I can see that the module initializes more than once, which is no surprise, but it really surprises me that my custom function is called more than once - and only sometimes. I have tried all kinds of magic, like adding a session variable that counts the number of passes and breaks after the first, but to no avail. The function runs more than once... most of the time.
It's critical for the purpose of the function that it ALWAYS runs once and only once.
Does anybody know what's happening?
Here's my basic code:
function my_image_menu() {
  $items = array();
  $items['image_1x1'] = array(
    'title' => t('Create image'),
    'description' => t('Output 1x1 PNG.'),
    'page callback' => 'my_image_show',
    'access arguments' => array('access content'),
  );
  return $items;
}

function my_image_show() {
  watchdog('My Image', 'Image shown');
  if (!headers_sent()) {
    header('Content-Type: image/png');
    echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=');
    exit();
  }
}
If I load http://mysite/image_1x1 I get one nice little 1x1 dot on the screen as expected, but most of the time (though not every time...) I get two "Image shown" entries in the log! In spite of the exit(), which should halt the script as far as I know.
What voodoo might Drupal be working on me?
Maybe this, or maybe not! Both watchdog() and exit() affect what ends up in the log. Your watchdog() is not inside a conditional, so it will always log. Your exit() is inside a conditional, so it only stops the script when the condition is met. This could explain the voodoo.
Try die() instead of exit() for a cleaner log.
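A minimal sketch of that change - the same callback as in the question, with watchdog() moved inside the conditional so it only logs when the image is actually sent:

function my_image_show() {
  if (!headers_sent()) {
    // Log only when we are actually about to emit the image.
    watchdog('My Image', 'Image shown');
    header('Content-Type: image/png');
    echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=');
    exit();
  }
}

If the log still shows two entries after this change, the callback really is being invoked twice.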
I have partly solved this problem by not outputting the image from code but redirecting the browser to a physical image file with a header command.
This seems to break out of Drupal's flow and renders the image once, as expected. The only disadvantage is that I can't generate arbitrary images on the fly - the file has to exist physically - but that's an obstacle I can overcome.
In other words replacing
if (!headers_sent()) {
  header('Content-Type: image/png');
  echo base64_decode('iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAA1BMVEUAAACnej3aAAAAAXRSTlMAQObYZgAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=');
  exit();
}
with
if (!headers_sent()) {
  header('Location: /sites/default/files/1px.png');
}
in the code in my question and making sure that /sites/default/files/1px.png is on the server.
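For completeness, the whole callback then looks like this (a sketch assembled from the snippets above):

function my_image_show() {
  watchdog('My Image', 'Image shown');
  if (!headers_sent()) {
    // Redirect to a physical file instead of emitting the PNG inline.
    header('Location: /sites/default/files/1px.png');
  }
}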
This will work for me for now, but I'd still be glad to know what can interrupt what I assume is Drupal's handling of the exit() or die() commands.
Martin
Maybe you should try checking this in a "clean" browser. Try Firefox (not Chrome!) without any extensions - browsers and extensions sometimes request a URL twice (prefetching, for example), which would explain the double log entries.
Add
watchdog('My Image debug info', print_r($_SERVER, true));
and analyze this output.
Related
I'm working with a kind of IoT device. I've finally got a simple httpd server to work, and simple HTML pages work like a charm, but the browser does not recognise images. I think this is an HTTP header issue, but I don't know what exactly is wrong.
For example, my test page look like this:
<html>
<head><title>test page</title></head>
<body>
hello world!
<img src="img.png">
</body>
</html>
If I go to http://de.vi.ce.ip/ two requests are generated:
GET / HTTP/1.1\r\n
Accept text/html, application/xhtml+xml, */*\r\n
Accept-Language: en-EN\r\n
...
GET /img.png HTTP/1.1\r\n
Accept image/png, image/svg+xml, image/*;q=0.8, */*;q=0.5\r\n
Accept-Language: en-EN\r\n
...
To which my server responds with:
HTTP/1.0 200 OK\r\n
Content-Type: text/html\r\n
Content-Length: 131\r\n
\r\n
<page data>
HTTP/1.0 200 OK\r\n
Content-Type: image/png\r\n
Content-Length: 5627\r\n
\r\n
<image binary data>
As a result I can see the text, but the images are broken.
I've tried a few more headers, like Connection: close, Accept-Ranges: bytes and Content-Location (path).
I've tried a JPEG image with Content-Type: image/jpeg, with no luck. I'm certain the image itself is sent correctly.
I've made exactly the same thing - a raw HTTP server for IoT - and your response looks absolutely correct. Try checking the following (a sketch follows this list):
That you correctly flush the socket before closing it. If you call close() right after send(), you will likely encounter this problem - the data has not been completely written.
That the Content-Length is exactly the size of your file. Make sure you are not counting the \r\n bytes of the HTTP headers; otherwise the browser may keep waiting for the trailing bytes.
Finally, get the browser network logs :)
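Here is a minimal sketch of the first two checks, assuming a POSIX socket API - the calls available on your device may differ:

/* Send headers plus body with an exact Content-Length, then shut the
 * socket down for writing so queued data drains before close(). */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static void send_png(int sock, const unsigned char *data, size_t len) {
    char header[128];
    /* Content-Length counts only the body bytes, not the header CRLFs. */
    int n = snprintf(header, sizeof header,
                     "HTTP/1.0 200 OK\r\n"
                     "Content-Type: image/png\r\n"
                     "Content-Length: %zu\r\n"
                     "\r\n", len);
    send(sock, header, (size_t)n, 0);
    send(sock, data, len, 0);
    shutdown(sock, SHUT_WR);  /* signals EOF after pending data is sent */
    close(sock);
}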
The request is asking for a PNG:
GET /img.png HTTP/1.1\r\n
Why not return the correct content type?
Content-Type: image/png\r\n
I was running into a very similar problem.
In my case, when I thought I was using \r\n line terminators, I was actually only using \n, which worked fine in Chromium for serving the text/html page but threw a net::ERR_INVALID_HTTP_RESPONSE error when serving the image/jpeg. So the page loaded, but the images were broken.
My fix was to make sure that everything was using \r\n as it was supposed to.
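In other words (a sketch in the same raw-server setting as above), every header line has to end with the two-byte CRLF sequence:

/* Wrong: bare LF terminators - tolerated for some HTML, rejected for images. */
const char *bad  = "HTTP/1.0 200 OK\nContent-Type: image/jpeg\n\n";

/* Right: CRLF terminators, with an empty CRLF line before the body. */
const char *good = "HTTP/1.0 200 OK\r\nContent-Type: image/jpeg\r\n\r\n";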
I have an Angular 1.5 client, served from a Node 4 / Express 4 server. I do 99% of my manual testing in IE/Edge. (The rest is in Mocha and Karma, and before delivery I hit Firefox.)
We recently added this line to our http server, using helmet:
// Prevent MIME type sniffing/inferring
app.use(helmet.noSniff());
PROBLEM: The nosniff option broke all of my thumbnails.
In one of my other Angular modules, which is a controller and view component, I have this line:
...
<img ng-src="/api/thumbnail/{{title}}"/>
...
On my Node/Express server, my /api/thumbnail/:title/ route looks like this:
router.get('/api/thumbnail/:title/', function (req, res) {
  // ... get file to read from 'title'
  fs.readFile(fileName, function (err, data) {
    if (err) { /* ... do error handling ... */ }
    else { res.send(data); }
  });
});
Using IE's network debugger, I noticed that the responses coming back from the server have 'application/octet-stream' as the 'Content-Type' even though the body is an 'image/jpeg', so I asked myself whether that mismatch is what causes nosniff to kill the response.
In my server code, I have a DEFAULT_THUMBNAIL which I send back in the event that 'title' produced no viable thumbnail image. So, before I do a res.send(data), I did this:
const mime = require('mime');
...
res.setHeader('Content-Type', mime.lookup(DEFAULT_THUMBNAIL));
And that seemed to fix the nosniff issue.
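Putting the pieces together, the route then looks roughly like this (a sketch; thumbnailPath() is a hypothetical helper standing in for the elided file lookup, and mime.lookup() is the mime v1 API):

const express = require('express');
const fs = require('fs');
const mime = require('mime'); // v1 API: mime.lookup(path)

const router = express.Router();

router.get('/api/thumbnail/:title/', function (req, res) {
  const fileName = thumbnailPath(req.params.title); // hypothetical helper
  fs.readFile(fileName, function (err, data) {
    if (err) { return res.status(404).end(); }
    // With helmet.noSniff() the browser takes this header literally,
    // so it must name the real image type, not application/octet-stream.
    res.setHeader('Content-Type', mime.lookup(fileName));
    res.send(data);
  });
});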
I am trying to access a website and then return whatever it outputs in the body, e.g. "Success" or "Failed".
When I try with my code, I am getting the following back.
<<< REQ >>>
HTTP/1.1 200 OK
Date: Sat, 30 Aug 2014 17:36:31 GMT
Content-Type: text/html
Connection: close
Set-Cookie: __cfduid=d8a4fc3c84849b6786c6ca890b92e2cc01409420191023; expires=Mon, 23-Dec-2019 23:50:00 GMT; path=/; domain=.japseyz.com; HttpOnly
Vary: Accept-Encoding
X-Powered-By: PHP/5.3.28
Server.
My code is: http://pastebin.com/WwWbnLNn
If all you want to know is whether the HTTP transaction succeeded or failed, then you need to examine the HTTP response code, which is in the first line of the response. In your example it is "200"; the human-readable interpretation of it is "OK".
Here is a link to most of the HTTP 1.1 response codes: w3.org-rfc2616 RespCodes
Your question indicated you wanted to extract this information from the "body"... but that information is not located in the "body"; it is in the first line of the response (the status line), as described above.
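For example, a sketch of pulling the code out of a raw response buffer (such as the buf variable from the EtherCard callback mentioned in the next answer):

#include <stdlib.h>
#include <string.h>

/* Return the HTTP status code from a raw response, or -1 if malformed. */
static int http_status(const char *buf) {
    /* The status line looks like: "HTTP/1.1 200 OK\r\n". */
    if (strncmp(buf, "HTTP/", 5) != 0) return -1;
    const char *sp = strchr(buf, ' ');
    return sp ? atoi(sp + 1) : -1;
}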
Have you tried the EtherCard samples? There is a webclient sample in which you can find a procedure called CALLBACK - in that procedure you can process the data stored in the buf variable.
In your case you need to look for the first empty line, which tells you that the headers have ended and that the page content (i.e. whatever PHP writes to the page) follows; see the sketch below.
How familiar are you with pointers? How deeply do you need to process the page output - is OK or ERROR enough, or do you need to pass some parameters back to the 'duino?
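A sketch of that header/body split, assuming the whole response sits in buf as in the webclient sample:

#include <string.h>

/* Return a pointer to the body (after the first blank line), or NULL. */
static const char *http_body(const char *buf) {
    const char *body = strstr(buf, "\r\n\r\n");
    return body ? body + 4 : NULL;
}

/* Usage: check whether the PHP page printed "Success".      */
/* const char *body = http_body(buf);                        */
/* if (body && strncmp(body, "Success", 7) == 0) { ... }     */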
It seems that overnight the Google Drive API method files().patch(...).execute() has stopped working and throws an exception. This problem is also observable through Google's reference page https://developers.google.com/drive/v2/reference/files/patch if you "try it".
The exception response is:
500 Internal Server Error
cache-control: private, max-age=0
content-encoding: gzip
content-length: 162
content-type: application/json; charset=UTF-8
date: Thu, 22 Aug 2013 12:32:06 GMT
expires: Thu, 22 Aug 2013 12:32:06 GMT
server: GSE
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "conditionNotMet",
        "message": "Precondition Failed",
        "locationType": "header",
        "location": "If-Match"
      }
    ],
    "code": 500,
    "message": "Precondition Failed"
  }
}
This is really impacting our application.
We're experiencing this as well. A quick-fix solution is to add this header: If-Match: * (ideally you should use the etag of the entity, but you might not have logic for conflict resolution right now).
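With the Java client that might look like this (a sketch, hedged, since the header accessors vary between client versions; service and file are the Drive handle and metadata from your own code):

// Sketch: force the patch through regardless of the stored etag.
// Assumes the google-api-java-client Drive v2 wrapper.
Drive.Files.Patch request = service.files().patch(fileId, file);
request.getRequestHeaders().setIfMatch("*"); // ideally the file's real etag
File updated = request.execute();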
Google Developers, please give us a heads up if you're planning to deploy breaking changes.
Looks like sometime in the last 24 hours the Files.Patch issue has been resolved and behaviour is back to how it worked before Aug 22.
We were also hitting this issue whenever we attempted to Patch the LastModified Timestamp of a file - see log file extract below:
20130826 13:30:45 - GoogleApiRequestException: retry number 0 for file patch of File/Folder Id 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
20130826 13:31:05 - ***** GoogleApiRequestException: Inner exception: 'System.Net.WebException: The remote server returned an error: (500) Internal Server Error.
at System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
at Google.Apis.Requests.Request.InternalEndExecuteRequest(IAsyncResult asyncResult) in c:\code.google.com\google-api-dotnet-client\default_release\Tools\BuildRelease\bin\Debug\output\default\Src\GoogleApis\Apis\Requests\Request.cs:line 311', Exception: 'Google.Apis.Requests.RequestError
Precondition Failed [500]
Errors [
Message[Precondition Failed] Location[If-Match - header] Reason[conditionNotMet] Domain[global]
]
'
20130826 13:31:07 - ***** Patch file request failed after 0 tries for File/Folder 0B9NKEGPbg7KfdXc1cVRBaUxqaVk
Today's run of the same process succeeds whenever it patches a file's timestamp, just as it did prior to Aug 22.
As a result of this 4/5 day glitch, we now have hundreds (possibly thousands) of files with the wrong timestamps.
I know the API is beta, but please, please, Google Developers, let us know in advance of any 'trial fixes' and at least post in this forum to acknowledge the issue, to save us time trying to find the fault in our own programs.
Duplicated here: Getting 500: Precondition Failed when Patching a folder. Why?
I recall a comment from one of the dev videos saying "use Update instead of Patch as it has one less server roundtrip internally". I've inferred from this that Patch checks etags but Update doesn't. I've changed my code to use Update in place of Patch and the problem hasn't recurred since; a sketch of the switch is below.
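A sketch of that change against the Drive v2 Java client (the setSetModifiedDate(true) call is an assumption from that API, since the logs above show timestamp patches):

import com.google.api.client.util.DateTime;
import com.google.api.services.drive.Drive;
import com.google.api.services.drive.model.File;

// Before: metadata patch - appears to enforce an If-Match etag precondition.
//   service.files().patch(fileId, new File().setModifiedDate(ts))
//          .setSetModifiedDate(true).execute();

// After: metadata update - did not hit "Precondition Failed" in our runs.
File meta = new File().setModifiedDate(new DateTime(System.currentTimeMillis()));
service.files().update(fileId, meta).setSetModifiedDate(true).execute();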
Gotta love developing against a moving target ;-)
I'm running into the following error when running an export-to-CSV job on App Engine using the new Google Cloud Storage client library (appengine-gcs-client). I have about ~30 MB of data I need to export on a nightly basis. Occasionally I need to rebuild the entire table. Today I had to rebuild everything (~800 MB total) and only about ~300 MB of it actually made it across. I checked the logs and found this exception:
/task/bigquery/ExportVisitListByDayTask
java.lang.RuntimeException: Unexpected response code 200 on non-final chunk: Request: PUT https://storage.googleapis.com/moose-sku-data/visit_day_1372392000000_1372898225040.csv?upload_id=AEnB2UrQ1cw0-Jbt7Kr-S4FD2fA3LkpYoUWrD3ZBkKdTjMq3ICGP4ajvDlo9V-PaKmdTym-zOKVrtVVTrFWp9np4Z7jrFbM-gQ
x-goog-api-version: 2
Content-Range: bytes 4718592-4980735/*
262144 bytes of content
Response: 200 with 0 bytes of content
ETag: "f87dbbaf3f7ac56c8b96088e4c1747f6"
x-goog-generation: 1372898591905000
x-goog-metageneration: 1
x-goog-hash: crc32c=72jksw==
x-goog-hash: md5=+H27rz96xWyLlgiOTBdH9g==
Vary: Origin
Date: Thu, 04 Jul 2013 00:43:17 GMT
Server: HTTP Upload Server Built on Jun 28 2013 13:27:54 (1372451274)
Content-Length: 0
Content-Type: text/html; charset=UTF-8
X-Google-Cache-Control: remote-fetch
Via: HTTP/1.1 GWA
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.put(OauthRawGcsService.java:254)
at com.google.appengine.tools.cloudstorage.oauth.OauthRawGcsService.continueObjectCreation(OauthRawGcsService.java:206)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:147)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl$2.run(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:78)
at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:123)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.writeOut(GcsOutputChannelImpl.java:144)
at com.google.appengine.tools.cloudstorage.GcsOutputChannelImpl.waitForOutstandingWrites(GcsOutputChannelImpl.java:186)
at com.moose.task.bigquery.ExportVisitListByDayTask.doPost(ExportVisitListByDayTask.java:196)
The task is pretty straightforward, but I'm wondering if there is something wrong with the way I'm using waitForOutstandingWrites() or the way I'm serializing my outputChannel for the next task run. One thing to note is that each task is broken into daily groups, each outputting its own individual file. The day tasks are scheduled to run 10 minutes apart, concurrently, to push out all 60 days.
In the task, I create a PrintWriter like so:
OutputStream outputStream = Channels.newOutputStream( outputChannel );
PrintWriter printWriter = new PrintWriter( outputStream );
and then write data out to it 50 lines at a time and call the waitForOutstandingWrites() function to push everything over to GCS. When I'm coming up to the open-file limit (~22 seconds) I put the outputChannel into Memcache and then reschedule the task with the data iterator's cursor.
printWriter.print( outputString.toString() );
printWriter.flush();
outputChannel.waitForOutstandingWrites();
This seems to work most of the time, but I'm getting these errors, which leave corrupted and incomplete files on GCS. Is there anything obvious I'm doing wrong in these calls? Can I only have one channel open to GCS at a time per application? Is there some other issue going on?
Appreciate any tips you could lend!
Thanks!
Evan
A 200 response indicates that the file has been finalized. If this occurs on any call other than close, the library throws an error, as it is not expected.
This is likely occurring due to the way you are rescheduling the task. It may be that when you reschedule the task, the task queue duplicates the delivery of the task for some reason. (This can happen.) If there are no checks to prevent this, two instances could be attempting to write to the same file at the same time; when one closes the file, the other sees an error. The net result is a corrupt file.
The simple solution is not to reschedule the task; there is no time limit on how long a file can be held open with the GCS client (unlike the deprecated Files API). A sketch of that single-task shape is below.
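Something like this, using the appengine-gcs-client API from your stack trace (the bucket, object name and the rows list are placeholders):

import com.google.appengine.tools.cloudstorage.GcsFileOptions;
import com.google.appengine.tools.cloudstorage.GcsFilename;
import com.google.appengine.tools.cloudstorage.GcsOutputChannel;
import com.google.appengine.tools.cloudstorage.GcsService;
import com.google.appengine.tools.cloudstorage.GcsServiceFactory;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.channels.Channels;
import java.util.List;

// Sketch: keep one channel open for the whole export instead of parking
// it in Memcache and rescheduling, so only one writer ever owns the file.
static void exportDay(List<String> rows) throws IOException {
  GcsService gcsService = GcsServiceFactory.createGcsService();
  GcsFilename file = new GcsFilename("my-bucket", "visit_day.csv"); // placeholders
  GcsOutputChannel channel =
      gcsService.createOrReplace(file, GcsFileOptions.getDefaultInstance());
  PrintWriter writer = new PrintWriter(Channels.newOutputStream(channel));
  for (String row : rows) {
    writer.println(row);
  }
  writer.flush();
  channel.waitForOutstandingWrites(); // optional checkpoint mid-stream
  channel.close();                    // finalizes the object exactly once
}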