Why do services like 'ping' always return 'image'? - mime-types

There are lots of web services out there (mostly real-time, AJAX-based ones used for analytics), such as heartbeat or clickstream trackers.
I found that they basically all respond with image/png or image/gif instead of text/html or some other MIME type.
Are image MIME types more efficient, or is there some other reason?

The heartbeat is implemented via an image tag:
<img src="heartbeat.php"/>
to circumvent the same-origin policy: browsers will load an <img> from any origin, whereas a cross-domain XMLHttpRequest is blocked.
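As an illustration, such an endpoint can be sketched in a few lines (a minimal Python WSGI sketch; the function name and the logging step are assumptions, not the original heartbeat.php):

```python
# A tiny WSGI endpoint that records a heartbeat and answers with a 1x1 GIF.
# (Illustrative sketch -- the original uses PHP; all names here are made up.)
ONE_BY_ONE_GIF = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"  # header + palette
    b"!\xf9\x04\x01\x00\x00\x00\x00"                                # graphic control
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"                        # image descriptor
    b"\x02\x02D\x01\x00;"                                           # data + trailer
)

def heartbeat_app(environ, start_response):
    # Record the hit here (log file, database, ...) -- omitted in this sketch.
    start_response("200 OK", [
        ("Content-Type", "image/gif"),
        ("Content-Length", str(len(ONE_BY_ONE_GIF))),
        ("Cache-Control", "no-store"),  # every <img> load must reach the server
    ])
    return [ONE_BY_ONE_GIF]
```

The client side stays exactly the plain `<img src="...">` tag shown above; the browser fires the request and silently renders the tiny image.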

Related

How to add multipart/mixed MIME boundary in Content-Type header in Swagger

In Swagger UI, is there a way to supply a MIME boundary that can be included in the Content-Type header?
Using Swagger, I can generate/submit different content types in Swagger UI by using the consumes property in the Swagger spec, e.g. application/json, application/xml, etc. However, I need to supply a MIME boundary that may differ per request. For example, in the following Content-Type header from RFC 2049, it would be preferable for the unique boundary value to be input in the UI as a text field. Is there any way to indicate this in the spec?
Content-Type: multipart/mixed; boundary=unique-boundary-1
This isn't currently supported, but it has been accepted as a possible enhancement in the Swagger.Next proposal on GitHub:
https://github.com/swagger-api/swagger-spec/issues/303
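For reference, the consumes list the question mentions looks like this in a Swagger 2.0 spec (a generic sketch; the path and response are made up):

```yaml
paths:
  /messages:
    post:
      # consumes can fix the media type...
      consumes:
        - multipart/mixed
      # ...but there is no field for a per-request boundary parameter,
      # which is exactly what the feature request above asks for.
      responses:
        "200":
          description: OK
```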

Returning Gzip-ed response bodies on App Engine

When caching items in App Engine's memcache I used gzip compression to save space and get below the 1 MB limit for some files.
Since I also put rendered pages into memcache, I thought it would be nice, and much quicker, to return the gzipped body directly to the client if it accepts gzip encoding.
Unfortunately the request's Accept-Encoding header only has the value identity (using the AE dev server with Go), which to me means I have to return the body as-is (i.e. plain HTML).
Is one not supposed to gzip content oneself? Or could I always return gzipped content with the appropriate headers, and the AE infrastructure would decompress it when the client does not support compression?
After all, I hope to get even better response times by caching the response in its final output form.
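The scheme described above can be sketched as follows (a Python sketch with made-up function names; the question itself uses Go, but the logic is the same):

```python
import gzip

def cache_value(html: str) -> bytes:
    # Store the gzipped body so it stays under memcache's 1 MB value limit.
    return gzip.compress(html.encode("utf-8"))

def response_for(cached: bytes, accept_encoding: str):
    # Serve the compressed bytes as-is only if the client advertises gzip;
    # otherwise decompress before sending.
    if "gzip" in accept_encoding:
        return cached, {"Content-Encoding": "gzip"}
    return gzip.decompress(cached), {}
```

The open question is then whether a manually set Content-Encoding header survives App Engine's own response handling.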
For caching the response: if your response is public (the same copy for all users), you can make use of Google's edge cache by setting the proper HTTP headers, for example:
Cache-Control: public,max-age=86400
Expires: Sat, 16 May 2015 07:23:15 +0000
As for compression: as far as I know, Google automatically compresses the content of the HTTP response whenever possible. There is no need to handle this manually.

413 - Request Entity Too Large

I can upload small drafts OK using the metadata endpoint (https://www.googleapis.com/gmail/v1/users/me/drafts), e.g.:
{"message":{"raw":"TUlNRS1WZXJzaW9uOiAxLjANClgtTWFpbGVyOiBNYWlsQmVlLk5FVCA3LjAuNC4zMjgNClRvOiBjaHJpcy53b29kQG5vdGFibHlnb29kLmNvbQ0KU3ViamVjdDogdGVzdCENCkNvbnRlbnQtVHlwZTogbXVsdGlwYXJ0L21peGVkOw0KCWJvdW5kYXJ5PSItLS0tPV9OZXh0UGFydF8wMDBfQUFEQV9FOUMzOEZCNy5BMjRFQjI2OSINCg0KDQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5DQpDb250ZW50LVR5cGU6IHRleHQvcGxhaW4NCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IHF1b3RlZC1wcmludGFibGUNCg0KVGVzdCBjb250ZW50DQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5DQpDb250ZW50LVR5cGU6IGFwcGxpY2F0aW9uL29jdGV0LXN0cmVhbTsNCgluYW1lPSJUcmFjZS5sb2ciDQpDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50Ow0KCWZpbGVuYW1lPSJUcmFjZS5sb2ciDQpDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiBiYXNlNjQNCg0KVTNWdUlESTVJRXBoYmlBeU1ERXlJREV5T2pJek9qTTBMalkzTmlBNklDTXhNREExTXpvZ1YxTkJSVU5QVGs1QlFrOVNWRVZFT2lCVA0KYjJaMGQyRnlaU0JqWVhWelpXUWdZMjl1Ym1WamRHbHZiaUJoWW05eWRDNGdJRWc2TURVMU9USWdSam9uU0VOVFRsUlRiMk5yWlhSZg0KVTJWdVpDY2dRVG9uYzJWdVpDY2dWRG9uVTI5amEyVjBQVFUwTUM0Z1JtbHVhWE5vWldRZ2NtVjBjbmxwYm1jdUp5QU5DbE4xYmlBeQ0KT1NCS1lXNGdNakF4TWlBeE1qb3lNem96TkM0Mk9UQWdPaUFqTVRBd09Eb2dSWEp5YjNJNklDQklPakExTlRreUlFWTZKMU5sYm1SVg0KYzJWeVJHVm1hVzVsWkVoVVZGQk5aWE56WVdkbFFtOWtlU2NnVkRvblJYSnliM0lnYzJWdVpHbHVaeUIxYzJWeUlHUmxabWx1WldRZw0KWm1sc1pTQmpiMjUwWlc1MGN5QjBieUJqYkdsbGJuUWdLRUp2WkhrcExpQlNaWFIxY200OUxURXVKeUFOQ2xOMWJpQXlPU0JLWVc0Zw0KTWpBeE1pQXhNam95TXpvek5DNDJPVElnT2lBak1UQXdOVE02SUZkVFFVVkRUMDVPUVVKUFVsUkZSRG9nVTI5bWRIZGhjbVVnWTJGMQ0KYzJWa0lHTnZibTVsWTNScGIyNGdZV0p2Y25RdUlDQklPakExTlRreUlFWTZKMGhEVTA1VVUyOWphMlYwWDFObGJtUW5JRUU2SjNObA0KYm1RbklGUTZKMU52WTJ0bGREMDFOREF1SUVacGJtbHphR1ZrSUhKbGRISjVhVzVuTGljZ0RRcFRkVzRnTWprZ1NtRnVJREl3TVRJZw0KTVRJNk1qTTZNelF1TmprMElEb2dJekV3TURnNklFVnljbTl5T2lBZ1JYSnliM0lnY21WeGRXVnpkR2x1WnlCQ1lYTnBZeUJCZFhSbw0KWlc1MGFXTmhkR2x2Ymk0TkNsTjFiaUF5T1NCS1lXNGdNakF4TWlBeE1qb3lOVG96TVM0NE1EWWdPaUFqTVRBeE5Ub2dVMmgxZEdSdg0KZDI0NklDQkdjbVZsSUZCeWIzaDVJRk5sY25acFkyVWdjM1J2Y0hCbFpDNE5DZz09DQotLS0tLS09X05leHRQYXJ0XzAwMF9BQURBX0U5QzM4RkI3LkEyNEVCMjY5LS0NCg"}}
However, when I try a larger file that's still within the 35MB limit (e.g. an 11MB file), I get the following HTTP WebException:
The remote server returned an error: (413) Request Entity Too Large.
Is this a bug in the new API, or is this down to the fact I should be using the media endpoint instead for this kind of thing? If so, can anybody provide an example of how to do this using the .NET Client?
You need to use the /upload "media upload" path to upload anything over a few MB. The URL and POST format are slightly different:
You'd do:
POST https://www.googleapis.com/upload/gmail/v1/users/userId/drafts
add an HTTP header like "Content-Type: multipart/related; boundary=\"part_boundary\""
and the POST body looks more like:
--part_boundary
Content-Type: application/json; charset=UTF-8
{
}
--part_boundary
Content-Type: message/rfc822
From: script@example.org
To: user@example.com
Subject: test
body here
--part_boundary--
See the Gmail API media-upload documentation for more info.
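As a rough illustration of the body layout above (a Python sketch; build_draft_upload is a made-up helper, not part of the .NET client the question asks about):

```python
def build_draft_upload(raw_rfc822: str, boundary: str = "part_boundary"):
    """Assemble headers and body for POST .../upload/gmail/v1/users/userId/drafts."""
    body = (
        f"--{boundary}\r\n"
        "Content-Type: application/json; charset=UTF-8\r\n"
        "\r\n"
        "{}\r\n"  # empty draft metadata part, as in the example above
        f"--{boundary}\r\n"
        "Content-Type: message/rfc822\r\n"
        "\r\n"
        f"{raw_rfc822}\r\n"
        f"--{boundary}--"
    )
    headers = {"Content-Type": f'multipart/related; boundary="{boundary}"'}
    return headers, body
```

With the raw message passed as plain RFC 822 text, its size is no longer constrained by the metadata endpoint's much smaller request limit.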

App Engine Accept-Encoding

In the App Engine documentation, it is mentioned that if the request comes in with an "Accept-Encoding" header set, the response will automatically be compressed.
But when I look at the request, the header is not there, even though the browser sets it. When I try to set the header explicitly (with jQuery's ajax function), there is a message:
Refused to set unsafe header "Accept-Encoding"
This situation does not occur when working on localhost, where the request does have the "Accept-Encoding" header; it happens only after publishing. The refusal to set "Accept-Encoding" explicitly, however, happens always.
I searched everywhere but couldn't find an explanation for the problem. It would be really helpful if someone could explain...
You have two different problems:
App Engine does not compress the reply. GAE uses a number of factors to determine whether a response needs to be compressed; it takes the content type and the user agent into account when deciding. See the answer by Nick Johnson (from the GAE team).
jQuery refuses to set the "Accept-Encoding" header. Note that this is a browser restriction and has nothing to do with GAE. See this: Is it possible to force jQuery to make AJAX calls for URLs with gzip/deflate enabled?
I have a similar problem: in the HTTPRequest header, "Accept-Encoding" is null. As GAE has explained, it looks at the Accept-Encoding and User-Agent headers when deciding whether to compress, but in my case there is no way for GAE to recognize whether to compress.
The browser sets the header, but it does not appear in the request.

Headers and caching in REST service call from Silverlight

I've been developing a small Silverlight client which will talk to a REST service built using the WCF Web API.
When the service is called using GET, it kicks off a long-running process that generates a resource, so the service returns 'Accepted' and a URI in a Location header, pointing to where the resource will be found:
Server: ASP.NET Development Server/10.0.0.0
Date: Fri, 18 Nov 2011 09:00:17 GMT
X-AspNet-Version: 4.0.30319
Content-Length: 3
Location: http://localhost:52878/myservice?fileid=f68201f6-9d77-4818-820e-e5e796e9710a
Cache-Control: public, max-age=21600
Expires: 21600
Content-Type: text/plain
Connection: Close
Now, in my Silverlight client, I need to access this header information; however, using the BrowserHTTP stack this is not possible, so I've switched to the ClientHTTP stack, which lets me access the returned headers.
However the ClientHTTP stack doesn't support Content Caching:
http://www.wintellect.com/CS/blogs/jprosise/archive/2009/10/14/silverlight-3-s-new-client-networking-stack.aspx
which is causing me trouble: I want the same resource to be returned for 6 hours before a new one is generated.
Is there a way to get the best of both... being able to access the Header info AND have content caching??
TIA
Søren
Stop using a header to return the information needed by the client code.
If you include the required information in the entity body, either raw or encoded in some message format (e.g. XML or JSON), then you can continue to use BrowserHTTP and benefit from its caching.
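For example, the Accepted response might carry the same information in a JSON body instead (a hypothetical shape, not the actual WCF Web API output):

```json
{
  "status": "Accepted",
  "location": "http://localhost:52878/myservice?fileid=f68201f6-9d77-4818-820e-e5e796e9710a"
}
```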
Using the headers is the correct way to convey this information. That's why it's in the standard.
I don't do Silverlight, but what I get from that post is that you will now need to implement the caching yourself. Using BrowserHttp leverages the browser's caching mechanism; with ClientHttp you are dropping closer to the metal, and you will have to implement caching.
