How does Google App Engine modify HTTP request/response encoding?

I'm trying to send a JSON HTTP request from a Google App Engine application and retrieve the response. While this works great locally, it suddenly breaks when I deploy to GAE.
To be more precise, the HTTP response body that is returned to my application ends up looking like this instead of being simple JSON:
�\bD�[��8��ʖϣ�M�M$ �\\�bA` #!r���~pvk�cR]�_7E�
I did find one set of circumstances where I get a correct response on GAE, which might give some insight into this behavior: if the response doesn't have a content-type header it goes through fine, but as soon as a content-encoding header set to "gzip" is present, I get the garbage above as the response body.
Unfortunately I don't have control over the service I'm calling, so the only choice I have is to fix this somehow on my side. To fix it, I'm trying to understand what exactly Google does to the response. Does anyone know?
I understand that Google does some things to HTTP traffic. Is it forcing gzip on my responses as well?
I've also tried playing with encodings, reading the response as UTF-8 and setting UTF-8 as the default encoding for my GAE application as recommended here, but with no effect. I've ruled out incorrect processing of the response in my code or anything I'm using (at least I think so, otherwise I'd have the same problems locally). I'm trying to understand what exactly is happening, in the hope that it will give me some idea of how to prevent it.
EDIT: I figured it out and built a workaround, but it's still a workaround, not a solution. My GAE app calls another web service outside GAE which sometimes gzips the response and sometimes doesn't. When it does, GAE strips the content-type header from the response, preventing my app from correctly decoding the response body. My workaround so far is to get the response bytes and test whether they are valid JSON, unzipping them manually if they aren't. I'd still like to know whether the content-type stripping can be prevented...
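For illustration, here is a minimal Python sketch of that test-then-unzip workaround, assuming the raw response bytes are already in hand; the function name and structure are mine, not from the original app:
import gzip
import io
import json

def decode_body(raw_bytes):
    # Try plain JSON first; if that fails, check for the gzip magic
    # bytes (0x1f 0x8b) and decompress manually before parsing.
    try:
        return json.loads(raw_bytes.decode("utf-8"))
    except (UnicodeDecodeError, ValueError):
        if raw_bytes[:2] == b"\x1f\x8b":
            unzipped = gzip.GzipFile(fileobj=io.BytesIO(raw_bytes)).read()
            return json.loads(unzipped.decode("utf-8"))
        raise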

As explained at https://cloud.google.com/appengine/kb/#compression, the application should not supply the content-encoding header: "Google App Engine does its best to serve gzipped content to browsers that support it. Taking advantage of this scheme is automatic and requires no modifications to applications."
I believe this architecture (content encoding is controlled not by the application side of things -- on App Engine, your application code -- but by the server/gateway side) originated with WSGI, the standard Python interface between applications and web servers/gateways, which App Engine uses on the Python runtime. But the architecture makes enough sense that we generalized it; as the page above puts it, "This approach avoids some well-known bugs with gzipped content in popular browsers."
The client is far from powerless and, if it so chooses, can control the content encoding -- still quoting: "To force gzipped content to be served, clients may supply 'gzip' as the value of both the Accept-Encoding and User-Agent request headers. Content will never be gzipped if no Accept-Encoding header is present."
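As a hedged illustration of that last point, a client can opt in explicitly; this Python sketch uses urllib against a placeholder URL:
import urllib.request

req = urllib.request.Request(
    "https://example.appspot.com/",  # placeholder URL, not from the original answer
    headers={"Accept-Encoding": "gzip", "User-Agent": "gzip"},
)
with urllib.request.urlopen(req) as resp:
    body = resp.read()  # gzipped bytes if the server chose to compress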

Related

AppEngine authentication through Node.js

I'm trying to write a VSCode extension where users can log in to Google App Engine with a Google account, and I need to get their SACSID cookie to make App Engine requests.
So I'm opening a browser window at
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttp://localhost:3000/
(generated by google.appengine.api.users.create_login_url)
The user logs in and is redirected to my local webserver at
localhost:3000/_ah/conflogin/?state={state}
Now I try to forward the request to my AppEngine app (since it knows how to decode the state parameter), so I make a request to
https://my-app.appspot.com/_ah/conflogin/?state={state}
basically just replacing localhost with the actual app. But it doesn't work, presumably because the domain is different; I assume this is on purpose, for security.
Is there any way I can make this work?
Not ideal, but the only solution I've found is to have an endpoint on my GAE instance that does the redirection. I can then set that as the continue URL when starting the authentication process:
https://accounts.google.com/ServiceLogin?service=ah&passive=true&continue=https://appengine.google.com/_ah/conflogin%3Fcontinue%3Dhttps://my-app.appspot.com/redirect?to=http://localhost:3000
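A minimal sketch of such a redirect endpoint, assuming a Python runtime with webapp2; the route and parameter names mirror the URL above but are otherwise illustrative:
import webapp2

class RedirectHandler(webapp2.RequestHandler):
    def get(self):
        # Pass the conflogin state back to the address given in "to".
        to = self.request.get("to")
        state = self.request.get("state")
        self.redirect("%s?state=%s" % (to, state))

app = webapp2.WSGIApplication([("/redirect", RedirectHandler)])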
I think you should focus on the protocols you are using, since the session cookie's name depends on the protocol (ACSID for HTTP, SACSID for HTTPS); that's the security-relevant detail I can see at this point.
Seeing the exact error you are facing would help in understanding the problem better. It would also help to know how you are performing the call to the API, and to see the code you are using.

How to detect that a request is originating from a Good Mobile browser

We have a requirement to redirect the request to a mobile version of the app if it originates from a mobile device. I'm using the existence of X-WAP-Profile in the header, and it seems to work with BlackBerry; however, when we try to test on the Good (Secure Mobile) Browser it doesn't work. It looks like the header is not present in this case. I'm accessing from an iPhone.
So there are two questions:
What is a conclusive way of recognising that the request originates from the Good Browser?
Will this change based on the kind of device the mobile browser is used from, i.e. iPad/iPhone/Android, etc.?
If there is a way to avoid relying on the user-agent (assuming user agents change across devices and mobile OS types), I would prefer that method of detection.
Any pointers in this regard would be appreciated.
Ultimately, an HTTP request, including its headers, is text, and this text can be anything a piece of software wants to send; I can easily have a mobile browser that reports itself as a desktop browser. What this means is that there is no absolute, conclusive way of recognizing anything about the source of a request. All you can reasonably do is trust the user-agent string and respond to as many different values as you can. If you get no value at all, you'll have to make a default decision about which version of the app to serve.
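As a rough Python sketch of that advice, assuming a dict-like headers object; the token list is illustrative, not exhaustive:
MOBILE_TOKENS = ("iphone", "ipad", "android", "blackberry", "good")

def is_mobile(headers):
    # Trust the user-agent string first, then fall back to other hints.
    ua = headers.get("User-Agent", "").lower()
    if any(token in ua for token in MOBILE_TOKENS):
        return True
    # Some devices advertise a WAP profile instead.
    if "X-WAP-Profile" in headers:
        return True
    # No usable signal: make a default decision (desktop here).
    return False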

Youtube API (v3) and CORS preflight

EDIT: Looks like this post is much ado about nothing, as OPTIONS preflighting was introduced late last fall; my problems have been more user error than anything. See the comments for more details (I'll leave it all here for documentation's sake). --jlm
It's a known issue that the YouTube API does not support the OPTIONS request method, so any web app that tries to do a cross-origin preflight will fail, even though (as is the case with the YouTube API) the actual CORS request would succeed. As we've been facing this issue yet again this week, we've come up with three workarounds; however, each has its own pluses and minuses.
POSSIBLE WORKAROUND #1: Forget the preflight, and just make the simple cross-origin request.
Benefits -- A) it works. B) this is all the specification requires when doing most GET or POST requests.
Drawbacks -- A) The specification states that any PATCH, PUT, or DELETE requests, along with POST requests that use a content-type other than form-data, url-encoded, or text/plain, need to be preflighted. So while it would work, it would be breaking the spec (which we'd like to avoid if possible). B) Preflighting is also necessary when setting custom headers; for example, AngularJS's $http method in the 1.0 branch sets a custom header and thus triggers preflighting of even GET requests. In this case, of course, I could write my own $http service or move to the 1.1 branch (as the problem would really be on Angular's end).
POSSIBLE WORKAROUND #2: Use JSON-P
Benefits -- A) it works, too (in some cases, anyway). B) It's fairly simple to set up.
Drawbacks -- A) An older technology, and it isn't clear whether the Youtube API will continue to support JSONP. B) Requires a callback. C) Limited to GET requests only.
POSSIBLE WORKAROUND #3: Set up a server-side proxy on the same domain, use it to communicate with the Youtube API.
Benefits -- A) Avoids the need to do CORS requests at all, since the client works on the same origin and the server-side proxy has no need to preflight anything.
Drawbacks -- A) Can get complicated to set up, especially if trying to work with credentials (OAuth2 through a proxy can be quite the beast). A minimal proxy sketch follows below.
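A bare-bones sketch of workaround #3, assuming a Python/Flask server; the route, upstream URL, and absence of any auth handling are all illustrative, not from the original post:
from flask import Flask, Response, request
import urllib.request

app = Flask(__name__)

@app.route("/youtube/<path:api_path>", methods=["GET"])
def youtube_proxy(api_path):
    # Forward the browser's same-origin request to the YouTube API,
    # so the browser itself never has to make a cross-origin call.
    upstream = "https://www.googleapis.com/youtube/v3/%s?%s" % (
        api_path, request.query_string.decode("utf-8"))
    with urllib.request.urlopen(upstream) as resp:
        return Response(resp.read(), status=resp.status,
                        content_type=resp.headers.get("Content-Type"))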
So that this post has an actual question (or several, actually):
What take do you have (for better or worse) on any or all of the workarounds above?
Has anyone implemented other solutions?
Is there any information as to if/when the Youtube servers might support the OPTIONS method?
Any and all comments are welcome -- and I apologize in advance if this isn't the best forum for such a question (although I'm hoping that by putting it here on Stack Overflow, it'll prove useful to others facing the same issue).
The YouTube upload API (uploads.gdata.youtube.com) doesn't support CORS.
You may vote for the issues here and here. There are also several forum posts about it ("CORS is not working!").
It seems Google is still working on a fix, but they started about a year ago, so most likely the API will be deprecated before the issue is fixed (we already have a v3).
The only workaround is to use a proxy (which is not cool): basically, upload the video to your server and then send it to YouTube.

Caching & GZip on GAE (Community Wiki)

Why does it seem like Google App Engine isn’t setting appropriate cache-friendly headers (like far-future expiration dates) on my CSS stylesheets and JavaScript files? When does GAE gzip those files? My app.yaml marks the respective directories as static_dirs, so the lack of far-future expiration dates is kind of surprising to me.
This is a community wiki to showcase the best practices regarding static file caching and gzipping on GAE!
How does GAE handle caching?
It seems GAE sets near-future cache expiration times but does use the ETag header. This is used so browsers can ask, “Has this file changed since it had an ETag of X68f0o?” and hear “Nope – 304 Not Modified” back in response. A sketch of this conditional request follows the trade-off list below.
As opposed to far-future expiration dates, this has the following trade-offs:
Your end users will get the latest copies of your resources, even if they have the same name (unlike far-future expiration). This is good.
Your end users will, however, still have to make a request to check on the status of that file. This does slow down your site, and is “pure overhead” when the content hasn’t changed. This is not ideal.
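For illustration, a minimal Python sketch of that conditional request; the URL and ETag value are placeholders:
import urllib.request
import urllib.error

req = urllib.request.Request(
    "https://example.appspot.com/static/styles.css",  # placeholder URL
    headers={"If-None-Match": '"X68f0o"'},  # ETag from an earlier response
)
try:
    resp = urllib.request.urlopen(req)  # 200: file changed, new body follows
    body = resp.read()
except urllib.error.HTTPError as err:
    if err.code == 304:  # Not Modified: reuse the cached copy
        body = None
    else:
        raise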
Opting for far-future cache expiration instead of (just) etag
To use far-future expiration dates takes two steps and a bit of understanding.
You have to manually update your app to request new versions of resources, e.g. by naming files mysitestyles.2011-02-11T0411.css instead of mysitestyles.css. There are tools to help automate this, but I’m not aware of any that directly relate to GAE.
Configure GAE to set the expiration times you want by using default_expiration and/or expiration in app.yaml (see the GAE docs on static files).
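A minimal app.yaml sketch of that second step; the directory name and durations are illustrative, not from the original post:
default_expiration: "1d"

handlers:
- url: /static
  static_dir: static
  # Far-future expiration for versioned/fingerprinted files.
  expiration: "365d"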
A third option: Application manifests
Cache manifests are an HTML5 feature that overrides cache headers. MDN article, DiveIntoHTML5, W3C. This affects more than just your script and style files' caching, however. Use with care!
When does GAE gzip?
According to Google’s FAQ,
Google App Engine does its best to serve gzipped content to browsers that support it. Taking advantage of this scheme is automatic and requires no modifications to applications.
We use a combination of request headers (Accept-Encoding, User-Agent) and response headers (Content-Type) to determine whether or not the end-user can take advantage of gzipped content. This approach avoids some well-known bugs with gzipped content in popular browsers. To force gzipped content to be served, clients may supply 'gzip' as the value of both the Accept-Encoding and User-Agent request headers. Content will never be gzipped if no Accept-Encoding header is present.
This is covered further in the runtime environment documentation (Java | Python).
Some real-world observations do show this to generally be true. Assuming a gzip-capable browser:
GAE gzips actual pages (if they have proper content-type headers like text/html; charset=utf-8)
GAE gzips scripts and styles in static_dirs (defined in app.yaml).
Note that you should not expect GAE to gzip images like GIFs or JPEGs as they are already compressed.

Silverlight and SSL Client Certificates

Can anyone point me in the right direction of how I can use SSL client-side certificates with Silverlight to access a restful web service?
I can't seem to find anything on how to handle them, or even whether they are supported.
Cheers.
Slipjig mentioned this:
"The browser stack does, and pretty much automatically, if you're willing to live with its other limitations (lack of support for all HTTP verbs, coercion of response status codes, etc.)."
If that is acceptable to you, look at how Microsoft themselves deal with this in some of their APIs using the custom X-HTTP-Method header, as they do for WCF and OData:
http://www.odata.org/developers/protocols/operations
On MSDN, Microsoft also mentions this about using REST in conjunction with SharePoint 2010's WCF-based REST API:
msdn.microsoft.com/en-us/library/ff798339.aspx
"In practice, many firewalls and other network intermediaries block HTTP verbs other than GET and POST. To work around this issue, WCF Data Services (and the OData standard) support a technique known as "verb tunneling." In this technique, PUT, DELETE, and MERGE requests are submitted as a POST request, and an X-HTTP-Method header specifies the actual verb that the recipient should apply to the request. For more information, see X-HTTP-Method on MSDN and OData: Operations (the Method Tunneling through POST section) on the OData Web site."
Don Box also had some words about this, regarding GData specifically:
www.pluralsight-training.net/community/blogs/dbox/archive/2007/01/16/45725.aspx
"If I were building a GData client, I honestly wonder why I'd bother using DELETE and PUT methods at all given that X-HTTP-Method-Override is going to work in more cases/deployments."
There's an article about Silverlight and Java interop which also addresses this limitation of Silverlight by giving the same advice:
www.infoq.com/articles/silverlight-java-interop
"Silverlight supports only the GET and POST HTTP methods. Some firewalls restrict the use of PUT and DELETE HTTP methods.
It is important to point out that true RESTful service can be created (conforming to all the REST principles listed above) only using the GET and POST HTTP methods, in other words the REST architecture does not require a specific mapping to HTTP. Google’s GData X-Http-Method-Override header is an example of this approach.
The following HTTP methods overrides may be set in the header to accomplish the PUT and DELETE actions if the web services interpret the X-HTTP-Method-Override header on a POST:
* X-HTTP-Method-Override: PUT
* X-HTTP-Method-Override: DELETE"
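As a minimal sketch of that verb tunneling (shown in Python for brevity; the endpoint is a placeholder, and the server must actually honor the override header):
import urllib.request

# Send a POST carrying X-HTTP-Method-Override so a GET/POST-only
# client can still express a DELETE to a cooperating service.
req = urllib.request.Request(
    "https://example.com/service/items/1",  # placeholder endpoint
    data=b"",  # empty POST body
    headers={"X-HTTP-Method-Override": "DELETE"},
    method="POST",
)
urllib.request.urlopen(req)  # the server applies the tunneled DELETE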
Hope this helps
-Josh
It depends on whether you're using the browser HTTP stack or the client HTTP stack. The client stack does not support client certificates, period. The browser stack does, and pretty much automatically, if you're willing to live with its other limitations (lack of support for all HTTP verbs, coercion of response status codes, etc.).
I have, however, been running into a problem using the browser stack with client certificates in an OOB scenario: Prism module loading fails under these conditions - the request gets to IIS, but causes a 500 server error for no apparent reason. If I set IIS to ignore client certs, or if I run the app in-browser, it works fine :-/
Take a look at this:
http://support.microsoft.com/kb/307267
Just change your URLs to https.
Hope this helps.
Dim url As Uri = New Uri(Application.Current.Host.Source, "../WebService.asmx")
Dim binding As New System.ServiceModel.BasicHttpBinding()

' Use transport security when the service is reached over HTTPS.
If url.Scheme = "https" Then
    binding.Security.Mode = ServiceModel.BasicHttpSecurityMode.Transport
End If

' Both limits raised to Integer.MaxValue to work around a size-limit bug.
binding.MaxBufferSize = 2147483647
binding.MaxReceivedMessageSize = 2147483647

Dim proxy As New ServiceReference1.WebServiceSoapClient(binding, New ServiceModel.EndpointAddress(url))
proxy.InnerChannel.OperationTimeout = New TimeSpan(0, 10, 0)
