Google App Engine Flex environment: ETag HTTP header removed when resource is gzip'd?

I have a custom Node.js application deployed on the Google App Engine flexible environment that uses dynamically calculated digest hashes to set ETag HTTP response headers for specific resources. This works fine on an AWS EC2 instance, but not on the Google App Engine flexible environment: in some cases Google App Engine appears to remove my application's custom ETag HTTP response header, which severely degrades the performance of the application and will be needlessly expensive.
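To make the setup concrete, the ETags are derived from a digest of each resource's bytes, roughly like this minimal sketch (hypothetical code, not the actual #encapsule/holism implementation):

const crypto = require('crypto');

// Derive a strong ETag from a digest hash of the resource's bytes.
// MD5/base64 is just for illustration; any stable digest works.
function makeETag(resourceData) {
    const hash = crypto.createHash('md5').update(resourceData).digest('base64');
    return '"' + hash + '"'; // strong ETags are quoted strings (RFC 7232)
}

// e.g. in a response handler:
// response.setHeader('ETag', makeETag(fileBuffer));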
Specifically, it appears that the Google App Engine flex environment strips my application's ETag header when it gzips eligible resources.
For example, if I use curl to request a utf8::application/json resource AND do not indicate that I will accept the response in compressed format, then everything works as I would expect: the resource is returned along with my custom ETag header, whose value is a digest hash of the resource's data.
curl https://viewpath5.appspot.com/javascript/client-app-bundle.js --verbose
... we get client-app-bundle.js as an uncompressed UTF-8 resource, along with an ETag HTTP response header whose value is a digest hash of the JavaScript file's data.
However, if I emulate my browser and set the Accept-Encoding HTTP request header to indicate to Google App Engine that my user agent (here curl) will accept a compressed resource, then I never get the ETag HTTP response header.
UNCOMPRESSED (WORKING) CASE:
$ curl --verbose https://xxxxxxxx.appspot.com/javascript/client-app-bundle.js
* Hostname was NOT found in DNS cache
* Trying zzz.yyy.xxx.www...
* Connected to xxxxxxxx.appspot.com (zzz.yyy.xxx.www) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.appspot.com
* start date: 2019-05-07 11:31:13 GMT
* expire date: 2019-07-30 10:54:00 GMT
* subjectAltName: xxxxxxxx.appspot.com matched
* issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
* SSL certificate verify ok.
> GET /javascript/client-app-bundle.js HTTP/1.1
> User-Agent: curl/7.38.0
> Host: xxxxxxxx.appspot.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 23 May 2019 00:24:06 GMT
< Content-Type: application/javascript
< Content-Length: 4153789
< Vary: Accept-Encoding
< ETag: #encapsule/holism::kiA2cG3c9FzkpicHzr8ftQ
< Cache-Control: must-revalidate
* Server #encapsule/holism v0.0.13 is not blacklisted
< Server: #encapsule/holism v0.0.13
< Via: 1.1 google
< Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
<
/******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/
.... and lots more JavaScript. Importantly, note the ETag header in the HTTP response.
COMPRESSED (FAILING) CASE:
$ curl --verbose -H "Accept-Encoding: gzip" https://xxxxxxxx.appspot.com/javascript/client-app-bundle.js
* Hostname was NOT found in DNS cache
* Trying zzz.yyy.xxx.www...
* Connected to xxxxxxxx.appspot.com (zzz.yyy.xxx.www) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.appspot.com
* start date: 2019-05-07 11:31:13 GMT
* expire date: 2019-07-30 10:54:00 GMT
* subjectAltName: xxxxxxxx.appspot.com matched
* issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
* SSL certificate verify ok.
> GET /javascript/client-app-bundle.js HTTP/1.1
> User-Agent: curl/7.38.0
> Host: xxxxxxxx.appspot.com
> Accept: */*
> Accept-Encoding: gzip
>
< HTTP/1.1 200 OK
< Date: Thu, 23 May 2019 00:27:15 GMT
< Content-Type: application/javascript
< Vary: Accept-Encoding
< Cache-Control: must-revalidate
* Server #encapsule/holism v0.0.13 is not blacklisted
< Server: #encapsule/holism v0.0.13
< Content-Encoding: gzip
< Via: 1.1 google
< Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
< Transfer-Encoding: chunked
<
�}{{G���˧�(�.rb�6`����1ƀw���,���4�$�23,���UU_碋-��
No ETag?
To me it seems incorrect that my application's custom ETag HTTP response header is removed; shouldn't gzip compression on the server and subsequent decompression in the user agent be wholly encapsulated as an implementation detail of the network transport?

This behavior is caused by the NGINX proxy sidecar container that handles requests on GAE flex.
NGINX removes ETag headers when compressing content, presumably to honor the strong ETag's byte-for-byte identity semantics (gzipping changes the bytes), but I'm not sure of that.
Unfortunately, there is no way to configure the NGINX proxy in GAE Flex (other than manually SSHing into the container on each instance, changing nginx.conf, and restarting the nginx proxy).
The only workaround I know of is to loosen the ETag strictness by making it "weak": prepend "W/" to the value, as specified in https://www.rfc-editor.org/rfc/rfc7232.
This is not documented. There's already an internal feature request to the App Engine documentation team to include this behavior in our public documentation.
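In the Node.js application above, the workaround could look like this minimal sketch (hypothetical, reusing the hypothetical makeETag() digest helper sketched earlier):

// Emit a weak validator instead of a strong one. The W/ prefix tells
// caches the comparison is semantic rather than byte-for-byte, so a
// proxy that recompresses the body can legitimately keep the header.
function makeWeakETag(resourceData) {
    return 'W/' + makeETag(resourceData);
}

// response.setHeader('ETag', makeWeakETag(fileBuffer));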

Related

hawkBit swupdate Suricatta: HTTP/1.1 401 Unauthorized

I want to set up hawkBit (running on a server) and swupdate (running on multiple Linux clients) to perform OS/software updates in Suricatta mode.
1/ Following up on my post in the hawkBit community, I've succeeded in running hawkBit on my server as below:
Exported to external link: http://:
Enabled MariaDB
Enabled Gateway Token Authentication (in hawkBit system configuration)
Created a software module
Uploaded an artifact
Created a distribution set
Assigned the software module to the distribution set
Created Targets (in the Deployment Management UI) with Target ID "dev01"
Created a Rollout
Created a Target Filter
2/ I've succeeded in building/executing swupdate following the SWUpdate guideline:
Enabled Suricatta daemon mode
Run swupdate: /usr/bin/swupdate -v -k /etc/public.pem -u '-t DEFAULT -u http://<domain>:<port> -i dev01'
I'm pretty sure this command isn't correct; the output log is below:
* Trying <ip address>...
* TCP_NODELAY set
* Connected to <domain> (<ip address>) port <port> (#0)
> GET /DEFAULT/controller/v1/10 HTTP/1.1
Host: <domain>:<port>
User-Agent: libcurl-agent/1.0
Content-Type: application/json
Accept: application/json
charsets: utf-8
< HTTP/1.1 401 Unauthorized
< Date: Sun, 16 May 2021 02:43:40 GMT
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-Frame-Options: DENY
< Content-Length: 0
<
* Connection #0 to host <domain> left intact
[TRACE] : SWUPDATE running : [channel_log_effective_url] : Channel's effective URL resolved to http://<domain>:<port>/DEFAULT/controller/v1/dev01
[ERROR] : SWUPDATE failed [0] ERROR corelib/channel_curl.c : channel_get : 1109 : Channel operation returned HTTP error code 401.
[DEBUG] : SWUPDATE running : [suricatta_wait] : Sleeping for 45 seconds.
As per a suggestion from #laverman on Gitter:
You can use Gateway token in the Auth header of the request, e.g. “Authorization : GatewayToken a56cacb7290a8d8a96a2f149ab2f23d1”
but I don't know how the client sends this request (it should be sent by swupdate, right?)
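For illustration, the authorized poll (the same URL that appears in the swupdate log above) would look roughly like this hypothetical Node.js sketch; <domain>, <port>, and the token value are placeholders:

// Hypothetical sketch of the DDI poll that swupdate performs internally
// via libcurl, shown here only to illustrate the Authorization header.
fetch('http://<domain>:<port>/DEFAULT/controller/v1/dev01', {
    headers: {
        'Accept': 'application/json',
        // Gateway token generated in hawkBit's System Config view:
        'Authorization': 'GatewayToken a56cacb7290a8d8a96a2f149ab2f23d1',
    },
}).then((res) => console.log(res.status)); // expect 200 instead of 401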
3/ Following the instructions from the Tutorial # EclipseCon Europe 2019, I was able to send requests to provision multiple clients from the hawkBit Device Simulator. The problem is how to apply this to real devices.
Another point of confusion: when creating a new Software Module or Distribution in the hawkBit UI, I can't find their IDs, but when creating them by sending requests as in the Tutorial, I can see the IDs in the response.
So my questions are:
1/ Are my hawkBit setup steps correct?
2/ How can I configure/run swupdate (on clients) to perform the update: poll for new software, download, update, report status, ...
If my description isn't clear enough, please tell me.
Thanks
Happy to see that you're trying out hawkBit for your solution!
I have a few remarks:
The suricatta parameter for GatewayToken is -g, and -k for TargetToken respectively.
The -g <GATEWAY_TOKEN> needs to be set inside the quotation marks;
see the SWUpdate documentation.
Example: /usr/bin/swupdate -v -u '-t DEFAULT -u http://<domain>:<port> -i dev01 -g 76430e1830c56f2ea656c9bbc88834a3'
For GatewayToken authentication, you need to provide the token generated in the System Config view; it is a generated hash that looks similar to the one in the example above.
You can also authenticate each device/client separately using its own TargetToken.
You can find more information in the hawkBit documentation.

How to make libCurl use hexadecimal cnonce instead of alphanumeric in Digest authentication header

I have recently started using libcurl for a client-server communication project, where I use libcurl on the client side. We used to use WinHTTP, but did not find a way to add TLS 1.3 backward compatibility with earlier Windows versions.
The cnonce is part of the Digest Authentication headers.
When my project was using WinHTTP, the cnonce used to be hexadecimal.
Eg:
cnonce="a01e21c2a827ec6d3d9b6e1745ca8a0b"
HTTP Header
Server to client:
HTTP/1.1 401 Unauthorized
Content-Length: 26
WWW-Authenticate: Negotiate
WWW-Authenticate: Digest realm="a2ffc77914d6e791d", qop="auth",nonce="3f3da4b94e249058", opaque ="3b3542c"
Client to Server:
POST /wsman HTTP/1.1
Connection: Keep-Alive
Content-Type: application/soap+xml;charset=UTF-8
User-Agent: Openwsman
Content-Length: 889
Host: 10.138.141.178:623
Authorization: Digest username="PostMan",realm="a2ffc77914d6e791d",nonce="3f3da4b94e249058",uri="/wsman",cnonce="a01e21c2a827ec6d3d9b6e1745ca8a0b",nc=00000001,response="9dd37ef997ef332e46dff0f868b3de89",qop="auth",opaque="3b3542c"
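For reference, the cnonce is an opaque quoted string that both sides simply feed into the response hash; RFC 7616 does not require any particular alphabet. A sketch of the qop=auth computation (illustrative Node.js, not the libcurl internals):

const crypto = require('crypto');
const md5 = (s) => crypto.createHash('md5').update(s).digest('hex');

// response = MD5(HA1:nonce:nc:cnonce:qop:HA2) for qop="auth"
function digestResponse({ user, pass, realm, method, uri, nonce, nc, cnonce }) {
    const ha1 = md5(`${user}:${realm}:${pass}`);   // HA1
    const ha2 = md5(`${method}:${uri}`);           // HA2 (qop=auth)
    return md5(`${ha1}:${nonce}:${nc}:${cnonce}:auth:${ha2}`);
}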
When I look at the HTTP header, I find that with curl the cnonce is alphanumeric.
Eg:
cnonce="NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA="
HTTP Header
Server to Client
HTTP/1.1 401 Unauthorized
Content-Length: 0
Connection: Keep-Alive
Content-Type: application/soap+xml;charset=UTF-8
WWW-Authenticate: Digest realm="a2ffc77914d6e791d", nonce="5bf1156647e8eb42", algorithm="MD5", qop="auth", opaque="661d9eae", userhash=true
Client to Server
POST /wsman HTTP/1.1
Host: blr-5cg64728l6.amd.com:623
Authorization: Digest username="PostMan", realm="a2ffc77914d6e791d", nonce="5bf1156647e8eb42", uri="/wsman", cnonce="NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA=", nc=00000001, qop=auth, response="6847e465c9c90b40264b736070f721da", opaque="661d9eae", algorithm=MD5, userhash=true
Accept: */*
Content-Type: application/soap+xml;charset=UTF-8
User-Agent: Openwsman
Content-Length: 897
With an alphanumeric cnonce, the server is not responding back consistently. Is there a way to explicitly tell libcurl to generate a hexadecimal cnonce?
Note: to avoid security risks, the fields in the headers above have been modified.
I am using libcurl 7.73 with the OpenSSL TLS backend (1.1.1h).
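As an aside, the "alphanumeric" cnonce in the capture above looks like Base64. A quick hypothetical Node.js check on the sample value:

// Decode the anonymized sample cnonce from the capture above.
const cnonce = 'NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA=';
console.log(Buffer.from(cnonce, 'base64').toString('ascii'));
// Appears to print a 32-character hex string, i.e. curl seems to
// Base64-encode an internally generated hex value.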

Configuring Nginx to serve HTTPS is failing, however serving over HTTP is working

First off, I completely understand there are many, many articles out there on this same topic. However, having read and re-read many of them, I am still stuck.
I have a very simple static webpage (it's a Hello World HTML page, but eventually it will serve a production build of a React app) that I need to have served via Nginx over HTTPS.
I have been able to serve the HTML over HTTP, but not HTTPS.
My Nginx config file that works correctly to serve over HTTP is:
server {
    listen 80;
    server_name app.mycompany.com;

    root /opt/mycompany/mycompany-web/packages/client/build;
    index index.html;

    access_log /var/log/nginx/app.mycompany.com.access.log;
    error_log /var/log/nginx/app.mycompany.com.error.log debug;

    location / {
        try_files $uri /index.html =404;
    }
}
My Nginx config file that does NOT work correctly to serve over HTTPS is:
server {
    listen [::]:443 ssl ipv6only=on;
    listen 443 ssl;
    server_name app.mycompany.com;

    root /opt/mycompany/mycompany-web/packages/client/build;
    index index.html;

    ssl_certificate /etc/letsencrypt/live/app.mycompany.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.mycompany.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    access_log /var/log/nginx/app.mycompany.com.access.log;
    error_log /var/log/nginx/app.mycompany.com.error.log debug;

    location / {
        try_files $uri /index.html =404;
    }
}
Using the first config, my webpage is served correctly; however, using the second config I simply get a 404.
From what I've read on the internet, the above looks correct.
Does anyone see where I might be going wrong? How can I begin to debug my HTML not being served over HTTPS?
Update (per Timo Stark's comment)
curling to https://localhost:443 returns a 404:
# curl -ikv https://localhost:443
* Rebuilt URL to: https://localhost:443/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-AES256-GCM-SHA384
* Server certificate:
* subject: CN=app.mycompany.com
* start date: 2020-03-31 18:34:56 GMT
* expire date: 2020-06-29 18:34:56 GMT
* issuer: C=US; O=Let's Encrypt; CN=Let's Encrypt Authority X3
* SSL certificate verify ok.
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: localhost
> Accept: */*
>
< HTTP/1.1 404 Not Found
HTTP/1.1 404 Not Found
* Server nginx/1.4.6 (Ubuntu) is not blacklisted
< Server: nginx/1.4.6 (Ubuntu)
Server: nginx/1.4.6 (Ubuntu)
< Date: Wed, 01 Apr 2020 00:02:16 GMT
Date: Wed, 01 Apr 2020 00:02:16 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 177
Content-Length: 177
< Connection: keep-alive
Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.4.6 (Ubuntu)</center>
</body>
</html>
* Connection #0 to host localhost left intact
Update 2
$ nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
And my /etc/nginx/nginx.conf file is:
# cat /etc/nginx/nginx.conf
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

Google's resumable video upload status API endpoint for Google Drive is failing with HTTP 400: "Failed to parse Content-Range header."

In order to resume an interrupted upload to Google Drive, we have implemented status requests to Google's API following this guide:
https://developers.google.com/drive/v3/web/resumable-upload#resume-upload
Request:
PUT https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID HTTP/1.1
Content-Length: 0
Content-Range: bytes */*
It works perfectly in most cases. However, the following error occurs occasionally, and even retries of the same call result in the same erroneous response.
Response:
HTTP 400: "Failed to parse Content-Range header."
We are using the google.appengine.api.urlfetch Python library to make this request in our Python App Engine backend.
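For reference, here is the same status probe as a minimal Node.js sketch (hypothetical; UPLOAD_ID is a placeholder, exactly as in the raw request above):

// Probe the resumable session's status; a 308 response carries a
// Range header reporting how many bytes the server already has.
const https = require('https');
const req = https.request(
    'https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID',
    { method: 'PUT', headers: { 'Content-Length': 0, 'Content-Range': 'bytes */*' } },
    (res) => console.log(res.statusCode, res.headers.range)
);
req.end();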
Any ideas?
EDIT:
I could replicate this issue using cURL
curl -X PUT 'https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID' -H 'Content-Length: 0' -vvvv -H 'Content-Range: bytes */*'
Response:
* Trying 172.217.25.138...
* Connected to www.googleapis.com (172.217.25.138) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 697 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.googleapis.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: C=US,ST=California,L=Mountain View,O=Google Inc,CN=*.googleapis.com
* start date: Thu, 16 Mar 2017 08:54:00 GMT
* expire date: Thu, 08 Jun 2017 08:54:00 GMT
* issuer: C=US,O=Google Inc,CN=Google Internet Authority G2
* compression: NULL
* ALPN, server accepted to use http/1.1
> PUT /upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID HTTP/1.1
> Host: www.googleapis.com
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 0
> Content-Range: bytes */*
>
< HTTP/1.1 400 Bad Request
< X-GUploader-UploadID: UPLOAD_ID
< Content-Type:
< X-GUploader-UploadID: UPLOAD_ID
< Content-Length: 37
< Date: Thu, 23 Mar 2017 22:45:58 GMT
< Server: UploadServer
< Content-Type: text/html; charset=UTF-8
< Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
<
* Connection #0 to host www.googleapis.com left intact
Failed to parse Content-Range header.

Angular + Node + Express + Passport + oauth2orize unique CORS issues

I've built an API to use for local auth and Facebook auth.
I'm using node, express, passport and oauth2orize for the authorization process.
I'm now running the API perfectly through terminal applications and API testing suites; however, when making calls to my authentication endpoints from Angular I receive the following:
Local auth:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
http://localhost:4200/oauth2/auth
?client_id=[CLIENT_ID]
&redirect_uri=http:%2F%2Flocalhost:4200%2Foauth2%2Fauth%2Fcallback (http://localhost:4200/oauth2/auth/callback)
&response_type=code.
This can be fixed by moving the resource to the same domain or enabling CORS.
Facebook auth:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
https://www.facebook.com/dialog/oauth
?response_type=code
&redirect_uri=http%3A%2F%2Flocalhost%3A4200%2Fauth%2Ffacebook%2Fcallback (http://localhost/auth/facebook/callback)
&client_id=[CLIENT_ID].
This can be fixed by moving the resource to the same domain or enabling CORS.
I have had CORS issues in the past and integrated the npm 'cors' middleware module found at https://www.npmjs.com/package/cors
CORS init:
var cors = require('cors');
api.use(cors());
With my previous issues this was sufficient; however, with these new CORS issues it's not helping.
I've also noticed in Firefox that if I click on the error message, a new dialog window opens up as it should and the server continues to correctly authorize the user.
Could anyone help?
UPDATE 1:
Check comments for screenshot of debug info.
UPDATE 2:
Response headers for the last 2 requests performed in the login flow.
204:
Access-Control-Allow-Credentials: true
Connection: keep-alive
Date: Fri, 06 Feb 2015 15:26:43 GMT
Vary: Origin
X-Powered-By: Express
access-control-allow-headers: authorization
access-control-allow-methods: GET,HEAD,PUT,PATCH,POST,DELETE
access-control-allow-origin: http://localhost:8100
302:
Access-Control-Allow-Credentials: true
Connection: keep-alive
Content-Length: 138
Content-Type: text/plain; charset=utf-8
Date: Fri, 06 Feb 2015 15:26:43 GMT
Location: http://localhost:4200/oauth2/auth/callback?code=[CODE_HERE]
Set-Cookie: connect.sid=[SID_HERE]; Path=/; HttpOnly
Vary: Origin, Accept
X-Powered-By: Express
access-control-allow-origin: http://localhost:8100
The earlier examples in the docs don't handle the preflight request and don't specify any origins; specifying explicit origins is required if you want to send credentials with the requests (for example, your authorization header). Here's an example:
var whitelist = ['https://localhost:4200']; // Acceptable domain names, e.g. https://www.example.com

var corsOptions = {
    credentials: true,
    origin: function(origin, callback) {
        var originIsWhitelisted = whitelist.indexOf(origin) !== -1;
        callback(null, originIsWhitelisted);
        // callback(null, true); uncomment this and comment the above to allow all
    }
};

// Enable CORS
app.use(cors(corsOptions));

// Enable CORS Pre-Flight
app.options('*', cors(corsOptions));
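One caveat (an assumption about the client, since the question doesn't show the Angular side): with credentials: true on the server, the browser only attaches cookies/credentials if the client opts in as well, e.g. in AngularJS:

// Hypothetical AngularJS client config matching the server's
// credentials:true; without it the browser omits credentials on XHR.
app.config(['$httpProvider', function($httpProvider) {
    $httpProvider.defaults.withCredentials = true;
}]);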
