Google's resumable video upload status API endpoint for Google Drive is failing with HTTP 400: "Failed to parse Content-Range header."

In order to resume an interrupted upload to Google Drive, we have implemented status requests to Google's API following this guide:
https://developers.google.com/drive/v3/web/resumable-upload#resume-upload
Request:
PUT
https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID HTTP/1.1
Content-Length: 0
Content-Range: bytes */*
It works in most cases. Occasionally, however, the following error occurs, and even retries of the same call return the same erroneous response.
Response:
HTTP 400: "Failed to parse Content-Range header."
We are using the google.appengine.api.urlfetch Python library to make this request in our Python App Engine backend.
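For reference, the sketch below mirrors how we build that status request in our backend (names are illustrative; google.appengine.api.urlfetch only imports inside the GAE runtime, so the actual fetch call is shown as a comment):

```python
# Minimal sketch of the resumable-upload status probe. Request construction
# is factored out; the urlfetch call itself only works inside App Engine.
UPLOAD_BASE = "https://www.googleapis.com/upload/drive/v3/files"

def build_status_request(upload_id):
    """Return (url, headers) for a resumable-upload status probe."""
    url = "%s?uploadType=resumable&upload_id=%s" % (UPLOAD_BASE, upload_id)
    headers = {
        "Content-Length": "0",
        "Content-Range": "bytes */*",  # "*/*" asks how many bytes the server already has
    }
    return url, headers

# Inside the App Engine handler this is sent roughly as:
#   from google.appengine.api import urlfetch
#   urlfetch.fetch(url, method=urlfetch.PUT, headers=headers, payload="")
```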
Any ideas?
EDIT:
I could replicate this issue using cURL:
curl -X PUT 'https://www.googleapis.com/upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID' -H 'Content-Length: 0' -vvvv -H 'Content-Range: bytes */*'
Response:
* Trying 172.217.25.138...
* Connected to www.googleapis.com (172.217.25.138) port 443 (#0)
* found 173 certificates in /etc/ssl/certs/ca-certificates.crt
* found 697 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
* server certificate verification OK
* server certificate status verification SKIPPED
* common name: *.googleapis.com (matched)
* server certificate expiration date OK
* server certificate activation date OK
* certificate public key: RSA
* certificate version: #3
* subject: C=US,ST=California,L=Mountain View,O=Google Inc,CN=*.googleapis.com
* start date: Thu, 16 Mar 2017 08:54:00 GMT
* expire date: Thu, 08 Jun 2017 08:54:00 GMT
* issuer: C=US,O=Google Inc,CN=Google Internet Authority G2
* compression: NULL
* ALPN, server accepted to use http/1.1
> PUT /upload/drive/v3/files?uploadType=resumable&upload_id=UPLOAD_ID HTTP/1.1
> Host: www.googleapis.com
> User-Agent: curl/7.47.0
> Accept: */*
> Content-Length: 0
> Content-Range: bytes */*
>
< HTTP/1.1 400 Bad Request
< X-GUploader-UploadID: UPLOAD_ID
< Content-Type:
< X-GUploader-UploadID: UPLOAD_ID
< Content-Length: 37
< Date: Thu, 23 Mar 2017 22:45:58 GMT
< Server: UploadServer
< Content-Type: text/html; charset=UTF-8
< Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
<
* Connection #0 to host www.googleapis.com left intact
Failed to parse Content-Range header.


Azure AD Authentication Setup with Spring Boot Web App - AADSTS50011

I followed the official configure-spring-boot-starter-java-app-with-azure-active-directory tutorial, but I can't seem to get it to work. I've confirmed the redirect URL is exactly as written, with the same security controller.
Here are my response headers:
HTTP/1.1 302
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Location: https://login.microsoftonline.com/07d020b6-d78f-40cb-b6c7-98eab8c29a94/oauth2/v2.0/authorize?response_type=code&client_id=8ff48ac1-1ddf-479d-ac5a-5db407c70c50&scope=openid%20profile%20https://graph.microsoft.com/User.Read%20https://graph.microsoft.com/Directory.Read.All&state=aUtWlcsG6Oc6NnYxA8z7E339CVlfodi7kBs5HiNIx8M%3D&redirect_uri=http://localhost:8080/login/oauth2/code/&nonce=xhjQJa-IVP_9kXFKsDX_oNrLprt4HnqDzUgyYqrjyBA
Content-Length: 0
Date: Fri, 16 Apr 2021 17:31:11 GMT
Keep-Alive: timeout=60
Connection: keep-alive
Note that the Location should contain my redirect URI: http://localhost:8080/login/oauth2/code/azure
I've also reviewed other issues, and they feel close, but, as mentioned, I followed the tutorial exactly as provided -- so it should work.
Please let me know if you need any other information.
Request Id: a6cb6d0d-9a3a-4bd3-a2b7-c16c053c7b01
Correlation Id: 9f74d074-df2c-4a73-8c79-31aa7442a427
Timestamp: 2021-04-16T16:54:36Z
Message: AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application: '8ff48ac1-1ddf-479d-ac5a-5db407c70c50'.
You actually sent
http://localhost:8080/login/oauth2/code/
as the redirect_uri in the logon request, but the app was likely registered with the following redirect_uri:
http://localhost:8080/login/oauth2/code/azure
They don't match. Hence the error.
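To fix it, either side can be changed: register the exact URI Spring sends (http://localhost:8080/login/oauth2/code/) in the Azure portal under App registrations -> Authentication -> Redirect URIs, or make Spring send the registered one. A hedged config sketch, assuming the standard Spring Security OAuth2 client properties (verify the property name against your starter version):

```yaml
# application.yml -- illustrative sketch; "azure" must match your client
# registration id, otherwise Spring builds the redirect_uri without it.
spring:
  security:
    oauth2:
      client:
        registration:
          azure:
            redirect-uri: "{baseUrl}/login/oauth2/code/azure"
```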

How to make libCurl use hexadecimal cnonce instead of alphanumeric in Digest authentication header

I have recently started using libcurl on the client side of a client-server communication project. We used to use WinHTTP, but found no way to get TLS 1.3 support with backward compatibility to earlier Windows versions.
The cnonce is part of Digest Authentication headers.
When my project was earlier using WinHTTP, the cnonce used to be hexadecimal.
Eg:
cnonce="a01e21c2a827ec6d3d9b6e1745ca8a0b"
HTTP Header
Server to client:
HTTP/1.1 401 Unauthorized
Content-Length: 26
WWW-Authenticate: Negotiate
WWW-Authenticate: Digest realm="a2ffc77914d6e791d", qop="auth",nonce="3f3da4b94e249058", opaque ="3b3542c"
Client to Server:
POST /wsman HTTP/1.1
Connection: Keep-Alive
Content-Type: application/soap+xml;charset=UTF-8
User-Agent: Openwsman
Content-Length: 889
Host: 10.138.141.178:623
Authorization: Digest username="PostMan",realm="a2ffc77914d6e791d",nonce="3f3da4b94e249058",uri="/wsman",cnonce="a01e21c2a827ec6d3d9b6e1745ca8a0b",nc=00000001,response="9dd37ef997ef332e46dff0f868b3de89",qop="auth",opaque="3b3542c"
When I look at the HTTP header with libcurl, I find that the cnonce is alphanumeric (it appears to be base64-encoded).
Eg:
cnonce="NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA="
HTTP Header
Server to Client
HTTP/1.1 401 Unauthorized
Content-Length: 0
Connection: Keep-Alive
Content-Type: application/soap+xml;charset=UTF-8
WWW-Authenticate: Digest realm="a2ffc77914d6e791d", nonce="5bf1156647e8eb42", algorithm="MD5", qop="auth", opaque="661d9eae", userhash=true
Client to Server
POST /wsman HTTP/1.1
Host: blr-5cg64728l6.amd.com:623
Authorization: Digest username="PostMan", realm="a2ffc77914d6e791d", nonce="5bf1156647e8eb42", uri="/wsman", cnonce="NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA=", nc=00000001, qop=auth, response="6847e465c9c90b40264b736070f721da", opaque="661d9eae", algorithm=MD5, userhash=true
Accept: */*
Content-Type: application/soap+xml;charset=UTF-8
User-Agent: Openwsman
Content-Length: 897
With an alphanumeric cnonce, the server does not respond back consistently. Is there a way to explicitly make libcurl generate a hexadecimal cnonce?
Note: to avoid security risks, the field values in the headers above have been modified.
I am using libcurl 7.73 with the OpenSSL TLS backend (1.1.1h).
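For context, RFC 7616 defines cnonce as an opaque quoted string chosen by the client, so a base64 value is legal on the wire; the server must use whatever string was sent, verbatim, when verifying the response hash. A minimal Python sketch of the qop="auth" response computation (the password is hypothetical, and the userhash/username hashing from the challenge is omitted):

```python
import hashlib

def md5_hex(s):
    """MD5 of a string, as lowercase hex (the Digest 'H(data)' primitive)."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def digest_response(user, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    """RFC 2617/7616 response for qop="auth": the cnonce is hashed as an
    opaque string, so hex vs. base64 makes no difference to the math."""
    ha1 = md5_hex("%s:%s:%s" % (user, realm, password))
    ha2 = md5_hex("%s:%s" % (method, uri))
    return md5_hex("%s:%s:%s:%s:%s:%s" % (ha1, nonce, nc, cnonce, qop, ha2))

# Same formula regardless of whether cnonce is hex or base64:
r = digest_response("PostMan", "a2ffc77914d6e791d", "secret", "POST",
                    "/wsman", "5bf1156647e8eb42", "00000001",
                    "NDlmYTM0ZjVlM2IzNTNhMDNiNDk0MzQ1MzdlYmFlMzA=")
```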

Google App Engine Flex environment: ETag HTTP header removed when resource is gzip'd?

I have a custom Node.js application deployed on Google App Engine flexible environment that uses dynamically calculated digest hashes to set ETag HTTP response headers for specific resources. This works fine on an AWS EC2 instance. But not on Google App Engine flexible environment: in some cases App Engine appears to remove my application's custom ETag HTTP response header, which severely degrades the application's performance and will be needlessly expensive.
Specifically, it appears that Google App Engine flex environment strips my application's ETag header when it gzips eligible resources.
For example, if I use curl to request a utf8::application/json resource AND do not indicate that I will accept the response in compressed format then everything works as I would expect --- the resource is returned along with my custom ETag header that is a digest hash of the resource's data.
curl https://viewpath5.appspot.com/javascript/client-app-bundle.js --verbose
... we get client-app-bundle.js as an uncompressed UTF-8 resource, along with an ETag HTTP response header whose value is a digest hash of the JavaScript file's data.
However, if I emulate my browser and set the Accept-Encoding HTTP request header to indicate to Google App Engine that my user agent (here curl) will accept a compressed resource, then I do not ever get the ETag HTTP response header.
$ curl --verbose https://xxxxxxxx.appspot.com/javascript/client-app-bundle.js
* Hostname was NOT found in DNS cache
* Trying zzz.yyy.xxx.www...
* Connected to xxxxxxxx.appspot.com (zzz.yyy.xxx.www) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.appspot.com
* start date: 2019-05-07 11:31:13 GMT
* expire date: 2019-07-30 10:54:00 GMT
* subjectAltName: xxxxxxxx.appspot.com matched
* issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
* SSL certificate verify ok.
> GET /javascript/client-app-bundle.js HTTP/1.1
> User-Agent: curl/7.38.0
> Host: xxxxxxxx.appspot.com
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 23 May 2019 00:24:06 GMT
< Content-Type: application/javascript
< Content-Length: 4153789
< Vary: Accept-Encoding
< ETag: #encapsule/holism::kiA2cG3c9FzkpicHzr8ftQ
< Cache-Control: must-revalidate
* Server #encapsule/holism v0.0.13 is not blacklisted
< Server: #encapsule/holism v0.0.13
< Via: 1.1 google
< Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
<
/******/ (function(modules) { // webpackBootstrap
/******/ // The module cache
/******/ var installedModules = {};
/******/
.... and lots more JavaScript. Importantly, note the ETag in the HTTP response headers.
COMPRESSED (FAILING) CASE:
$ curl --verbose -H "Accept-Encoding: gzip" https://xxxxxxxx.appspot.com/javascript/client-app-bundle.js
* Hostname was NOT found in DNS cache
* Trying zzz.yyy.xxx.www...
* Connected to xxxxxxxx.appspot.com (zzz.yyy.xxx.www) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* Server certificate:
* subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.appspot.com
* start date: 2019-05-07 11:31:13 GMT
* expire date: 2019-07-30 10:54:00 GMT
* subjectAltName: xxxxxxxx.appspot.com matched
* issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
* SSL certificate verify ok.
> GET /javascript/client-app-bundle.js HTTP/1.1
> User-Agent: curl/7.38.0
> Host: xxxxxxxx.appspot.com
> Accept: */*
> Accept-Encoding: gzip
>
< HTTP/1.1 200 OK
< Date: Thu, 23 May 2019 00:27:15 GMT
< Content-Type: application/javascript
< Vary: Accept-Encoding
< Cache-Control: must-revalidate
* Server #encapsule/holism v0.0.13 is not blacklisted
< Server: #encapsule/holism v0.0.13
< Content-Encoding: gzip
< Via: 1.1 google
< Alt-Svc: quic=":443"; ma=2592000; v="46,44,43,39"
< Transfer-Encoding: chunked
<
�}{{G���˧�(�.rb�6`����1ƀw���,���4�$�23,���UU_碋-��
No ETag?
To me it seems incorrect that my application's custom ETag HTTP response header is removed; the gzip compression on the server and subsequent decompression in the user agent should be wholly encapsulated as an implementation detail of the network transport?
This behavior is caused by the NGINX proxy side car container that handles requests on GAE flex.
NGINX removes ETag headers when compressing content, presumably because a strong ETag promises byte-for-byte identity, which the gzip transformation breaks -- but I'm not sure of that.
Unfortunately there is no way to configure the NGINX proxy in GAE Flex (other than manually SSHing into the container on each instance, changing nginx.conf, and restarting the NGINX proxy).
The only workaround I know of is to loosen the ETag strictness by making it "weak": prepend "W/" to the value, as specified in https://www.rfc-editor.org/rfc/rfc7232.
This is not documented. There's already an internal feature request to the App Engine documentation team to include this behavior in our public documentation.
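As a concrete illustration of that workaround: the app in the question is Node.js, but the header value itself is language-agnostic. A minimal Python sketch (hash and encoding choices are illustrative, not the app's actual scheme) of emitting the digest ETag in weak form so the gzipping proxy preserves it:

```python
import base64
import hashlib

def weak_etag(body):
    """Digest-hash ETag marked weak ("W/" prefix) so a compressing proxy,
    which only guarantees semantic (not byte-for-byte) equivalence, keeps it."""
    digest = base64.urlsafe_b64encode(hashlib.md5(body).digest()).rstrip(b"=")
    return 'W/"%s"' % digest.decode("ascii")

# The response header would then be set to e.g.:
#   ETag: W/"XUFAKrxLKna5cZ2REBfFkg"
```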

"Your API key is not valid on this domain" when calling Disqus from WP7

I'm trying to access the REST Disqus API using the following url:
http://disqus.com/api/3.0/threads/listPosts.json
?api_key=myKey
&forum=myForum
&thread:ident=myIdent
When I go to the url in Chrome, it works fine. When I try to download it in WebClient, I have difficulty:
WebClient data = new WebClient();
Uri queryUri = new Uri(DisqusQuery + ident, UriKind.Absolute);
data.DownloadStringCompleted += new DownloadStringCompletedEventHandler(onDownloadCompleted);
data.DownloadStringAsync(queryUri);
The DownloadStringCompletedEventArgs contain the following error:
{"The remote server returned an error: NotFound."}
at System.Net.Browser.ClientHttpWebRequest.InternalEndGetResponse(IAsyncResult asyncResult)
at System.Net.Browser.ClientHttpWebRequest.<>c__DisplayClass2.<EndGetResponse>b__1(Object sendState)
at System.Net.Browser.AsyncHelper.<>c__DisplayClass4.<BeginOnUI>b__1(Object sendState)
at System.Reflection.RuntimeMethodInfo.InternalInvoke(RuntimeMetho...
The thread '<No Name>' (0xfc10086) has exited with code 0 (0x0).
What could I be doing wrong?
Update: Looking in Fiddler shows that the response is this:
HTTP/1.1 400 BAD REQUEST
Date: Sun, 28 Aug 2011 14:51:39 GMT
Server: Apache/2.2.14 (Ubuntu)
Vary: Cookie,Accept-Encoding
p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"
Content-Length: 68
Connection: close
Content-Type: application/json
X-Pad: avoid browser bug
{"code": 11, "response": "Your API key is not valid on this domain"}
Here is the response when the request is from Chrome Incognito (not logged in to disqus):
HTTP/1.1 200 OK
Date: Mon, 29 Aug 2011 17:00:29 GMT
Server: Apache/2.2.14 (Ubuntu)
X-Ratelimit-Remaining: 1000
Content-Encoding: gzip
Vary: Cookie,Accept-Encoding
X-Ratelimit-Limit: 1000
p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"
X-Ratelimit-Reset: 1314640800
Content-Length: 3120
Connection: close
Content-Type: application/json
/* expected JSON response */
Update 2: The above error occurs when using my public key. Using the secret key results in:
HTTP/1.1 403 FORBIDDEN
Date: Sun, 28 Aug 2011 20:40:32 GMT
Server: Apache/2.2.14 (Ubuntu)
Vary: Cookie,Accept-Encoding
p3p: CP="DSP IDC CUR ADM DELi STP NAV COM UNI INT PHY DEM"
Connection: close
Transfer-Encoding: chunked
Content-Type: application/json
2a
{"code": 5, "response": "Invalid API key"}
0
FIX:
Add something similar to the following line to your HttpRequest:
client.Headers[HttpRequestHeader.Referer] = "http://mywebsite.com";
Longer Description:
The problem has to do with the way Windows Phone sets the HTTP Referer header.
When running the successful request from the browser address bar, Fiddler showed me this:
GET /api/3.0/forums/listPosts.json?forum=disqus&api_key=jRml... HTTP/1.1
Accept: */*
Accept-Language: en-US
Accept-Encoding: gzip, deflate, peerdist
User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; Zune 4.7; InfoPath.3; MS-RTC LM 8)
Connection: Keep-Alive
Host: disqus.com
Cookie: disqus_unique=...
X-P2P-PeerDist: Version=1.0
When I examined the request sent by Silverlight in Fiddler, I saw the following:
GET /api/3.0/forums/listPosts.json?forum=disqus&api_key=jRml... HTTP/1.1
Accept: */*
Referer: file:///Applications/Install/9036AAF3-F213-4CFB-B57E-576A05E1896D/Install/
Accept-Encoding: identity
User-Agent: NativeHost
Host: disqus.com
Connection: Keep-Alive
When I removed the Referer header and resubmitted via Fiddler, the query worked as expected! So all you need to do is manually set the HTTP Referer header to something you control (rather than letting Silverlight set it for you) and you should be good to go.
Oh - and also make sure you're using your public key, not the secret key.
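The same fix in plain HTTP terms, as a minimal sketch (Python only for illustration; the key, forum, and domain are placeholders): send a Referer you control so Disqus can match the API key's registered domain.

```python
import urllib.request

# PUBLIC_KEY / myForum / mywebsite.com are placeholders, not real values.
req = urllib.request.Request(
    "https://disqus.com/api/3.0/threads/listPosts.json"
    "?api_key=PUBLIC_KEY&forum=myForum",
    headers={"Referer": "http://mywebsite.com"},  # domain the key is registered for
)
# urllib.request.urlopen(req) would then send this explicit Referer,
# instead of the file:/// value Silverlight fills in on its own.
```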
Looks like the browser is getting additional info, like a username: X-User: anon:182210122933. This is missing when WebClient gets its response back. I guess this has something to do with the fact that you are logged in in the browser, or that you have a typo in your API key.
Another interesting project for you would be a library like http://disqussharp.codeplex.com/, which handles authentication most of the time.
Good luck!

SharePoint 2010 / IIS 7.5 Byte-Range Request Responds With Entire File

I'm having problems getting SharePoint 2010/IIS 7.5 to respect byte-range requests. I'm developing a SharePoint 2010 Web Part using Silverlight, and am trying to retrieve part of a document stored inside SharePoint.
When I request a byte range of a file in SharePoint, the server responds with the entire file. However, if I request the same byte range from a file sitting on an Apache server, everything works as expected. Below are the HTTP headers observed with Fiddler.
Any help would be really appreciated! Thanks.
Sent:
GET http://example.com/file.abc HTTP/1.1
Accept: */*
Accept-Language: en-US
Referer: http://example.com/index.html
Accept-Encoding: identity
Range: bytes=1061285-1064594
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.375.127 Safari/533.4
Host: example.com
Connection: Keep-Alive
SharePoint also takes login credentials:
Authorization: Negotiate TlRMTVNTUAABAAAAl4II4gAAAAAAAAAAAAAAAAAAAAAGAbAdAAAADw==
Received from Apache:
HTTP/1.1 206 Partial Content
Date: Wed, 25 Aug 2010 22:40:34 GMT
Server: Apache/2.0.54
Last-Modified: Fri, 20 Aug 2010 23:27:18 GMT
ETag: "b68e346-103ea9-a3c20180"
Accept-Ranges: bytes
Content-Length: 3310
Vary: User-Agent
Content-Range: bytes 1061285-1064594/1064617
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: application/x-zip
Received from SharePoint 2010 / IIS 7.5
HTTP/1.1 200 OK
Cache-Control: private,max-age=0
Content-Length: 1064617
Content-Type: application/octet-stream
Expires: Tue, 10 Aug 2010 22:40:56 GMT
Last-Modified: Wed, 25 Aug 2010 19:28:39 GMT
ETag: "{5A1DF927-D8CD-4BC0-9590-8188CF777A3D},1"
Server: Microsoft-IIS/7.5
SPRequestGuid: 99799011-5bdc-489f-99fd-d060a56d3ae4
Set-Cookie: WSS_KeepSessionAuthenticated={7703be10-bb56-4fa1-ba8b-cd05f482859f}; path=/
X-SharePointHealthScore: 5
ResourceTag: rt:5A1DF927-D8CD-4BC0-9590-8188CF777A3D#00000000001
X-Content-Type-Options: nosniff
Content-Disposition: attachment; filename=file.abc
X-Download-Options: noopen
Public-Extension: http://schemas.microsoft.com/repl-2
Set-Cookie: WSS_KeepSessionAuthenticated={7703be10-bb56-4fa1-ba8b-cd05f482859f}; path=/
Persistent-Auth: true
X-Powered-By: ASP.NET
MicrosoftSharePointTeamServices: 14.0.0.4762
Date: Wed, 25 Aug 2010 22:40:56 GMT
The problem is that SharePoint's disk-based caching is off by default and needs to be turned on to enable byte-range requests. See Disk-Based Caching for Binary Large Objects.
Note http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.2:
"A server MAY ignore the Range header."
Thus, whenever you use a Range header, you must be able to handle a 200 response. The fact that your server doesn't appear to support range serving is unfortunate, but conformant.
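Since the server may legitimately answer with the whole file, the client needs a 200 fallback alongside the 206 path. A minimal sketch of that logic (Python for illustration; the question's client is Silverlight):

```python
def range_or_slice(status, body, start, end):
    """Return bytes start..end (inclusive) whether or not the server honored
    the 'Range: bytes=start-end' header the request was sent with."""
    if status == 206:
        return body                 # 206 Partial Content: body is already the range
    return body[start:end + 1]      # 200 OK: whole file; slice the range locally
```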
