In Firefox 57 you can intercept the body of HTTP requests as well as responses (using filterResponseData). Is it possible to do the same for WebSockets?
I do not think you can use this API to look at the WebSocket communication itself.
The webRequest API only applies to the WebSocket handshake, which is still normal HTTP. Once the HTTP upgrade is complete, the connection is no longer observable through the API.
To quote from a related bugfix on Chrome:
Support WebSocket in WebRequest API.
This CL makes WebRequest API support intercepting the WebSocket handshake
request. Since the handshake is done by means of an HTTP upgrade request, its
flow fits into HTTP-oriented WebRequest model. Additional restriction applies,
that WS request redirects triggered by extensions are ignored.
Note that WebRequest API does not intercept:
Individual messages sent over an established WebSocket connection.
WebSocket closing connection
As Mozilla typically tries to follow the Chrome extension API, I would expect Firefox to behave the same way.
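For completeness, here is a rough sketch of the part you can observe: the handshake request itself, from a WebExtension background script. This assumes a browser version whose webRequest API matches ws:/wss: URLs and the "websocket" resource type; the match patterns and logging are illustrative only, and the corresponding webRequest/host permissions must be declared in the manifest.
// Background script sketch: only the HTTP upgrade (handshake) is visible here.
// `browser` is provided by the WebExtension environment (or the polyfill).
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    // Fires once per WebSocket connection, for the handshake request only.
    console.log("WebSocket handshake to", details.url);
    // Individual messages exchanged after the upgrade never reach this API.
  },
  { urls: ["ws://*/*", "wss://*/*"], types: ["websocket"] }
);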
I'm having an issue with a web app I'm building. The web app consists of an Angular 4 frontend and a .NET Core RESTful API backend. One of the requirements is that requests to the backend need to be authenticated using SSL mutual authentication, i.e. client certificates.
Currently I'm hosting both the frontend and the backend as Azure App Services, and they are on separate subdomains.
The backend is set up to require client certificates by following this guide, which I believe is the only way to do it for Azure App Services:
https://learn.microsoft.com/en-us/azure/app-service/app-service-web-configure-tls-mutual-auth
When the frontend makes requests to the backend, I set withCredentials to true, which, according to the documentation, should also work with client certificates.
The XMLHttpRequest.withCredentials property is a Boolean that indicates whether or not cross-site Access-Control requests should be made using credentials such as cookies, authorization headers or TLS client certificates. Setting withCredentials has no effect on same-site requests.
Relevant code from the frontend:
const headers = new Headers({ 'Content-Type': 'application/json' });
const options = new RequestOptions({ headers, withCredentials: true });
let apiEndpoint = environment.secureApiEndpoint + '/api/transactions/stored-transactions/';
return this.authHttp.get(apiEndpoint, JSON.stringify(transactionSearchModel), options)
    .map((response: Response) => {
        return response.json();
    })
    .catch(this.handleErrorObservable);
On Chrome this works: when a request is made, the browser prompts the user for a certificate, the certificate gets included in the preflight request, and everything works.
For all the other major browsers, however, this is not the case. Firefox, Edge and Safari all fail the preflight request because the server closes the connection when they don't include a client certificate in the request.
Browsing directly to an API endpoint makes every browser prompt the user for a certificate, so I'm fairly sure this is specifically about how most browsers handle preflight requests with client certificates.
Am I doing something wrong? Or are the other browsers doing the wrong thing by not prompting for a certificate when making these requests?
I need to support browsers other than Chrome, so I need to solve this somehow.
I've seen similar issues solved by having the backend allow rather than require certificates. The only problem is that I haven't found a way to actually do that with Azure App Services; it's either require or don't require.
Does anyone have any suggestions on how I can move on?
See https://bugzilla.mozilla.org/show_bug.cgi?id=1019603 and my comment in the answer at CORS with client https certificates (I had forgotten I’d seen this same problem reported before…).
The gist of all that is, the cause of the difference you’re seeing is a bug in Chrome. I’ve filed a bug for it at https://bugs.chromium.org/p/chromium/issues/detail?id=775438.
The problem is that Chrome doesn’t follow the spec requirements on this, which mandate that the browser not send TLS client certificates in preflight requests; so Chrome instead does send your TLS client certificate in the preflight.
Firefox/Edge/Safari follow the spec requirements and don’t send the TLS client cert in the preflight.
Update: The Chrome screen capture added in an edit to the question shows an OPTIONS request for a GET request, and a subsequent GET request — not the POST request from your code. So perhaps the problem is that the server forbids POST requests.
The request shown in https://i.stack.imgur.com/GD8iG.png is a CORS preflight OPTIONS request the browser automatically sends on its own before trying the POST request in your code.
The Content-Type: application/json request header your code adds is what triggers the browser to make that preflight OPTIONS request.
It’s important to understand the browser never includes any credentials in that preflight OPTIONS request — so the server the request is being sent to must be configured to not require any credentials/authentication for OPTIONS requests to /api/transactions/own-transactions/.
However, from https://i.stack.imgur.com/GD8iG.png it appears that the server is forbidding OPTIONS requests to that /api/transactions/own-transactions/ endpoint. Maybe that's because the request lacks the credentials the server expects, or maybe the server is configured to forbid all OPTIONS requests regardless.
The result is that the browser concludes the preflight was unsuccessful, so it stops right there and never moves on to trying the POST request from your code.
Given what's shown in https://i.stack.imgur.com/GD8iG.png, it's hard to understand how this could actually be working as expected in Chrome, especially since no browser ever sends credentials of any kind in the preflight request, so any possible browser differences in handling of credentials would make no difference as far as the preflight goes.
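To make the preflight requirement concrete, here is a minimal sketch of an endpoint answering the OPTIONS request without any credential check. This is a generic Node/TypeScript illustration, not the ASP.NET Core backend from the question (where, as noted, Azure App Service makes the client-certificate requirement all-or-nothing), and the allowed origin and header values are placeholders.
// Illustration only: a preflight-friendly handler. The preflight carries no
// cookies, tokens or TLS client certificate, so it must be answered without
// any authentication check.
import * as http from "http";

http
  .createServer((req, res) => {
    if (req.method === "OPTIONS") {
      res.writeHead(204, {
        "Access-Control-Allow-Origin": "https://frontend.example.com", // placeholder origin
        "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type, Authorization",
        "Access-Control-Allow-Credentials": "true",
      });
      res.end();
      return;
    }
    // The actual requests (the ones that do carry credentials) are handled here.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(8080);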
I am currently working on a multi-threaded proxy server that supports keep-alive connections. I am seeing some weird issues while handling requests from the Firefox browser. I connect to my local proxy using localhost:10001/http://url, and I can access all the links on this host. The process is as below.
1. Create a socket and bind it to port 10001
2. Accept connections and, if a client is connected, fork()
3. Keep processing the client's requests over a persistent connection
Now the problem is that when I open a new tab in Firefox to access a second URL with a different host, using localhost:10001/http://url2, that request goes to the client socket connection created for the first connection. I initially thought it might be due to my code, but then I tried the same thing using telnet, and each new connection created a separate process. Is there a specific setting that makes the Firefox browser do this?
HTTP keep-alive is a way to reuse an underlying TCP connection for multiple requests, so that one can skip the overhead of creating a new TCP connection every time. Since the target of the connection is the same every time in your case, it makes sense for the browser to reuse the same TCP connection. The comparison with telnet is flawed, since with telnet you open a new TCP connection each time.
Whether HTTP keep-alive gets used depends on the HTTP version, the Connection header, and the behavior of both server and client. Both server and client can decide to close an idle connection at any time after a request is done, i.e. they are not required to keep it open. Additionally, they can signal that they would like to keep the connection open by using the Connection: keep-alive HTTP header, or that they want to close it after the request with Connection: close. The defaults depend on the HTTP version: keep-alive is on with HTTP/1.1 and off with HTTP/1.0 unless explicitly specified.
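As a small illustration of that reuse (a Node/TypeScript sketch, unrelated to your C proxy; example.com is only a placeholder target), a client can send a second request over the very same TCP connection instead of opening a new one:
import * as net from "net";

const HOST = "example.com"; // placeholder; any HTTP/1.1 server honoring keep-alive will do
const request = `GET / HTTP/1.1\r\nHost: ${HOST}\r\nConnection: keep-alive\r\n\r\n`;

const socket = net.connect({ host: HOST, port: 80 }, () => {
  socket.write(request); // first request
});

let sentSecond = false;
socket.on("data", (chunk) => {
  process.stdout.write(chunk);
  if (!sentSecond) {
    sentSecond = true;
    // Reuse the same TCP connection for a second request instead of reconnecting;
    // this is exactly what the browser does with your proxy.
    socket.write(request);
  }
});
socket.on("end", () => console.log("\n-- server closed the connection --"));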
Apart from that, the "proxy" you are implementing with URLs like http://proxy/real-url is not a real HTTP proxy. A real HTTP proxy would be configured as a proxy inside the browser, and the URLs you use would stay the same, which also means that no URL rewriting would need to be done by the proxy. Worse, your kind of proxy effectively merges all hosts into the same origin (i.e. the origin is the proxy) and thus effectively disables a major security concept of the browser: the same-origin policy. This means, for example, that some rogue advertisement server would share the origin with eBay in your implementation and thus could get access to the eBay cookies, hijack the session and misuse it for identity theft.
HTTP persistent connections are also used with the proxy, not only with the final destination.
For Firefox you could try to alter the behavior toward the proxy by setting network.http.proxy.version to 1.0. But you'll have to enhance your proxy (and perhaps completely rethink its inner workings) to be able to deal with these reused connections; I'm sure this behavior is not limited to Firefox.
Also make sure your proxy doesn't answer with HTTP/1.1, because it isn't an HTTP/1.1 server.
I am currently designing a web application using AngularJS. In it I am fetching and posting data via REST APIs with different methods. The data I retrieve is in JSON format.
Problem:
The issue is that, even though I am using HTTPS, the data sent and received via HTTP requests can still be seen in a proxy tool or traffic monitor. All the JSON can easily be read from there.
Each of my requests has a token attached in its header, which takes care of authentication. However, once authorized, there is some data I don't want to be displayed in or captured by such monitoring tools.
Question:
This data is stored encrypted in the database, but when it is sent in response to an HTTP request it is first decrypted. How can I hide or protect this data?
You can't.
If you give it to the client, then the client has to be able to see it.
If the user has configured their browser to proxy requests, then the proxy is the client.
Once the data leaves your server in an HTTP response, anyone or anything the user of the client chooses to trust with that data can access it. You don't have control at that point.
A proxy tool or traffic monitor will see HTTPS data only if the client has accepted the man-in-the-middle (MITM) by installing the SSL certificate used by the MITM:
To see the content (other than the host name) of an HTTPS connection, someone who is neither the client nor the server must perform a MITM.
If someone performs a MITM with a certificate not trusted by the client, the client will reject the connection.
WARNING: if the server does NOT use HSTS, the person doing the MITM can perform an SSLStrip attack if the first connection is plain HTTP. In that case the MITM does not need a trusted certificate, because the connection will stay in plain text (HTTP).
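As a side note, here is a minimal sketch of sending the HSTS header that defends against that downgrade. It is a plain Node/TypeScript example; the max-age value is only illustrative, and browsers only honor the header when it arrives over HTTPS.
import * as http from "http";

http
  .createServer((req, res) => {
    // Tell browsers to refuse plain-HTTP connections to this host for a year,
    // which blocks the SSLStrip-style downgrade described above.
    res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
  })
  .listen(8080);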
Right now I just GET and POST Firebase data by sending HTTPS requests using the C language.
Is there any way to use a callback for this?
Because I want to get the latest data, I currently just poll with HTTPS GET requests. The SSL handshake may cause delay, so I want to add a callback instead.
We would like to call a WebSphere web service from Silverlight.
If I have understood it correctly:
Silverlight only supports async web service calls
WebSphere does not support async calls
Is this correct?
Is it possible to call websphere web services from silverlight?
A general answer to your first question: there is no need for a web service server to support asynchronous calls. Because HTTP is stateless, the server handles each request in its own thread.
Generally speaking, the client can choose whether to wait for the response (synchronous) or to let a new thread wait for the response and do other things in the meantime (asynchronous).
The decision to make synchronous or asynchronous calls therefore lies entirely with the client.
It should be possible.
Silverlight is asynchronous only in that sending the HTTP request (GET, POST) is not linked to receiving the HTTP response. You send an HTTP request, which is one action, and separately from the request you receive and handle the HTTP response; you don't send a request and then wait on the same thread for the response.
On your web server it makes no difference how you receive the request and send the response, so it could be handled synchronously or asynchronously; the Silverlight app would be oblivious to that.
Saying that 'Silverlight only supports async web service calls' only means that it does not block the calling thread while waiting for a response. The request is sent on one thread; the response is received on another.
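To show the same pattern outside Silverlight, here is a sketch in TypeScript rather than the actual Silverlight API; the URL is a placeholder. The request is handed off together with callbacks, and the calling code continues immediately:
// The "send the request, handle the response elsewhere" pattern described above.
function callService(onResult: (body: string) => void, onError: (err: unknown) => void): void {
  fetch("https://example.com/service")   // placeholder endpoint; send the request...
    .then((response) => response.text())
    .then(onResult)                       // ...the response is handled later, via the callback
    .catch(onError);
  // Control returns here immediately; nothing blocks waiting for the server.
}

callService(
  (body) => console.log("response arrived:", body),
  (err) => console.error("request failed:", err),
);
console.log("request sent, still doing other work");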