I am testing my proxy, which simply forwards a client's request to a proxied server and returns the response. The current implementation requires the client to send a fully prepared, valid request to the proxy (the Host header value must match the DNS name of the proxied server that is predefined in the source code).
Here's my custom request to the proxy, which proxies www.example.com:
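The request itself isn't reproduced here; based on the description it goes to localhost:1234 with the Host header already set to the proxied server, roughly:

GET / HTTP/1.1
Host: www.example.com
Connection: close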
But the resulting request that ARC sends to localhost is:
GET / HTTP/1.1
Host: localhost:1234
connection: close
This is then forwarded to www.example.com, but the Host header is invalid for that server, so a 404 is returned.
I just noticed that this refers to the old version of ARC for Chrome. Support for Chrome apps is scheduled to end soon and the app is no longer supported. Instead, please install the desktop client from https://install.advancedrestclient.com/
To move your data from one app to the other, follow the instructions at https://docs.advancedrestclient.com/moving-from-chrome-application-to-desktop-client
I am new to React and Express. What I am trying to do is automatically redirect all requests from HTTP to HTTPS. I want this redirection to happen automatically so clients do not need to change anything. In Express I am running HTTP on port 3000 and my application on port 3001 over HTTPS with valid certificates. When testing locally, I can see the redirection happening: http changes to https and the port changes from 3000 to 3001. So I requested http://localhost:3000 and it was rewritten to https://localhost:3001. But when I deploy to my application server and clients request http://example.com:3000, only the protocol changes from http to https; the port number does not change, so I end up at https://example.com:3000. I want to know why the port number does not change as well. The strange thing is that when I run curl -X GET http://example.com:3000, it says found and redirecting to https://example.com:3001. I have been struggling with this for a while.
I expect both the protocol and the port number to change in the browser, but only the protocol changes; the port number stays the same.
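At the HTTP level, the behavior curl reports looks roughly like this (the 302 status and exact Location value are assumptions based on the curl output described above; -i just prints the response headers):

curl -i http://example.com:3000

HTTP/1.1 302 Found
Location: https://example.com:3001/

The browser, like curl, simply follows the Location header it receives, so comparing the response curl gets with the one the browser gets is a good first step.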
I'm using ReactJs and Axios to send API requests to my server but I keep getting the same error:
Failed to load http://***: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access. The response had HTTP status code 400.
I'm trying to perform a POST request. I've also tried installing a Chrome plugin to allow CORS. That worked for GET requests, but it doesn't seem to work for POST.
If I try to make requests to https://jsonplaceholder.typicode.com/users it's working fine. So I guess there's something wrong with the server.
My server is using Nginx and is on a CentOS 7 OS.
Q: How is it possible to enable CORS just for my local development (localhost) or specific websites?
EDIT: I have already tried using this config on my Nginx server - without luck: https://enable-cors.org/server_nginx.html
While I can't give you specific code, here's what happens (at least, what happened the last time I tried Angular and had similar issues):
Before any further requests are sent, a headers-only OPTIONS HTTP request (the preflight) is sent to the server URL. When answering this call, the server is supposed to send an Access-Control-Allow-Origin header field containing a whitelist of domains allowed to make further calls to the API. To whitelist all requests in your dev environment it should be enough to configure Nginx to answer with Access-Control-Allow-Origin: *.
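On the Nginx side, a minimal sketch of that idea, limited to the localhost dev origin instead of *, might look like this (the location path, methods, and allowed headers here are assumptions; the pattern follows the enable-cors.org example already linked in the question):

location /api/ {
    if ($request_method = 'OPTIONS') {
        add_header Access-Control-Allow-Origin "http://localhost:3000";
        add_header Access-Control-Allow-Methods "GET, POST, OPTIONS";
        add_header Access-Control-Allow-Headers "Content-Type, Authorization";
        return 204;
    }
    add_header Access-Control-Allow-Origin "http://localhost:3000" always;
    # ... the existing proxy_pass / root configuration for the API goes here
}

Note that Access-Control-Allow-Origin accepts only a single origin or *, so allowing several specific sites usually means echoing the matching origin back (e.g. via an Nginx map on $http_origin).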
For development I use the Firefox extension CORS Everywhere. It modifies all web traffic to include the correct CORS headers. (It works at least with the somewhat dated Firefox in openSUSE 42.3.)
https://addons.mozilla.org/en-US/firefox/addon/cors-everywhere/
Note that this subverts a security mechanism of the browser.
For deployment you must configure the server to send the correct CORS headers. (I have never done this; the finished website is planned to be served from a single IP.)
If you have access to the server and the server is using Node.js, this should work for you:
cd into your server folder:
cd server-folder
Then run this command in order to install the 'cors' package:
npm install cors
To access this package, open your server file in your IDE and add the following on the next available line:
const cors = require('cors');
Next, add this line to use the middleware (assuming you are using Express):
app.use(cors());
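For the "just for localhost or specific websites" part of the question: cors() with no arguments allows every origin, but the package also takes an options object, e.g. app.use(cors({ origin: 'http://localhost:3000' })) or an array of allowed origins, which limits the Access-Control-Allow-Origin response to those sites.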
I am currently working on a multi-threaded proxy server that supports keep-alive connections. I am seeing some weird issues while handling requests from the Firefox browser. I connect to my local proxy using localhost:10001/http://url, and I can access all the links on this host. The process is as follows.
1. Create a socket and bind it to port 10001
2. Accept connections and, if a client connects, fork()
3. Keep processing the client's requests over a persistent connection.
Now the problem is that when I open a new tab in Firefox to access a second URL with a different host, using localhost:10001/http://url2, the strange thing is that the request goes to the client socket connection created for the first connection. I initially thought it might be due to my code, but then I tried the same thing using telnet, and each new connection created a separate process. Are there any specific settings that make Firefox do this?
HTTP keep-alive is a way to reuse an underlying TCP connection for multiple requests so that the overhead of creating a new TCP connection every time can be skipped. Since the target of the connection is the same every time in your case, it makes sense for the browser to reuse the same TCP connection. The comparison with telnet is flawed, since with telnet you open a new TCP connection every time.
Whether HTTP keep-alive gets used depends on the HTTP version, the Connection header, and the behavior of both server and client. Both server and client can decide to close an idle connection at any time after a request is done, i.e. they are not required to keep it open. Additionally, they can signal that they would like to keep the connection open by using the Connection: keep-alive HTTP header, or that they would like to close it after the request with Connection: close. These headers have default values depending on the HTTP version: keep-alive is on with HTTP/1.1 and off with HTTP/1.0 unless explicitly specified.
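As a rough illustration of those defaults (the host and paths are placeholders):

GET /page HTTP/1.0
Host: example.com
Connection: keep-alive

GET /page HTTP/1.1
Host: example.com

GET /page HTTP/1.1
Host: example.com
Connection: close

The first request has to opt in to keep-alive explicitly because it is HTTP/1.0, the second relies on the HTTP/1.1 default, and the third tells the other side to close the connection once the response is done.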
Apart from that, the "proxy" you are implementing with URLs like http://proxy/real-url is not a real HTTP proxy. A real HTTP proxy would be configured as a proxy inside the browser, and the URLs you use would stay the same, which also means that no URL rewriting would need to be done by the proxy. Worse, your idea of a proxy effectively merges all hosts into the same origin (i.e. the origin is the proxy) and thus effectively disables a major security concept of the browser: the same-origin policy. This means, for example, that some rogue advertisement server would, with your implementation, share the origin with eBay and could thus get access to the eBay cookies, hijack the session, and misuse it for identity theft.
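For comparison, a browser configured to use a real HTTP proxy keeps the original URL and sends its request to the proxy in absolute form, along the lines of:

GET http://www.example.com/page HTTP/1.1
Host: www.example.com

The proxy reads the target host from the request line (or the Host header), so no URL rewriting is needed and every site keeps its own origin in the browser.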
HTTP persistent connections are also used with the proxy, not only with the destination server.
For Firefox you could try to alter the behavior with the proxy by setting network.http.proxy.version to 1.0. But you'll have to enhance your proxy (and perhaps completely rethink its inner workings) to be able to deal with these reused connections. I'm sure this behavior is not limited to Firefox.
Also make sure your proxy doesn't answer with HTTP/1.1, because it isn't one.
I have a Flask app running on a domain - let's say abc.com. It logs all 404 errors so I can review them, and the log includes the request.url value. This works well, but occasionally I get a log entry with a request.url value of "http://imagefreak007.com/2ch.php". It's always the same URL - same domain and page.
I thought the request object, and specifically the url value, indicated the URL the client is trying to access - which I assumed must be on the same domain(s) that the Flask app is running on. I am not sure where this url value is coming from - it is not an intentional part of the Flask app.
A client can always lie about the hostname it is trying to access. If your site is example.com, the following would be a normal HTTP request:
GET /some/path HTTP/1.1
Host: example.com
The requests you receive are also sent to example.com, but are probably something like:
GET /2ch.php HTTP/1.1
Host: imagefreak007.com
You can try this out with the telnet command: execute telnet example.com 80, then type one of the requests in the prompt and hit enter twice at the end.
I must develop a proxy server that works with only HTTP/1.0, on Linux and in C.
I need some hints to get started.
I assume you are confident using Linux and the C language (no hints for that; otherwise, don't start by developing a proxy).
Read and understand RFC 1945 (HTTP/1.0), paying attention to the specific mentions of proxies
Determine what kind of proxy you want (web/caching/content-filter/anonymizer/transparent/non-transparent/reverse/gateway/tunnel/...)
Start developing the server
Basic steps (a minimal C sketch of the socket setup follows this list):
Open port
Listen on port
Get all requests sent from the client to that port (maybe make the whole thing multithreaded to be able to handle more than one request at a time)
Determine if it is a valid HTTP 1.0 request
Extract the request components
Rebuild the request according to what type of proxy you are
Send the new request
Get the response
Send response to client
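Here is a minimal sketch in C of the first few steps above (open a port, listen, and hand each client to its own process). The port number, buffer size, and fork-per-client choice are illustrative assumptions, and the actual proxy logic is only indicated by comments:

/* Minimal sketch: open a port, listen, accept, one process per client.
 * Port 8080, the buffer size and the fork-per-client model are
 * illustrative choices, not requirements from the steps above. */
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);      /* open a port */
    if (listen_fd < 0) { perror("socket"); return 1; }

    int on = 1;
    setsockopt(listen_fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    if (bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }
    if (listen(listen_fd, 16) < 0) {                       /* listen on the port */
        perror("listen"); return 1;
    }

    signal(SIGCHLD, SIG_IGN);          /* avoid zombie child processes */

    for (;;) {
        int client_fd = accept(listen_fd, NULL, NULL);     /* get client requests */
        if (client_fd < 0) continue;

        if (fork() == 0) {             /* one process per client connection */
            close(listen_fd);
            char buf[8192];
            ssize_t n = recv(client_fd, buf, sizeof(buf) - 1, 0);
            if (n > 0) {
                buf[n] = '\0';
                /* here: check it is a valid HTTP/1.0 request, extract its
                 * components, rebuild it according to the proxy type,
                 * forward it to the target server and relay the response
                 * back over client_fd */
            }
            close(client_fd);
            _exit(0);
        }
        close(client_fd);              /* parent: the child owns this socket */
    }
}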
How to create a proxy server (a sketch of the forwarding side follows this list):
Open a port to listen on
Catch all incoming requests on that port
Determine the web address requested
Open a connection to the host and forward the request
Receive response
Send the response back to the requesting client
Additionally: Use threads to allow for multiple requests to the server.
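A rough C sketch of the forwarding half described in the list (connect to the requested host, forward the request that was already read from the client, relay the response back); the function name, port 80, and buffer size are illustrative assumptions, not part of the answer:

#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/types.h>

/* client_fd: socket of the requesting client
 * host:      host name taken from the request (e.g. the Host header)
 * request:   raw HTTP request to forward, request_len bytes long */
int forward_request(int client_fd, const char *host,
                    const char *request, size_t request_len) {
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;                       /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, "80", &hints, &res) != 0)    /* resolve the host */
        return -1;

    int server_fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (server_fd < 0) { freeaddrinfo(res); return -1; }

    if (connect(server_fd, res->ai_addr, res->ai_addrlen) < 0) {
        close(server_fd);
        freeaddrinfo(res);
        return -1;
    }
    freeaddrinfo(res);

    send(server_fd, request, request_len, 0);          /* forward the request */

    char buf[8192];                                    /* relay the response */
    ssize_t n;
    while ((n = recv(server_fd, buf, sizeof(buf), 0)) > 0)
        send(client_fd, buf, (size_t)n, 0);

    close(server_fd);
    return 0;
}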