HTTP: keep connection alive without blocking on socket receive? (C)

I am using the code below to make an HTTPS GET request.
I'd like to keep the connection alive to make multiple requests without having to connect each time, so I set "Connection: Keep-Alive\r\n". However, once I did this, the behaviour changed: the code now blocks on BIO_read() (the equivalent of ::recv()), and I can no longer return to process the received data.
How can I connect only once, but not block on BIO_read()/::recv()?

You have to pay attention to what you read. Parse the response headers: there will be either a "Content-Length: xxx" header or "Transfer-Encoding: chunked". That tells you where the body ends, so you can stop reading there instead of blocking in BIO_read() waiting for data the server will never send on a connection it keeps open.
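
A minimal sketch of that, assuming the connected SSL BIO * from the question and a caller-supplied buffer big enough for the whole response; chunked decoding and case-insensitive header matching are omitted for brevity:

    #include <openssl/bio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Reads one HTTP response and returns its Content-Length, or -1 on
     * error. Reading exactly that many body bytes is what keeps the next
     * BIO_read() from blocking while the connection stays open. */
    static long read_response(BIO *bio, char *buf, size_t cap)
    {
        size_t used = 0;
        char *hdr_end = NULL;
        long content_len;

        /* 1. Read until the blank line ("\r\n\r\n") ends the headers. */
        while (!hdr_end && used < cap - 1) {
            int n = BIO_read(bio, buf + used, (int)(cap - 1 - used));
            if (n <= 0) return -1;              /* error or closed */
            used += (size_t)n;
            buf[used] = '\0';
            hdr_end = strstr(buf, "\r\n\r\n");
        }
        if (!hdr_end) return -1;

        /* 2. Pull out Content-Length ("Transfer-Encoding: chunked"
         *    would need its own decoder). */
        char *cl = strstr(buf, "Content-Length:");
        if (!cl || cl > hdr_end) return -1;
        content_len = strtol(cl + 15, NULL, 10);

        /* 3. Read exactly the remaining body bytes; after this the
         *    connection is free for the next request. */
        size_t body_have = used - (size_t)(hdr_end + 4 - buf);
        while ((long)body_have < content_len && used < cap - 1) {
            int n = BIO_read(bio, buf + used, (int)(cap - 1 - used));
            if (n <= 0) return -1;
            used += (size_t)n;
            body_have += (size_t)n;
        }
        return content_len;
    }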

HTTP/2 uses, according to the specification, persistent connections, so it should work to replace HTTP/1.1 with HTTP/2. Note that you then have to remove the Connection: Keep-Alive line, as it is prohibited in HTTP/2. It was only a hint anyway and never guaranteed a persistent connection (see MDN Web Docs).
Edit: It turns out that HTTP/2 support across websites is below 50% at the moment, so my answer can't be a general solution in any way. So, uhm, take it as an FYI.

CURL share interface and cookies in multi-threaded app

I use the libcurl share+easy interface and I need to "fix up" some cookie info that is set by a webserver.
In my case I use multiple threads, and I would like to know at what point a received cookie is "shared" with all other curl handles, and when the right time is to fix the received cookie data:
1. right when I receive it from the remote server (but at this point I'm not sure whether the corrupt cookie data might already be picked up by another thread that is making a new HTTP request at the same time), or
2. when making new requests, to ensure that I don't end up using a corrupt cookie in new HTTP requests.
Here's my code flow. I call curl_easy_perform(). When a response containing Set-Cookie comes in, libcurl first parses that cookie and stores it in its internal store (which is shared when the share interface is used).
Then curl_easy_perform() returns, and I check whether the server sent the specific cookie that I need to "fix up". The only way to inspect that cookie is CURLINFO_COOKIELIST.
My question is: between the time curl parses the incoming Set-Cookie header (with invalid cookie data) and the time I inspect the cookies using CURLINFO_COOKIELIST, the invalid cookie might be picked up by another thread. To avoid that, I see no option other than inspecting the cookies on every new request, in case some other thread has updated them with invalid data.
Even then, I may still end up using invalid cookie data. In other words, there seems to be no proper solution to this problem.
What's the right approach?
Typically, when using libcurl in multiple threads, you use one handle in each thread and the handles don't share anything. Then it doesn't matter when you modify cookies, since each handle (and thus each thread) operates independently.
If you make the threads share cookie state, as with the share interface, then you already have locking mutexes set up that protect the data objects from being accessed by more than one thread at a time, so you can just go ahead and update the cookies using the correct API whenever you like.
If you're using the multi interface, it does parallel transfers in the same thread, so you can update cookies whenever you like without risking any parallelism problems.
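
For the share-interface case, a minimal sketch of that locking setup, assuming POSIX threads and a single global mutex for brevity (libcurl hands the callbacks a curl_lock_data value, so you could just as well keep one mutex per data kind):

    #include <curl/curl.h>
    #include <pthread.h>

    static pthread_mutex_t share_lock = PTHREAD_MUTEX_INITIALIZER;

    /* libcurl calls these around every access to the shared data. */
    static void lock_cb(CURL *handle, curl_lock_data data,
                        curl_lock_access access, void *userptr)
    {
        (void)handle; (void)data; (void)access; (void)userptr;
        pthread_mutex_lock(&share_lock);
    }

    static void unlock_cb(CURL *handle, curl_lock_data data, void *userptr)
    {
        (void)handle; (void)data; (void)userptr;
        pthread_mutex_unlock(&share_lock);
    }

    CURLSH *make_cookie_share(void)
    {
        CURLSH *share = curl_share_init();
        curl_share_setopt(share, CURLSHOPT_SHARE, CURL_LOCK_DATA_COOKIE);
        curl_share_setopt(share, CURLSHOPT_LOCKFUNC, lock_cb);
        curl_share_setopt(share, CURLSHOPT_UNLOCKFUNC, unlock_cb);
        return share;
    }

Each thread's easy handle then attaches the share object with curl_easy_setopt(easy, CURLOPT_SHARE, share), and any cookie updates you make through CURLOPT_COOKIELIST are serialized by the callbacks above.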

Chrome: POST/OPTIONS requests fail with net::ERR_TIMED_OUT

The OPTIONS/POST request fails inconsistently with a console error of net::ERR_TIMED_OUT. The issue appears only sometimes; otherwise the request gets a proper response from the back end. When it times out, the request doesn't even reach the server.
I have done some research and found that, because of the browser's limit of six connections per host, a request may have to wait for a connection to be released. But I don't see any other requests pending; all the other requests were completed.
In the timeline I can always see that the request stalled for 20.00 seconds, almost always the same duration, but the timeline shows nothing other than the stall.
The status of the request shows failed with net::ERR_CONNECTION_TIMED_OUT. Please help.
(Screenshots: the network timing panel and the console error.)
I've seen this issue when using an authenticated proxy server, and usually a refresh of the page fixes it.
Are you using an authenticated proxy server where you see this behaviour? Have you tried on a PC with direct access to the Internet (i.e. without a proxy)?
I got the same problem when I switched to another ISP. I thought I would only have to enter my new ID and password, but that wasn't the case.
I have an ADSL modem on a dry loop.
All other services were fine (DNS resolution, IP telephony, FTP, etc.).
I ran a lot of tests (disabling the firewall, trying other browsers, trying under Linux, factory-resetting the modem, etc.); none of them were successful.
To resolve the ERR_TIMED_OUT problem, I had to adjust the MTU and MRU values. I set 1458 rather than the default value of 1492.
It works for me. Maybe some ISPs use different values. Good luck.

Midori: changing source code to change HTTP handling behaviour

I am working on a project where I need to change a browser's source code to alter how it behaves when it receives a certain HTTP response status code. When such a response is received, I need to catch it, analyze the message body, and act accordingly.
I am struggling to get access to an HTTP message body, either request or response. I have tried pretty much everything. I can use and alter headers as I wish, and I can insert my own messages into the queue (calling libsoup/Midori primitives).
Midori uses libsoup session signals for handling the messages: "request-started" and "request-queued". I added the available "request-unqueued", which gives me more granular control over the HTTP life-cycle.
I know there are SoupMessage-specific signals, but I have not found out how to work with them, although I feel I should.
Please feel free to help me; point me to any links, documentation, anything that can give me a hint.
TL;DR: I need to access the HTTP response body/content but can only read the headers.
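
For reference, a hedged sketch of those SoupMessage-level signals (libsoup 2.x, the API generation Midori builds on; the session variable name in Midori's code will differ) using "got-body" to reach the response body:

    #include <libsoup/soup.h>

    static void
    on_got_body (SoupMessage *msg, gpointer user_data)
    {
        /* "got-body" fires once the whole response body has arrived.
         * This relies on body accumulation being on (the default); if
         * soup_message_body_set_accumulate() was called with FALSE,
         * nothing will have been stored. */
        SoupBuffer *body = soup_message_body_flatten (msg->response_body);
        g_print ("status %u, body of %" G_GSIZE_FORMAT " bytes\n",
                 msg->status_code, body->length);
        /* ... inspect body->data and act on the status code here ... */
        soup_buffer_free (body);
    }

    static void
    on_request_queued (SoupSession *session, SoupMessage *msg,
                       gpointer user_data)
    {
        g_signal_connect (msg, "got-body", G_CALLBACK (on_got_body), NULL);
    }

    /* Next to Midori's existing "request-started"/"request-queued"
     * handlers:
     *   g_signal_connect (session, "request-queued",
     *                     G_CALLBACK (on_request_queued), NULL);
     */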

SpringMVC randomly not returning a response

I've got a SpringMVC application that randomly does not return a response to AJAX requests. Or rather, it appears not to return the response.
In the network panel (Chrome or Firefox), I see the GET request being made, and on the server side I see the full stack trace of the code handling and responding to the request. However, the browser never seems to receive a response, as the GET never completes.
I am completely clueless as to how or where to start tracking this down.
I am running on Tomcat 7.0.42 and using AngularJS on the front end. My firewall is completely stopped, so I do not believe this is related to blocked ports or communications.
Where and how can I validate that a response is being committed? Furthermore, how can I isolate where this disconnect occurs and why the browser isn't receiving any response? I cannot replicate the behaviour when I issue manual requests via Postman.
I am doing the dev work on OS X v10.7.5.
Wow. After several hours of digging around for a solution, I installed Wireshark and looked at the actual packets. It turns out a single GET was producing double requests, to two different ports. After further inspection (checking what was listening on that port), I noticed it was Sophos Anti-Virus seemingly intercepting the request and not responding.
I'm still not sure quite how the AV intercepts requests before passing them along, nor how it decides to abort a response, but turning it off has made a world of difference.
Hopefully this learning experience will help someone else who gets stuck on something similar.
SpringMVC is pretty rock solid, and the only thing I can imagine is that your handler is not returning a response in certain cases. Look in your code for conditionals or exception handlers that don't return a proper response.

Not able to send a request to an HTTPS POST server using CURL

I am writing a C program to interact with an HTTPS server. The server expects the data without any assignments (for example, a normal request would be "https://xz.aspx?name=google"; is it possible to send just the name, as in "https://xz.aspx?google"?). Currently the server gets an entry in its log for my request but is not able to fetch the request data.
1. Is it possible to send a value without an assignment?
2. Will .NET look for default assignments?
3. Is there anything else to probe?
The data you're sending is just whatever you put in the query part of the request-URI in the HTTP request.
That data can be almost anything you like (as long as it uses characters that are valid according to RFC 2616). The "assignment" concept is not something HTTP knows or uses; it is just a common convention by which clients and servers structure the data.
So... yes, you can send a value "without assignment" with curl. Whether the receiver will like it or understand it is a completely different matter.
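
A minimal sketch with libcurl (the host and path below are placeholders modelled on the question):

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            /* Everything after '?' is sent verbatim in the request-URI;
             * no "name=" assignment is required. */
            curl_easy_setopt(curl, CURLOPT_URL,
                             "https://example.com/xz.aspx?google");
            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                fprintf(stderr, "curl_easy_perform() failed: %s\n",
                        curl_easy_strerror(res));
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }

On the ASP.NET side, a bare value like this typically shows up under the null key of Request.QueryString rather than under a parameter name, which would explain why your server logs the hit but can't find the data where it is looking.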
