I'm writing a sniffer with the pcap library that inspects HTTP traffic. It works when I'm looking for GET requests or status codes, but I don't know why it doesn't work for POST requests.
I tried Wireshark and saw that for POST requests, in addition to the HTTP protocol dissection, there is also a "Line-based text data: application/x-www-form-urlencoded" section.
When I try to print the content of the payload, I either get no results or strange characters, so I was thinking that maybe the problem is this "Line-based..." stuff.
Any idea of the possible cause?
The strange characters may come from UTF-8-encoded rather than ASCII-encoded POST bodies. It also depends on which applications you are trying to capture; some Flash apps use POST requests but encrypt the body to prevent tampering.
EDIT: See my answer to your other question
This is what I'm capturing with tcpdump. What do you see?
POST /xml/crud/posttest.cgi HTTP/1.1
Host: www.snee.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Fedora/3.6.12-1.fc13 Firefox/3.6.12
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Referer: http://www.snee.com/xml/crud/posttest.html
Content-Type: application/x-www-form-urlencoded
Content-Length: 21
fname=test&lname=test
In my application, the front end (ReactJS using axios, if that matters) makes some API calls to the backend (Node/Express, again if that matters). In all of the responses, the server responds with Access-Control-Allow-Origin: * (this is a test environment; appropriate changes will be made to allow only specific origins in production).
In the Chrome Developer Tools Network tab, I observe that for every request, say POST /assets, POST /filters, PUT /media, etc., a preflight OPTIONS request is sent. I do understand from here the reason for those, and that's fine.
OPTIONS Request Headers
OPTIONS /api/v1/content/bb54fbf52909f78e015f/f91659797e93cba7ae9b/asset/all HTTP/1.1
Host: XX.X.XX.XXX:5000
Connection: keep-alive
Access-Control-Request-Method: POST
Origin: http://localhost:3000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36
Access-Control-Request-Headers: authorization,content-type
Accept: */*
DNT: 1
Referer: http://localhost:3000/main/93f1ced0f15f35024402/assets
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.8,mr;q=0.6
Response Headers
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
Vary: Access-Control-Request-Headers
Access-Control-Allow-Headers: authorization,content-type
Date: Sat, 05 Aug 2017 10:09:16 GMT
Connection: keep-alive
My observation is that this is sent for literally every request, and repeatedly, i.e. even if the same request is made again (immediately or otherwise).
My questions are:
Is this necessarily a bad thing (i.e. would it cause any performance issues, even minor ones)?
Why doesn't the browser remember the header responses for the same server and the same request?
Is there anything I am missing to configure on the front end or backend to make this sticky?
You need to send the Access-Control-Max-Age header to tell the browser that it’s OK to cache your other Access-Control-* headers for that many seconds:
Access-Control-Max-Age: 600
I'm trying to figure out how an HTTP server encodes/splits a file during an HTTP download.
In Wireshark I can find four HTTP headers (see below) and a bunch of TCP packets without any headers. I would like to know how the TCP packets are formed and whether I can retrieve any concrete data from them (like the name of the file, an ID, or something substantial).
First header:
GET /upload/toto.test HTTP/1.1
Host: 192.168.223.167:90
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Range: bytes=3821-3821
If-Range: "40248-5800428-4fab43ec800ce"
Second header:
HTTP/1.1 206 Partial Content
Date: Sat, 31 May 2014 21:25:31 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sat, 31 May 2014 15:59:21 GMT
ETag: "40248-5800428-4fab43ec800ce"
Accept-Ranges: bytes
Content-Length: 1
Content-Range: bytes 3821-3821/92275752
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Third:
GET /upload/toto.test HTTP/1.1
Host: 192.168.223.167:90
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
Accept-Encoding: gzip,deflate,sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Range: bytes=3821-92275751
If-Range: "40248-5800428-4fab43ec800ce"
Last one:
HTTP/1.1 206 Partial Content
Date: Sat, 31 May 2014 21:25:31 GMT
Server: Apache/2.2.22 (Debian)
Last-Modified: Sat, 31 May 2014 15:59:21 GMT
ETag: "40248-5800428-4fab43ec800ce"
Accept-Ranges: bytes
Content-Length: 92271931
Content-Range: bytes 3821-92275751/92275752
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
TCP packet following the fourth HTTP header (in ASCII):
PV)?FEM##cZU:P-O"-~zLW^2&$Z$f5APzve~BuH5/}`z2MI"{lIQCBmTO-ah6O)497Kro+gS((R
8n8_lMXusDp{Qs1g?j~iZaB.ADI|yp((t3#4SA4[MV#N1(2He|a9}Dw`'=k^C;G%#KUD``Sw:KXYG1{pxP,*`BSAMO0?FlFb(~X/|Ub=H[b7Y'NAP])IARH(g*LI}AE%BzFOzN5Xf7$D|.Hw00AUh[lE)ovKAUmcSuFnzQS+T0=z7;#nKX2!>ik)p73a5{h2ZZo~etin"UCFc+#ZjgB60y()-1{e|XRj9r:zDM(ulcSAayGeZCks7Nnz{L8(&L8Ew?J9}WA/t?^xS{sbnw8J7/%Iqt0i4_h*D6?|[&3zFngl~ku>#RVp+:`'RdtKh(",MPJqx5
tov&pZV8)'X?iW(J1d-!]FM>_Q\V=&xYH C9G?dp6&
\td|k$AY!D^`HnW=OsMcbV(*(RQL-xhWPa\:C>-M'oH fGwr:0=\K7!lMoPH)fB2OSUrg89
For the curious, this file is an image of Android (sample for the question).
EDIT for CodeCaster:
I'm trying to limit the outgoing bandwidth generated by a download served from a Node.js server. The catch is that I have to do this at the network level (with iptables, actually) and not at the code level. Because it is a per-user limit, I have to find a distinctive string (ASCII or hex) that I can use to filter packets and limit that user's download bandwidth. My original question is about how the content is formatted/encoded; I'm not trying to find another way (I know there are others), it is a constraint of the context.
TCP is a transport-layer protocol in the OSI model, and PDUs (roughly speaking, packets) are processed in each layer of the model. In each layer the PDU gets another header, so by the time it reaches the transport layer it already has a header from the application layer. TCP then adds its own header, and the PDU goes on to the network layer for further processing.
As for the data size of the PDU, that depends on the physical protocol's MTU (maximum transmission unit). For instance, Ethernet's MTU is 1500 bytes.
As for getting data: if you mean from the header, it's simple enough to code a solution that searches for certain attributes (like Content-Length or Server). If you mean getting data from the data PDU, that is generally not a good idea unless it's for analytic purposes, in which case Wireshark should work. (If I recall; it's been a long time since I used Wireshark.)
I need to develop an HTTP proxy server. My proxy server is able to retrieve the HTTP request from the web browser, and I am also able to connect to the remote server. I am not able to understand how to move further:
How to send the request from the proxy server to the remote server.
I have the following queries:
What is the format of the request header to be sent from the HTTP proxy server to the remote server?
Is it the same header I received from the web browser for the GET, HEAD and POST methods?
I have tried sending the entire header:
GET http://www.gmail.com/ HTTP/1.1
Accept: text/html, application/xhtml+xml, */*
Accept-Language: en-US
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Accept-Encoding: gzip, deflate
Proxy-Connection: Keep-Alive
Host: www.gmail.com
Or:
GET / HTTP/1.1
Host:www.gmail.com:80
The fundamental transformation you need to do from a proxy request to an HTTP server request is to change the first line:
GET http://www.gmail.com/ HTTP/1.1
to
GET / HTTP/1.1
The full URL is required when the browser sends the request to the proxy, so that the proxy can make the further connection to the real server. However, an HTTP request to the server must not contain the protocol and hostname parts on the GET line.
However, this may not be the only thing you need to do. An HTTP proxy is a fairly complex application, due to things like different protocol version numbers and connection options on the browser-proxy connection versus the proxy-server connection.
RFC 2616 contains a considerable amount of information regarding the correct behaviour of HTTP proxy applications.
I am trying to build a web app that lets the customer add demo data to any Salesforce instance. My demo builder uses the OAuth 2 Authorization Code Grant.
I am trying to get the switch-instance portion working. However, once the user has connected to one instance:
GET /services/oauth2/authorize?response_type=code&client_id=blabla.UKP&redirect_uri=https%3A%2F%2Fsfblademo.bla.com%2Foauth%2Fcallback HTTP/1.1
Host: na9.salesforce.com
Connection: keep-alive
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.12 Safari/535.2
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Cookie: cookie_bla; disco=5:00D50000000Ii39:00550000001ifEp:0|; autocomplete=1; inst=APP5
it redirects to the previous instance. It seems like it's reading the cookies and redirecting:
HTTP/1.1 302 Found
Server:
Location: https://na3.salesforce.com/setup/secur/RemoteAccessAuthorizationPage.apexp?source=blablabla
Content-Type: text/html
Content-Length: 525
Date: Fri, 16 Sep 2011 21:46:58 GMT
The URL has moved here
Is there a way to sign out or clear the cookies Salesforce has set? I am not running my app on Salesforce.
Thanks!
The API logout() call isn't going to work because that will only invalidate the API session and not the UI session stored in the browser cookie on the *.salesforce.com domain, to which your app won't have direct access. That's not to say it isn't still recommended, but to clear that UI cookie, you'll need to redirect the end user to /secur/logout.jsp on the instance_url of the previous session. To make it transparent to end users, you can load it in a hidden iframe like this:
<iframe src='https://{instance_url}/secur/logout.jsp' width='0' height='0' style='display:none;'></iframe>
Before switching to the other instance, you can try making the logout call, as described in the WS Guide: http://www.salesforce.com/us/developer/docs/api/Content/sforce_api_calls_logout.htm
Hopefully this will invalidate the previous session.
(The actual question has been edited because I succeeded in doing live streaming, but now I do not understand the communication between the client and my C code.)
Okay, I finally did live streaming using my C code, but I could not understand how HTTP works here.
I studied the communication between my browser and the server at http://www.flumotion.com/demosite/webm/ using Wireshark.
I found that the client first sends this GET request:
GET /ahiasfhsasfsafsgfg.webm HTTP/1.1
Host: localhost
Connection: keep-alive
Referer: file:///home/anirudh/Desktop/anitom.html
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3
Range: bytes=0-1024
To this GET request, the server responds by sending this reply:
HTTP/1.0 200 OK
Date: Tue, 01 Mar 2011 06:14:58 GMT
Connection: close
Cache-control: private
Content-type: video/webm
Server: FlumotionHTTPServer/0.7.0.1
and then the server sends the data until the client disconnects. The client disconnects when it has received a certain amount of data. The client then connects to the server on a new port and sends the same GET request. The server gives the same reply, but this time the client does not disconnect; it continuously reads packets until the server disconnects. I wrote C code with a server socket that replicates the above behavior (thanks to Wireshark, Flumotion and Stack Overflow).
What I could not understand is why the client needs to send two requests: why does it reset the first connection and then send the same request again on a new port, this time listening to the data as if it were being live-streamed?
Also, I do not know how I can live stream using chunked encoding.
The same thing in detail is available here : http://systemsdaemon.blogspot.com/2011/03/live-streaming-video-tutorial-for.html
and here http://systemsdaemon.blogspot.com/2011/03/http-streaming-video-using-program-in-c.html
Please help me out. Thanks in advance.
The first request is limited to 1024 bytes in order to test that the stream is actually a valid video source and not, say, a 600 MB Windows executable.