Getting the Host field from a TCP packet payload - C

I'm writing a kernel module in C and trying to extract the Host field from a TCP packet's payload carrying HTTP request headers.
I've managed to do something similar with FTP (scanning the payload for FTP commands), but here I can't seem to find the field.
My module is attached to the POST_ROUTING hook. Every packet that reaches that hook with a destination port of 80 is treated as an HTTP packet, and my module starts to parse it.
For some reason I can't get the Host line (in fact, I only ever see the server's HTTP 200 OK).
Do these headers always appear in packets that use port 80?
If so, what is the best way to parse those packets' payload? Going char by char seems like a lot of work. Is there a better way?
Thanks
EDIT:
I've made some progress.
I can read the payload of every packet I receive from the server without a problem, but for every packet I send, it's as if the payload is empty. I thought it was a problem with the skb pointer, but I'm reading the TCP ports fine; I just can't read this damn payload.
This is how I parse it (note: the offset must be computed in pointer arithmetic, not by casting pointers to int, which truncates on 64-bit kernels):
unsigned char *user_data = (unsigned char *)tcphd + tcphd->doff * 4;
unsigned char *it;
for (it = user_data; it != tail; ++it) {
    unsigned char c = *it;
    http_command[http_command_index] = c;
    http_command_index++;
}
where tail is:
unsigned char *tail = skb_tail_pointer(skb);
The pointer doesn't advance at all in the loop; it's as if the payload is empty from the start, and I can't figure out why.
Help, please.

I've managed to solve this.
Using skb_copy_bits(), I figured out how to read all of the packet's payload. I hope this code explains it:
int http_command_offset = iphd->ihl * 4 + tcphd->doff * 4;
int http_command_length = skb->len - http_command_offset;
http_command = kmalloc(http_command_length + 1, GFP_ATOMIC);
skb_copy_bits(skb, http_command_offset, (void *)http_command, http_command_length);
http_command[http_command_length] = '\0';
skb_copy_bits() copies the payload entirely into the buffer I created, even when the data lives in paged fragments rather than the linear area (which is why the direct pointer walk above saw nothing). Parsing it now is pretty simple.

Related

Sending Image Data via HTTP Websockets in C

I'm currently trying to build a library similar to ExpressJS in C. I have the ability to send any text (res.send()-style functionality) or textually formatted file (.html, .txt, .css, etc.).
However, sending image data causes a lot more trouble! I'm using pretty much the same process I used for reading textual files. I saw this post and answer, which uses a MAXLEN variable that I would like to avoid. First, here's how I'm reading the data in:
// fread char *, goes 64 chars at a time
char *read_64 = malloc(sizeof(char) * 64);
// the entirety of the file data is placed in full_data
int *full_data_max = malloc(sizeof(int)), full_data_index = 0;
*full_data_max = 64;
char *full_data = malloc(sizeof(char) * *full_data_max);
full_data[0] = '\0';
// read 64 characters at a time from the file while fread reports progress
size_t fread_response_length = 0;
while ((fread_response_length = fread(read_64, sizeof(char), 64, f_pt)) > 0) {
    // internal array checker to make sure full_data has enough space
    full_data = resize_array(full_data, full_data_max, full_data_index + 65, sizeof(char));
    // copy contents of read_64 into full_data
    for (int read_data_in = 0; read_data_in < fread_response_length / sizeof(char); read_data_in++) {
        full_data[full_data_index + read_data_in] = read_64[read_data_in];
    }
    // update the current index into the full data
    full_data_index += fread_response_length / sizeof(char);
}
full_data[full_data_index] = '\0';
I believe the error is related to this component, likely something in how the data length is calculated from the fread() return values. I'll take you through the HTTP response creation as well.
I split the response sending into two components (as per the answer on this question). First I send my header, which looks good (29834 seems a bit large for image data, but that is an unjustified thought):
HTTP/1.1 200 OK
Content-Length: 29834
Content-Type: image/jpg
Connection: Keep-Alive
Access-Control-Allow-Origin: *
I send this first using the following code:
int *head_msg_len = malloc(sizeof(int));
// internal header builder that builds the aforementioned header
char *main_head_msg = create_header(status, head_msg_len, status_code, headers, data_length);
// send header
int bytes_sent = 0;
while ((bytes_sent = send(sock, main_head_msg + bytes_sent, *head_msg_len - bytes_sent / sizeof(char), 0)) < sizeof(char) * *head_msg_len);
Sending the image data (body)
Then I use a similar setup to try sending the full_data element that has the image data in it:
bytes_sent = 0;
while ((bytes_sent = send(sock, full_data + bytes_sent, full_data_index - bytes_sent, 0)) < full_data_index);
So, this all seems reasonable to me! I've even compared the original file with the file after curling, and they each start and end with the exact same sequence:
Original (| implies a skip for easy reading):
�PNG
�
IHDR��X��d�IT pHYs
|
|
|
RU�X�^Q�����땵I1`��-���
#QEQEQEQEQE~��#��&IEND�B`�
Post using curl:
�PNG
�
IHDR��X��d�IT pHYs
|
|
|
RU�X�^Q�����땵I1`��-���
#QEQEQEQEQE~��#��&IEND�B`
However, trying to open the file created after curling results in corruption errors. Similar issues occur in the browser. I'm curious if this could be an off-by-one error or something small.
Edit:
If you would like to see the full code, check out this branch on Github.

Sending data from app to driver using DeviceIoControl

I can send data from the driver to the app.
In the app:
DeviceIoControl(dHandle, IOCTL_TEST, (PVOID)InputBuffer, sizeof(InputBuffer), (PVOID)OutputBuffer, sizeof(OutputBuffer), &dwRet, 0);
printf("num : %s\n", OutputBuffer);
In the driver:
char pData[1024] = "eeee";
case IOCTL_TEST:
    pInputBuffer = Irp->AssociatedIrp.SystemBuffer;
    pOutputBuffer = Irp->AssociatedIrp.SystemBuffer;
    outputBufferLength = pStack->Parameters.DeviceIoControl.OutputBufferLength;
    RtlCopyMemory(pOutputBuffer, pData, strlen(pData) + 1);
    Irp->IoStatus.Information = strlen(pData) + 1;
    break;
The result "eeee" is printed in the application console.
But I don't know how to send data from the app to the driver.
DeviceIoControl's 3rd and 4th parameters are the input buffer and its length.
If I add char InputBuffer[1024] = "InputBuffer's data"; in the app, how can the driver receive this data?
I want to DbgPrint() the data received from the app.
Please post an answer rather than a comment so I can accept it.
I solved it.
The solution:
the driver receives data from the app through Irp->AssociatedIrp.SystemBuffer, so you can just print that pointer's data.

Using rtnetlink, reply message type to RTM_GETROUTE message?

I have sent an RTM_GETROUTE message to the kernel over a netlink socket, and now I am listening for messages from the kernel.
The kernel replies with nlmsghdr structures over the netlink socket. I need to know what the message type (nlmsg_type) of the reply is. (My code is also listening for route create/delete events, and I want to distinguish them.)
Is the reply again RTM_GETROUTE? Any example code or link is appreciated.
For routing, I could only find the NEWROUTE, DELROUTE and GETROUTE messages, but all three seem to have other purposes (the first when a route is created, the second when one is deleted, and the third for requesting, as I used).
Here is my code for sending the message (the header and payload live in one buffer; the original snippet dereferenced an uninitialized hdr pointer, zeroed the nl_p pointer itself with memset, and never set nlmsg_len):
char buf[NLMSG_SPACE(sizeof(struct rtmsg))];
struct nlmsghdr* hdr = (struct nlmsghdr*) buf;
struct rtmsg* nl_p = (struct rtmsg*) NLMSG_DATA(hdr);
memset(buf, 0, sizeof(buf));
hdr->nlmsg_len = NLMSG_LENGTH(sizeof(struct rtmsg));
hdr->nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
hdr->nlmsg_pid = 0;
hdr->nlmsg_seq = ++seq_num;
hdr->nlmsg_type = RTM_GETROUTE;
nl_p->rtm_family = AF_INET;
nl_p->rtm_dst_len = 0;
nl_p->rtm_src_len = 0;
nl_p->rtm_table = RT_TABLE_MAIN;
rtable_success = send(fd, hdr, hdr->nlmsg_len, 0);
There is an example of parsing the received message, but I need to know the message type (nlmsg_type) I'm looking for so I can filter out the others.
There is the libdnet project at:
http://libdnet.sourceforge.net/
You can find the answer to your question in its route_get function.

C proxy server dropping requests

I have a tiny C proxy server; I just want it to get one request at a time from the client and send back the response from the server. No pipelining, nothing advanced, just a persistent HTTP connection.
Structs:
typedef struct http_request {
    char* h_data;        // Header raw data
    int h_size;          // Header size
    char host[5000];     // Host to connect to
    char resource[5000]; // Resource to get
} http_request;

typedef struct http_response {
    char* h_data; // Header raw data
    int h_size;   // Header size
    char* b_data; // Body raw data
    int b_size;   // Content-Length of the body
} http_response;
Code:
while (1) {
    // Wait for a user to connect
    int sock_user = accept(sock, (struct sockaddr*)NULL, NULL);
    int sock_host = -1;
    // Accept one request at a time and respond
    while (1) {
        http_request req;
        http_response resp;
        // 1. Client ==> Proxy        Server
        http_parse_request(sock_user, &req);   // uses recv(sock_user)
        // 2. Client        Proxy ==> Server
        if (sock_host < 0)
            sock_host = proxy_connect_host(req.host);
        write(sock_host, req.h_data, req.h_size);
        // 3. Client        Proxy <== Server
        http_parse_response(sock_host, &resp); // uses recv(sock_host)
        // 4. Client <== Proxy        Server
        write(sock_user, resp.h_data, resp.h_size);
        write(sock_user, resp.b_data, resp.b_size);
    }
}
Now this works well for the first few pages. Then the program blocks at step 1 and the browser just shows Waiting for www.calcoolate.com... indefinitely.
Firebug:
All of those GETs are requests sent to my proxy; however, I receive only the first two of them. I double-checked the return value of each write() and recv(), and they seem to match exactly what is expected. I checked for both -1s and 0s.
There must be something wrong with the logic of my proxy. Any ideas?

Missing characters from input stream from fastcgi request

I'm trying to develop a simple RESTful API using FastCGI (and restcgi). When I implemented the POST method, I noticed that the input stream (representing the request body) is wrong. I did a little test, and it looks like when I read the stream, only every other character is received.
Body sent: name=john&surname=smith
Received: aejh&unm=mt
I've tried several clients just to make sure it's not the client mangling the data.
My code is:
int main(int argc, char* argv[]) {
    // FastCGI initialization.
    FCGX_Init();
    FCGX_Request request;
    FCGX_InitRequest(&request, 0, 0);
    while (FCGX_Accept_r(&request) >= 0) {
        // FastCGI request setup.
        fcgi_streambuf fisbuf(request.in);
        std::istream is(&fisbuf);
        fcgi_streambuf fosbuf(request.out);
        std::ostream os(&fosbuf);
        std::string str;
        is >> str;
        std::cerr << str; // this way I can see it in the Apache error log
        // restcgi code here
    }
    return 0;
}
I'm using the fast_cgi module with Apache (not sure if that makes any difference).
Any idea what I'm doing wrong?
The problem is in fcgio.cpp.
The fcgi_streambuf class is defined using char_type, but the int underflow() method downcasts its return value to (unsigned char) when it should cast to (char_type).
I encountered this problem as well, on an unmodified Debian install.
I found that the problem went away if I supplied a buffer to the fcgi_streambuf constructor:
const size_t LEN = ... // whatever, it doesn't have to be big.
vector<char> v (LEN);
fcgi_streambuf buf (request.in, &v[0], v.size());
iostream in (&buf);
string s;
getline(in, s); // s now holds the correct data.
After finding no answer anywhere (not even on the FastCGI mailing list), I dropped the original FastCGI libraries and tried the fastcgi++ library instead. The problem disappeared. There are other benefits too: C++, more features, easier to use.
Use is.read() not is >> ...
Sample from restcgi documentation:
// clenstr comes from FCGX_GetParam("CONTENT_LENGTH", request->envp)
clen = strtol(clenstr, &clenstr, 10);
if (*clenstr)
{
    cerr << "can't parse \"CONTENT_LENGTH="
         << FCGX_GetParam("CONTENT_LENGTH", request->envp)
         << "\"\n";
    clen = STDIN_MAX;
}
// *always* put a cap on the amount of data that will be read
if (clen > STDIN_MAX) clen = STDIN_MAX;
*content = new char[clen];
is.read(*content, clen);
clen = is.gcount();