I'm trying to make a forwarding proxy but I keep getting an
Alert(Level: Fatal, Description: Decode Error)
after the Client sends...
Client Key Exchange, Change Cipher Spec, Encrypted Handshake Message
Any ideas as to what I'm doing wrong?
I can't seem to get a grasp on what the error even means. Does it mean
the initial encrypted packet by the client fails to be decrypted by
the server? If so, then why?
UPDATE 1
I was just looking at the packets and I noticed a significant difference between using my proxy and not using it.
The DFE key isn't being interpreted when my proxy is in use.
Any ideas as to what I'm doing wrong?
You're not forwarding the exact amount of data that the proxy is supposed to forward.
But I see you're getting further along now than at the beginning of your question (good!).
You are implementing a proxy which should forward every single byte it receives, in both directions, and either it sends too much to the server or not enough. Check your code again for any condition under which you stop reading the input data to forward, and be sure you're forwarding exactly everything. Nothing more, nothing less.
RFC 5246, on decode_error:
decode_error
A message could not be decoded because some field was out of the
specified range or the length of the message was incorrect. This
message is always fatal and should never be observed in
communication between proper implementations (except when messages
were corrupted in the network).
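As a rough illustration of "forward exactly everything", here is a minimal sketch of a byte-exact relay loop, assuming client_fd and server_fd are two already-connected TCP sockets (the names and the buffer size are only illustrative; the proxy never has to understand the TLS records, it just moves bytes):

/* Minimal relay between two already-connected sockets.
 * Forwards exactly the bytes read, in both directions. */
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

static int forward(int from, int to) {
    char buf[4096];
    ssize_t n = recv(from, buf, sizeof(buf), 0);
    if (n <= 0)
        return -1;                          /* peer closed or error: stop relaying */
    ssize_t off = 0;
    while (off < n) {                       /* send() may accept less than asked */
        ssize_t w = send(to, buf + off, n - off, 0);
        if (w <= 0)
            return -1;
        off += w;
    }
    return 0;
}

void relay(int client_fd, int server_fd) {
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client_fd, &fds);
        FD_SET(server_fd, &fds);
        int maxfd = client_fd > server_fd ? client_fd : server_fd;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(client_fd, &fds) && forward(client_fd, server_fd) < 0)
            break;
        if (FD_ISSET(server_fd, &fds) && forward(server_fd, client_fd) < 0)
            break;
    }
}

If the handshake gets further once every byte is relayed untouched, the decode_error almost certainly came from truncated or duplicated bytes in the forwarded stream.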
Related
I'm currently working on a C application sending messages to a Meteor server over a websocket.
I'm using jansson for JSON conversion and nopoll as the websocket library.
Everything is working fine in both directions (sending / receiving) except when I try to send very large messages (about 15,000,000 characters). I think (though I'm not sure) that the message is sent to the server, so the nopoll library should not be the source of the issue. But I'm sure that the message is not treated by Meteor as it should be, because the method (RPC) is never called.
I found that the websocket limit is the maximum value of a 64-bit unsigned integer, so this shouldn't be the problem.
On the other hand, I couldn't find the maximum length of a DDP message, even in the DDP specification.
Do you have any idea of the DDP limit, or of other parameters that I haven't thought about?
As I was working on an architecture where the client and the server were on the same machine, I was not limited by the network. I think I was pushing too much information too fast, and the socket was simply full of data.
The solution is simple: cut the data into several fragments, as LPs suggests, and implement flow control. A rough sketch of what I mean is below.
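Here is roughly the shape of that chunking loop; send_fragment() is a made-up stand-in for whatever call your websocket library provides to transmit one piece, and the chunk size is only illustrative:

#include <stddef.h>

/* Hypothetical wrapper around the websocket library's send call.
 * Returns the number of bytes it accepted, or -1 on error. */
long send_fragment(const char *data, size_t len);

/* Send a large payload in fixed-size pieces instead of one huge message. */
int send_in_chunks(const char *payload, size_t total) {
    const size_t CHUNK = 64 * 1024;          /* illustrative fragment size */
    size_t sent = 0;
    while (sent < total) {
        size_t want = total - sent;
        if (want > CHUNK)
            want = CHUNK;
        long n = send_fragment(payload + sent, want);
        if (n <= 0)
            return -1;                       /* buffer full or error: back off and retry later */
        sent += (size_t)n;                   /* flow control: only advance by what was accepted */
    }
    return 0;
}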
I also found the Mongo C driver, which could be another solution, because I should be able to push data directly into the database as Mikkel suggests.
Thank you both for your help.
There's something that bothers me: I'd like to distinguish between a packet coming from Youtube and a packet coming from Wikipedia: they both travel over HTTPS and they both come from port 443.
Since they travel over HTTPS, their payload is not understandable and I can't do full Deep Packet Inspection: I can only look at the Ethernet, IP and TCP headers. I could look at the source IP address of both packets and see where they actually come from, but to know whether they are from Youtube or Wikipedia I would already have to know the IP addresses of these two sites.
What I'm trying to figure out is a way to tell streaming over HTTP (like Youtube does) apart from a simple HTML transfer (Wikipedia) without inspecting the payload.
Edit 1: in a Wireshark session started while a video was playing, I got tons of packets. Maybe I should start looking at the timing between packets coming from the same address.
If you are just interested in following the data stream in Wireshark, you can use the TCP stream index; the filter would be something like tcp.stream == 12
The stream index starts at zero with the first stream that Wireshark encounters and increments for each new stream (persistent connection).
So two different streams between the same IPs would have two different numbers. For example a video stream might be 12 and an audio stream, between the same IP addresses, might be 13.
If you started the capture before the stream was initiated, you'll be able to see the original traffic setting up the SSL connection (much of this is in clear text).
You might consider looking at the server certificate. It will tell you whether it's youtube (google) or facebook.
That would give you an idea of which SSL connection is to youtube and which one is to facebook.
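A related trick, if you have the capture: the client sends the server's host name in clear text in the TLS ClientHello (the SNI extension), so a display filter along the lines of tls.handshake.extensions_server_name contains "youtube" (ssl.handshake.extensions_server_name on older Wireshark versions) should separate the two sites without decrypting anything. This inspects the client's hello rather than the certificate itself, but it answers the same question.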
You can try looking at the TCP header options, but generally the traffic is encrypted for a reason... so that it wouldn't be seen by man-in-the-middle. If it were possible, it would be, by definition, a poor encryption standard. Since you have the capture and all the information known to the user agent, you are not "in-the-middle". But you will need to use the user agent info to do the decryption before you can really see inside the stream.
This link: Reverse ip, find domain names on ip address
indicates several methods.
I suggest running nslookup on the IP from within a C program.
And remember that address/IP values can be nested within the data of the packet, so it may (and probably will) take some investigation of the packet data to get to the originator of the packet.
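If you would rather stay inside C than shell out to nslookup, a minimal sketch of a reverse lookup with getnameinfo() looks something like this (the address is a placeholder; use the packet's source IP, and note the lookup only succeeds if a PTR record exists):

#include <arpa/inet.h>
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

/* Reverse-resolve an IPv4 address string to a host name, if one is registered. */
static int reverse_lookup(const char *ip, char *host, socklen_t hostlen) {
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1)
        return -1;
    return getnameinfo((struct sockaddr *)&sa, sizeof(sa),
                       host, hostlen, NULL, 0, NI_NAMEREQD);
}

int main(void) {
    char host[256];
    if (reverse_lookup("203.0.113.7", host, sizeof(host)) == 0)  /* placeholder address */
        printf("%s\n", host);
    return 0;
}

Keep in mind that many large sites serve lots of domains from the same addresses, so the reverse name you get back may not match the site the user actually visited.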
Well, you have encountered a dilemma: how to get the information users are exchanging with their servers when they have explicitly encrypted it for privacy. The quick answer is that you can't. Only if you can penetrate the SSL connection will you get more information.
Even the SSL certificate exchanged between server and client will be of no help, as it only identifies the server (and not the virtual host you are trying to reach behind this connection); with the feature known as HTTP virtual hosts, several servers can be listening for connections on the same port of the same address.
SSL parameters are negotiated just after the connection is made, and the virtual server is normally selected with the Host HTTP header field of the request (see RFC 2616), but this happens after the SSL negotiation has finished, so you don't have access to it.
The only thing you can do for sure is to try to identify connections to youtube by the traffic volumes and connection patterns this kind of traffic exhibits.
I am working on a client/server model based on Berkeley sockets and have almost finished, but I'm stuck on a way to know that all of the data has been received while minimising the processing executed on the client side.
The client I am working with has very little memory and battery and is to be deployed in remote conditions. This means that wherever possible I am trying to avoid processing (and therefore battery loss) on the client side. The following conditions on the client are outside of my control:
The client sends its data 1056 bytes at a time until it has run out of data to send (I have no idea where the number 1056 came from, but if you think you know I would be very interested)
The client is very unpredictable in when it will send the data (it is attached to a wild animal and sends data determined by connection strength and battery life)
The client has an unknown amount of data to send at any given time
The data is transmitted though a GRSM enabled phone tag (Not sure that this is relevant but I'm assuming that extra information could only help)
(I am emulating the data I am expecting to receive from the client through localhost. If it seems to work, I will ask the company where I am interning to invest in a static IP address to allow "real" TCP transfers; if it doesn't, I won't. I don't think this is relevant but, again, I would rather provide too much information than too little.)
At the moment I am using a while loop and incrementing the number of bytes received in order to recv() each of the 1056-byte sections. My problem is that the server needs to receive an unknown number of these. To me, the most obvious solutions are to send the number of sections to be received in an initial header from the client, or to mark the last section being sent in some way. However, both of these approaches would require processing on the client side. I was wondering whether there is a way to check from the server side whether the client has closed its socket, or whether something like closing the connection from the server after a pre-determined period of time without information from the client would be feasible. If neither is possible then I would love to hear any other suggestions.
TLDR: What condition can I use here to minimise client-side processing?
while(!(/* Client has ran out of data to send*/)) {
receive1056Section();
}
Also, I know that it is bad practice to make a Stack Overflow account and immediately ask a question; I didn't know what else to do, I'm sorry. Please don't hesitate to be mean if I've missed something very obvious.
Here is a suggestion for how to do the interaction:
The client:
Client connects to server via tcp.
Client sends chunks of data until all data has been sent. Flush the send buffer after each chunk.
When it is done, the client issues a shutdown on the socket, sleeps for a couple of seconds and then closes the connection (a minimal sketch of this sequence follows below).
The client then sleeps until the next transmission. If the transmission was unsuccessful, the sleep time should be shorter to prevent unsent data from overflowing the available memory.
If the client is unable to connect for an extended period of time, you would have to discard data that doesn't fit in the memory.
I am assuming that sleep reduces power consumption.
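The client-side send-and-shutdown sequence, as a rough sketch (names, chunk handling and the sleep are illustrative only, and error handling is trimmed):

#include <sys/socket.h>
#include <unistd.h>

/* Send one buffer completely, then signal end-of-data with shutdown(). */
void send_all_and_shutdown(int sock, const char *data, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(sock, data + sent, len - sent, 0);
        if (n <= 0)
            break;                    /* connection problem: keep the data and retry next wake-up */
        sent += (size_t)n;
    }
    shutdown(sock, SHUT_WR);          /* tells the server that no more data will come */
    sleep(2);
    close(sock);
}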
The server:
The server program can be single-threaded unless you need massive scalability. It listens for incoming connections on the agreed port.
Whenever a client connects, a new socket is created.
Use select() to see which sockets have data (don't forget to include the listening socket!), and non-blocking reads to read from the sockets.
When recv() reports that the other side has shut down its side of the connection (it returns 0 once there is no more data to read), you can close that socket (see the sketch below).
This should work fine up to a couple of thousand simultaneous connections.
Example that handles many of the difficulties of implementing a server
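For illustration, a stripped-down sketch of that select() loop, assuming listen_fd is already bound and listening (fixed-size client table, no error handling, not the linked example):

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 64

void serve(int listen_fd) {
    int clients[MAX_CLIENTS] = {0};          /* 0 = unused slot */
    char buf[1056];                          /* matches the client's section size */

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);
        int maxfd = listen_fd;
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i] > 0) {
                FD_SET(clients[i], &rfds);
                if (clients[i] > maxfd) maxfd = clients[i];
            }
        }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        if (FD_ISSET(listen_fd, &rfds)) {    /* new client connecting */
            int fd = accept(listen_fd, NULL, NULL);
            for (int i = 0; fd >= 0 && i < MAX_CLIENTS; i++)
                if (clients[i] == 0) { clients[i] = fd; fd = -1; }
            if (fd >= 0) close(fd);          /* no free slot */
        }

        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (clients[i] > 0 && FD_ISSET(clients[i], &rfds)) {
                ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
                if (n > 0) {
                    /* store or process the received section here */
                } else {                     /* 0 = client shut down, <0 = error */
                    close(clients[i]);
                    clients[i] = 0;
                }
            }
        }
    }
}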
For various reasons, I am trying to download a CRL file using crude tools in C. I'm opening a tcp connection using good old socket(), sending a hardcoded plaintext http request via send(), reading the results into a buffer via recv(), and then writing that buffer into a file (which I will later use to verify various certs).
The recv() and write-to-file portions are inside a while loop so that I can get it all.
My problem is that I'm having a heck of a time coming up with a reliable means of determining when I'm done receiving the file (and therefore can break out of the while loop). Everything I've come up with so far has either had false positives or false negatives (getting back 0 bytes happens too frequently, and either the EOF marker wasn't there or I was looking in the wrong byte for it). Preferably, it would be a technique that wouldn't introduce a lot of additional complexity.
Really, I have a host, port, and a path (all as char*). On the far end, there's a friendly http server (though not one that I control). I'd be happy with anything that could get me the file without a large quantity of additional code complexity. If I had access to a command line, I'd go for something like wget, but I haven't found any direct equivalents over on the C API side, and system() is a poor choice for the situation.
'Getting back zero bytes', by which I assume you mean recv() returning zero, only happens when the peer has finished sending data and has closed the connection. Unless the peer is sending you multiple files per connection, this is an infallible sign of the end of this file. 'Too frequently' is nonsense: it can only happen once per connection.
But if the peer is an HTTP server it should be sending you a Content-Length header. See RFC 2616.
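A rough sketch of the read loop under the assumption that the request asks the server to close the connection when it is done, so that recv() returning 0 really does mean end of response (parsing Content-Length and stripping the headers is left out):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Fetch one resource over an already-connected socket and write the raw
 * response (headers included) to fp.  Relies on "Connection: close": the
 * server closes the connection when finished, so recv() returns 0. */
int fetch_to_file(int sock, const char *host, const char *path, FILE *fp) {
    char req[1024];
    snprintf(req, sizeof(req),
             "GET %s HTTP/1.0\r\n"
             "Host: %s\r\n"
             "Connection: close\r\n\r\n", path, host);
    if (send(sock, req, strlen(req), 0) < 0)
        return -1;

    char buf[4096];
    ssize_t n;
    while ((n = recv(sock, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, (size_t)n, fp);       /* headers still need to be stripped */
    return (n == 0) ? 0 : -1;                /* 0 means the server closed cleanly */
}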
I have a simple C program to copy an image from the server using TCP.
The problem is it always fails to work with certain images, it only receives 'x' bytes and then times out.
The program is not the problem here, since I have tried different programs (C and Python, using bigger recv buffers) over TCP and they still fail at the 'x'th byte.
server: vxworks
client: linux
If I try connecting from a SUN client using the same code, it has no problem receiving the image. I did a bit of packet sniffing and found that my client requests packet 'A', which has the 'x'th byte in it. The server sends it, or re-transmits it, but the client never acknowledges it and eventually times out.
The question is: why is this image-specific, and why does it only happen on the Linux client?
The file written on the client is always 'x' bytes long.
It looks like a network issue to me. What is the packet size? It sounds strange, but couldn't it be an MTU black hole between the server and the Linux client?
My friend once experienced this exact same problem, and it turned out that the payload of the binary image he was transferring was triggering a bug in a filtering router along the way. The router would just drop the connection when a particular byte sequence passed through. Bizarre but true.