Sending multiple files over TCP in Go - simple protocol

I'm trying to build a simple protocol to send files over TCP in Go. After reading some material I decided to use the gob package to send a header with information about the file being sent, followed by the file itself over the raw socket. Between messages I'm using a delimiter ("\r\n").
So the send flow looks like this:
Client (sends a file) (C)
Server (S)
(C) Read file metadata (filesize, filename, etc.)
(C) Encode file metadata into a struct
(C) Initialize connection with the server
(C) Send the encoded gob
(S) Receive the gob and decode it
(C) Send a delimiter
(S) Receive the delimiter
(C) Start sending the file using a buffer (1024 bytes)
(S) Receive the file and save it to the created file until the size from the header message is reached
(C) Send a delimiter
(S) Receive the delimiter
(C) Close the connection
Hopefully I explained it well. The problem is that in my unit tests, when I check the checksums of the transferred files, I sometimes get wrong files; it looks like the delimiter sometimes ends up in the file as well. My question is whether my simple protocol makes sense, and if not, could someone give me some advice on how to build it so that it is robust and reliable?
Thanks in advance!

Related

C zlib gzfwrite() alternative?

I'm working on an assignment for class that involves sending .gz binary data from a server to a client using a socket. On the client end, it receives the stream and stores it into an unsigned char buffer. How can I write this compressed data into a new .gz file on the client side?
I can't use gzfwrite() because it was added in 2017, and the old machines they're using here to grade my program don't have a zlib that recent. Any attempt to use it yields "undefined reference".
I've tried using fopen():
FILE *gz = fopen("test.gz", "wb"); // create new .gz on client end
fwrite(buffer, 1, buflen, gz); // buffer contains binary data sent from server
fclose(gz);
but calling gunzip on the resulting .gz fails, and file test.gz outputs test.gz: data; in other words, it's not recognized as a .gz file.
I don't think gzprintf() will work either, since it expects a null-terminated string and I have binary data. Ideally I'd like to just send the .gz from the server and write that data to a new .gz file on the client. Any suggestions?
Use gzwrite(gz, buffer, buflen) instead. That function has been there since time immemorial.
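A minimal sketch of what that looks like, reusing buffer and buflen from the question (error handling omitted; gzwrite() compresses the data it is given, so this assumes buffer holds the bytes you want compressed into the file):

#include <zlib.h>

void write_gz(const void *buffer, unsigned buflen)
{
    gzFile gz = gzopen("test.gz", "wb"); /* create the .gz on the client end */
    if (gz != NULL) {
        gzwrite(gz, buffer, buflen);     /* compress and write the buffer */
        gzclose(gz);                     /* flush and finalize the .gz file */
    }
}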

How can I send files asynchronously in a client-server application? (using winsock2.h, in C)

I want to send files asynchronously. I got as far as sending a file client->server->another client, but if I want to send a very big file, the client can't send any other commands to the server until the file has been sent completely. For every file the client wants to send, I create a new thread that reads 1 KB of the file at a time and sends it to the server; the server then receives the 1 KB and forwards it to the desired client. The problem is that while the client is sending the file, the socket is filled with bytes from the server. Should I create one client-server socket for every file I want to send? I've tried everything but nothing was a success.
Creating dedicated sockets for each transfer is one solution, and it's not a bad one unless the number of simultaneous connections is large (only so many IP ports are available on a system, and the server will need twice as many). Threads don't simplify this as much as you might think, and introduce their own challenges; select is a simpler way to efficiently transfer data on multiple sockets from a single thread/process. It works by exposing the underlying operating system's knowledge of which sockets are ready for reading and writing to the program.
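A minimal sketch of such a select() loop, watching a set of already-connected sockets for readability (POSIX flavor shown; Winsock's select() is nearly identical apart from the SOCKET type and the ignored first argument). socks[] and nsocks are placeholders for however you keep track of your connections, and a real loop would also watch for writability and errors:

#include <sys/select.h>

/* Wait until at least one of the given connected sockets is readable,
   then handle whichever ones are ready. */
void wait_for_readable(int socks[], int nsocks)
{
    fd_set readfds;
    int i, maxfd = -1;

    FD_ZERO(&readfds);
    for (i = 0; i < nsocks; i++) {
        FD_SET(socks[i], &readfds);
        if (socks[i] > maxfd)
            maxfd = socks[i];
    }

    /* blocks until at least one socket has data waiting */
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) > 0) {
        for (i = 0; i < nsocks; i++) {
            if (FD_ISSET(socks[i], &readfds)) {
                /* recv() on socks[i] will not block here */
            }
        }
    }
}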
The challenge for you with the multi-socket approach, regardless of threading choices, is that the server will have to tell the recipient to open a new connection back to the server for each new transfer. Now you need a command mechanism to tell the recipient to open a new connection for the next file.
Another option would be to open only one socket, but send multiple files simultaneously over the socket. You might accomplish this by sending a data structure containing the next parts of each file instead of simply streaming the file directly. For example, you might send a message that looks something like this (rendered in JSON for clarity, but it would be a valid transport format):
[
  {
    "name": "file.txt",
    "bytes": "some smallish chunk of content",
    "eof": false
  },
  {
    "name": "another.txt",
    "bytes": "chunk of another.txt content",
    "eof": true
  }
]
This example is of course naively simplistic, but hopefully it's enough to get the idea across: by structuring the messages you're sending, you can describe which chunks of bytes belong to which files, and then send multiple chunks of multiple files at once. Because of your client->server->client approach, this seems like the best path forward to me.
Using a struct similar to:
struct transferPacket
{
    unsigned packetOfFile; // = 0 when starting a new file
    unsigned fileNumber;   // incremented with each new file
    unsigned byteCount;
    char payload[ MAX_PAYLOAD_LEN ];
};
When packetOfFile == 0, a new file is starting and payload contains the filename.
Otherwise packetOfFile indicates which part of the file is being transferred.
When byteCount == 0, that marks EOF for that fileNumber.
The above takes only a single TCP socket, and multiple files can be transferred at one time.
The receiver knows which file each packet belongs to and at which position in the file the payload belongs.
The sender sends the same number of bytes each time, except for the first packet of a file (the filename), the last data packet of the file, and the EOF packet.
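A rough sketch of how the sending side might fill those packets, assuming sock is an already-connected socket and sendAll() is a hypothetical helper that loops on send() until the whole struct has gone out:

#include <stdio.h>
#include <string.h>

void sendAll(int sock, const void *buf, size_t len); /* assumed helper */

void sendOneFile(int sock, const char *path, unsigned fileNumber)
{
    struct transferPacket pkt;
    FILE *fp = fopen(path, "rb");            /* binary mode matters */
    unsigned packetOfFile = 0;
    size_t n;

    if (fp == NULL)
        return;

    /* packet 0: payload carries the filename (assumed to fit in the payload) */
    pkt.packetOfFile = packetOfFile++;
    pkt.fileNumber = fileNumber;
    pkt.byteCount = (unsigned)strlen(path) + 1;
    strncpy(pkt.payload, path, MAX_PAYLOAD_LEN);
    sendAll(sock, &pkt, sizeof pkt);

    /* data packets: fixed-size struct each time, byteCount says how much is valid */
    while ((n = fread(pkt.payload, 1, MAX_PAYLOAD_LEN, fp)) > 0) {
        pkt.packetOfFile = packetOfFile++;
        pkt.byteCount = (unsigned)n;
        sendAll(sock, &pkt, sizeof pkt);
    }

    /* byteCount == 0 marks EOF for this fileNumber */
    pkt.packetOfFile = packetOfFile;
    pkt.byteCount = 0;
    sendAll(sock, &pkt, sizeof pkt);

    fclose(fp);
}

Keep in mind that sending a raw struct like this assumes both ends agree on struct padding and byte order; a more robust version would serialize each field explicitly.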

Binary file corruption over http using Apache Camel

I'm trying to read a binary file from a local filesystem, send it over HTTP, then in a different application I need to receive the file and write it out to the local file system, all using Apache Camel.
My (simplified) client code looks like this:
from("file:<path_to_local_directory>")
.setHeader(Exchange.HTTP_PATH, header("CamelFileNameOnly"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/octet-stream"))
.to("http4:localhost:9095");
And my server code is:
restConfiguration()
.component("spark-rest")
.port(9095);
rest("/{fileName}")
.post()
.consumes("application/octet-stream")
.to("file:<path_to_output_dir>?fileName=${header.fileName}");
As you can see, I'm using the Camel HTTP4 Component to send the file and the Spark-Rest component to receive it.
When I run this, and drop a file into the local directory, both the client and server applications work and the file is transmitted, received and written out again. The problem I'm seeing is that the original file is 5860kb, but the received file is 9932kb. As it's a binary file it's not really readable, but when I open it in a text editor I can easily see that it has changed and many characters are different.
It feels like it's being treated as a text file and being received and written out in a different character set from the one it was written in. As a binary file I don't want it to be treated as text, which is why I'm handling it as application/octet-stream, but this doesn't seem to be honoured. Or maybe it's not a character set problem and it's something else? Plain text files are transmitted and received correctly, with no corruption, which leads me to think that it is the special characters in the binary file that are causing the problem.
I'd like to resolve this so that the received file is identical to the sent file, so any help would be appreciated.
I had the same issue. By default, Camel will serialize the body as a String when producing to the HTTP endpoint.
You should explicitly convert the GenericFile to byte[] by adding a simple .convertBodyTo(byte[].class) before your .to("http4:..")

how to send a txt file from server to client using http in java

I have a problem in my Grails application, which reads a txt file stored on disk and then sends the file to the client.
Currently I achieve this by reading the file line by line and storing the lines in a String array.
After reading all lines from the file, the String array is sent to the client as JSON.
In my GSP's JavaScript I get that array and display its contents in a textarea with
textarea.value = arr.join("\n\n");
This operation is repeated every minute using Ajax.
My problem is that the txt file the server is reading consists of about 10,000 to 20,000 lines.
Reading all those 10,000+ lines and sending them as an array causes problems in IE8, which hangs and finally crashes.
Is there another easy way of sending the whole file over HTTP and displaying it in the browser?
Any help would be greatly appreciated.
Thanks in advance.
EDIT:
On Googling I found that streaming the file (input/output streaming) is a better way to display the file contents in a browser, but I couldn't find an example of how to do it.
Can anyone share some example on how to do it?

unable to send binary data over unix tcp socket

I'm trying to implement the FTP commands GET and PUT over a UNIX TCP socket for file transfer, using the usual functions like fread(), fwrite(), send() and recv().
It works fine for text files, but fails for binary files (diff says: "binary files differ")
Any suggestions regarding the following will be appreciated:
Are there any specific commands to read and write binary data?
Can diff be used to compare binary files?
Is it possible to send the binary parts in chunks of memory?
The FTP protocol has two modes of operation: text and binary.
Try it in any FTP client -- I believe the commands for switching between them are ASCII and BIN. From what I recall, the text mode only affects CR/LF pairs, though.
If you're reading from a file and then writing the file's data to the socket, make sure you open the file in binary mode.
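For example, a minimal sketch of the sending side, assuming sock is an already-connected socket (note the "rb" mode; a real implementation should also check how many bytes send() actually wrote):

#include <stdio.h>
#include <sys/socket.h>

void send_file(int sock, const char *path)
{
    FILE *fp = fopen(path, "rb");   /* "rb": raw bytes, no text translation */
    char buf[4096];
    size_t n;

    if (fp == NULL)
        return;

    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        /* send() may write fewer than n bytes; loop on its return value in real code */
        send(sock, buf, n, 0);
    }
    fclose(fp);
}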
Yes, diff can be used to compare binary files, typically with the -q option to suppress the actual printing of differences, which rarely makes sense for binary files. You can also use md5 or cmp if you have them.
