I'm working on an assignment for class that involves sending .gz binary data from a server to a client using a socket. On the client end, it receives the stream and stores it into an unsigned char buffer. How can I write this compressed data into a new .gz file on the client side?
I can't use gzfwrite() because it was added in 2017, and the aging machines being used here to grade my program don't have a zlib that recent. Any attempt to use it yields "undefined reference".
I've tried using fopen():
FILE *gz = fopen("test.gz", "wb"); // create new .gz on client end
fwrite(buffer, 1, buflen, gz); // buffer contains binary data sent from server
fclose(gz);
but calling gunzip on the resulting .gz fails, and running "file test.gz" outputs "test.gz: data", i.e. it's not recognized as a gzip file.
I don't think gzprintf() will work either, since it expects a null-terminated string and I have binary data. Ideally I'd just like to send the .gz from the server and write that data to a new .gz file on the client. Any suggestions?
Use gzwrite(gz, buffer, buflen) instead. That function has been there since time immemorial.
Related
I want to send files asynchronously. I've got file sending working client->server->another client, but if I want to send a very big file, the client can't send any other commands to the server until the file is fully sent. For every file the client wants to send, I create a new thread that reads 1 KB of the file at a time and sends it to the server; the server then receives the 1 KB and forwards it to the desired client. The problem is that while the client is sending a file, the socket is full of bytes from the server. Should I make one client-server socket for every file I want to send? I've tried everything but nothing was a success.
Creating dedicated sockets for each transfer is one solution, and it's not a bad one unless the number of simultaneous connections is large (only so many IP ports are available on a system, and the server will need twice as many). Threads don't simplify this as much as you might think, and introduce their own challenges; select is a simpler way to efficiently transfer data on multiple sockets from a single thread/process. It works by exposing the underlying operating system's knowledge of which sockets are ready for reading and writing to the program.
The challenge for you with the multi-socket approach, regardless of threading choices, is that the server has to tell the recipient to open a new connection back to the server for each transfer, so you need a command mechanism for exactly that.
Another option would be to open only one socket, but send multiple files simultaneously over the socket. You might accomplish this by sending a data structure containing the next parts of each file instead of simply streaming the file directly. For example, you might send a message that looks something like this (rendered in JSON for clarity, but it would be a valid transport format):
[
  {
    "name": "file.txt",
    "bytes": "some smallish chunk of content",
    "eof": false
  },
  {
    "name": "another.txt",
    "bytes": "chunk of another.txt content",
    "eof": true
  }
]
This example is of course naively simplistic, but hopefully it gets the idea across: by structuring the messages you send, you can describe which chunks of bytes belong to which files, and then send chunks of multiple files at once. Given your client->server->client approach, this seems like the best path forward to me.
Using a struct similar to:
struct transferPacket
{
    unsigned packetOfFile; // = 0 when starting a new file
    unsigned fileNumber;   // incremented with each new file
    unsigned byteCount;
    char payload[ MAX_PAYLOAD_LEN ];
};
When packetOfFile == 0, a new file is starting and payload contains the filename; otherwise packetOfFile indicates which part of the file is being transferred.
When byteCount == 0, that is EOF for that fileNumber.
The above takes only a single TCP socket, and multiple files can be transferred at one time.
The receiver knows which file each packet belongs to and at which position in the file the payload goes.
The sender sends the same number of bytes each time, except for the first packet of a file (the filename) and for the EOF packet or the last data packet of the file.
I'm trying to build a simple protocol to send files over TCP in golang. After reading some stuff I decided to use the gob package to send a header with information about the file being sent, and then the file itself over the raw socket. Between messages I'm using a delimiter ("\r\n").
So the send flow looks like this:
Client(sends a file)(C)
Server(S)
(C)Reading file metadata(filesize, filename, etc)
(C)Encoding file metadata into struct
(C)Initialize connection with server
(C)Send encoded gob
(S)Receive a gob and decode
(C)Send a delimiter
(S)Receive a delimiter
(C)Start sending file using buffer (1024)
(S)Start receiving the file and saving it to the created file until the size from the header message is reached.
(C)Send a delimiter
(S)Receive a delimiter
(C)Close connection
Hopefully I explained it well. The problem is that in my unit tests, when I check checksums of the transferred file, I sometimes get wrong files; it looks like the delimiter sometimes gets written into the file as well. My question is whether my simple protocol makes sense, and if not, could someone give me some advice on how to build it to be robust and reliable?
Thanks in advance!
I'm trying to read a binary file from a local filesystem, send it over HTTP, then in a different application I need to receive the file and write it out to the local file system, all using Apache Camel.
My (simplified) client code looks like this:
from("file:<path_to_local_directory>")
.setHeader(Exchange.HTTP_PATH, header("CamelFileNameOnly"))
.setHeader(Exchange.CONTENT_TYPE, constant("application/octet-stream"))
.to("http4:localhost:9095");
And my server code is:
restConfiguration()
.component("spark-rest")
.port(9095);
rest("/{fileName}")
.post()
.consumes("application/octet-stream")
.to("file:<path_to_output_dir>?fileName=${header.fileName}");
As you can see, I'm using the Camel HTTP4 Component to send the file and the Spark-Rest component to receive it.
When I run this, and drop a file into the local directory, both the client and server applications work and the file is transmitted, received and written out again. The problem I'm seeing is that the original file is 5860kb, but the received file is 9932kb. As it's a binary file it's not really readable, but when I open it in a text editor I can easily see that it has changed and many characters are different.
It feels like it's being treated as a text file and it's being received and written out in a different character set to that in which it is written. As a binary file, I don't want it to be treated as a text file which is why I'm handling it as application/octet-stream, but this doesn't seem to be honoured. Or maybe it's not a character set problem and could be something else? Plain text files are transmitted and received correctly, with no corruption, which leads me to think that it is the special characters in the binary file that are causing the problem.
I'd like to resolve this so that the received file is identical to the sent file, so any help would be appreciated.
I had the same issue. By default, Camel will serialize it as a String when producing to the http endpoint.
You should explicitly convert the GenericFile to byte[] with a simple .convertBodyTo(byte[].class) before your .to("http4:..").
I have made a client-server application in which the client sends a file (ODT, PDF, MP3, MP4, etc.) and the server receives it.
I am dividing the file into chunks and transmitting them in a while loop.
Below is the main logic for both client and server.
When I do loopback with 127.0.0.1, this code works successfully.
But when I run the client and server on two different PCs, after transmitting a file the client exits while the server keeps receiving, and I have to press Ctrl-C. The size of the file on the server side reaches over 1 GB even though the file on the client side is only around 4.2 MB.
In loopback I don't get this problem.
Please tell me the needed corrections.
client.c
#define SIZE 512 // or anything else
char sendbuff[SIZE];
FILE *fr;
fr = fopen("1.mp3","r");
while(!feof(fr)){
    count = fread(sendbuff, SIZE, 1, fr);
    count = send(clientsd, sendbuff, SIZE, 0); // clientsd is socket descriptor
}
send(clientsd, "xyz", 3, 0); // sending "xyz" tells server the transmission is over
fclose(fr);
server.c
#define SIZE 512 // same as client side
char recvbuff[SIZE];
FILE *fw;
fw = fopen("2.mp3","w");
while(1){
    count = recv(connsd, recvbuff, SIZE, 0);
    if(!strcmp(recvbuff, "xyz"))
        break;
    fwrite(recvbuff, SIZE, 1, fw);
    memset(recvbuff, 0, SIZE);
}
printf("Exit while\n");
fclose(fw);
Is there any other simple and efficient way to do this?
NOTE: I have changed my question. Some answers here refer to my old question, in which I transmitted "1" instead of "xyz"; that was an error.
The most obvious problem is your stop condition on the server side.
You assume that if the first byte received is '1' (0x31) then the transfer is over, but it might just be a byte of the data (if the first byte of a chunk in the file actually is '1'). So you need some other way to signal the end of the file. One possibility is to wrap each packet you send: before each packet send a marker (for example '1') followed by the length, and when the transfer is complete send '0' to signal that it is finished.
The other problems I can see are that:
You open the files as text for reading ("r") and writing ("w"), which can translate or truncate binary data (for example, processing stops if the platform's EOF character appears in the middle of the file); instead, open them in binary mode ("rb" / "wb" respectively).
You use chunks of 512 bytes; what if the file size is not a multiple of 512 bytes? The last fread will return fewer items, yet the code still sends SIZE bytes.
I was having problems storing .png images fetched from an HTTP server using C code. I should mention that the images are successfully fetched from the server, as I analysed Wireshark while running the code and the Content-Length in the header matches the buffer size. The problem appears when I write the buffer data to a .png file using fwrite.
Your logic of writing up to *pt != '\0' is not correct in this loop:
while((c = *pt) != '\0')
{
    fwrite(&c, sizeof(c), 1, fp);
    pt++;
}
There can be a '\0' byte anywhere in the binary data of a .png file, so you stop writing at the first one; hence your file is smaller than the file on the server.
You should parse the HTTP headers and get the value of Content-Length, which gives the size of the body in bytes, then read that many bytes from the server and write them to the local file.
Look at the HTTP RFC for more details about the protocol.
There are other problems in your code, like:
char *pt = malloc(100000);
pt = strstr(rep, "\r\n\r\n");
Here the malloc() is pointless: pt is immediately overwritten by strstr()'s return value, so the allocated 100000 bytes are leaked.
Try opening the output file in binary mode: give fopen the flag "wb" for FILE *fp.
Also, why write byte by byte? Try a single fwrite of the whole body, e.g. fwrite(pt, 1, content_length, fp).