Consider the following struct:
struct msg {
    int id;
    int size;
    double *data;
};
Now, this struct is to be used to communicate through a pipe between Producer and Consumer processes.
As it is, it won't work because of the data pointer, so the struct must carry the actual data rather than a pointer to it. The complication is that the Producer must be able to send ANY amount of data (and the Consumer must read accordingly).
Can anyone please point me to a solution?
Specifically:
What is the best solution for defining the data structures?
Is a union with a char *c_data (passed to write) the way to go?
How should the read be implemented to account for the size?
Thank you very much for your feedback.
There unfortunately is no native way of sending arbitrary objects through a pipe. However, you can achieve what you want pretty easily by sending raw data with the help of fread() and fwrite() as a very simple way of serializing the data in binary form.
Please keep in mind that in order for the following to work, both the producer and the consumer programs need to be compiled on the same machine, using the same data structure definitions and possibly the same compiler flags.
Here's a simple solution:
Create a common definition of a header structure to be used by both the producer and the consumer:
struct msg_header {
    int id;
    int size;
};
This will hold information about the real data. I would suggest using size_t to store the size, as it is unsigned and more suitable for this purpose.
In the producer, prepare the data to be sent along with the correct header, for example:
struct msg_header header = {.id = 0, .size = 4};
double data[] = {1.23, 2.34, 3.45, 4.56};
It obviously doesn't need to be declared like this; it could even be sized dynamically and allocated through malloc(). The important thing is that you know the size.
Still in the producer, send the header followed by the data through the pipe:
// Use fdopen() if you don't already have a FILE*, otherwise skip this line.
FILE *pipe = fdopen(pipe_fd, "w");
// Send the header through the pipe.
fwrite(&header, sizeof(header), 1, pipe);
// Send the data through the pipe.
fwrite(data, sizeof(*data), header.size, pipe);
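Depending on the timing you need, it may also be worth flushing the stdio buffer after writing, so the consumer isn't left waiting for data that is still buffered on the producer side:
// Push any buffered data into the pipe now.
fflush(pipe);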
In the consumer, read the header and then use the .size value to read the correct amount of data:
// Use fdopen() if you don't already have a FILE*, otherwise skip this line.
FILE *pipe = fdopen(pipe_fd, "r");
struct msg_header header;
double *data;
// Read the header from the pipe.
fread(&header, sizeof(header), 1, pipe);
// Allocate the memory needed to hold the data.
data = malloc(sizeof(*data) * header.size);
// Read the data from the pipe.
fread(data, sizeof(*data), header.size, pipe);
Keep in mind that you have to check for errors after each of the above function calls. I did not add error checking in my examples just to make the code simpler. Refer to the manual pages for more information.
I have a file stored in a structure in a shared memory segment by a different process A. Now, from process B, I need to get this file and convert it to bytes so I can send it (or send it while reading its bytes). What would be an ideal way of doing this? See below:
typedef struct mysegment_struct_t {
    FILE *stream;
    size_t size;
} mysegment_struct_t;
So I have the mapping to the segment and all, I'm just not sure how to get the file out of it now.
size_t bytes_sent = 0;
struct mysegment_struct_t *fileinfo =
    (struct mysegment_struct_t *)mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

// read the stream into a byte array? (how can this be done in C?)
//FILE *f = fopen(fileinfo->stream, "w+b"); // I'm a bit lost here, the file is in the segment already

// send bytes
while (bytes_sent < fileinfo->size) {
    bytes_sent += send_to_client(buffer, size); // some buffer containing bytes?
}
I am kind of new to C programming, but I can't find anything like "read the file in memory into a byte array", for example.
Thanks
From the blog https://www.softprayog.in/programming/interprocess-communication-using-posix-shared-memory-in-linux, there has to be a way I can share the file between processes using shared memory.
You simply can't do this. The pointer stream points to objects that only exist in the memory of process A, and are not in the shared memory area (and even if they were, they wouldn't typically be mapped at the same address). You're going to have to design something else.
One possibility is to send the file descriptor over a Unix domain socket, see Portable way to pass file descriptor between different processes. However, it is probably worth stepping back and thinking about why you want to pass an open file between processes in the first place, and whether there is a better way to achieve your overall goal.
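For illustration, here is a minimal sketch of the descriptor-passing approach from that link, assuming an already-connected AF_UNIX stream socket (the function name send_fd and the variable names are mine; error handling is omitted):

#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

// Pass an open file descriptor to the process at the other end of "sock".
// The receiver obtains its own duplicate of the descriptor via recvmsg().
void send_fd(int sock, int fd_to_send)
{
    char dummy = '*';                               // at least one data byte must be sent
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    char ctrl[CMSG_SPACE(sizeof(int))];
    memset(ctrl, 0, sizeof(ctrl));

    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;                   // this is what carries the descriptor
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

    sendmsg(sock, &msg, 0);
}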
I'm interested in the basic principles of web servers like Apache or Nginx, so I'm now developing my own server.
When my server gets a request, it searches for a file (e.g. index.html); if the file exists, it reads all the content into a buffer (content) and then writes it to the socket. Here's simplified code:
int return_file(char *content, char *fullPath) {
    int file = open(fullPath, O_RDONLY);
    ssize_t nread;
    if (file >= 0) { // File was found, OK
        while ((nread = read(file, content, 2048)) > 0) {}
        close(file);
        return 200;
    }
    return 404; // File not found
}
The question is pretty simple: is it possible to avoid using the buffer and write the file content directly to the socket?
Thanks for any tips :)
There is no standardized system call which can write directly from a file to a socket.
However, some operating systems do provide such a call. For example, both FreeBSD and Linux implement a system call called sendfile, but the precise details differ between the two systems. (In both cases, you need the underlying file descriptor for the file, not the FILE* pointer, although on both these platforms you can use fileno() to extract the fd from the FILE*.)
For more information:
FreeBSD sendfile()
Linux sendfile()
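For illustration, a minimal sketch of the Linux variant, assuming sock_fd is the already-connected client socket (the function name is mine; error handling is kept to a minimum):

#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

// Copy a whole file to a connected socket without a userspace buffer.
// Linux-specific; the FreeBSD sendfile() signature differs.
int send_whole_file(int sock_fd, const char *path)
{
    int file_fd = open(path, O_RDONLY);
    if (file_fd < 0)
        return -1;

    struct stat st;
    if (fstat(file_fd, &st) < 0) {
        close(file_fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        ssize_t sent = sendfile(sock_fd, file_fd, &offset, st.st_size - offset);
        if (sent <= 0)
            break;                  // error, or the peer closed the connection
    }

    close(file_fd);
    return offset == st.st_size ? 0 : -1;
}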
What you can do is write each "chunk" you read immediately to the client.
In order to write the content you MUST read it, so you can't avoid that; but you can use a smaller buffer and write the contents as you read them, eliminating the need to read the whole file into memory.
For instance, you could
unsigned char byte;
// FIXME: store the return value to allow
// choosing the right action on error.
//
// Note that `0' is not really an error.
while (read(file, &byte, 1) > 0) {
if (write(client, &byte, 1) <= 0) {
// Handle error.
}
}
but then, unsigned char byte; could become unsigned char byte[A_REASONABLE_BUFFER_SIZE];, which would be better, and you still don't need to store ALL the content in memory.
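A sketch of that buffered variant, assuming an arbitrary 4 KiB buffer and the same file and client descriptors as above:

unsigned char buf[4096];            // A_REASONABLE_BUFFER_SIZE
ssize_t nread;

while ((nread = read(file, buf, sizeof(buf))) > 0) {
    ssize_t off = 0;
    // write() may accept fewer bytes than requested, so loop until
    // the whole chunk has been handed to the socket.
    while (off < nread) {
        ssize_t nwritten = write(client, buf + off, nread - off);
        if (nwritten <= 0) {
            // Handle error.
            break;
        }
        off += nwritten;
    }
}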
No, it is not. There must be intermediate storage that you use for reading/writing the data.
There is one edge case: when you use memory-mapped files, the mapped file's region can be used for writing to the socket. But internally, the system would still perform a read into a memory buffer.
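A rough sketch of that edge case, assuming client_fd is the connected socket and ignoring partial writes and most error handling:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the file and hand the mapped region directly to write();
// the kernel still pages the data through memory internally.
int send_file_mmap(int client_fd, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    fstat(fd, &st);

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                              // the mapping stays valid after close()
    if (map == MAP_FAILED)
        return -1;

    write(client_fd, map, st.st_size);      // a real server would loop on short writes
    munmap(map, st.st_size);
    return 0;
}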
I'm using Sun RPC to implement a simple pseudo-distributed storage system. I have three instances of the same server, and one client on the same machine.
Server RPC implementation goes something like this:
char **
fileread64k_1_svc(char *filename, long offset, struct svc_req *rqstp)
{
    static char *readResult;
    // chunkName is a function of (fileName, offset)
    FILE *chunkFile = fopen(chunkName, "r");
    readResult = (char *) malloc(sizeof(char) * (CHUNKSIZE + 2));
    fread(readResult, 1, CHUNKSIZE, chunkFile);
    readResult[CHUNKSIZE] = '\0';
    fclose(chunkFile);
    return &readResult;
}
I give my client a list of files to read, and the client creates 3 threads (one for each instance of the server) and the threads distribute the files among them, and call the read RPC like this:
while all files are not read:
//pthread_mutex_lock(&lock);
char **out = fileread64k_1(fileName, offset, servers[id]);
//char *outData = *out;
//pthread_mutex_unlock(&lock);
But the data in out is replaced by another thread before I have a chance to process it. If I enable the commented lines (the mutex and the outData variable), I get the data in outData and I seem to be able to safely use it.
Can anyone explain why this happens and if there is a better workaround?
Because "readResult" is declared static. That means that all invocations of the method use the same space in memory for that variable, including concurrent invocations in different threads.
The problem should be taken care of if you just don't declare readResult as static -- but in that case, you won't be able to return its address, you should return the value of readResult itself.
Incidentally, which code has the responsibility of free()ing the allocated memory?
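For reference, a sketch of the client-side workaround the question already hints at: copy the chunk into thread-private memory while still holding the lock, and free that copy when done (the use of strdup() here is my choice; any copy will do):

pthread_mutex_lock(&lock);
char **out = fileread64k_1(fileName, offset, servers[id]);
// Copy the shared result into memory owned by this thread
// before releasing the lock.
char *outData = (out != NULL) ? strdup(*out) : NULL;
pthread_mutex_unlock(&lock);

if (outData != NULL) {
    // ... process the private copy without holding the lock ...
    free(outData);
}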
I need to split a large image or text file into multiple chunks of 10 bytes. These chunks will be sent via UDP to a server. The problem is:
1. I'm unsure about this code. Is this a good way of splitting files?
2. The program's memory usage is pretty high: 400 KB just for this function.
int nChunks = 0;
char chunk[10];
FILE *fileToRead;

fileToRead = fopen(DEFAULT_FILENAME, "rb");

while (fgets(chunk, sizeof(chunk), fileToRead)) {
    char *data = malloc(sizeof(chunk));
    strcpy(data, chunk);

    packet *packet = malloc(sizeof(*packet));
    packet->header = malloc(sizeof(packetHeader));
    packet->header->acked = 0;
    packet->header->id = ++nChunks;
    packet->header->last = 0;
    packet->header->timestamp = 0;
    packet->header->windowSize = 10;
    packet->data = data;

    list_append(packages, packet);
}
typedef struct packetHeader {
    ...
} packetHeader;

typedef struct packet {
    packetHeader *header;
    void *data;
} packet;
Is this a good way of splitting files?
When the file consists of ten-character chunks of text, this is OK; but since in your case the file is an image, not text, you should use fread instead of fgets.
In addition, you should be passing sizeof(chunk), not sizeof(chunk)+1 as the size. Otherwise, fgets writes an extra byte past the end of the memory space allocated by your program.
Finally, you shouldn't be using strcpy to copy general data: use memcpy instead. In fact, you can avoid copying altogether if you manage buffers inside the loop, like this:
#define CHUNK_SIZE 10
...
// The declaration of chunk is no longer necessary
// char chunk[10];
...
for (;;) {
char *data = malloc(CHUNK_SIZE);
size_t len = fread(data, 1, CHUNK_SIZE, fileToRead);
if (len == 0) {
free(data);
break;
}
// At this point, the ownership of the "data" can be transferred
// to packet->data without making an additional copy.
...
}
The program's memory usage is pretty high: 400 KB just for this function.
This is because reading a file goes much faster than sending it via UDP, so the program manages to read the whole file into memory before the first few UDP packets get sent. You would be better off reading and sending in the same loop rather than queuing up a whole list of packets up front. When you read as you send, the slower part of the process controls the overall progress. This would let your program use a tiny fraction of the memory it uses now, because there would be no buffering of the whole file.
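A sketch of that combined loop, assuming a UDP socket sock and a destination addr/addrlen that are already set up (those names are mine), and leaving out header serialization, retransmission and error handling:

char chunk[CHUNK_SIZE];
size_t len;

while ((len = fread(chunk, 1, sizeof(chunk), fileToRead)) > 0) {
    // Send each chunk as soon as it is read; a real packet would
    // prepend the serialized packetHeader in front of the payload.
    if (sendto(sock, chunk, len, 0,
               (struct sockaddr *)&addr, addrlen) < 0) {
        // Handle the send error (and possibly retransmit) here.
        break;
    }
}
fclose(fileToRead);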
I am trying to do a chunked response (of large files) in libevent this way:
evhttp_send_reply_start(request, HTTP_OK, "OK");

int fd = open("filename", O_RDONLY);
size_t fileSize = <get_file_size>;
struct evbuffer *databuff = NULL;

for (off_t offset = 0; offset < fileSize; )
{
    databuff = evbuffer_new();
    size_t bytesLeft = fileSize - offset;
    size_t bytesToRead = bytesLeft > MAX_READ_SIZE ? MAX_READ_SIZE : bytesLeft;
    evbuffer_add_file(databuff, fd, offset, bytesToRead);
    offset += bytesToRead;
    evhttp_send_reply_chunk(request, databuff); // send it
    evbuffer_free(databuff);                    // destroy it
}

evhttp_send_reply_end(request);
fclose(fptr);
The problem is that I have a feeling evbuffer_add_file is asynchronous, so around the 3rd call to evhttp_send_reply_chunk I get an error like this (or something similar):
[warn] evhttp_send_chain Closed(45): Bad file descriptor
I set MAX_READ_SIZE to be 8 to actually test out chunked transfer encoding.
I noticed there is an evhttp_request_set_chunked_cb(struct evhttp_request *, void (*cb)(struct evhttp_request *, void *)) method I could use, but I could not find any examples of how to use it.
For instance, how could I pass an argument to the callback? The argument seems to be the same argument that was passed to the request handler which is not what I want, because I want to create an object that holds the file descriptor and the file offset I am sending out.
Appreciate all help.
Thanks in advance
Sri
The libevent v2 documentation doesn't say that it's async, but it does say that evbuffer_add_file() closes the file descriptor, which your code doesn't account for.
I believe you need to move int fd = open("filename", O_RDONLY); inside your loop.
You can also test the chunk handling outside of your file code by just building a string buffer from scratch.
Aside from that (and the last line, which should be close(fd) since the file was opened with open()), your example looks correct.
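A sketch of the loop adjusted along those lines, re-opening the file for each chunk because evbuffer_add_file() takes ownership of (and eventually closes) the descriptor; error handling is omitted:

evhttp_send_reply_start(request, HTTP_OK, "OK");

for (off_t offset = 0; offset < fileSize; )
{
    size_t bytesLeft = fileSize - offset;
    size_t bytesToRead = bytesLeft > MAX_READ_SIZE ? MAX_READ_SIZE : bytesLeft;

    // evbuffer_add_file() will close this descriptor when the buffer
    // is done with it, so a fresh one is opened for every chunk.
    int fd = open("filename", O_RDONLY);

    struct evbuffer *databuff = evbuffer_new();
    evbuffer_add_file(databuff, fd, offset, bytesToRead);
    offset += bytesToRead;

    evhttp_send_reply_chunk(request, databuff);
    evbuffer_free(databuff);
}

evhttp_send_reply_end(request);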
Nice one, mate, thanks for that. I just realised that the only reason I wanted chunked transfers was to avoid buffered reads. But since evbuffer_add_file already uses sendfile (if it finds it), this is not really an issue.
So I removed the loop completely and tried again. The contents are still not being sent, but at least this time I don't get the bad file descriptor error (you are right, this was due to the file being closed; a check of the file handles confirmed it!).