I am sending an integer array from one Raspberry Pi to another using POSIX sockets in C. I write an array of 131072 integers from one Pi, and the return value of the write call shows that all 131072 values have been written: ret = write(socket, &array, sizeof(array)). Using the same method to receive, ret = read(socket, &array, sizeof(array)) shows that not all the sent values are read; worse, the number of values read correctly is not constant, but varies between roughly 10000 and 20000.
I tried using the read function inside a loop, reading one integer per iteration:
for (int i = 0; i < 131072; i++) {
    ret = read(socket, &value, sizeof(value));
    data[i] = value;
}
With this I was able to receive all the values with no errors or losses.
The underlying protocol (probably TCP over IP) splits the data into packets, which may arrive as individual data blocks at the receiving application. In theory, read() might even deliver every single byte individually (i.e. return 1 on each of many thousands of calls). Your application needs to be able to cope with that.
You might need to use code similar to this:
multi-rw.h
#ifndef JLSS_ID_MULTI_RW_H
#define JLSS_ID_MULTI_RW_H
#include <sys/types.h>
extern ssize_t multi_read(int fd, char *buffer, size_t nbytes);
extern ssize_t multi_write(int fd, const char *buffer, size_t nbytes);
#endif
multi-rw.c
It's a little unfortunate that the block of test code has to precede the active functions, but that's necessary (and test code is necessary — at least, I find it extremely helpful and reassuring). I suppose it could go into a separate header file (multi-rw-test.h or thereabouts) which would be conditionally included; it would be better for presentation on SO, but otherwise is just another file to worry about.
#include "multi-rw.h"
#include <unistd.h>
#ifdef TEST
#ifndef MAX_WRITE_SIZE
#define MAX_WRITE_SIZE 64
#endif
#ifndef MAX_READ_SIZE
#define MAX_READ_SIZE 64
#endif
static inline size_t min_size(size_t x, size_t y) { return (x < y) ? x : y; }
static inline ssize_t pseudo_read(int fd, char *buffer, size_t nbytes)
{
return read(fd, buffer, min_size(MAX_READ_SIZE, nbytes));
}
static inline ssize_t pseudo_write(int fd, const char *buffer, size_t nbytes)
{
return write(fd, buffer, min_size(MAX_WRITE_SIZE, nbytes));
}
#undef read
#undef write
#define read(fd, buffer, nbytes) pseudo_read(fd, buffer, nbytes)
#define write(fd, buffer, nbytes) pseudo_write(fd, buffer, nbytes)
#endif
ssize_t multi_read(int fd, char *buffer, size_t nbytes)
{
ssize_t nb = 0;
size_t nleft = nbytes;
ssize_t tbytes = 0;
while (nleft > 0 && (nb = read(fd, buffer, nleft)) > 0)
{
tbytes += nb;
buffer += nb;
nleft -= nb;
}
if (tbytes == 0)
tbytes = nb;
return tbytes;
}
ssize_t multi_write(int fd, const char *buffer, size_t nbytes)
{
ssize_t nb = 0;
size_t nleft = nbytes;
ssize_t tbytes = 0;
while (nleft > 0 && (nb = write(fd, buffer, nleft)) > 0)
{
tbytes += nb;
buffer += nb;
nleft -= nb;
}
if (tbytes == 0)
tbytes = nb;
return tbytes;
}
#ifdef TEST
#include "stderr.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv)
{
if (argc != 0)
err_setarg0(argv[0]);
char buffer[4096];
ssize_t ibytes;
while ((ibytes = multi_read(0, buffer, sizeof(buffer))) > 0)
{
ssize_t obytes;
if ((obytes = multi_write(1, buffer, ibytes)) != ibytes)
err_syserr("failed to write %lld bytes - only wrote %lld bytes\n",
(long long)ibytes, (long long)obytes);
}
if (ibytes < 0)
err_syserr("I/O error reading standard input: ");
return 0;
}
#endif
The test harness allows you to test the code reading from standard input and writing to standard output. You can configure the amount of data read and written via (for example) the compilation command-line options -DMAX_WRITE_SIZE=132 and -DMAX_READ_SIZE=103. You need to test it on files (a) smaller than 4096 bytes, (b) smaller than the maximum read and write sizes, and (c) bigger than 4096 bytes. If you are motivated enough, you can upgrade the pseudo_read() and pseudo_write() functions to generate errors quasi-randomly, to see how the code handles such errors.
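Applied to the original problem, the receiving side only needs one such loop for the whole array. A minimal sketch (read_full is a hypothetical helper doing the same job as multi_read, repeated inline so the snippet stands alone; socket setup omitted):

```c
#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* Read exactly nbytes from fd, looping over short reads.
 * Returns 1 on success, 0 on error or premature EOF. */
static int read_full(int fd, void *buf, size_t nbytes)
{
    char *p = buf;
    size_t left = nbytes;
    while (left > 0) {
        ssize_t nb = read(fd, p, left);
        if (nb <= 0)
            return 0;
        p += nb;
        left -= (size_t)nb;
    }
    return 1;
}
```

On the receiving Pi you would then call read_full(sock, data, sizeof(data)) once for the whole 131072-element array instead of issuing 131072 single-integer reads.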
Related
Trying to copy the contents of one file to another by copying n bytes at a time in C. I believe the code below works for copying one byte at a time, but I am not sure how to make it work for n bytes. I have tried making a character array of size n and changing the read/write calls to read(sourceFile, &c, n) and write(destFile, &c, n), but the buffer doesn't appear to work that way.
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <time.h>
void File_Copy(int sourceFile, int destFile, int n){
char c;
while(read(sourceFile , &c, 1) != 0){
write(destFile , &c, 1);
}
}
int main(){
int fd, fd_destination;
fd = open("source_file.txt", O_RDONLY); //opening files to be read/created and written to
fd_destination = open("destination_file.txt", O_RDWR | O_CREAT);
clock_t begin = clock(); //starting clock to time the copying function
File_Copy(fd, fd_destination, 100); //copy function
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC; //timing display
return 0;
}
how to make it work for n number of bytes
Just read N number of bytes and copy that many bytes that you successfully read.
#define N 4096
void File_Copy(int sourceFile, int destFile, int n){
    char c[N];
    const size_t csize = sizeof(c);
    while (1) {
        const ssize_t nread = read(sourceFile, c, csize);
        if (nread <= 0) {
            // nothing more to read (or a read error)
            break;
        }
        // copy to the destination as many bytes as we read
        const ssize_t written = write(destFile, c, nread);
        if (written != nread) {
            // we didn't transfer everything and destFile should be blocking
            // handle error
            abort();
        }
    }
}
You want to copy a buffer of size n at once:
void File_Copy(int sourceFile, int destFile, int n){
char c[n];
ssize_t st;
while((st = read(sourceFile , c, n)) > 0){
write(destFile , c, st);
}
}
Note that n bytes are not necessarily copied at once; it might be fewer. You also have to check the return value of write() and handle the situation where fewer bytes were written, in whatever way fits your needs.
One example is a loop:
ssize_t off = 0;
while (st > 0) {
    ssize_t w = write(destFile, c + off, st);
    if (w < 0) {
        perror("write");
        return;
    }
    off += w;
    st -= w;
}
Another issue: When you create the destination file here
fd_destination = open("destination_file.txt", O_RDWR | O_CREAT);
you do not specify the third mode parameter. This leaves the file with an indeterminate mode, which might cause this open() to fail the next time. So better add a valid mode, for example like this:
fd_destination = open("destination_file.txt", O_RDWR | O_CREAT, 0644);
This might have distorted your test results.
This is my version using lseek (no loop required):
It relies on read and write always processing the complete buffer and never only a part of it (which is not actually guaranteed).
void File_Copy(int sourceFile, int destFile)
{
    off_t s = lseek(sourceFile, 0, SEEK_END);
    lseek(sourceFile, 0, SEEK_SET);
    char* c = malloc(s);
    if (c == NULL)
        return;
    if (read(sourceFile, c, s) == s)
        write(destFile, c, s);
    free(c);
}
The following code does not rely on this assumption and can also be used for file descriptors not supporting lseek.
void File_Copy(int sourceFile, int destFile, int n)
{
    char* c = malloc(n);
    if (c == NULL)
        return;
    while (1)
    {
        ssize_t readStatus = read(sourceFile, c, n);
        if (readStatus == -1)
        {
            printf("error, read returned -1, errno: %d\n", errno);
            free(c);
            return;
        }
        if (readStatus == 0)
            break; // EOF
        ssize_t bytesWritten = 0;
        while (bytesWritten != readStatus)
        {
            ssize_t writeStatus = write(destFile, c + bytesWritten, readStatus - bytesWritten);
            if (writeStatus == -1)
            {
                printf("error, write returned -1, errno is %d\n", errno);
                free(c);
                return;
            }
            bytesWritten += writeStatus;
        }
    }
    free(c);
}
On my system (PCIe SSD) I get best performance with a buffer between 1MB and 4MB (you can also use dd to find this size). Bigger buffers don't make sense. And you need big files (try 50GB) to see the effect.
Consider the following C function:
static void
write_buf_to_disk(int fd, void *buf, __u64 size) {
const char* char_buf = buf;
__u64 written = 0;
while (size > 0) {
ssize_t res = write(fd, char_buf, size);
if (res == -1) {
if (errno == EINTR) {
// Write interrupted before anything written.
continue;
}
err(EXIT_FAILURE, "write");
}
written += res;
char_buf += res;
size -= res;
}
}
The function writes bytes out of buf until the requested number of bytes has been written. The type of size is out of my control, and must be a __u64.
I don't think this is portable due to friction between ssize_t and __u64.
ssize_t comes from a rather vague POSIX extension which, as far as I can see, only guarantees it to be:
at least 16-bits wide
signed
the same size as a size_t
So in theory ssize_t could be (unlikely, I know) 512 bits wide, meaning that written += res invokes undefined behaviour.
How does one guard against this in a portable fashion?
res will be no higher than write's third argument, so all you have to do is constrain the 3rd argument of write to be no larger than the largest positive value that res (ssize_t) can store.
In other words, replace
ssize_t res = write(fd, char_buf, size);
with
size_t block_size = SSIZE_MAX;  /* SSIZE_MAX is in <limits.h> */
if (block_size > size)
    block_size = size;
ssize_t res = write(fd, char_buf, block_size);
You get:
static void
write_buf_to_disk(int fd, void *buf, __u64 size) {
const char* char_buf = buf;
size_t block_size = SSIZE_MAX;
while (size > 0) {
if (block_size > size)
block_size = size;
ssize_t res = write(fd, char_buf, block_size);
if (res == -1) {
if (errno == EINTR)
continue;
err(EXIT_FAILURE, "write");
}
char_buf += res;
size -= res;
}
}
In
ssize_t res = write(fd, buf, size);
even if ssize_t were 512 bits wide, as you suggested, the compiler converts the result of write (at most 64 meaningful bits) to that wider type. So the comparison with -1 would still work.
In
written += res;
the compiler would give you a warning, but 64 bits for counting the number of bytes written is really gigantic (~10^19 bytes max). So you're unlikely to miss any write even though the addition converts a 512-bit value to a 64-bit one.
You could also assign the size to an ssize_t at the beginning of the function (assuming size64 never exceeds SSIZE_MAX):
write_buf_to_disk(int fd, void *buf, __u64 size64) {
    ssize_t size = size64;
That would make the rest of the body consistent with the system functions.
The C11 standard says (7.19):
The types used for size_t and ptrdiff_t should not have an integer conversion rank
greater than that of signed long int unless the implementation supports objects
large enough to make this necessary.
So size_t and ssize_t are unlikely to be 512 bits, unless you run on a 512-bit processor.
Right now you are extremely likely not to need more than 64 bits for any memory or disk size. That limits the amount of data you can have in a write statement.
I am having trouble sending a string, one char at a time, through a SOCK_STREAM connection. The reason for this is that I am attempting to send multiple strings of nearly 70000 characters each. It seems that the write function I was attempting to use requires a string.
for(i=0;i<BUF_SIZE;i++)
{
write(sockfd,plaintext[i],1);
if(plaintext[i]=='0')
break;
}
write(sockfd,'^',sizeof(char));
Also, how would I read this? Here is how I was attempting it.
int read_line(int fd,char message[])
{
size_t message_len=0;
while (message_len<BUF_SIZE)
{
char c;
int ret = read(fd, &c, 1);
if (ret < 0)
{
message[message_len] = 0;
return len; // EOF reached
}
if (c == '^')
{
read(fd,&c,1);
message[message_len] = 0;
return message_len; // EOF reached
}
data[len++] = c;
}
}
How would I implement this? Thank you.
The signature of the write API is:
ssize_t write(int fd, const void *buf, size_t nbyte);
So you can do something like:
#define BUF_SIZE 70000
char *buf = (char*)malloc(BUF_SIZE);
int written = 0;
int wrote;
if (buf)
memset(buf, 1, BUF_SIZE);
else
return written;//some error code
while (written < BUF_SIZE)
{
wrote = write(fd, buf + written, BUF_SIZE - written);
if (wrote < 0)
return written;
written += wrote;
}
Similarly, you should try to do bulk reads, as reading one char at a time is too slow unless you have a very good reason. Every write or read is a system call, and system calls are costly.
So for read you can try something like
int read_bytes = read(fd, buf, BUF_SIZE);
and read_bytes will have the exact value of how much you have read.
Then do parse_buf(buf), in which you can find the tag you are looking for; save the remaining bytes for later in case you received more than one message, or call read again if you received less than a full message.
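That parse step can be sketched like this (find_message is a hypothetical helper name; '^' is the record terminator from the question):

```c
#include <stddef.h>
#include <string.h>

/* Scan buf[0..len) for the '^' record terminator.
 * Returns the message length (bytes before '^'), or -1 when no
 * terminator has arrived yet and another read is needed. */
static ptrdiff_t find_message(const char *buf, size_t len)
{
    const char *p = memchr(buf, '^', len);
    return p ? p - buf : -1;
}
```

If it returns -1, keep the buffered bytes and read again; otherwise take that many bytes as one message and shift the remainder (after the '^') to the front of the buffer.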
You need to change the line
write(sockfd,plaintext[i],1);
to
write(sockfd,&plaintext[i],1);
(The same applies to the terminator: write(sockfd, "^", 1), since '^' is a char, not a pointer.)
Additionally, you can use
int flag = 1;
setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
to enable the TCP_NODELAY option.
So I want to make a program, genData.c, that when executed as, for example, ./genData filename.txt will write 1 character to that file 1000 times.
In essence creating a 1kb file.
I would like to be able to modify the for loop, say 100000 times, to generate a 1MB file and so on.
Here is what I have tried and it compiles but when executed causes a segmentation fault.
Any suggestions? Sorry C is a language I've never dabbled in.
#include <stdio.h>
int main (int argc, char *argv) {
char ch = 'A';
FILE *fp;
fp = fopen(argv[1], "wb");
int i;
for (i = 0; i < 1000; i++) {
fwrite(&ch, sizeof(char), 1, fp);
}
fclose(fp);
return 0;
}
If you compile with warnings, you get a hint as to the exact problem:
test.c:3:5: warning: second argument of ‘main’ should be ‘char **’ [-Wmain]
int main (int argc, char *argv) {
^
All your troubles start downstream of this error. Fix this argument, and your code will work.
In the future, get into the habit of compiling with warnings turned on:
$ gcc -Wall foo.c
...
This will help catch typos and other oddities that will cause problems.
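With the signature fixed, the filling loop can also be factored into a small helper so the count is easy to change; a sketch (write_fill is a hypothetical name, not from the question):

```c
#include <stdio.h>

/* Write 'count' copies of 'ch' to the named file, producing a file
 * of exactly 'count' bytes. Returns 0 on success, -1 on failure. */
static int write_fill(const char *name, char ch, long count)
{
    FILE *fp = fopen(name, "wb");
    if (fp == NULL)
        return -1;
    for (long i = 0; i < count; i++) {
        if (fwrite(&ch, sizeof ch, 1, fp) != 1) {
            fclose(fp);
            return -1;
        }
    }
    return fclose(fp) == 0 ? 0 : -1;
}
```

main(int argc, char **argv) then just calls write_fill(argv[1], 'A', 1000) after checking argc; use 1000000 for a ~1 MB file.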
Since you tagged it Linux, this is how you can do it with the system-level functions (this should be a correct and efficient way to do it):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sysexits.h>
#include <fcntl.h>
#include <string.h>
ssize_t /* Write "n" bytes to a descriptor */
writen(int fd, const void *ptr, size_t n);
int
main (int argc, char** argv) {
char buf[1000];
memset(buf, 'A', sizeof(buf));
int fd;
if(argc != 2){
fprintf(stderr, "usage: %s file\n", argv[0]);
exit(EX_USAGE);
}
if((fd = open(argv[1], O_WRONLY|O_CREAT, 0666))<0){
perror(argv[1]);
exit(EX_NOPERM);
}
ssize_t done = writen(fd, buf, sizeof(buf));
if(done != (ssize_t)sizeof(buf))
perror("write error");
return done != (ssize_t)sizeof(buf);
}
ssize_t /* Write "n" bytes to a descriptor */
writen(int fd, const void *ptr, size_t n) {
const char *p = ptr; /* char pointer: arithmetic on void* is non-standard */
size_t nleft;
ssize_t nwritten;
nleft = n;
while (nleft > 0) {
if ((nwritten = write(fd, p, nleft)) < 0) {
if (nleft == n)
return(-1); /* error, return -1 */
else
break; /* error, return amount written so far */
} else if (nwritten == 0) {
break;
}
nleft -= nwritten;
p += nwritten;
}
return(n - nleft); /* return >= 0 */
}
#include <stdio.h>
#include <stdlib.h>
#define SIZE_OF_FILE 1024
int main(int argc, char *argv[])
{
FILE *fdest;
char ch = '\n';
if(argc != 2)
exit(EXIT_FAILURE);
fdest = fopen(argv[1], "wb");
if (fdest == NULL)
exit(EXIT_FAILURE);
fseek(fdest, SIZE_OF_FILE - 1, SEEK_CUR);
fwrite(&ch, sizeof(char), 1, fdest);
fclose(fdest);
return 0;
}
In essence creating a 1kb file
if the only purpose is creating a file with a size of x, it is simpler this way, I believe.
I am learning C and I have been trying to read a file and print what I just read. I open the file and need to call another function to read and return the sentence that was just read.
My function will return 1 if everything went fine or 0 otherwise.
I have been trying to make it work for a while, but I really don't get why I can't manage to give line its value. In main, it always prints (null).
The structure of the project has to stay the same, and I absolutely have to use open and read, not fopen or anything else.
If someone can explain it to me that would be awesome.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#define BUFF_SIZE 50
int read_buff_size(int const fd, char **line)
{
char buf[BUFF_SIZE];
int a;
a = read(fd, buf, BUFF_SIZE);
buf[a] = '\0';
*line = strdup(buf);
return (1);
}
int main(int ac, char **av)
{
char *line;
int fd;
if (ac != 2)
{
printf("error");
return (0);
}
else
{
if((fd = open(av[1], O_RDONLY)) == -1)
{
printf("error");
return (0);
}
else
{
if (read_buff_size(fd, &line))
printf("%s\n", line);
}
close(fd);
}
}
Here:
char buf[BUFF_SIZE];
int a;
a = read(fd, buf, BUFF_SIZE);
buf[a] = '\0';
if there are more characters than BUFF_SIZE available to be read, then you will fill your array entirely, and buf[a] will be past the end of your array. You should either increase the size of buf by one character:
char buf[BUFF_SIZE + 1];
or, more logically given your macro name, read one fewer characters:
a = read(fd, buf, BUFF_SIZE - 1);
You should also check the returns from strdup() and read() for errors, as they can both fail.
read(fd, buf, BUFF_SIZE); // UB if the data fills all of BUFF_SIZE: buf[a] is then out of bounds
You need one extra byte to store the terminating 0, so use BUFF_SIZE - 1 when reading, or +1 on the array allocation. You should also check all returned values and return 0 if something failed.
Keep it simple and take a look at:
https://github.com/mantovani/apue/blob/c47b4b1539d098c153edde8ff6400b8272acb709/mycat/mycat.c
(Archive form straight from the source: http://www.kohala.com/start/apue.tar.Z)
#define BUFFSIZE 8192
int main(void){
int n;
char buf[BUFFSIZE];
while ( (n = read(STDIN_FILENO, buf, BUFFSIZE)) > 0)
if (write(STDOUT_FILENO, buf, n) != n)
err_sys("write error");
if (n < 0)
err_sys("read error");
exit(0);
}
No need to use the heap (strdup). Just write your buffer to STDOUT_FILENO (=1) for as long as read returns a value that's greater than 0. If you end with read returning 0, the whole file has been read.