Writing to Linux device driver causes infinite loop - c

I have been writing a kernel-space device driver that can be read from and written to from user space. The open, read, and release operations all work perfectly. The problem I am having is with the user-space code that should access the device driver and write something to it.
The user-space program writes to two files: 1) a .txt file (and prints a message to the console to let the user know the write completed), and 2) the device driver (and also prints a message to let the user know that write completed).
Below is the user-space code in full:
#include <stdio.h>

int main() {
    FILE *fp;

    fp = fopen("./test.txt", "w");
    fputs("Test\n", fp);
    fclose(fp);
    printf("Printed to txt\n"); // Prints normally.

    fp = fopen("/dev/testchar", "w");
    fputs("Test\n", fp);
    fclose(fp);
    printf("Printed to dev\n"); // Never gets to this point.

    return 0;
}
When I compile and run the code the program spits out
Printed to txt
and just hangs until Ctrl+C is pressed. It never gets beyond the second fputs().
While monitoring kern.log I see endless calls to write to the device driver.
Here I have extracted relevant code from the device driver:
static char msg[256] = {0};

static struct file_operations fops =
{
    .write = dev_write
};

static ssize_t dev_write(struct file *file, const char *buf, size_t len, loff_t *ppos)
{
    sprintf(msg, "Input:%s, Chars:%lu\n", buf, len);
    printk(KERN_NOTICE "%s\n", msg);
    return 0;
}
uname -r: 4.10.0-38-generic
gcc -v: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
My question is: why is the program getting stuck in an infinite loop when writing to the device, and how do I fix it?
Thanks in advance. Any help will be greatly appreciated.

I think the kernel write operation is supposed to return the number of bytes written (or consumed). You return 0, so the write system call returns to user space with 0. Since your user-space code is using stdio, it tries the write again, assuming the system call simply didn't write out all the data. If you return the length of the input, stdio will know all the data was written. Alternatively, you can use the write system call directly rather than fputs. Your kernel code will still be incorrect, but your program will terminate.
You can test this using strace and see all the system calls.
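To illustrate the fix, here is a minimal sketch of a corrected handler (assuming the same msg buffer as above). It also copies the data with copy_from_user, since buf is a user-space pointer the kernel must not dereference directly:
static ssize_t dev_write(struct file *file, const char __user *buf,
                         size_t len, loff_t *ppos)
{
    size_t n = min(len, sizeof(msg) - 1);

    /* buf points into user space; it must be copied, not dereferenced. */
    if (copy_from_user(msg, buf, n))
        return -EFAULT;
    msg[n] = '\0';

    printk(KERN_NOTICE "Input:%s, Chars:%zu\n", msg, len);
    return len; /* report all bytes consumed so stdio stops retrying */
}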

Related

Kernel space write a file

I'm trying to write the file /proc/test/enable from kernel 5.10 C code.
I can write it from user space by calling fwrite.
But when I use kernel_write in kernel C code, I get the error kernel write not supported for file test/enable with error code 22 (EINVAL).
struct file *filp;
char context[] = "test";
int err;

filp = filp_open("/proc/test/enable", O_WRONLY, 0);
if (IS_ERR(filp))
    return PTR_ERR(filp); // filp_open returns an ERR_PTR on failure
err = kernel_write(filp, (void *)context, sizeof(context), &filp->f_pos);
dev_info(afe->dev, "%s(), DONE, code = %d", __func__, err);
I can't understand why kernel_write can't reach the proc file's write function, but a user-space fwrite can.
For support "normal" write (from userspace), a file may define .write or .write_iter operations (in its struct file_operations).
But writing from the kernel can use only .write_iter operation.
For support writing your file from the kernel, you need to define .write_iter operation for the file instead of .write one.
The check that fails in your case is:
/*
 * Also fail if ->write_iter and ->write are both wired up as that
 * implies very convoluted semantics.
 */
if (unlikely(!file->f_op->write_iter || file->f_op->write))
    return warn_unsupported(file, "write");
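As a rough sketch of what this means in practice (the names here are hypothetical, and note that on 5.10 a /proc entry is normally registered through struct proc_ops, so adapt accordingly), a .write_iter handler might look like:
static ssize_t test_write_iter(struct kiocb *iocb, struct iov_iter *from)
{
    char buf[16];
    size_t n;

    /* Pull the data out of the iterator instead of a raw user pointer. */
    n = copy_from_iter(buf, sizeof(buf) - 1, from);
    buf[n] = '\0';

    pr_info("enable written: %s\n", buf);
    return n;
}

static const struct file_operations test_fops = {
    .owner      = THIS_MODULE,
    .write_iter = test_write_iter, /* reachable by both userspace and kernel_write() */
};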

How to use write() or fwrite() for writing data to terminal (stdout)?

I am trying to speed up my C program to spit out data faster.
Currently I am using printf() to give some data to the outside world. It is a continuous stream of data, therefore I am unable to use return(data).
How can I use write() or fwrite() to send the data to the console instead of a file?
Overall, my setup consists of a program written in C whose output goes to a Python script, where the data is processed further. I form a pipe:
./program_in_c | script_in_python
This gives an additional benefit on the Raspberry Pi by using more of the processor's cores.
#include <unistd.h>
ssize_t write(int fd, const void *buf, size_t count);
write() writes up to count bytes from the buffer starting at buf to
the file referred to by the file descriptor fd.
The standard output file descriptor is 1 (on Linux, at least).
Consider flushing the stdout buffer before calling the write system call, to ensure that all previously buffered output is written first:
fflush(stdout); // Will now print everything in the stdout buffer
write(1, buf, count);
using fwrite:
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
The function fwrite() writes nmemb items of data, each size bytes
long, to the stream pointed to by stream, obtaining them from the
location given by ptr.
int buf[8] = {0};
fflush(stdout);
// nmemb is the element count, not the byte count
fwrite(buf, sizeof(int), sizeof(buf) / sizeof(buf[0]), stdout);
Please refer to the man pages for further reading, at the links below:
fwrite
write
Well, there's little or nothing to be gained by trying to bypass the buffering system already used by the stdio.h package. If you use fwrite() with larger buffers, you'll probably save no time and use more memory than necessary, since stdio.h selects the buffer size best suited to the filesystem where the data is to be written.
A simple program like the following will show that speed is of no concern, as stdio is already buffering output.
#include <stdio.h>

int
main()
{
    int c;

    while ((c = getchar()) >= 0)
        putchar(c);
}
If you try the above and below programs:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main()
{
    char buffer[512];
    int n;

    while ((n = read(0, buffer, sizeof buffer)) > 0)
        write(1, buffer, n);
    if (n < 0) {
        perror("read");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
You will see that there's no significant difference; the first program may even be faster, despite doing I/O one character at a time (it is the version B. Kernighan and D. Ritchie present in the first edition of "The C Programming Language"). Most probably the first program will win.
The calls to read() and write() each involve a system call, with a buffer size decided by you. The individual getchar() and putchar() calls don't: they just store the characters in a memory buffer whose size has been decided by the stdio.h library implementation, based on the filesystem, and flush that buffer once it fills with data.
If you grow the buffer size in the second program, you'll get better results up to a point, but beyond that you'll see no further increase in speed. The number of calls made to the library is insignificant compared with the time spent doing the actual I/O, and selecting a very large buffer eats memory from your system (and a Raspberry Pi is limited in this sense, to 1 GB of RAM). If you end up swapping because of an overly large buffer, you'll lose the battle completely.
Most filesystems have a preferred buffer size, because the kernel does read-ahead (on sequential reads it reads more than you asked for, anticipating that you'll continue reading after you consume the data), and this affects the optimum buffer size. The stat(2) system call reports that optimum buffer size (in the st_blksize field), and stdio uses it when selecting its actual buffer size.
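As a small illustration (my addition, not part of the original answer), you can query that preferred size for your own stdout via fstat(2):
#include <stdio.h>
#include <sys/stat.h>

int
main()
{
    struct stat st;

    /* fstat() fills st_blksize with the preferred I/O block size. */
    if (fstat(1, &st) == 0)
        printf("preferred block size: %ld bytes\n", (long)st.st_blksize);
    return 0;
}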
Don't expect to do better (or much better) than the first program listed above, even with large buffers.
What is not correct (or valid) is to intermix calls that buffer (like everything in the stdio package) with raw system calls (like read(2) or write(2), which do not buffer) on the same stream. I've seen advice to call fflush(3) after write(2), which is incoherent: there's nothing to gain, and you'll probably get your output incorrectly ordered if you do part of your writing with printf(3) and part with write(2). This bites harder in pipelines like the one you plan, because there the stdio buffers are not line oriented (another characteristic of buffered output in stdio).
Finally, I recommend you read "The Unix Programming Environment" by Brian Kernighan and Rob Pike. It will teach you a lot about Unix, and one very good thing is that it will teach you to use the stdio package and the Unix filesystem calls for reading and writing properly. With a little luck you'll find a .pdf of it on the internet.
The next program shows you the effect of buffering:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main()
{
    int i;
    char *sep = "";

    for (i = 0; i < 10; i++) {
        printf("%s%d", sep, i);
        sep = ", ";
        sleep(1);
    }
    printf("\n");
}
One would assume this program writes the numbers 0 to 9 to the terminal, separated by ", " and paced at one-second intervals.
But due to the buffering, what you observe is quite different: the program waits for 10 seconds without writing anything at all to the terminal, and at the end writes everything in one shot, including the final line end, as the program terminates and the shell shows you the prompt again.
If you change the program to this:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main()
{
    int i;
    char *sep = "";

    for (i = 0; i < 10; i++) {
        printf("%s%d", sep, i);
        fflush(stdout);
        sep = ", ";
        sleep(1);
    }
    printf("\n");
}
You'll see the expected output, because you have told stdio to flush the buffer on each pass through the loop. Both programs make the same printf(3) calls, but in the first there was only one write(2) at the end, emitting the full buffer, while in the second you forced stdio to do one write(2) after each printf, so the data appeared as the program went through the loop.
Be careful, because another characteristic of stdio can confuse you: when printing to a terminal device, printf(3) flushes the output at each \n, but when the output goes through a pipe it flushes only when the buffer fills up completely. This saves system calls (in FreeBSD, for example, the buffer size selected by stdio is around 32 KB, which is already optimum: you'll not get better going above that size).
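If that pipe behavior bites you, one workaround (my addition, not from the original answer) is to ask stdio for line buffering explicitly with setvbuf(3):
#include <stdio.h>

int
main()
{
    /* Force line buffering even when stdout is a pipe, so each
     * printf() ending in '\n' is flushed immediately. */
    setvbuf(stdout, NULL, _IOLBF, 0);

    printf("this line reaches the pipe right away\n");
    return 0;
}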
Console output in C works almost the same way as a file. Once you have included stdio.h, you can write to the console output, named stdout (for "standard output"). In the end, the following statement:
printf("hello world!\n");
is the same as:
char str[] = "hello world\n";
fwrite(str, sizeof(char), sizeof(str) - 1, stdout);
fflush(stdout);

Linux fread() call to USB device freezes when exposed to high CPU utilization

Recently I've written a driver for a drawing tablet called the "Boogie Board RIP" so it can be used as an input device for Linux. It connects to a computer via USB. When the provided pen is near or touching the device's screen, it sends data telling where the pen is on the screen.
Basically the driver works great. I can write on it as if it were a Wacom tablet.
At unpredictable times, the program will hang on the line below, and the cursor on my computer screen will stay in place:
fread(packet, sizeof(char), BYTES, f);
Where:
"packet" is an array of 8 bytes
"BYTES" is 8
"f" is a file opened in binary read (rb) mode. In my case it's /dev/usb/hiddev0
The basic program layout is a while loop that reads a byte at a time. Below is a mock-up of the much larger thing:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    char *path = argv[1];
    unsigned char packet[8];
    FILE *f = fopen(path, "rb");
    int i;

    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    while (1) {
        fread(packet, sizeof(char), 8, f);
        for (i = 0; i < 8; i++) {
            printf("%x", packet[i]);
            fflush(stdout);
        }
    }
}
I started to notice that my driver would "freeze" more often when I was running more things: watching YouTube, playing music, and so on. These aren't great examples; basically, I began to suspect it was related to CPU utilization. So I wrote the program below to test it:
#include <stdio.h>

int main() {
    while (1) {
        printf("a");
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
        fflush(stdout);
    }
}
Running this "infinite loop" program in a separate terminal while running the other program above in another terminal will result in harshly more frequent freezes. Basically the program will stop at the fread() line without fail within 2 seconds.
I learned that when the call to fread() gets no data, I can still read from the device with another instance of the program, or simply by printing the contents of the file via sudo cat /dev/usb/hiddev0. The original process remains stuck while the new one spits out the data coming from the device.
It seems as though the file simply closes. But that doesn't make sense, because then fread would segfault on the next read. Looking for any ideas.
EDIT:
I solved this by using libusb to deal with reading from my device rather than trying to read directly from the device file.
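For reference, a rough sketch of what that libusb approach can look like; the vendor/product IDs and the interrupt endpoint address below are placeholders, not the Boogie Board's real values:
#include <stdio.h>
#include <libusb-1.0/libusb.h>

int main(void) {
    libusb_context *ctx = NULL;
    libusb_device_handle *h;
    unsigned char packet[8];
    int transferred, i;

    libusb_init(&ctx);
    /* Placeholder vendor/product IDs; look yours up with lsusb. */
    h = libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
    if (h == NULL)
        return 1;
    if (libusb_kernel_driver_active(h, 0) == 1)
        libusb_detach_kernel_driver(h, 0); /* take interface 0 from hiddev */
    libusb_claim_interface(h, 0);

    /* Read 8-byte reports from a hypothetical interrupt IN endpoint 0x81. */
    while (libusb_interrupt_transfer(h, 0x81, packet, sizeof(packet),
                                     &transferred, 0) == 0) {
        for (i = 0; i < transferred; i++)
            printf("%x", packet[i]);
        fflush(stdout);
    }

    libusb_release_interface(h, 0);
    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}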

Printing to a file while conducting n simulations for a program (in C)

I am conducting n simulations with a program, and although everything else is correct, there is one mistake I can see in the output files.
I am printing the outputs of the program to a csv file.
Before printing, I check the file's size; if it is 0, I print the headers first. Here is the function that does this:
void Data_Output(FILE *fp, int node_num, int agg_num, int cnode, int sysdelay, int bwdth_reqt)
{
    struct stat buf;
    int fd = fileno(fp);

    fstat(fd, &buf);
    // Debug statement (st_size is an off_t, so cast it for printing)
    fprintf(stderr, "%ld-", (long)buf.st_size);
    if (!buf.st_size) {
        // Writing headers
        fprintf(fp, "Tot_Nodes_Num,Agg_Nodes_Num,Central_Node_Num,Tot_System_Delay,Bandwidth_Reqt\n");
    }
    // Writing data
    fprintf(fp, "%d,%d,%d,%d,%d\n", node_num, agg_num, cnode, sysdelay, bwdth_reqt);
}
For 100 simulations, the output I get from the debug shows me:
0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-0-
What am I doing wrong? I suspect that the program stores everything in a buffer and only writes it to the file after it is done with the simulations and the files are closed.
Note: I open and close the files only once during the whole program, not once per simulation.
You are correct. Stdio has its own output buffering, and fstat only sees what has actually reached the kernel, so the file writes do get delayed. Try putting fflush(fp); as the last line of your Data_Output function. I hope that helps.
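Applied to the function above, the suggested fix is one extra line at the end:
// Writing data
fprintf(fp, "%d,%d,%d,%d,%d\n", node_num, agg_num, cnode, sysdelay, bwdth_reqt);
fflush(fp); // push buffered bytes to the kernel so the next fstat() sees them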

about write function in linux device driver

I wrote a Linux device driver and implemented the function device_write like this:
static int device_write(struct file *file, const char *buff, int count, loff_t *offp)
{
    // some implementation
    printk("write value %x to card\n", value);
    // some implementation
}
I also implemented the device_read function, with a printk to print some information in it.
The problem is that when I use read(fd, buff, 1) in the application program, the printk output is displayed, but when I use write(fd, buff, 1) there is no printk output. The device_write function may not be called at all. What can cause this problem? Has anyone encountered this kind of problem before? Can anybody give me some help and suggestions?
This is only half an answer, but it is too big for a comment.
Are other actions within your device_write function happening?
Do a very simple printk at the top of the device_write function and see if that prints. Something like:
static int device_write(struct file *file, const char *buff, int count, loff_t *offp)
{
    printk("%s: %s\n", __FILE__, __func__);
so that it executes regardless of whatever else happens in the function. If that prints, then you can narrow down where to go from there.
If that doesn't print, then make sure you are actually setting the .write function pointer in your struct file_operations. Or maybe the error is in the test application: are you sure you've opened the device with write permissions? That would be an easy mistake to make if you copied code from a program initially written just to test the read functionality.
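A hypothetical sketch of both checks (the identifiers are illustrative, not taken from the question):
/* Kernel side: without the .write line, write(2) never reaches the driver. */
static const struct file_operations fops = {
    .owner = THIS_MODULE,
    .read  = device_read,
    .write = device_write,
};

/* User side: opening with O_RDONLY would make write(fd, buff, 1) fail. */
int fd = open("/dev/mydev", O_RDWR);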