I've created a user-space program that reads a text file into a buffer and writes the buffer to the device with fprintf; on the kernel side, the device's store function receives it. The entire buffer is supposed to arrive in the store function's buf parameter.
// store function for chardev
// sysfs store/write implementation - add new rule
ssize_t modify(struct device *dev, struct device_attribute *attr,
               const char *buf, size_t count)
The device is then supposed to read the strings from the buffer nine at a time (nine being the hard-coded number of fields per line in the buffer).
for (j = 0; j <= num_of_rules; j++) {
    if ((read = sscanf(buf, "%s%s%s%s%s%s%s%s%s%n",
                       rule_name, direction_string, src_ip,
                       dst_ip, src_port_string, dst_port_string,
                       protocol_string, ack_string, action_string,
                       &offset)) == 9)
    {
        printk("offset = %u", offset);
EDIT: Adding pictures of the run attempt.
Picture 1: Running the user-space program and printing the contents of buf.
Picture 2: Running dmesg and showing that the data has been written to the kernel module. The offset is 59, the length of the first line.
The reading of the first line goes fine and the offset is correct, but when I try to advance buf past it using buf += offset, the next iteration kills the process and prints "Killed".
Why is that and how can I fix it?
Thank you.
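A bounds-checked variant of the loop above could look like the following sketch; the field buffers, their size of 20, and the %19s width caps are illustrative assumptions, not part of the original code:

const char *cursor = buf;           /* walk a copy so buf itself stays put */
int offset = 0;
char rule_name[20], direction_string[20], src_ip[20], dst_ip[20],
     src_port_string[20], dst_port_string[20], protocol_string[20],
     ack_string[20], action_string[20];

while (cursor < buf + count &&
       sscanf(cursor, "%19s%19s%19s%19s%19s%19s%19s%19s%19s%n",
              rule_name, direction_string, src_ip, dst_ip,
              src_port_string, dst_port_string, protocol_string,
              ack_string, action_string, &offset) == 9) {
    /* ... build one rule from the nine fields ... */
    cursor += offset;               /* skip past the line just parsed */
}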
I am writing a character device driver, but at the moment it either freezes, so that I have to reboot to stop it, or it crashes and the terminal exits.
I have a global array:
char *array;
on which I call kmalloc(9, GFP_KERNEL), so it should be 9 bytes in size. If I wanted to use the file operation .write to set a specific index, how would I do that?
This is my current code (which crashes, and the terminal exits):
ssize_t mydriver_write(struct file *filp, const char *buf, size_t count, loff_t *f_pos)
{
    raw_copy_to_user(array[*buf], 'x', 1);
}
EDIT:
I have already tried this version as well:
raw_copy_to_user(&array[3], x, 1);
where x is kmalloc'd with size 1 and x[0] = 'x'.
But in this case my program freezes, I cannot remove the driver, and the machine requires a reboot to remove it.
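A minimal sketch of what a write handler that stores one byte into the kmalloc'd array could look like; note that the data flows from user space into the kernel, so copy_from_user() is the appropriate call here, not raw_copy_to_user(). The index 3 follows the example above, and error handling is kept minimal:

static ssize_t mydriver_write(struct file *filp, const char __user *buf,
                              size_t count, loff_t *f_pos)
{
    char c;

    if (count < 1)
        return 0;
    if (copy_from_user(&c, buf, 1))     /* nonzero return means a fault occurred */
        return -EFAULT;

    array[3] = c;                       /* set the specific index (3 is an example) */
    return 1;                           /* one byte consumed, so callers make progress */
}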
I have been writing a kernel-space device driver that can be read from and written to from user space. The open, read, and release operations all work perfectly. The problem I am having is with the user-space code that should access the device driver and write something to it.
The user-space program writes to two files: 1) a .txt file (and prints a message to the console to let the user know it was completed), and 2) the device driver (and also prints a message to let the user know it was completed).
Below is the user-space code in full:
int main() {
    FILE *fp;

    fp = fopen("./test.txt", "w");
    fputs("Test\n", fp);
    fclose(fp);
    printf("Printed to txt\n"); // Prints normally.

    fp = fopen("/dev/testchar", "w");
    fputs("Test\n", fp);
    fclose(fp);
    printf("Printed to dev\n"); // Never gets to this point.

    return 0;
}
When I compile and run the code, the program prints
Printed to txt
and just hangs until Ctrl+C is pressed. It never gets beyond the second fputs().
While monitoring kern.log I see endless calls to write to the device driver.
Here I have extracted relevant code from the device driver:
static char msg[256] = {0};

static struct file_operations fops =
{
    .write = dev_write
};

static ssize_t dev_write(struct file *file, const char *buf, size_t len, loff_t *ppos)
{
    sprintf(msg, "Input:%s, Chars:%lu\n", buf, len);
    printk(KERN_NOTICE "%s\n", msg);
    return 0;
}
uname -r: 4.10.0-38-generic
gcc -v: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.4)
My question is: why is the program getting stuck in an infinite loop when writing to the device, and how do I fix it?
Thanks in advance. Any help will be greatly appreciated.
The kernel write operation is supposed to return the number of bytes written. You return 0, so the write system call returns to user space with 0. Since your user-space code is using stdio, it tries the write again, assuming the system call simply didn't write out all the data. If you return the length of the input, stdio will know all the data was written. Alternatively, you can use the write system call directly rather than fputs. Your kernel code will still be incorrect, but your program will terminate.
You can test this using strace and see all the system calls.
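A minimal sketch of a corrected dev_write along those lines: it copies the user data with copy_from_user() instead of reading the user pointer directly, null-terminates it, and returns len so stdio considers the write complete. The local buffer size simply mirrors the msg array from the question:

static ssize_t dev_write(struct file *file, const char __user *buf,
                         size_t len, loff_t *ppos)
{
    char kbuf[256];
    size_t n = len < sizeof(kbuf) - 1 ? len : sizeof(kbuf) - 1;

    if (copy_from_user(kbuf, buf, n))
        return -EFAULT;
    kbuf[n] = '\0';                       /* user data is not null-terminated */

    snprintf(msg, sizeof(msg), "Input:%s, Chars:%zu\n", kbuf, len);
    printk(KERN_NOTICE "%s\n", msg);

    return len;                           /* report everything as written */
}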
I've created WRITE_IOCTL in a kernel module and I call it from user mode:
ioctl(fd, WRITE_IOCTL, "Hello, Kernel!");
In kernel mode I have:
static int device_ioctl(struct file *filp,
                        unsigned int cmd, unsigned long args) {
    char buff[14];

    switch (cmd) {
    case WRITE_IOCTL:
        copy_from_user(buff, (char *)args, 14);
        printk("This message received from User Space: %s\n", buff);
        break;
    }
    return 0;
}
When I run this ioctl, I get something like this in /var/log/kern.log:
This message received from User Space: Hello, Kernel!vE�
This message received from User Space: Hello, Kernel!M�
This message received from User Space: Hello, Kernel!M�
How can I solve this problem?
Probably copy_from_user() isn't adding the null terminator, because the string at args is at least as long as your n, and printk() expects a null-terminated string, so you're printing garbage past the end of the buffer. To solve that, zero-initialize buff yourself:
char buff[14 + 1] = {0}; // +1 to make room for the null terminator.
This fills every byte of buff with zeros.
EDIT:
As @caf mentioned in the comments, you need to leave space for the null terminator. So, instead of passing exactly the buffer size to the function, pass n-1; the copy then stops one byte short, and the final byte remains the null terminator.
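Putting both pieces together, a sketch of the handler with the enlarged, zero-initialized buffer; sizes follow the question, and unlike the original this version also checks the copy_from_user() return value:

static int device_ioctl(struct file *filp,
                        unsigned int cmd, unsigned long args) {
    char buff[14 + 1] = {0};    /* the extra byte stays 0 as the terminator */

    switch (cmd) {
    case WRITE_IOCTL:
        /* copy at most sizeof(buff) - 1 bytes; the last byte stays '\0' */
        if (copy_from_user(buff, (char __user *)args, sizeof(buff) - 1))
            return -EFAULT;
        printk("This message received from User Space: %s\n", buff);
        break;
    }
    return 0;
}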
I'm trying to read data from a COM port line-by-line in Windows. In PuTTY, the COM connection looks fine - my serial device (an MSP430 Launchpad) outputs the string "Data" once per second. However, when I use a simple C program to read the COM port and print out the number of bytes read, then the data itself, the output is completely mangled:
0
6 Data
2 Data
4 ta
6 Data
3 Data
3 a
a
6 Data
6 Data
2 Data
The lines saying 6 Data are correct (four characters, then \r\n), but what's happening on the lines that don't contain a complete message? According to the documentation, ReadFile should read an entire line by default. Is this incorrect? Do I need to buffer the data myself and wait for a linefeed character?
Note that not all of those errors occur on every run of the code; I did a few runs and compiled a variety of errors for your viewing pleasure. Here's the code I'm using:
#include <windows.h>
#include <stdio.h>

static DCB settings;
static HANDLE serial;
static char line[200];
static unsigned long read;
static unsigned int lineLength = sizeof(line) / sizeof(char);

int main(void) {
    int i = 10;

    serial = CreateFile("COM4",
                        GENERIC_READ | GENERIC_WRITE,
                        0, NULL,
                        OPEN_EXISTING,
                        0, NULL);

    GetCommState(serial, &settings);
    settings.BaudRate = CBR_9600;
    settings.ByteSize = 8;
    settings.Parity = NOPARITY;
    settings.StopBits = ONESTOPBIT;
    SetCommState(serial, &settings);

    while (i) {
        ReadFile(serial, &line, lineLength, &read, 0);
        printf("%lu %s\n", read, line);
        i--;
    }

    scanf("%c", &read);
    return 0;
}
Compiled in Windows 7 64-bit using Visual Studio Express 2012.
What's happening is that ReadFile returns as soon as it gets any data. Since data may arrive on a serial port at any point in the future, ReadFile returns once some amount of data is available, rather than waiting for a full line. The same thing happens on Linux if you attempt to read from a serial port. The data you get back may or may not be an entire line, depending on how much information was in the buffer when your process got dispatched again.
If you take another look at the documentation, notice that it will only return a line when the HANDLE is in console mode:
Characters can be read from the console input buffer by using ReadFile with a handle to console input. The console mode determines the exact behavior of the ReadFile function. By default, the console mode is ENABLE_LINE_INPUT, which indicates that ReadFile should read until it reaches a carriage return. If you press Ctrl+C, the call succeeds, but GetLastError returns ERROR_OPERATION_ABORTED. For more information, see CreateFile.
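So yes: outside of console mode you need to buffer the bytes yourself and split on the line ending. A minimal sketch of that accumulation loop follows; readSerialLine is a made-up helper name, not a Windows API, and reading one byte at a time keeps the sketch simple at the cost of efficiency:

/* Collect bytes from the port until '\n' arrives, then hand back
 * one complete, null-terminated line in out. */
static BOOL readSerialLine(HANDLE port, char *out, DWORD outSize) {
    DWORD used = 0;
    while (used + 1 < outSize) {
        char c;
        DWORD got = 0;
        if (!ReadFile(port, &c, 1, &got, NULL))
            return FALSE;              /* I/O error */
        if (got == 0)
            continue;                  /* nothing available yet; keep waiting */
        out[used++] = c;
        if (c == '\n')
            break;                     /* end of line */
    }
    out[used] = '\0';
    return TRUE;
}

A production version would read in larger chunks and carry leftover bytes across calls, but the splitting logic is the same.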
I'm taking a networking class at school and am using C/GDB for the first time. Our assignment is to make a web server that communicates with a client browser. I am well underway and can open files and send them to the client. Everything goes great until I open a very large file, and then I get a seg fault. I'm not a pro at C/GDB, so I'm sorry if that is causing me to ask silly questions and miss the solution myself, but when I looked at the dumped core I saw that my seg fault comes from here:
if (-1 == (openfd = open(path, O_RDONLY)))
Specifically, we are tasked with opening the file and then sending it to the client browser. My algorithm goes:
Open/Error catch
Read the file into a buffer/Error catch
Send the file
We were also tasked with making sure that the server doesn't crash when SENDING very large files. But my problem seems to be with opening them. I can send all my smaller files just fine. The file in question is 29.5 MB.
The whole algorithm is:
ssize_t send_file(int conn, char *path, int len, int blksize, char *mime) {
    int openfd;        // File descriptor for file we open at path
    int temp;          // Counter for the size of the file that we send
    char buffer[len];  // Buffer to read the file we are opening that is len big

    // Open the file
    if (-1 == (openfd = open(path, O_RDONLY))) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    // Read from file
    if (-1 == read(openfd, buffer, len)) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    (void) close(openfd);

    // Send the buffer now
    logwrite(stdout, SUC_REQ);
    send_head(conn, mime, 200, len);
    send(conn, &buffer[0], len, 0);

    return len;
}
I don't know if it is just that I am a Unix/C novice. Sorry if it is. =( But your help is much appreciated.
It's possible I'm just misunderstanding what you meant in your question, but I feel I should point out that, in general, it's a bad idea to try to read the entire file at once, in case you have to deal with a file that's just too big for memory.
It's smarter to allocate a buffer of a specific size, say 8192 bytes (well, that's what I tend to use a lot, anyway), and just read and send that much at a time, as many times as necessary, until your read() operation returns 0 (with no errno set) for end of stream.
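A minimal sketch of that loop, reusing the question's conn socket and openfd descriptor; the stream_file name is made up, and error handling is pared down:

#include <unistd.h>
#include <sys/socket.h>

/* Stream the already-open file to the socket 8192 bytes at a time. */
static int stream_file(int openfd, int conn) {
    char chunk[8192];
    ssize_t got;

    while ((got = read(openfd, chunk, sizeof(chunk))) > 0) {
        ssize_t sent = 0;
        while (sent < got) {           /* send() may also be partial */
            ssize_t n = send(conn, chunk + sent, got - sent, 0);
            if (n == -1)
                return -1;
            sent += n;
        }
    }
    return (got == 0) ? 0 : -1;        /* 0 means clean end of stream */
}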
I suspect you have a stack overflow (I should get bonus points for using that term on this site).
The problem is that you are allocating the buffer for the entire file on the stack all at once. For larger files, this buffer is bigger than the stack, and the next time you try to call a function (and thus put some parameters for it on the stack) the program crashes.
The crash appears at the open line because allocating the buffer on the stack doesn't actually write any memory; it just moves the stack pointer. When your call to open tries to write its parameters to the stack, the top of the stack is now overflowed, and this causes the crash.
The solution, as Platinum Azure or dreamlax suggest, is to read the file in small pieces at a time or to allocate your buffer on the heap with malloc or new.
Rather than using a variable-length array, perhaps try allocating the memory using malloc:
char *buffer = malloc (len);
...
free (buffer);
I just did some simple tests on my system, and when I use variable length arrays of a big size (like the size you're having trouble with), I also get a SEGFAULT.
You're allocating the buffer on the stack, and it's way too big.
When you allocate storage on the stack, all the compiler does is decrease the stack pointer enough to make that much room (this keeps stack variable allocation to constant time). It does not try to touch any of this stacked memory. Then, when you call open(), it tries to put the parameters on the stack and discovers it has overflowed the stack and dies.
You need to either operate on the file in chunks, memory-map it (mmap()), or malloc() storage.
Also, path should be declared const char*.
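For the mmap() route, a sketch under the same assumptions; the send_mapped_file name is invented, and a real version would also loop over partial send() results as shown earlier:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/socket.h>
#include <unistd.h>

/* Map the file instead of copying it onto the stack, then send the mapping. */
static ssize_t send_mapped_file(int conn, const char *path) {
    struct stat st;
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;
    if (fstat(fd, &st) == -1) {
        close(fd);
        return -1;
    }

    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                          /* the mapping stays valid after close */
    if (data == MAP_FAILED)
        return -1;

    ssize_t sent = send(conn, data, st.st_size, 0);
    munmap(data, st.st_size);
    return sent;
}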