I've seen the following snippet of code used repeatedly for Linux hwmon devices:
return sprintf(buf, "%d\n", in_input);
Here buf is a pointer to char (char *buf), and in_input is usually an int or u16. The purpose is to copy the value read back from the device into the sysfs file created for this device attribute.
As an example, take a look at Linux/drivers/hwmon/mcp3021.c (or almost any hwmon device prior to the 4.9 kernel). You can see that the function at line 81 returns a u16, and on line 99 the code stores the u16 into the char *buf.
81 static inline u16 volts_from_reg(struct mcp3021_data *data, u16 val)
82 {
83         return DIV_ROUND_CLOSEST(data->vdd * val, 1 << data->output_res);
84 }
85
86 static ssize_t show_in_input(struct device *dev, struct device_attribute *attr,
87                              char *buf)
88 {
89         struct i2c_client *client = to_i2c_client(dev);
90         struct mcp3021_data *data = i2c_get_clientdata(client);
91         int reg, in_input;
92
93         reg = mcp3021_read16(client);
94         if (reg < 0)
95                 return reg;
96
97         in_input = volts_from_reg(data, reg);
98
99         return sprintf(buf, "%d\n", in_input);
100 }
Wouldn't this code always cause a buffer overflow? We are storing a multi-byte value into a buffer declared as plain char *. Why is this permitted in Linux device drivers?
Note that my driver does the exact same thing, and sysfs displays the returned value correctly even though it cannot possibly fit in a single char (8 bits). So I am wondering whether the char * representation is simply not the whole story.
According to the documentation in sysfs.txt, the buffer passed to the show function is of size PAGE_SIZE:
sysfs allocates a buffer of size (PAGE_SIZE) and passes it to the method. Sysfs will call the method exactly once for each read or write.
Since PAGE_SIZE is certainly bigger than the textual representation of a small integer, there is no practical possibility of a buffer overflow here. The type char * only says the buffer is addressed byte by byte; it says nothing about the buffer's length.
The posted code shows poor programming style, but as long as buf is known to point to an array of at least 8 bytes, the behavior is defined and sprintf will not overflow the buffer.
Note that show_in_input does not receive the size of the destination buffer; the API would have to be changed before snprintf() could be used.
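For comparison, here is a minimal sketch (not taken from the driver) of the same show routine with the bound written out explicitly via scnprintf(); it assumes the PAGE_SIZE guarantee quoted from sysfs.txt above, and the function name is made up:
/* Hypothetical sketch: same logic as show_in_input, but with the
 * buffer bound made explicit. sysfs guarantees buf points to a full
 * page, so scnprintf(buf, PAGE_SIZE, ...) can never write past it. */
static ssize_t show_in_input_bounded(struct device *dev,
                                     struct device_attribute *attr,
                                     char *buf)
{
        struct i2c_client *client = to_i2c_client(dev);
        struct mcp3021_data *data = i2c_get_clientdata(client);
        int reg, in_input;

        reg = mcp3021_read16(client);
        if (reg < 0)
                return reg;

        in_input = volts_from_reg(data, reg);

        return scnprintf(buf, PAGE_SIZE, "%d\n", in_input);
}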
I'm doing a school assignment where the task at hand is to count files and folders recursively. I use the readdir() function, and it seems to iterate through the directory I give it.
int listdir(const char *path)
{
    struct dirent *entry;
    DIR *dp;

    dp = opendir(path);
    if (dp == NULL)
    {
        perror("opendir");
        return -1;
    }
    while ((entry = readdir(dp)))
        puts(entry->d_name);
    closedir(dp);
    return 0;
}
I want to see the "something++;" step of this function; there should be one, right? All I can find is this line in glibc's dirent/dirent.h:
extern struct dirent *readdir (DIR *__dirp) __nonnull ((1));
and
struct dirent *
__readdir (DIR *dirp)
{
__set_errno (ENOSYS);
return NULL;
}
weak_alias (__readdir, readdir)
in dirent/readdir.c
Where does the iteration happen?
Maybe a duplicate of How readdir function is working inside while loop in C?
I tried grepping through the glibc source code for readdir and didn't find it, then searched the Internet and didn't find it either, although some say there is an obsolete Linux system call also called readdir.
There is also this:
"The readdir() function returns a pointer to a dirent structure representing the next directory entry in the directory stream pointed to by dirp. It returns NULL on reaching the end of the directory stream or if an error occurred."
and this:
"The order in which filenames are read by successive calls to readdir() depends on the filesystem implementation; it is unlikely that the names will be sorted in any fashion."
in man readdir.
From this answer - https://stackoverflow.com/a/9344137/12847376 - I gather that functions can be interposed with LD_PRELOAD, but I see no such variable in my default shell, and the Debian source search gives too many hits.
I also grepped through the Linux kernel for LD_PRELOAD and readdir and got too many results for the syscall.
I'm not sure exactly what you are trying to accomplish. I have implemented something similar to this for another language's core library, so I can say there is no ++something. The reason is that the structures returned by the operating system do not have a consistent size. The structure looks something like the following:
struct dirent {
    long           d_ino;
    off_t          d_off;
    unsigned short d_reclen;
    char           d_type;
    char           d_name[];
};
You pass a buffer to the system call (I used getdents64), and it fills it in with a bunch of these dirent structures. That d_name[] does not have an officially known size. The size of the entire structure is defined by that d_reclen member of the struct.
In memory, you could have many struct dirent like this:
[0] [1] [2]
44,0,24,DT_REG,"a.txt",41,0,47,DT_DIR,"a_really_long_directory_name",...
Here is a rough translation of how it works:
uint8_t buf[BUFLEN];
long n = getdents64(dfd, buf, BUFLEN);
if (n < 0) {
    // error
}
// buf now holds n bytes' worth of packed dirent structs
struct dirent *d;
long i = 0;
for (; i < n; i += d->d_reclen) { // <<<< this is the trick
    d = (struct dirent *)&buf[i];
    // do something with d
}
Notice the way we increment i. Since the d_name member does not have an official size, we cannot just say struct dirent d[COUNT];. We don't know how big each struct will be.
Where does the iteration happen?
On Linux, it happens here. As you can see, the code repeatedly calls getdents (system call) to obtain a set of entries from the kernel, and "advances" the dp by updating dirp->offset, etc.
/* Read a directory entry from DIRP.  */
struct dirent *
__readdir_unlocked (DIR *dirp)
{
  struct dirent *dp;
  int saved_errno = errno;

  if (dirp->offset >= dirp->size)
    {
      /* We've emptied out our buffer.  Refill it.  */

      size_t maxread = dirp->allocation;
      ssize_t bytes;

      bytes = __getdents (dirp->fd, dirp->data, maxread);
      if (bytes <= 0)
        {
          /* Linux may fail with ENOENT on some file systems if the
             directory inode is marked as dead (deleted).  POSIX
             treats this as a regular end-of-directory condition, so
             do not set errno in that case, to indicate success.  */
          if (bytes == 0 || errno == ENOENT)
            __set_errno (saved_errno);
          return NULL;
        }
      dirp->size = (size_t) bytes;

      /* Reset the offset into the buffer.  */
      dirp->offset = 0;
    }

  dp = (struct dirent *) &dirp->data[dirp->offset];
  dirp->offset += dp->d_reclen;
  dirp->filepos = dp->d_off;

  return dp;
}
I'm working on something that sends data to a TCP server, but first it is supposed to send the size of the data in 8 bytes.
That is, the server will read the first 8 bytes sent to it and cast them back into a size_t variable. My problem is that when the file size doesn't use any of the top bytes (i.e. 83 = 0000000S as chars, not hex), only the non-zero bytes get sent.
This is how I do it:
void send_file_to_server(char *filename){
    struct stat buf;
    if (stat(filename, &buf)==-1){ exit(1); }
    size_t file_size = buf.st_size;
    char *filesize_string = calloc(1, 8);
    filesize_string = (char*)&file_size;
    //this function actually writes to the server
    write_to_server((char*) filesize_string);
    // will be code later on that sends the actual file using write_to_server()
}
The char * passed into my write_to_server() function shows some weird behavior: the function sees it as a string of size 6, and it gets distorted from what it was before being passed in. Any advice on how to make this work is appreciated.
Note: I do not have to worry about endianness (htonl, etc.) or a differing size of size_t since this is for a project that will only ever be run on a specific VM.
Edit:
Here is the other function:
void write_to_server(char *message){
    ssize_t bytes_sent = 0;
    ssize_t message_size = strlen(message);
    while ( bytes_sent < message_size ){
        ssize_t ret = write(server_socket, message+bytes_sent, message_size-bytes_sent);
        if (ret==0){
            print_connection_closed();
            exit(1);
        }
        if (ret==-1 && (errno!=EINTR || errno!=EAGAIN)){
            printf("write failed: sent %zd bytes out of %zd\n", bytes_sent, message_size);
            exit(1);
        }
        if (ret!=-1){ bytes_sent+=ret; }
    }
}
You can't use strlen() to determine the length of binary data: it will miscount as soon as it encounters a zero (NUL) byte in the binary encoding of the length field.
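As a quick illustration (a user-space sketch, assuming a little-endian machine like the VM mentioned in the question), a small length prefix contains embedded NUL bytes:
#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t file_size = 83;            /* 0x53 */
    char bytes[sizeof file_size];

    memcpy(bytes, &file_size, sizeof file_size);
    /* Little-endian layout: 53 00 00 00 00 00 00 00.
       strlen() stops at the first NUL byte, so it reports 1, not 8. */
    printf("strlen sees %zu of %zu bytes\n", strlen(bytes), sizeof bytes);
    return 0;
}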
Write a more "primitive" function that takes the address of the data and its length as parameters, e.g.
void write_to_server_raw(const void *message, size_t message_size) {
    ...
}
If you still need the ability to send NUL-terminated strings, you can then rewrite your existing write_to_server() function so that it calls the new function to do the real work.
void write_to_server_string(const char *message) {
    size_t message_size = strlen(message);
    write_to_server_raw(message, message_size);
}
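A minimal sketch of what the raw variant could look like, reusing the question's global server_socket and its retry loop (note the errno test joins the conditions with &&; the original (errno!=EINTR || errno!=EAGAIN) version is always true):
void write_to_server_raw(const void *message, size_t message_size) {
    const char *p = message;
    size_t bytes_sent = 0;

    while (bytes_sent < message_size) {
        ssize_t ret = write(server_socket, p + bytes_sent,
                            message_size - bytes_sent);
        if (ret == 0) {
            print_connection_closed();
            exit(1);
        }
        if (ret == -1 && errno != EINTR && errno != EAGAIN) {
            printf("write failed: sent %zu bytes out of %zu\n",
                   bytes_sent, message_size);
            exit(1);
        }
        if (ret > 0)
            bytes_sent += ret;
    }
}
Sending the length prefix then becomes write_to_server_raw(&file_size, sizeof file_size); with no intermediate calloc or pointer cast needed.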
I'm trying to write a Linux kernel module that can dump the contents of other modules to a /proc file (for analysis). In principle it works, but I seem to run into some buffer limit or the like. I'm still rather new to Linux kernel development, so I would also appreciate general suggestions beyond the particular problem.
The memory that is used to store the module is allocated in this function:
char *get_module_dump(int module_num)
{
    struct module *mod = unhiddenModules[module_num];
    char *buffer;

    buffer = kmalloc(mod->core_size * sizeof(char), GFP_KERNEL);
    memcpy((void *)buffer, (void *)startOf(mod), mod->core_size);
    return buffer;
}
'unhiddenModules' is an array of module structs
Then it is handed over to the proc creation here:
void create_module_dump_proc(int module_number)
{
    struct proc_dir_entry *dump_module_proc;

    dump_size = unhiddenModules[module_number]->core_size;
    module_buffer = get_module_dump(module_number);
    sprintf(current_dump_file_name, "%s_dump", unhiddenModules[module_number]->name);
    dump_module_proc = proc_create_data(current_dump_file_name, 0, dump_proc_folder, &dump_fops, module_buffer);
}
The proc read function is as follows:
ssize_t dump_proc_read(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    char *data;
    ssize_t ret;

    data = PDE_DATA(file_inode(filp));
    ret = copy_to_user(buf, data, dump_size);
    *offp += dump_size - ret;
    if (*offp > dump_size)
        return 0;
    else
        return dump_size;
}
Smaller modules are dumped correctly, but if the module is larger than 126,796 bytes, only the first 126,796 bytes are written, and this error is displayed when reading from the proc file:
*** Error in `cat': free(): invalid next size (fast): 0x0000000001f4a040 ***
I seem to have run into some limit, but I couldn't find anything on it. The error looks like memory corruption, yet the buffer should be large enough, so I don't see where this actually happens.
procfs has a limit of PAGE_SIZE (one page) per read or write operation. Usually seq_file is used to iterate over the entries (modules, in your case?) and read and/or write smaller chunks. Since you only run into problems with larger data, I suspect this is the case here.
Please have a look here and here if you are not familiar with seq_files.
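For illustration, here is a minimal hedged sketch of what the seq_file route could look like for this dump; it assumes a pre-5.6 kernel (procfs still taking a struct file_operations, matching the question's PDE_DATA usage), and it would replace the question's dump_fops:
/* seq_file grows its internal buffer as needed, so dumps larger than
 * PAGE_SIZE are handed to userspace in correctly sized chunks. */
static int dump_seq_show(struct seq_file *m, void *v)
{
    seq_write(m, m->private, dump_size);
    return 0;
}

static int dump_seq_open(struct inode *inode, struct file *file)
{
    /* Pass the module buffer through, as proc_create_data() did. */
    return single_open(file, dump_seq_show, PDE_DATA(inode));
}

static const struct file_operations dump_fops = {
    .owner   = THIS_MODULE,
    .open    = dump_seq_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};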
A suspicious thing is that dump_proc_read does not use the count parameter. I would have expected copy_to_user to take count as its third argument instead of dump_size (and the subsequent calculations to use it too). As written, dump_size bytes are always copied to user space, regardless of how much data the application asked for; the bigger dump_size is, the more of the user buffer gets corrupted.
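A hedged sketch of a handler that respects count, using the kernel's simple_read_from_buffer() helper (it clamps to count, advances *offp, and returns the number of bytes actually copied; dump_size is assumed to hold the buffer length, as in the question):
static ssize_t dump_proc_read(struct file *filp, char __user *buf,
                              size_t count, loff_t *offp)
{
    char *data = PDE_DATA(file_inode(filp));

    /* Copies at most count bytes starting at *offp, never past
     * dump_size, and updates *offp for the next read() call. */
    return simple_read_from_buffer(buf, count, offp, data, dump_size);
}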
I'm trying to configure an SAA6752HS chip (an MPEG-2 encoder) over the I2C bus, using a Raspberry Pi as a development kit. It was a piece of cake until I had to write to address 0xC2 of the chip. For this task, I have to use an I2C command that expects a payload of 189 bytes, and so I stumbled upon a 32-byte limitation inside the I2C driver, defined by I2C_SMBUS_BLOCK_MAX in /usr/include/linux/i2c.h. It is not possible to force a different maximum. Everything in the I2C library ends up in the function i2c_smbus_access, and any request with more than 32 bytes makes ioctl return -1. I have no idea how to debug this so far.
static inline __s32 i2c_smbus_access(int file, char read_write, __u8 command,
                                     int size, union i2c_smbus_data *data)
{
    struct i2c_smbus_ioctl_data args;

    args.read_write = read_write;
    args.command = command;
    args.size = size;
    args.data = data;
    return ioctl(file, I2C_SMBUS, &args);
}
I can't understand why there is such a limitation, considering that there are devices that require more than 32 bytes of payload to work (the SAA6752HS is one example).
Is there a way to overcome this limitation without writing a new driver?
Thank you in advance.
Here's the documentation for the Linux i2c interface: https://www.kernel.org/doc/Documentation/i2c/dev-interface
At the simplest level you can use ioctl(I2C_SLAVE) to set the slave address and the write system call to write the command. Something like:
void i2c_write(int file, int address, int subaddress, int size, char *data) {
    char buf[size + 1];              // note: variable length array
    ioctl(file, I2C_SLAVE, address); // real code would need to check for an error
    buf[0] = subaddress;             // need to send everything in one call to write
    memcpy(buf + 1, data, size);     // so copy subaddress and data to a buffer
    write(file, buf, size + 1);
}
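If you'd rather keep using ioctl-based transfers, a hedged alternative sketch uses the I2C_RDWR ioctl, which also bypasses the SMBus 32-byte limit by issuing a raw I2C transfer (the function name is made up for illustration):
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <string.h>
#include <sys/ioctl.h>

int i2c_write_rdwr(int file, int address, int subaddress, int size, char *data)
{
    char buf[size + 1];              /* subaddress followed by payload */
    struct i2c_msg msg;
    struct i2c_rdwr_ioctl_data xfer = { .msgs = &msg, .nmsgs = 1 };

    buf[0] = subaddress;
    memcpy(buf + 1, data, size);

    msg.addr  = address;             /* 7-bit slave address */
    msg.flags = 0;                   /* 0 means write */
    msg.len   = size + 1;
    msg.buf   = (unsigned char *)buf;

    return ioctl(file, I2C_RDWR, &xfer); /* returns < 0 on error */
}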
If the write call returns -1, make sure you opened the file descriptor with
int fd = open("/dev/i2c-1", O_RDWR);
and not
int fd = open("/dev/i2c-1", I2C_RDWR);
I have always been told (in books and tutorials) that when copying data from kernel space to user space we should use copy_to_user(), and that using memcpy() would cause problems for the system. Recently, by mistake, I used memcpy() and it worked perfectly fine, without any problems. Why is it that we should use copy_to_user() instead of memcpy()?
My test code (a kernel module) is something like this:
static ssize_t test_read(struct file *file, char __user *buf,
                         size_t len, loff_t *offset)
{
    char ani[100];
    if (!*offset) {
        memset(ani, 'A', 100);
        if (memcpy(buf, ani, 100))
            return -EFAULT;
        *offset = 100;
        return *offset;
    }
    return 0;
}
struct file_operations test_fops = {
    .owner = THIS_MODULE,
    .read = test_read,
};

static int __init my_module_init(void)
{
    struct proc_dir_entry *entry;

    printk("We are testing now!!\n");
    entry = create_proc_entry("test", S_IFREG | S_IRUGO, NULL);
    if (!entry)
        printk("Failed to create proc entry test\n");
    entry->proc_fops = &test_fops;
    return 0;
}

module_init(my_module_init);
From a user-space app, I read my /proc entry and everything works fine.
A look at the source code of copy_to_user() suggests it is just a memcpy() preceded by a check that the pointer is valid, via access_ok.
So my current understanding is that, if we are sure about the pointer we are passing, memcpy() can always be used in place of copy_to_user().
Please correct me if my understanding is incorrect; also, any example where copy_to_user() works and memcpy() fails would be very useful. Thanks.
There are a couple of reasons for this.
First, security. Because the kernel can write to any address it wants, if you blindly take a user-supplied address and use memcpy, an attacker could make you write into another process's pages, which is a huge security problem. copy_to_user checks that the target range is writable by the current process.
There are also some architecture considerations. On x86, for example, the target pages must be pinned in memory. On some architectures, you need special instructions. And so on. The Linux kernel's goal of being very portable requires this kind of abstraction.
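For reference, a minimal hedged sketch of the posted handler rewritten with copy_to_user(); the surrounding proc setup from the question is assumed unchanged:
static ssize_t test_read(struct file *file, char __user *buf,
                         size_t len, loff_t *offset)
{
    char ani[100];

    if (!*offset) {
        if (len < sizeof(ani))
            return -EINVAL;
        memset(ani, 'A', sizeof(ani));
        /* copy_to_user() returns the number of bytes it could NOT copy,
         * so any nonzero result means the user pointer faulted. */
        if (copy_to_user(buf, ani, sizeof(ani)))
            return -EFAULT;
        *offset = sizeof(ani);
        return *offset;
    }
    return 0;
}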
This answer may be late, but anyway: copy_to_user() and its sister copy_from_user() both perform size checks of the user-supplied size parameter against the kernel buffer size, so with a read method of:
char name[] = "This message is from kernel space";

ssize_t read(struct file *f, char __user *to, size_t size, loff_t *loff){
    unsigned long ret = copy_to_user(to, name, size);
    if(ret){
        pr_info("[+] Error while copying data to user space");
        return -EFAULT; /* ret holds bytes not copied, not an errno */
    }
    pr_info("[+] Finished copying data to user space");
    return size; /* report the number of bytes copied */
}
a user-space call such as read(ret, buffer, 10); is OK, but replace 10 with 35 or more and the kernel will emit this error:
Buffer overflow detected (34 < 35)!
and cause the copy to fail, preventing kernel memory beyond the buffer from leaking to user space. The same goes for copy_from_user(), which performs similar checks on the kernel buffer size.
That's why you have to use char name[] and not char *name: with a pointer (rather than an array) the compiler cannot determine the object's size, and the kernel will emit this error:
BUG: unable to handle page fault for address: ffffffffc106f280
#PF: supervisor write access in kernel mode
#PF: error_code(0x0003) - permissions violation
Hope this answer is helpful somehow.