When using copy_to_user, it gives a bad address - C

I am trying to add a proc file to read some information from the kernel. But when I try to cat the information from the proc file, it gives a "bad address" error.
int proc_read(char *buffer, char **starter, off_t off, int count,
              int *eof, void *data)
{
    if (off > 0)
    {
        *eof = 1;
        return 0;
    }
    if (copy_to_user(buffer, info_str, info_str_size))
    {
        return -EFAULT;
    }
    return info_str_size;
}
After insmod, I use cat to read the proc file, but it gives the bad address error; info_str is a global char array.

The answer to your problem is surprisingly simple. In proc_read functions you don't need to use copy_to_user: a simple memcpy will do the job, since the buffer lives in kernel memory. If you're creating a proc_write function, however, you do need to use copy_from_user, since in this case the buffer lives in user memory.
One tip is that you should also probably signal EOF on success. This will save your function from needing to be called twice.
The following should suffice:
int proc_read(char *buffer, char **starter, off_t off, int count,
              int *eof, void *data)
{
    if (off > 0)
    {
        *eof = 1;
        return 0;
    }
    memcpy(buffer, info_str, info_str_size);
    *eof = 1;
    return info_str_size;
}
You should also note that this way of writing file entries is pretty old and you should probably avoid it. The seq_file interface is much less error prone (and will work with pagers like less and more). Take a look at http://lwn.net/Articles/22355/ if you're interested.
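If you're curious what that looks like, here is a rough sketch using seq_file and single_open. It assumes a kernel old enough to still register proc entries through struct file_operations (newer kernels use struct proc_ops instead), and the entry name "info" is just illustrative:

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

static char info_str[] = "example data";

static int info_show(struct seq_file *m, void *v)
{
    /* seq_file takes care of buffering, offsets, and EOF for us */
    seq_printf(m, "%s\n", info_str);
    return 0;
}

static int info_open(struct inode *inode, struct file *file)
{
    return single_open(file, info_show, NULL);
}

static const struct file_operations info_fops = {
    .owner   = THIS_MODULE,
    .open    = info_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

static int __init info_init(void)
{
    if (!proc_create("info", 0444, NULL, &info_fops))
        return -ENOMEM;
    return 0;
}

static void __exit info_exit(void)
{
    remove_proc_entry("info", NULL);
}

module_init(info_init);
module_exit(info_exit);
MODULE_LICENSE("GPL");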


reading data from large file into struct in C

I am a beginner to C programming. I need to efficiently read millions of records from a file into a struct. Below is an example of the input file.
2,33.1609992980957,26.59000015258789,8.003999710083008
5,15.85200023651123,13.036999702453613,31.801000595092773
8,10.907999992370605,32.000999450683594,1.8459999561309814
11,28.3700008392334,31.650999069213867,13.107999801635742
My current code is shown below. It prints the error "Error file", suggesting the file is NULL, but the file has data.
#include <stdio.h>
#include <stdlib.h>

struct O_DATA
{
    int index;
    float x;
    float y;
    float z;
};

int main ()
{
    FILE *infile ;
    struct O_DATA input;
    infile = fopen("input.dat", "r");
    if (infile == NULL);
    {
        fprintf(stderr,"\nError file\n");
        exit(1);
    }
    while(fread(&input, sizeof(struct O_DATA), 1, infile))
        printf("Index = %d X= %f Y=%f Z=%f", input.index , input.x , input.y , input.z);
    fclose(infile);
    return 0;
}
I need to efficiently read and store data from an input file to process it further. Any help would be really appreciated. Thanks in advance.
First, figure out how to convert one line of text to data:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct my_data
{
    unsigned int index;
    float x;
    float y;
    float z;
};

struct my_data *
deserialize_data(struct my_data *data, const char *input, const char *separators)
{
    /* sscanf returns the number of successful conversions: 4 here */
    if (sscanf(input, "%u,%f,%f,%f", &data->index, &data->x, &data->y, &data->z) != 4)
        return NULL;
    return data;
}
Or the same thing with strtok (which actually uses the separators argument) instead of sscanf:
struct my_data *
deserialize_data(struct my_data *data, const char *input, const char *separators)
{
    char *p;
    struct my_data tmp;
    char *str = strdup(input); /* make a copy of the input line because we modify it */
    if (!str) {                /* couldn't make a copy, so bail out */
        return NULL;
    }
    p = strtok (str, separators);     /* use line for first call to strtok */
    if (!p) goto err;
    tmp.index = strtoul (p, NULL, 0); /* convert text to integer */
    p = strtok (NULL, separators);    /* strtok remembers line */
    if (!p) goto err;
    tmp.x = atof(p);
    p = strtok (NULL, separators);
    if (!p) goto err;
    tmp.y = atof(p);
    p = strtok (NULL, separators);
    if (!p) goto err;
    tmp.z = atof(p);
    memcpy(data, &tmp, sizeof(tmp));  /* copy values out */
    goto out;
err:
    data = NULL;
out:
    free (str);
    return data;
}
int main() {
    struct my_data somedata;
    deserialize_data(&somedata, "1,2.5,3.12,7.955", ",");
    printf("index: %u, x: %.2f, y: %.2f, z: %.2f\n", somedata.index, somedata.x, somedata.y, somedata.z);
}
Combine it with reading lines from a file (just the main function here; insert the rest from the previous example):
int
main(int argc, char *argv[])
{
    FILE *stream;
    char *line = NULL;
    size_t len = 0;
    ssize_t nread;
    struct my_data somedata;

    if (argc != 2) {
        fprintf(stderr, "Usage: %s <file>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    stream = fopen(argv[1], "r");
    if (stream == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }

    while ((nread = getline(&line, &len, stream)) != -1) {
        deserialize_data(&somedata, line, ",");
        printf("index: %u, x: %.2f, y: %.2f, z: %.2f\n", somedata.index, somedata.x, somedata.y, somedata.z);
    }

    free(line);
    fclose(stream);
    exit(EXIT_SUCCESS);
}
You've got an incorrect ; after your if (infile == NULL) test - try removing that...
[Edit: 2nd by 9 secs! :-)]
if (infile == NULL);
{ /* floating block */ }
The above if is a complete statement that does nothing regardless of the value of infile. The "floating" block is executed no matter what infile contains.
Remove the semicolon to 'attach' the "floating" block to the if
if (infile == NULL)
{ /* if block */ }
You already have solid responses in regard to syntax/structs/etc, but I will offer another method for reading the data in the file itself: I like Martin York's CSVIterator solution. This is my go-to approach for CSV processing because it requires less code to implement and has the added benefit of being easily modifiable (i.e., you can edit the CSVRow and CSVIterator defs depending on your needs).
Here's a mostly complete example using Martin's unedited code without structs or classes. In my opinion, and especially so as a beginner, it is easier to start developing your code with simpler techniques. As your code begins to take shape, it is much clearer why and where you need to implement more abstract/advanced devices.
Note this would technically need to be compiled with C++11 or greater because of my use of std::stod (and maybe some other stuff too I am forgetting), so take that into consideration:
//your includes, e.g.:
#include <cstdio>
#include <string>
#include <vector>
#include <array>
#include <fstream>
#include "wherever_CSVIterator_is.h"

int main (int argc, char* argv[])
{
    int index;
    double tmp[3]; //since we know the shape of your input data
    std::vector<std::array<double, 3>> saved; //store a copy of each triplet
    std::vector<int> indices;
    std::ifstream file(argv[1]);

    for (CSVIterator loop(file); loop != CSVIterator(); ++loop) { //loop over rows
        index = std::stoi((*loop)[0]);
        indices.push_back(index); //store int index first, always col 0
        for (int k=1; k < (int)(*loop).size(); k++) { //loop across columns
            tmp[k-1] = std::stod((*loop)[k]); //save double values now
        }
        saved.push_back(std::array<double, 3>{tmp[0], tmp[1], tmp[2]});
    }

    /*now we have two vectors of the same 'size'
      (let's pretend I wrote a check here to confirm this is true),
      so we loop through them together and access with something like:*/
    for (int j=0; j < (int)indices.size(); j++) {
        const double* saved_ptr = saved.at(j).data(); //first elem of each triplet
        printf("\nindex: %d |", indices.at(j));
        for (int k=0; k < 3; k++) {
            printf(" %4.3f ", saved_ptr[k]);
        }
        printf("\n");
    }
}
Less fuss to write, and std::vector does the memory bookkeeping for us. Some unnecessary copying is present, but we benefit from not having to know exactly how much memory we need to allocate up front.
Don't just give an example of the input file. Specify your input file format -at least on paper or in comments- e.g. in EBNF notation (since your example is textual, it is not a binary file). Decide whether the numbers have to be on different lines (or whether you would accept a file with a single huge line made of millions of bytes; read about the Comma-Separated Values format). Then code a parser for that format. In your case, it is likely that some very simple recursive descent parsing is enough (and your particular parser won't even use recursion).
Read more about <stdio.h> and its routines. Take time to carefully read that documentation. Since your input is textual, not binary, you don't need fread. Notice that input routines can fail, and you should handle the failure case.
Of course, fopen can fail (e.g. because your working directory is not what you believe it is). You'll better use perror or errno to find more about the failure cause. So at least code:
infile = fopen("input.dat", "r");
if (infile == NULL) {
    perror("fopen input.dat");
    exit(EXIT_FAILURE);
}
Notice that semicolons (or their absence) are very important in C (there is no semicolon after the condition of an if). Read the basic syntax of the C language again. Read about How to debug small programs. Enable all warnings and debug info when compiling (with GCC, compile with gcc -Wall -g at least). The compiler warnings are very useful!
Remember that fscanf doesn't handle the end of line (newline) differently from a space character. So if the input has to be on separate lines, you need to read every line separately.
You'll probably read every line using fgets (or getline) and parse every line individually. You could do that parsing with the help of sscanf (perhaps %n could be useful) - and you want to use the return count of sscanf. You could also perhaps use strtok and/or strtod to do such parsing.
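For instance, a small sketch of a line parser that checks the sscanf return count and uses %n to know how many characters were consumed (the function name parse_line is just illustrative):

#include <stdio.h>

struct O_DATA { int index; float x; float y; float z; };

/* Parse one line; returns 1 on success, 0 on a malformed line. */
int parse_line(const char *line, struct O_DATA *d)
{
    int consumed = 0;
    /* %n records how many characters were consumed so far;
       it does not count toward the return value of sscanf. */
    if (sscanf(line, "%d,%f,%f,%f%n",
               &d->index, &d->x, &d->y, &d->z, &consumed) != 4)
        return 0;
    /* consumed could now be compared against the line length to reject
       trailing garbage, or used to continue parsing the same line. */
    return 1;
}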
Make sure that your parsing, and your entire program, is correct. With current computers (they are very fast, and most of the time your input file sits in the page cache) it is very likely to be fast enough. A million lines can be read pretty quickly; if on Linux, you could compare your parsing time with the time used by wc to count the lines of your file. On my computer (a powerful Linux desktop with an AMD 2970WX processor -it has lots of cores, but your program uses only one-, 64 GB of RAM, and an SSD) a million lines can be read by wc in less than 30 milliseconds, so I am guessing your entire program should run in less than half a second, if given a million lines of input, and if the further processing is simple (in linear time).
You are likely to fill a large array of struct O_DATA and that array should probably be dynamically allocated, and reallocated when needed. Read more about C dynamic memory allocation. Read carefully about C memory management routines. They could fail, and you need to handle that failure (even if it is very unlikely to happen). You certainly don't want to re-allocate that array at every loop. You probably could allocate it in some geometrical progression (e.g. if the size of that array is size, you'll call realloc or a new malloc for some int newsize = 4*size/3 + 10; only when the old size is too small). Of course, your array will generally be a bit larger than what is really needed, but memory is quite cheap and you are allowed to "lose" some of it.
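A rough sketch of that growth strategy, reading records from stdin (the names arr, size and count are illustrative, not from the question):

#include <stdio.h>
#include <stdlib.h>

struct O_DATA { int index; float x; float y; float z; };

int main(void)
{
    char line[256];
    size_t count = 0, size = 0;
    struct O_DATA *arr = NULL;

    while (fgets(line, sizeof line, stdin) != NULL) {
        struct O_DATA rec;
        if (sscanf(line, "%d,%f,%f,%f", &rec.index, &rec.x, &rec.y, &rec.z) != 4)
            continue;                            /* skip malformed lines */
        if (count >= size) {
            size_t newsize = 4 * size / 3 + 10;  /* geometric growth */
            struct O_DATA *tmp = realloc(arr, newsize * sizeof *arr);
            if (tmp == NULL) {                   /* allocation can fail */
                perror("realloc");
                free(arr);
                return EXIT_FAILURE;
            }
            arr = tmp;
            size = newsize;
        }
        arr[count++] = rec;
    }

    printf("read %zu records\n", count);
    /* ... further processing of arr[0..count-1] goes here ... */
    free(arr);
    return EXIT_SUCCESS;
}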
But StackOverflow is not a "do my homework" site. I gave some advice above, but you should do your homework.

#define macro causing segmentation fault or something else causing segfault?

I am using libnfs for C. I am calling nfs_read, and it takes the number of bytes to read as a uint64_t variable. I have the size defined as a macro (#define COUNT 100). I mostly get a segfault (or sometimes some other error depending on what value I choose, but always the same error for a given value) for any size value greater than 24. I also tried changing the #define to a global uint64_t. I first had it in a header file and also moved it from the header file to the .c file. But the result is always the same (if the size is greater than 24): segfault.
But when I pass the value directly (as a hard-coded value) to nfs_read, I do not get a segfault, for any value of size (<24 or >24).
I have done a fair number of projects in C before and have never faced such an error. Any idea what could be happening here? Thanks.
typedef struct {
    int is_nfs;
    int fd;
    struct nfs_context *nfs;
    struct nfsfh *fh;
} nfs_fd_t;

#define COUNT 100

// The open function is the same as in the linked example, except that I added
// the nfs_fd_t struct to store nfs and fh
int dd_open(const char *path, int flags, mode_t mode, nfs_fd_t *nfs_fd);

ssize_t read_wrapper(nfs_fd_t *nfs_fd)
{
    char * buf = malloc(COUNT);
    int ret = dd_read(nfs_fd, buf, COUNT);
    // follow up logic
}

ssize_t dd_read(nfs_fd_t *nfs_fd, void *buf, uint64_t count)
{
    int ret;
    if ((ret = nfs_read((*nfs_fd).nfs, (*nfs_fd).fh, count, (char *)buf)) < 0) {
        errno = -ret;
        return -1;
    }
    return ret;
}
nfs_fd_t is a struct containing nfs and fh. I am basically following this example, only slightly modified for my needs.

How to get the hostname of mac os in C

I'm trying to get the hostname of my school's macOS machines. I can't use gethostname(), as it's in section 3 of the man pages on my school's Macs, instead of section 2. Is there another way of getting the hostname, without using gethostname()? I'm only allowed to use libc functions from man section 2.
gethostname is just a sysctl, and sysctl is just a syscall.
And syscalls are (per definition) in section 2 of the manual.
So grab your favourite disassembler (or otool -tV if you have none), nm the libraries in /usr/lib/system to find out which ones export _gethostname and _sysctl, and get to work (or look up the source :P).
Below I re-implemented gethostname using sysctl, and sysctl using syscall:
#include <sys/syscall.h> // SYS_sysctl
#include <sys/sysctl.h>  // CTL_KERN, KERN_HOSTNAME
#include <unistd.h>      // syscall

int sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp, void *newp, size_t newlen)
{
    return syscall(SYS_sysctl, name, namelen, oldp, oldlenp, newp, newlen);
}

int gethostname(char *buf, size_t buflen)
{
    int name[] = { CTL_KERN, KERN_HOSTNAME };
    size_t namelen = 2;
    return sysctl(name, namelen, buf, &buflen, NULL, 0);
}

int puts(const char *s)
{
    // left as an exercise to the reader ;)
}

int main(void)
{
#define BUFSIZE 256
    char buf[BUFSIZE];
    size_t buflen = BUFSIZE;
    if(gethostname(buf, buflen) == 0)
    {
        puts(buf);
    }
    return 0;
}
The implementation of sysctl isn't too complicated; you really just slap SYS_sysctl (from sys/syscall.h) in front of the other arguments and pass them all on to syscall.
To understand the implementation of gethostname, you have to know how sysctl works:
oldp is where the queried value will be stored.
newp is where the new value will be read from. Since we're not setting any new value, this is NULL here.
name is more or less the actual list of arguments to sysctl, and its contents depend on the actual sysctl being queried.
CTL_KERN denotes that we want something from the kernel.
KERN_HOSTNAME denotes that we'd like to retrieve the hostname.
And since KERN_HOSTNAME doesn't take any arguments, that's all there is to it.
Just for demonstration, had you called KERN_PROCARGS, name would require an additional argument, namely the ID of the process whose arguments should be retrieved.
In that case, name would look like this:
int name[] = { CTL_KERN, KERN_PROCARGS, pid };
and namelen would have to be set to 3 accordingly.
Now in the above implementation I've made use of puts, which you're obviously not allowed to do, but I trust you can figure out how to re-implement strlen and use the write syscall with that. ;)
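For what it's worth, one possible way to fill in that puts (a hand-rolled strlen plus the write syscall, which is in section 2) might look like this; my_strlen is just an illustrative name:

#include <unistd.h>   // write

static size_t my_strlen(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

int puts(const char *s)
{
    // write the string, then a newline, to stdout (fd 1)
    if (write(1, s, my_strlen(s)) < 0)
        return -1;
    if (write(1, "\n", 1) < 0)
        return -1;
    return 0;
}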

Is there some kind of security issue in this code?

This is my code:
void handle(int s)
{
    char inbuf[4096]; // we defined inbuf with size 4096
    dup2(s, 0);
    dup2(s, 1);
    setbuf(stdout, NULL);
    alarm(ALARM_TIMEOUT_SEC);
    printf("crackme> ");
    if (NULL == fgets(inbuf, sizeof(inbuf), stdin)) {
        return;
    }
    right_trim(inbuf);
    if (is_correct(inbuf)) {
        printf("Good job!\n");
    }
}
And if there is, what is the problem?
A little explanation about this program:
The first part of it runs a server.
When someone connects to the server and enters input (I did a loop to check the length of the input), this function looks at it, and if the password is correct, it prints "Good job!".
Answering the question as titled: yes. The use of
void setbuf(FILE *stream, char *buffer);
is deprecated due to security issues, and is retained by MSVC only for compatibility purposes. Please use
int setvbuf(FILE *stream, char *buffer, int mode, size_t size);
which is more secure since the buffer size is also provided (and more flexible, since a mode is provided as well), and it returns a value to indicate success or failure.
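For instance, the setbuf(stdout, NULL) line inside handle() could be replaced with a setvbuf call along these lines (a sketch; how you handle the failure is up to you):

/* equivalent of setbuf(stdout, NULL): make stdout unbuffered,
 * but with an explicit mode and a return value you can check */
if (setvbuf(stdout, NULL, _IONBF, 0) != 0) {
    return; /* or otherwise handle the failure */
}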

C Delete last n characters from file

I need to delete the last n characters from a file using C code. At first I was trying to use '\b', but it returns a segmentation fault. I have seen interesting answers to similar questions here and here, but I would prefer to use the mmap function to do this, if possible. I know it could be simpler to truncate the file by creating a temp file and writing chars to the temp file up to some offset of the original file. The problem is I don't seem to understand how to use the mmap function to do this; I can't see what parameters I need to pass to it, especially address, length and offset. From what I've read, I should use MAP_SHARED in flags and PROT_READ|PROT_WRITE in protect.
The function definition says:
void * mmap (void *address, size_t length, int protect, int flags, int filedes, off_t offset)
Here is my main:
int main(int argc, char * argv[])
{
    FILE * InputFile;
    off_t position;
    int charsToDelete;

    if ((InputFile = fopen(argv[1],"r+")) == NULL)
    {
        printf("tdes: file not found: %s\n",argv[1]);
    }
    else
    {
        charsToDelete = 5;
        fseeko(InputFile,-charsToDelete,SEEK_END);
        position = ftello(InputFile);
        printf("Pos: %d\n",(int)position);
        int i;
        //for(i = 0;i < charsToDelete;i++)
        //{
        //    putc(InputFile,'\b');
        //}
    }
    fclose(InputFile);
    return 0;
}
Why not use:
#include <unistd.h>
#include <sys/types.h>
int truncate(const char *path, off_t length);
int ftruncate(int fd, off_t length);
like for instance:
charsToDelete = 5;
fseeko(InputFile,-charsToDelete,SEEK_END);
position = ftello(InputFile);
ftruncate(fileno(InputFile), position);
Read all but the last n bytes from the file and write them to a temporary file, close the original file, then rename the temporary file as the original file (see the sketch below).
Or use e.g. truncate or a similar function if you have it.
Also, failure to open the file doesn't necessarily mean that it can't be found; you should check errno on failure to see what the error is. Use e.g. strerror to get a printable string from the error code.
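A minimal sketch of the temporary-file approach (the temp file name and the simple byte-copy loop are just illustrative; a real version would check every call):

#include <stdio.h>

int delete_last_n(const char *path, long n)
{
    FILE *in = fopen(path, "rb");
    if (in == NULL) { perror("fopen"); return -1; }

    FILE *out = fopen("tmpfile.tmp", "wb");
    if (out == NULL) { perror("fopen tmp"); fclose(in); return -1; }

    fseek(in, 0, SEEK_END);
    long keep = ftell(in) - n;          /* number of bytes to keep */
    rewind(in);

    for (long i = 0; i < keep; i++)     /* copy everything but the tail */
        fputc(fgetc(in), out);

    fclose(in);
    fclose(out);
    return rename("tmpfile.tmp", path); /* replace the original file */
}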
Unfortunately, mmap does not allow you to change the size of the underlying file object.
Instead, I would recommend simply truncating your file, using something like this:
truncate(filename, new_length);
