#define macro causing segmentation fault or something else causing segfault? - c

I am using libnfs from C. I am calling nfs_read, which takes the number of bytes to read as a uint64_t. I have that size defined as a macro (#define COUNT 100). For any size value greater than 24 I mostly get a segfault (sometimes a different error depending on the value I choose, but always the same error for a given value). I also tried changing the #define to a global uint64_t, first in a header file and then moved from the header into the .c file, but the result is always the same: a segfault whenever the size is greater than 24.
However, when I pass the value directly to nfs_read as a hard-coded literal, I do not get a segfault for any value of size (below or above 24).
I have done a fair number of projects in C before and have never faced such an error. Any idea what could be happening here? Thanks.
typedef struct {
    int is_nfs;
    int fd;
    struct nfs_context *nfs;
    struct nfsfh *fh;
} nfs_fd_t;

#define COUNT 100
// The open function is the same as in the linked example, except that I added the nfs_fd_t struct to store nfs and fh
int dd_open(const char *path, int flags, mode_t mode, nfs_fd_t *nfs_fd);
ssize_t read_wrapper(nfs_fd_t *nfs_fd)
{
    char *buf = malloc(COUNT);
    int ret = dd_read(nfs_fd, buf, COUNT);
    // follow-up logic
}
ssize_t dd_read(nfs_fd_t *nfs_fd, void *buf, uint64_t count)
{
    int ret;
    if ((ret = nfs_read((*nfs_fd).nfs, (*nfs_fd).fh, count, (char *)buf)) < 0) {
        errno = -ret;
        return -1;
    }
    return ret;
}
nfs_fd_t is a struct containing nfs and fh. I am basically following this example, only slightly modified for my needs.

Related

How to get the hostname on macOS in C

I'm trying to get the hostname of my school's macOS machine. I can't use gethostname() because it's in section 3 of the man pages on my school Macs, not section 2. Is there another way of getting the hostname without using gethostname()? I'm only allowed to use libc functions from man section 2.
gethostname is just a sysctl, and sysctl is just a syscall.
And syscalls are (per definition) in section 2 of the manual.
So grab your favourite disassembler (or otool -tV if you have none), nm the libraries in /usr/lib/system to find out which ones export _gethostname and _sysctl, and get to work (or look up the source :P).
Below I re-implemented gethostname using sysctl, and sysctl using syscall:
#include <sys/syscall.h> // SYS_sysctl
#include <sys/sysctl.h>  // CTL_KERN, KERN_HOSTNAME
#include <unistd.h>      // syscall

int sysctl(int *name, u_int namelen, void *oldp, size_t *oldlenp, void *newp, size_t newlen)
{
    return syscall(SYS_sysctl, name, namelen, oldp, oldlenp, newp, newlen);
}

int gethostname(char *buf, size_t buflen)
{
    int name[] = { CTL_KERN, KERN_HOSTNAME };
    size_t namelen = 2;
    return sysctl(name, namelen, buf, &buflen, NULL, 0);
}

int puts(const char *s)
{
    // left as an exercise to the reader ;)
}

int main(void)
{
#define BUFSIZE 256
    char buf[BUFSIZE];
    size_t buflen = BUFSIZE;

    if (gethostname(buf, buflen) == 0)
    {
        puts(buf);
    }
    return 0;
}
The implementation of sysctl isn't too complicated; you really just slap SYS_sysctl (from sys/syscall.h) in front of the other arguments and pass them all on to syscall.
To understand the implementation of gethostname, you have to know how sysctl works:
oldp is where the queried value will be stored.
newp is where the new value will be read from. Since we're not setting any new value, this is NULL here.
name is more or less the actual list of arguments to sysctl, and its contents depend on the actual sysctl being queried.
CTL_KERN denotes that we want something from the kernel.
KERN_HOSTNAME denotes that we'd like to retrieve the hostname.
And since KERN_HOSTNAME doesn't take any arguments, that's all there is to it.
Just for demonstration: had you used KERN_PROCARGS instead, name would require an additional argument, namely the ID of the process whose arguments should be retrieved.
In that case, name would look like this:
int name[] = { CTL_KERN, KERN_PROCARGS, pid };
and namelen would have to be set to 3 accordingly.
Now in the above implementation I've made use of puts, which you're obviously not allowed to do, but I trust you can figure out how to re-implement strlen and use the write syscall with that. ;)
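For completeness, here is one possible sketch of that last step, re-implementing puts on top of the write syscall (SYS_write) and a hand-rolled strlen; treat it as an illustration rather than the only way to do it:

#include <sys/syscall.h> // SYS_write
#include <unistd.h>      // syscall

// Count the bytes up to the nul terminator, without libc's strlen.
static size_t my_strlen(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

// Write the string followed by a newline to stdout (fd 1) via the write syscall.
int puts(const char *s)
{
    if (syscall(SYS_write, 1, s, my_strlen(s)) < 0)
        return -1;
    return syscall(SYS_write, 1, "\n", 1) < 0 ? -1 : 0;
}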

print out contents of archived files in C

I have the same question posted here:
How to print the name of the files inside an archive file?
but those answers don't necessarily address the problem. I have an archived file week.a and I'd like to print out the names of the files inside that archive, called mon.txt, and fri.txt.
It should work just like the ar -t command, except I'm not allowed to use that.
What I've tried:
My first attempt was to create a for loop and print out the arguments, but then I realized the file is already archived by that point, and so that wouldn't work.
My second attempt was to look at the print_contents function of ar, which I've listed below:
static void
print_contents (bfd *abfd)
{
  size_t ncopied = 0;
  char *cbuf = (char *) xmalloc (BUFSIZE);
  struct stat buf;
  size_t size;

  if (bfd_stat_arch_elt (abfd, &buf) != 0)
    /* xgettext:c-format */
    fatal (_("internal stat error on %s"), bfd_get_filename (abfd));

  if (verbose)
    printf ("\n<%s>\n\n", bfd_get_filename (abfd));

  bfd_seek (abfd, (file_ptr) 0, SEEK_SET);

  size = buf.st_size;
  while (ncopied < size)
    {
      size_t nread;
      size_t tocopy = size - ncopied;

      if (tocopy > BUFSIZE)
        tocopy = BUFSIZE;

      nread = bfd_bread (cbuf, (bfd_size_type) tocopy, abfd);
      if (nread != tocopy)
        /* xgettext:c-format */
        fatal (_("%s is not a valid archive"),
               bfd_get_filename (bfd_my_archive (abfd)));

      /* fwrite in mingw32 may return int instead of size_t.  Cast the
         return value to size_t to avoid comparison between signed and
         unsigned values.  */
      if ((size_t) fwrite (cbuf, 1, nread, stdout) != nread)
        fatal ("stdout: %s", strerror (errno));

      ncopied += tocopy;
    }
  free (cbuf);
}
But with this route, I don't really know what a lot of that code means or does (I'm very new to C). Could someone help make sense of this code, or point me in the right direction for writing my program? Thank you.
Based on the format at wikipedia.org/wiki/Ar_(Unix), the basic shape of your program will be:
fopen(filename)
fscanf 8 characters                   /* global header */
check header is "!<arch>" followed by LF
while not at end of file              /* check return value of fscanf below */
    fscanf each item in file header
    print filename                    /* first 16 characters of file header */
    check magic number is 0x60 0x0A
    skip file size characters         /* file contents - can use fseek with origin = SEEK_CUR */
fclose(file)
Refer to the C stdio library documentation for details of the functions needed, or see Wikipedia's C file input/output. A fleshed-out sketch of this outline follows below.
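A minimal C sketch of that outline, assuming the classic "!<arch>\n" format with 60-byte member headers, a decimal size field at offset 48, and 2-byte member alignment (the function name list_archive is just a placeholder):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int list_archive(const char *filename)
{
    FILE *f = fopen(filename, "rb");
    if (!f)
        return -1;

    char magic[8];
    if (fread(magic, 1, 8, f) != 8 || memcmp(magic, "!<arch>\n", 8) != 0) {
        fclose(f);
        return -1;                    /* not an ar archive */
    }

    char hdr[60];                     /* fixed-size per-member header */
    while (fread(hdr, 1, 60, f) == 60) {
        if (hdr[58] != 0x60 || hdr[59] != 0x0A)
            break;                    /* bad magic number in the member header */
        printf("%.16s\n", hdr);       /* first 16 bytes are the member name */

        char sizestr[11];
        memcpy(sizestr, hdr + 48, 10);
        sizestr[10] = '\0';
        long size = atol(sizestr);    /* member size is stored as decimal text */
        if (size % 2)
            size++;                   /* members are padded to an even offset */
        fseek(f, size, SEEK_CUR);     /* skip the member's contents */
    }
    fclose(f);
    return 0;
}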
#include <stdio.h>
#include <stdlib.h>   /* atol */
#include <ar.h>       /* struct ar_hdr */

int counting(FILE *f)
{
    int count = 0;
    struct ar_hdr myheader;

    rewind(f);
    fseek(f, 8, SEEK_CUR);            /* skip the 8-byte "!<arch>\n" global header */

    while (fread(&myheader, sizeof(struct ar_hdr), 1, f) > 0)
    {
        long test = atol(myheader.ar_size);   /* member size stored as decimal text */
        if (test % 2)
            test++;                   /* members are padded to an even offset */
        fseek(f, test, SEEK_CUR);     /* skip the member's contents */

        /* myheader.ar_name holds the member's file name, if you want to print it */
        count++;
    }
    printf("count is : %d\n", count);
    return count;
}
I wrote this code to count the number of files in the archive; you can use the same approach to print the file names inside it as well (see the comment in the loop).

C Delete last n characters from file

I need to delete the last n characters from a file using C code. At first I tried using '\b', but that gives a segmentation fault. I have seen interesting answers to similar questions here and here, but I would prefer to use the mmap function to do this, if possible. I know it would be simpler to truncate the file by creating a temp file and writing characters to it up to some offset of the original file. The problem is that I don't seem to understand how to use the mmap function for this; I can't see what parameters I need to pass to it, especially address, length, and offset. From what I've read, I should use MAP_SHARED in flags and PROT_READ|PROT_WRITE in protect.
The function definition says:
void * mmap (void *address, size_t length, int protect, int flags, int filedes, off_t offset)
Here is my main:
int main(int argc, char *argv[])
{
    FILE *InputFile;
    off_t position;
    int charsToDelete;

    if ((InputFile = fopen(argv[1], "r+")) == NULL)
    {
        printf("tdes: file not found: %s\n", argv[1]);
    }
    else
    {
        charsToDelete = 5;
        fseeko(InputFile, -charsToDelete, SEEK_END);
        position = ftello(InputFile);
        printf("Pos: %d\n", (int)position);
        int i;
        //for(i = 0; i < charsToDelete; i++)
        //{
        //    putc(InputFile, '\b');
        //}
    }
    fclose(InputFile);
    return 0;
}
Why not use:
#include <unistd.h>
#include <sys/types.h>

int truncate(const char *path, off_t length);
int ftruncate(int fd, off_t length);

like, for instance:

charsToDelete = 5;
fseeko(InputFile, -charsToDelete, SEEK_END);
position = ftello(InputFile);
ftruncate(fileno(InputFile), position);
Read all but the last n bytes from the file and write them to a temporary file, close the original file, then rename the temporary file as the original file (a rough sketch follows below).
Or use e.g. truncate or a similar function if you have it.
Also, a failure to open the file doesn't have to mean that it can't be found. You should check errno on failure to see what the error is; use e.g. strerror to get a printable string from the error code.
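A minimal sketch of that temporary-file approach (the helper name remove_tail and the temp file name are placeholders, and error handling is kept short):

#include <stdio.h>

/* Copy all but the last n bytes of `path` into a temporary file,
 * then rename the temporary file over the original. */
int remove_tail(const char *path, long n)
{
    FILE *in = fopen(path, "rb");
    if (!in)
        return -1;

    FILE *out = fopen("tempfile.tmp", "wb");
    if (!out) {
        fclose(in);
        return -1;
    }

    fseek(in, 0, SEEK_END);
    long keep = ftell(in) - n;        /* number of bytes to preserve */
    rewind(in);

    int c;
    while (keep-- > 0 && (c = fgetc(in)) != EOF)
        fputc(c, out);

    fclose(in);
    fclose(out);
    return rename("tempfile.tmp", path);   /* replace the original file */
}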
Unfortunately, mmap does not allow you to change the size of the underlying file object.
Instead, I would recommend simply truncating your file, using something like this:
truncate(filename, new_length);

Why does copy_to_user give a "bad address" error?

I am trying to add a proc file to read some information from the kernel, but when I try to cat the information from the proc file, it gives a "bad address" error.
int proc_read(char *buffer, char **starter, off_t off, int count,
              int *eof, void *data)
{
    if (off > 0)
    {
        *eof = 1;
        return 0;
    }
    if (copy_to_user(buffer, info_str, info_str_size))
    {
        return -EFAULT;
    }
    return info_str_size;
}
After insmod, I use cat to read the proc file, but it gives the bad address error. info_str is a global char array.
The answer to your problem is surprisingly simple. In proc_read functions you don't need to use copy_to_user: a simple memcpy will do the job, since the buffer lives in kernel memory. If you're creating a proc_write function, however, you do need to use copy_from_user, since in this case the buffer lives in user memory.
One tip is that you should also probably signal EOF on success. This will save your function from needing to be called twice.
The following should suffice:
int proc_read(char *buffer, char **starter, off_t off, int count,
              int *eof, void *data)
{
    if (off > 0)
    {
        *eof = 1;
        return 0;
    }
    memcpy(buffer, info_str, info_str_size);
    *eof = 1;
    return info_str_size;
}
You should also note that this way of writing file entries is pretty old and you should probably avoid it. The seq_file interface is much less error prone (and will work with pagers like less and more). Take a look at http://lwn.net/Articles/22355/ if you're interested.
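For reference, a minimal sketch of the same read-only entry done through the seq_file interface might look like the following; this assumes a kernel old enough to still register proc entries through struct file_operations, and reuses the info_str buffer from above (the entry name "info" is just an example):

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>

extern char info_str[];   /* the same global buffer as above */

/* Called by the seq_file machinery; it handles offsets and buffering for us. */
static int info_show(struct seq_file *m, void *v)
{
    seq_printf(m, "%s\n", info_str);
    return 0;
}

static int info_open(struct inode *inode, struct file *file)
{
    return single_open(file, info_show, NULL);
}

static const struct file_operations info_fops = {
    .owner   = THIS_MODULE,
    .open    = info_open,
    .read    = seq_read,
    .llseek  = seq_lseek,
    .release = single_release,
};

/* In the module init function: */
/* proc_create("info", 0444, NULL, &info_fops); */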

Why can't my program save a large amount (>2GB) to a file?

I am having trouble trying to figure out why my program cannot save more than 2GB of data to a file. I cannot tell if this is a programming or environment (OS) problem. Here is my source code:
#define _LARGEFILE_SOURCE
#define _LARGEFILE64_SOURCE
#define _FILE_OFFSET_BITS 64

#include <math.h>
#include <time.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*-------------------------------------*/
// for file mapping in Linux
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <sys/mman.h>
#include <sys/types.h>
/*-------------------------------------*/

#define PERMS 0600
#define NEW(type) (type *) malloc(sizeof(type))
#define FILE_MODE (S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)

void write_result(char *filename, char *data, long long length){
    int fd, fq;

    fd = open(filename, O_RDWR|O_CREAT|O_LARGEFILE, 0644);
    if (fd < 0) {
        perror(filename);
        return -1;
    }

    if (ftruncate(fd, length) < 0)
    {
        printf("[%d]-ftruncate64 error: %s/n", errno, strerror(errno));
        close(fd);
        return 0;
    }

    fq = write(fd, data, length);
    close(fd);
    return;
}

main()
{
    long long offset = 3000000000; // 3GB
    char * ttt;
    ttt = (char *)malloc(sizeof(char) * offset);
    printf("length->%lld\n", strlen(ttt)); // length=0
    memset(ttt, 1, offset);
    printf("length->%lld\n", strlen(ttt)); // length=3GB
    write_result("test.big", ttt, offset);
    return 1;
}
According to my tests, the program can generate a file larger than 2GB and can allocate that much memory as well.
The weird thing happens when I try to write the data into the file: when I check the file afterwards, it is empty, even though it is supposed to be filled with 1s.
Can anyone be kind enough to help me with this?
You need to read a little more about C strings and what malloc and calloc do.
In your original main, ttt pointed to whatever garbage was in memory when malloc was called. This means a nul terminator (the end marker of a C string, which is binary 0) could be anywhere in the garbage returned by malloc.
Also, since malloc does not touch every byte of the allocated memory (and you're asking for a lot) you could get sparse memory which means the memory is not actually physically available until it is read or written.
calloc allocates and fills the allocated memory with 0. It is a little more prone to fail because of this (it touches every byte allocated, so if the OS left the allocation sparse it will not be sparse after calloc fills it.)
Here's your code with fixes for the above issues.
You should also always check the return value from write and react accordingly. I'll leave that to you...
main()
{
    long long offset = 3000000000; // 3GB
    char * ttt;

    //ttt = (char *)malloc(sizeof(char) * offset);
    ttt = (char *)calloc( sizeof( char ), offset ); // instead of malloc( ... )
    if( !ttt )
    {
        puts( "calloc failed, bye bye now!" );
        exit( 87 );
    }

    printf("length->%lld\n", strlen(ttt)); // length=0 (This now works as expected if calloc does not fail)
    memset( ttt, 1, offset );
    ttt[offset - 1] = 0; // Now it's nul terminated and the printf below will work
    printf("length->%lld\n", strlen(ttt)); // length=3GB
    write_result("test.big", ttt, offset);
    return 1;
}
Note to Linux gurus... I know sparse may not be the correct term. Please correct me if I'm wrong as it's been a while since I've been buried in Linux minutiae. :)
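One more note on the "check the return value from write" advice above: write is not required to write the whole buffer in a single call (and a single call may be limited to well under 3GB), so a retry loop is usually needed. A rough sketch, with write_all as a purely illustrative name:

#include <unistd.h>
#include <errno.h>

/* Write `length` bytes in bounded chunks, retrying on short writes. */
ssize_t write_all(int fd, const char *data, long long length)
{
    long long done = 0;
    while (done < length) {
        size_t chunk = 1 << 30;                    /* at most 1 GiB per call */
        if ((long long)chunk > length - done)
            chunk = (size_t)(length - done);
        ssize_t n = write(fd, data + done, chunk);
        if (n < 0) {
            if (errno == EINTR)
                continue;                          /* interrupted, just retry */
            return -1;                             /* real error */
        }
        done += n;
    }
    return (ssize_t)done;
}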
Looks like you're hitting the internal file system's limitation for the iDevice: ios - Enterprise app with more than resource files of size 2GB
2GB+ files are simply not possible there. If you need to store that amount of data, you should consider using some other tools or writing a file chunk manager.
I'm going to go out on a limb here and say that your problem may lie in memset().
The best thing to do here is, I think, to validate the buffer after memset()ing it:

for (unsigned long i = 0; i < 3000000000; i++) {
    if (ttt[i] != 1) { printf("error in data at location %lu\n", i); break; }
}
Once you've validated that the data you're trying to write is correct, then you should look into writing a smaller file such as 1GB and see if you have the same problems. Eliminate each and every possible variable and you will find the answer.
