I'm dealing with a strange error in code I wrote in C.
This is the place where the error occurs:
char* firstChar = (char*) malloc(ONE_CHAR_STRING);
if (!firstChar) {
    *result = MTM_OUT_OF_MEMORY;
    return false;
}
if (command != NULL) {
    strcpy(firstChar, command);
    firstChar[1] = '\0';
}
free(firstChar);
'command' is a string, and ONE_CHAR_STRING is defined in the program (ONE_CHAR_STRING = 2).
The error that appears when the program gets into the 'free' function is:
warning: Heap block at 00731528 modified at 00731532 past requested size of 2
Strangely, this error appears only on my PC (Eclipse on Windows). When I run the code on Linux, it doesn't report this error and that specific part works fine.
What could be the reason?
Another question, again about memory errors: how is it possible that my program (without this part) works fine on Windows, but on Linux there is a problem in one of my memory allocations?
I can't post the code here because it's too long (and gdb doesn't give me the lines where the error occurs). The question is about the possibility, and what the reasons for it could be.
Thanks, Almog.
You can use another string copy function to avoid the overflow:
strncpy(firstChar, command, ONE_CHAR_STRING);
(Note that strncpy does not null-terminate the destination when the source has ONE_CHAR_STRING or more characters, so the firstChar[1] = '\0' assignment is still needed.)
strcpy may overflow firstChar if the length of command is greater than ONE_CHAR_STRING - 1, or read out of bounds if command is not null-terminated; either one leads to the strange behavior you see. You can safely copy the command string into firstChar by assigning firstChar[0] = command[0]; firstChar[1] = '\0';
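Putting that together, a minimal sketch of the safe copy (copy_first_char is a hypothetical name; it assumes command is non-NULL and holds at least one character):

#include <stdlib.h>

#define ONE_CHAR_STRING 2   /* as defined in the question */

char* copy_first_char(const char* command) {
    char* firstChar = malloc(ONE_CHAR_STRING);
    if (!firstChar)
        return NULL;               /* caller handles out-of-memory */
    firstChar[0] = command[0];     /* copy exactly one character */
    firstChar[1] = '\0';           /* the terminator fits in the 2-byte block */
    return firstChar;
}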
If your compiler for both Linux and Windows is gcc (MinGW on Windows), use -fstack-protector as a compiler flag to help you debug buffer overflows in functions such as strcpy.
With the same command, my program works without a problem on my coworker's PC.
But on my PC, the program crashes with a segfault.
GDB backtrace at core reads as follows:
#0 strrchr () at ../sysdeps/x86_64/strrchr.S:32
32 ../sysdeps/x86_64/strrchr.S: no such file or directory
(gdb) bt
#0 strrchr () at ../sysdeps/x86_64/strrchr.S:32
#1 0x00007f10961236d7 in dirname (path=0x324a47a0 <error: Cannot access memory at address 0x324a47a0>) at dirname.c:31
I'm already compiling the executable with -g -ggdb options.
The odd thing is that with valgrind the program runs without error on my PC as well.
How can I solve the problem? I've observed that the errors occur only with strrchr, strcmp, strlen, and other string.h functions.
+Edit: the gdb backtrace indicates that the program crashes here:
char* base_dir = dirname(get_abs_name(test_dir));
where get_abs_name is defined as
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char));
    realpath(dir, abs_path);
    strcpy(c, abs_path);
    return c;
}
+Edit2: 'dir' is the path of a certain file, like '../program/blabla.jpg'.
Using valgrind,
printf("%s\n", dir)
normally prints '/home/frozenca/path_to_program'.
I can't guess why the program crashes without valgrind.
We cannot know for sure without a Minimal, Complete, and Verifiable example. Your code looks mostly correct (albeit convoluted), except you do not check for errors.
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char)); /* this may return NULL */
    realpath(dir, abs_path);                 /* this may return NULL */
    strcpy(c, abs_path);
    return c;
}
Now, how could this lead to an error like you see? Well, if malloc returns NULL, you'll get a crash right away in strcpy. But if realpath fails:
The content of abs_path remains undefined.
So strcpy(c, abs_path) will copy undefined content. That could mean copying just one byte, if abs_path[0] happens to be \0, but it could also lead to massive heap corruption. Which one happens depends on unrelated conditions, such as how the program is compiled and whether a debugging tool such as valgrind is attached.
TL;DR: get into the habit of checking every function that may fail.
char* get_abs_name(char* dir) {
    char abs_path[PATH_MAX];
    char* c = malloc(PATH_MAX*sizeof(char));
    if (!c) { return NULL; }
    if (!realpath(dir, abs_path)) {
        free(c);
        return NULL;
    }
    strcpy(c, abs_path);
    return c;
}
Or, here, you can simplify it a lot, assuming a GNU system or a POSIX.1-2008 system:
char * get_abs_name(const char * dir) {
    return realpath(dir, NULL);
}
Note however that either way, in your main program, you also must check that get_abs_name() did not return NULL, otherwise dirname() will crash.
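For instance, at the call site shown earlier, that check might look like this (a sketch using test_dir from the question; the error handling is illustrative):

char* abs = get_abs_name(test_dir);
if (!abs) {
    fprintf(stderr, "could not resolve %s\n", test_dir);
    exit(EXIT_FAILURE);
}
char* base_dir = dirname(abs);   /* safe: abs is known to be non-NULL */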
Drop your function entirely and use the return value of realpath(dir, NULL) instead.
Thanks!
I am trying to make a function in C to erase all the contents of a temp folder and then erase the folder itself.
While I have already successfully created the code to cycle through the files and to erase the folder (it is pretty much straightforward), I am having trouble erasing the files using unlink.
Here is the code that I am using:
int delete_folder(char *foldername) {
    DIR *dp;
    struct dirent *ep;
    dp = opendir(foldername);
    if (dp != NULL) {
        readdir(dp); readdir(dp); /* skip the "." and ".." entries */
        while (ep = readdir(dp)) {
            char* cell = concatenate(concatenate(foldername, "\\"), "Bayesian Estimation.xlsx");//ep->d_name);
            printf("%s\n", cell);
            remove(cell);
            printf("%s\n", strerror(errno));
        }
        closedir(dp);
    }
    if (!rmdir(foldername)) {return(0);} else {return(-1);}
}
The code that I wrote is fully functional for all files except those with spaces in the filename. After some testing, I can guarantee that the unlink function eliminates all files in the folder (even those with special characters in the filename) but fails if the filename includes a space (however, for this same file, if I remove the space(s), the function works again).
Has anyone else encountered this problem? And, more importantly, can it be solved/circumvented?
(The problem remains even if I introduce the space escape sequences directly.)
The error reported by unlink is "No such file or directory" (ENOENT). Mind you, the file is indeed at the referred location (as can be verified by the code outputting the correct filename in the variable cell), and this error also occurs if I use the function remove instead of unlink.
PS: The function concatenate is a function of my own making which outputs the concatenation of the two input strings.
Edit:
The code was written in Code::Blocks, on Windows.
Here's the code for the concatenate function:
char* concatenate(char *str1, char *str2) {
    int a1 = strlen(str1), a2 = strlen(str2); char* str3[a1+a2+1];
    snprintf(str3, a1+a2+2, "%s%s", str1, str2);
    return(str3);
}
Whilst you are right in saying that it is a possible (and easy) memory leak, the function's inputs and outputs are code-generated and only for personal use, so there is no great reason to worry about it (no real need to foolproof the code).
You say "using unlink()" but the code is using remove(). Which platform are you on? Is there any danger that your platform implements remove() by running an external command which doesn't handle spaces in file names properly? On most systems, that won't be a problem.
The real problem is that you don't check the return value from remove() before printing the error. You should only print an error if the function indicates that it generated one; no function in the Standard C (or POSIX) library sets errno to zero. Also, errors should be reported on standard error; that's what the standard error stream is for.
if (remove(cell) != 0)
    fprintf(stderr, "Failed to remove %s (%d: %s)\n", cell, errno, strerror(errno));
else
    printf("%s removed OK\n", cell);
I regard the else clause as a temporary measure while you're getting the code working.
It also looks like you're leaking memory like a proverbial sieve. You capture the result of a double concatenate operation in cell, but you never free it. Indeed, if the nested calls both allocate memory, then you've got a leak even if you add free(cell); at the end of the loop (inside the loop, after the second printf(), the one I deconstructed). If concatenate() doesn't allocate new memory each time (it returns a pointer to statically allocated memory), then I think concatenating a string with the output of concatenate() is also dangerous, probably invoking undefined behaviour as you copy a string over itself. You need to look hard at the code for concatenate(), and/or present it for analysis.
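For illustration, here is one way concatenate() could be written so the result stays valid after the call: allocate on the heap and make the caller free() it (a sketch, not the code from the question):

#include <stdlib.h>
#include <string.h>

/* Heap-allocating variant: the caller must free() the returned string. */
char* concatenate(const char *str1, const char *str2) {
    size_t a1 = strlen(str1), a2 = strlen(str2);
    char *str3 = malloc(a1 + a2 + 1);   /* +1 for the terminating '\0' */
    if (!str3)
        return NULL;
    memcpy(str3, str1, a1);
    memcpy(str3 + a1, str2, a2 + 1);    /* copies str2 plus its '\0' */
    return str3;
}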
Thank you very much for all your input. After reviewing your comments and making a few experiments myself, I figured out that remove/unlink was not working because the filename was only temporarily stored in the variable cell (it stayed there long enough to be printed correctly to the console, hence my confusion). After appropriately storing the filename before use, my problem has been completely solved.
Here's the code (I have already checked it with filenames as complex as I could make them):
int delete_folder(char* foldername) {
    DIR *dp;
    struct dirent *ep;
    dp = opendir(foldername);
    if (dp != NULL) {
        readdir(dp); readdir(dp); /* skip the "." and ".." entries */
        while (ep = readdir(dp)) {
            char cell[strlen(foldername)+1+strlen(ep->d_name)+1];
            strcpy(cell, concatenate(concatenate(foldername, "\\"), ep->d_name));
            unlink(cell);
            printf("File \"%s\": %s\n", ep->d_name, strerror(errno));
        }
        closedir(dp);
    }
    if (!rmdir(foldername)) {return(0);} else {return(-1);}
}
I realize it was kind of a noob mistake, resulting from my being a bit out of practice with C programming, so... Thank you very much for all your help!
UPDATE: running git diff with valgrind results in
Syscall param execve(argv) points to uninitialised byte(s)
And the output from strace is not fully decoded--i.e., there are hex numbers among the array of strings that is argv.
...
This started out like a superuser problem but it's definitely moved into SO's domain now.
But anyway, here is my original SU post detailing the problem before I looked at the source very much: https://superuser.com/questions/795751/various-methods-of-trying-to-set-up-a-git-diff-tool-lead-to-fatal-cannot-exec
Essentially, following the standard procedure to set up vimdiff as a diff tool by setting the external directive under [diff] in .gitconfig leads to errors like this:
fatal: cannot exec 'git_diff_wrapper': Bad address
external diff died, stopping at HEAD:switch-monitor.sh.
It happens on my Linux Mint 17 64 bit OS, as well as on an Ubuntu 14.04 64 bit OS on a virtualbox VM, but not in an Ubuntu 14.04 32 bit VM...
Googling reveals no similar problems. I've spent a lot of time looking at git's source to figure this out. "Bad address" is the description returned by strerror for an EFAULT error. Here is a short description of EFAULT from the execve manpage:
EFAULT filename points outside your accessible address space
I've tracked down how the error message is pieced together by git, and have used that to narrow down the source of the problem quite a bit. Let's start here:
static int execv_shell_cmd(const char **argv)
{
    const char **nargv = prepare_shell_cmd(argv);
    trace_argv_printf(nargv, "trace: exec:");
    sane_execvp(nargv[0], (char **)nargv);
    free(nargv);
    return -1;
}
This function should not return control, but it does due to the error. The actual execvp call is in sane_execvp, but perhaps prepare_shell_cmd is of interest, though I don't spot any problems:
static const char **prepare_shell_cmd(const char **argv)
{
    int argc, nargc = 0;
    const char **nargv;

    for (argc = 0; argv[argc]; argc++)
        ; /* just counting */
    /* +1 for NULL, +3 for "sh -c" plus extra $0 */
    nargv = xmalloc(sizeof(*nargv) * (argc + 1 + 3));

    if (argc < 1)
        die("BUG: shell command is empty");

    if (strcspn(argv[0], "|&;<>()$`\\\"' \t\n*?[#~=%") != strlen(argv[0])) {
#ifndef GIT_WINDOWS_NATIVE
        nargv[nargc++] = SHELL_PATH;
#else
        nargv[nargc++] = "sh";
#endif
        nargv[nargc++] = "-c";

        if (argc < 2)
            nargv[nargc++] = argv[0];
        else {
            struct strbuf arg0 = STRBUF_INIT;
            strbuf_addf(&arg0, "%s \"$@\"", argv[0]);
            nargv[nargc++] = strbuf_detach(&arg0, NULL);
        }
    }

    for (argc = 0; argv[argc]; argc++)
        nargv[nargc++] = argv[argc];
    nargv[nargc] = NULL;

    return nargv;
}
It doesn't look like they messed up the terminating NULL pointer (the absence of which is known to cause EFAULT).
sane_execvp is pretty straightforward. It's a call to execvp and returns -1 if it fails.
I haven't quite figured out what trace_argv_printf does, though it looks like it might affect nargv and maybe screw up the terminating NULL pointer. If you'd like me to include it in this post, let me know.
I have been unable to reproduce an EFAULT with execvp in my own C code thus far.
This is git 1.9.1, and the source code is available here: https://www.kernel.org/pub/software/scm/git/git-1.9.1.tar.gz
Any thoughts on how to move forward?
Thanks
Answer (copied from comments): it seems to be a bug in git 1.9.1. The (old) diff.c code, around line 2910-2930 or so, fills in an array of size 10 with arguments, before calling the run-command code. But in one case it puts in ten actual arguments and then an 11th NULL. Depending on the whims of the compiler, the NULL may get overwritten with some other local variable (or the NULL might overwrite something important).
Changing the array to size 11 should fix the problem. Or just update to a newer git (v2.0.0 or later); Jeff King replaced the hard-coded array with a dynamic one, in commits 82fbf269b9994d172719b2d456db5ef8453b323d and ae049c955c8858899467f6c5c0259c48a5294385.
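For illustration, the off-by-one pattern described above looks roughly like this (a sketch, not git's actual diff.c code):

const char *argv[10];   /* room for exactly 10 entries */
int i = 0;
/* ... one code path stores ten real arguments, so i reaches 10 ... */
argv[i] = NULL;         /* writes argv[10]: one element past the end, so the
                           NULL lands on top of some neighboring variable */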
Note: another possible cause for bad address is the use of run-command.c#exists_in_PATH() by run-command.c#sane_execvp().
This is fixed with Git 2.25.1 (Feb. 2020).
See commit 63ab08f (07 Jan 2020) by brian m. carlson (bk2204).
(Merged by Junio C Hamano -- gitster -- in commit 42096c7, 22 Jan 2020)
run-command: avoid undefined behavior in exists_in_PATH
Noticed-by: Miriam R.
Signed-off-by: brian m. carlson
In this function, we free the pointer we get from locate_in_PATH and then check whether it's NULL.
However, this is undefined behavior if the pointer is non-NULL, since the C standard no longer permits us to use a valid pointer after freeing it.
The only case in which the C standard would permit this to be defined behavior is if r were NULL, since it states that in such a case "no action occurs" as a result of calling free.
It's easy to suggest that this is not likely to be a problem, but we know that GCC does aggressively exploit the fact that undefined behavior can never occur to optimize and rewrite code, even when that's contrary to the expectations of the programmer.
It is, in fact, very common for it to omit NULL pointer checks, just as we have here.
Since it's easy to fix, let's do so, and avoid a potential headache in the future.
So instead of:
static int exists_in_PATH(const char *file)
{
    char *r = locate_in_PATH(file);
    free(r);
    return r != NULL;
}
You now have:
static int exists_in_PATH(const char *file)
{
    char *r = locate_in_PATH(file);
    int found = r != NULL;
    free(r);
    return found;
}
I was experimenting with code described in "The Shellcoder's Handbook", where you overflow a buffer and cause the same code to be executed twice...
#include <stdio.h>

void return_input(void)
{
    char array[5];
    gets(array);               /* unsafe by design; that is the point of the exercise */
    printf("%s\n", array);
}

main()
{
    return_input();
    return 0;
}
The task was to overflow the buffer and replace the return address (which points back at the 'return 0') with the address of 'return_input()', so that the entered string is printed twice.
I compiled it as follows:
gcc -fno-stack-protector overflow.c
to override the protection mechanisms. The problem is I can't get it to execute twice. In this case the address of the function return_input() is 0x08048440. I gave the input as follows:
./a.out
aaaaaaaaaaaaa\x40\x84\x04\x08
Shouldn't this cause the function to be called twice? It always returns:
aaaaaaaaaaaaaaaa��
Segmentation fault (core dumped)
How can I overflow the buffer to call the function twice?
Typing \x40\x84\x04\x08 literally is not supported: the shell passes those characters through verbatim instead of converting them to four raw bytes. You should use some other program to translate the hex input to bytes.
If you are using bash, you can try echo -e '\x40\x84\x04\x08' | ./a.out (the padding bytes still have to come before the address, as in your original input). I found that solution at linux shell scripting: hex string to bytes
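If you prefer to stay in C, a tiny helper program can emit the raw payload bytes (a sketch assuming the 13 padding bytes and the little-endian address 0x08048440 from the question; the exact offset may differ on your machine):

#include <stdio.h>

int main(void) {
    /* 13 padding bytes, then the address 0x08048440 in little-endian byte order */
    fwrite("aaaaaaaaaaaaa\x40\x84\x04\x08", 1, 17, stdout);
    putchar('\n');
    return 0;
}

Compile it (say, as ./payload) and pipe its output into the vulnerable program: ./payload | ./a.out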
By definition, the behavior of a buffer overflow is unpredictable. You will only get the same behavior if you happen to be using the same version of the same compiler with the same settings on the same OS, etc., etc.
Based on your machine type, you might need to adjust the address.
http://www.tenouk.com/Bufferoverflowc/Bufferoverflow4.html
I have a piece of C code where I try to write a buffer into an opened output file. I am getting a segmentation fault when I try to run the code.
if (fwrite(header, record_size, 1, uOutfile) != 1)
{
    return 0;
}
The header is properly populated, and I am able to print out its contents. The size of the header buffer is definitely greater than record_size. Is there anything else worth checking? Any other reason why fwrite could cause a segfault? Running the program under gdb gave the following output:
0x00007ffff6b7d66d in _IO_fwrite (buf=0x726d60, size=16, count=1, fp=0x738820) at iofwrite.c:43
43 iofwrite.c: No such file or directory.
in iofwrite.c
It seems to suggest that the output file has not been created. However, an ls -l on my directory shows the output file with a size of 0 bytes.
I would greatly appreciate if someone could throw some light on the problem.
EDIT: Code that opens the file:
outfd = open(out, O_RDWR|O_CREAT|O_TRUNC|O_LARGEFILE, 0664);
if (outfd == -1) {
    dagutil_panic("Could not open %s for writing.\n", out);
}
uOutfile = fdopen(outfd, "w");
I don't think there's enough here to know for sure what your problems are, but here are some thoughts:
Show us the code involving your FILE * (uOutfile) and your buffer (header) — we can then see if you're borking memory somewhere in between.
Run your code through valgrind: You're getting a segfault, so it could probably catch what you're doing wrong.
In gdb, examine the contents of both header and uOutFile (not just the pointer, but the pointed-to-memory.) (You'll have to use some smarts to figure out if uOutFile looks right, but you should be able to up-or-down determine if header is correct.)
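On the first point, the defensive checks before the write might look like this (a sketch using the names from the question; the messages are illustrative):

if (uOutfile == NULL)
    fprintf(stderr, "output FILE* is NULL (did fdopen fail?)\n");
else if (header == NULL)
    fprintf(stderr, "header buffer is NULL\n");
else if (fwrite(header, record_size, 1, uOutfile) != 1)
    perror("fwrite");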
To add to this: my general debug strategy when I get segfaults is:
gdb's backtrace. Tells me where the segfault happened. Usually, this is enough to uncover the dumb thing I did.
Look at the pointers in the vicinity of the crash. Is the pointer correct, and is the pointed-to data correct? (Especially if you see something strange like 0xdeadbeef.)
Valgrind Valgrind Valgrind
(2 & 3 are in no particular order.)