I'm writing a C program and I would like to be able to store data inside the executable file.
I tried making a function to write a single byte at the end of the file, but it looks like it can't open the file: it reaches the printf and then crashes with a "segmentation fault".
void writeByte(char c){
    FILE *f;
    f = fopen("game","wb");
    if(f == 0)
        printf("\nFile not found\n");
    fseek(f,-1,SEEK_END);
    fwrite(&c,1,sizeof(char),f);
    fclose(f);
}
The file is in the correct directory and the name is correct. When I try to read the last byte instead of writing it, it works without problems.
Edit: I know I should abort the program instead of trying to write anyway, but my main problem is that the program can't open the file even though it is in the same directory.
There are several unrelated problems in your code and the problem you're trying to solve.
First, you lack proper error handling. If any function that can fail (e.g. fopen) fails, you should act accordingly. If, for example, you did
#include <error.h>
#include <errno.h>
...
f = fopen("game","wb");
if ( f == NULL ) {
    error(1,errno,"File could not be opened");
}
...
You would have received a useful error message like
./game: File could not be opened: Text file busy
You printed a message, which is not even correct (the file not being able to be opened is something different from not being found), and continued the program, which resulted in a segmentation fault because you dereferenced the NULL pointer stored in f after the failure of fopen.
Second, as the message tells us (at least on my Linux machine), the file is busy. That means that my operating system does not allow me to open the executable I'm running in write mode. The answers to this question list numerous explanations of this error message. There might be ways to get around this and open a running executable in write mode, but I doubt this is easy, and I doubt it would solve your problem, because:...
Third, executable files are stored in a special binary format (usually ELF on Linux). They are not designed to be manually modified. I don't know what happens if you just append data to one, but you could run into serious problems unless you're very careful and know what you're doing.
If you just want to store data, use a separate, plain file. If you're hoping to append code to an executable, you really should gather some background information about ELF files (e.g. from man elf) before continuing.
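If plain data storage is all you need, a minimal sketch along those lines could look like this; the companion file name game.dat is just an assumption for illustration, not something from your code:
#include <errno.h>
#include <stdio.h>
#include <string.h>
/* Sketch: append one byte to a separate data file instead of the executable. */
void writeByte(char c){
    FILE *f = fopen("game.dat", "ab"); /* append mode also creates the file if needed */
    if (f == NULL) {
        fprintf(stderr, "game.dat could not be opened: %s\n", strerror(errno));
        return; /* never touch a NULL stream */
    }
    fwrite(&c, sizeof(char), 1, f);
    fclose(f);
}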
Related
When you run a C program, is it possible to get its binary code (which you execute with ./foo) from its TEXT segment? If I just copy all the TEXT segment to a file, then can I execute it and run the same program? I am working with Ubuntu.
is it possible to get its binary code
If you run your program under the debugger, then you can copy the bytes from anywhere in the process space, be it data or code.
then can I execute it and run the same program?
Simple answer: No!
An executable file is a lot more than just a memory dump.
If your program is statically linked and position-dependent and has no global data (note: the last is not true with any non-toy libc implementation), then in theory the text segment is sufficient to run it. However, you would need an appropriate loader to do so. Normal operating systems' executable file loaders do not load this kind of "raw text segment" as an executable because (1) it has no header information to indicate that that's what it is, or even where to start execution (i.e. what the entry point is), and (2) it's not generally useful to do so. DOS had something akin to this with .COM files, and uClinux had FLAT binaries that were close to this but with some minimal header, but those are the closest you'll find to a "raw text segment" binary in the past 3-4 decades.
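To see some of that header information for yourself, here is a minimal sketch (assuming a 64-bit ELF on Linux) that reads an executable's ELF header and prints the recorded entry point:
#include <elf.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char **argv){
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return EXIT_FAILURE;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return EXIT_FAILURE; }
    Elf64_Ehdr ehdr; /* fixed-size header at the very start of the file */
    if (fread(&ehdr, sizeof ehdr, 1, f) != 1) {
        perror("fread");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "%s is not an ELF file\n", argv[1]);
        return EXIT_FAILURE;
    }
    printf("entry point: %#llx\n", (unsigned long long)ehdr.e_entry);
    return EXIT_SUCCESS;
}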
Here is a link to a similar question:
Is it possible to disassemble a running process in Linux?
char *comp = "objdump -d /proc/1234/exe";
fflush(NULL); // always useful before system(3)
int nok = system(comp);
if (nok) {
    fprintf(stderr, "command %s failed with %d\n", comp, nok);
    exit(EXIT_FAILURE);
}
So what I want is the /proc/<PID>/exe file.
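Putting those pieces together, a small sketch (assuming Linux with objdump installed) that disassembles the currently running program through /proc could be:
/* Sketch: disassemble the running program itself through /proc. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(void){
    char cmd[64];
    /* "/proc/self/exe" would also work and needs no PID at all */
    snprintf(cmd, sizeof(cmd), "objdump -d /proc/%d/exe", (int)getpid());
    fflush(NULL); /* flush our own buffers before system(3) */
    int nok = system(cmd);
    if (nok) {
        fprintf(stderr, "command %s failed with %d\n", cmd, nok);
        exit(EXIT_FAILURE);
    }
    return 0;
}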
I have an application, written in C, which generates various data parameters that I am logging into a text file named debug_log.txt. Whenever this log file reaches 1 MB, I rename it with a timestamp, e.g. debug_log_20200106_133000.txt, keeping it in the same directory. I then reopen debug_log.txt to log new parameters.
if(stat("/home/log/debug_log.txt", &statFiledbg) == 0)
{
    if(statFiledbg.st_size >= 1048576) // 1MB
    {
        current_time = time(0);
        strftime(time_buffer, sizeof(time_buffer), "%Y%m%d_%H-%M-%S", gmtime(&current_time));
        sprintf(strSysCmddbg, "mv /home/log/debug_log.txt /home/log/debug_log%s.txt", time_buffer);
        system(strSysCmddbg);
        fp_dbglog = freopen("/home/log/debug_log.txt", "w", fp_dbglog);
    }
}
The code works most of the time until it doesn't. After running the application for a couple of days, I see that debug_log.txt grows beyond 1 MB while the last moved and renamed log file is empty.
What could be the reason?
Use the rename function from the C standard library (declared in stdio.h) and, if it fails, check errno to learn the exact reason why it is failing.
When working with files, and I/O in general, there are many, many things that can go wrong.
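To make that concrete, here is a minimal sketch of the rotation step using rename(); the paths are the ones from the question and the error handling is deliberately simple:
/* Sketch only: rotate the log with rename() instead of system("mv"). */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
static FILE *rotate_log(FILE *fp_dbglog){
    char time_buffer[32];
    char newname[64];
    time_t current_time = time(NULL);
    strftime(time_buffer, sizeof(time_buffer), "%Y%m%d_%H-%M-%S", gmtime(&current_time));
    snprintf(newname, sizeof(newname), "/home/log/debug_log%s.txt", time_buffer);
    fclose(fp_dbglog); /* flush and release the old file before renaming it */
    if (rename("/home/log/debug_log.txt", newname) != 0)
        fprintf(stderr, "rename to %s failed: %s\n", newname, strerror(errno));
    return fopen("/home/log/debug_log.txt", "w"); /* start a fresh log */
}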
One of the senior developers in my company told me so. Is there anything wrong with using system()?
Yes: it is unnecessary (C and POSIX provide you with a function for basic usages like this), non-portable (it assumes you are on a system that has mv), slower (it needs to spawn another process) and wrong for many use cases (e.g. here there is no way to know what exactly failed unless you save the textual output of mv).
See questions and answers like Moving a file on Linux in C for an in-depth explanation.
I'm trying to learn file I/O concepts in the C programming language. I'm using GNU/Linux (Ubuntu 16.04 LTS) and my IDE is Eclipse 3.8. When I try to write to a file through fprintf(), it doesn't create any file, or if the file is even created, it doesn't write to it. I tried to fix the problem by using fflush() or setbuf(file_pointer, NULL) as suggested here, but still no change. I guess I'm writing the path of the file in a wrong way.
Here is the code:
#include <stdio.h>
int main(void){
    FILE *file_pointer;
    file_pointer=fopen("~/.textsfiless/test.txt","w+");
    setbuf(file_pointer,NULL);
    fprintf(file_pointer,"Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Can someone explain what's wrong here?
On Linux, the ~ in ~/.textsfiless/test.txt is not expanded by the C library fopen. When you use ~ on the command line, it is expanded into your home directory by your shell (not by the program using it, which the shell starts with some execve(2)...); this expansion is a shell feature closely related to globbing. Read glob(7). You are very unlikely to have a directory literally named ~.
You should read Advanced Linux Programming
So you should check whether fopen failed (it is very likely that it did). If you want a file in the home directory, you'd better use getenv(3) with "HOME" (or perhaps getpwuid(3) & getuid(2)...). See environ(7).
Perhaps better code might be:
char *homedir = getenv("HOME");
if (!homedir) { perror("getenv HOME"); exit(EXIT_FAILURE); }
char pathbuf[512]; // or perhaps PATH_MAX instead of 512
snprintf(pathbuf, sizeof(pathbuf),
         "%s/.textsfiless/test.txt", homedir);
FILE *file_pointer = fopen(pathbuf, "w+"); // "w+" as in the question, so the file gets created
if (!file_pointer) { perror(pathbuf); exit(EXIT_FAILURE); }
and so on.
Notice that you should check for failure of most C standard library (and POSIX) functions. The perror(3) function is useful to report errors to the user on stderr.
(pedantically, we should even test that snprintf(3) returns a length below sizeof(pathbuf), or use asprintf(3) instead and test it for failure; I leave that test as an exercise to the reader)
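For the curious, that pedantic check could look like this, replacing the plain snprintf call in the sketch above:
int n = snprintf(pathbuf, sizeof(pathbuf),
                 "%s/.textsfiless/test.txt", homedir);
if (n < 0 || (size_t)n >= sizeof(pathbuf)) { /* output error or truncation */
    fprintf(stderr, "path too long or encoding error\n");
    exit(EXIT_FAILURE);
}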
More generally, read the documentation of every external function that you are using.
Beware of undefined behavior (your code probably has some, e.g. fprintf to a NULL stream). Compile your code with all warnings and debug info (so gcc -Wall -g) and use the gdb debugger. Read What every C programmer should know about undefined behavior.
BTW, look into strace(1) and try it on your original (faulty) program. You'll learn a lot about the system calls used in it.
Most likely your call to fopen() fails. You don't have any checking in your program to ensure fopen even worked. It may not have, and this could be due to a variety of things, like spelling the path wrong, wrong file or process permissions, etc.
To see what really happened, you should check fopen's return value:
#include <stdio.h>
#include <stdlib.h> /* for EXIT_SUCCESS / EXIT_FAILURE */
int main(void){
    FILE *file_pointer;
    file_pointer=fopen("~/.textsfiless/test.txt","w+");
    if (file_pointer == NULL) {
        printf("Opening the file failed.");
        return EXIT_FAILURE;
    }
    setbuf(file_pointer,NULL);
    fprintf(file_pointer,"Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Edit: Given your comment, getting the path wrong is most certainly what happened. If you're executing your program from the current directory, and your file test.txt is in a folder called .textsfiless inside that current directory, then you'd call fopen with a relative path, like this:
file_pointer=fopen("./.textsfiless/test.txt","w+");
I have a situation where I need to get a file name so that I can call the readlink() function. All I have is an integer that was originally stored as a file descriptor via an open() command. Problem is, I don't have access to the function where the open() command executed (if I did, then I wouldn't be posting this). The return value from open() was stored in a struct that I do have access to.
char buf[PATH_MAX];
char tempFD[2]; //file descriptor number of the temporary file created
tempFD[0] = fi->fh + '0';
tempFD[1] = '\0';
char parentFD[2]; //file descriptor number of the original file
parentFD[0] = (fi->fh - 1) + '0';
parentFD[1] = '\0';
if (readlink(tempFD, buf, sizeof(buf)) < 0) {
    log_msg("\treadlink() error\n");
    perror("readlink() error");
} else
    log_msg("readlink() returned '%s' for '%s'\n", buf, tempFD);
This is part of the FUSE file system. The struct is called fi, and the file descriptor is stored in fh, which is of type uint64_t. Because of the way this program executes, I know that the two linked files have file descriptor numbers that are always 1 apart. At least that's my working assumption, which I am trying to verify with this code.
This compiles, but when I run it, my log file shows a readlink error every time. My file descriptors have the correct integer values stored in them, but it's not working.
Does anyone know how I can get the file name from these integer values? Thanks!
If it's acceptable for your code to become non-portable and tied to a somewhat modern version of Linux, then you can use /proc/<pid>/fd/<fd>. However, I would recommend against adding '0' to the fd as a means of getting the string representing the number, because it relies on the assumption that fd < 10.
However, it would be best if you were able to just pick up the filename instead of relying on /proc. At the very least, you can replace calls to the library's function with a wrapper function using a linker flag. An example of usage is gcc program.c -Wl,-wrap,theFunctionToBeOverriden -o program; all calls to the library function will then be linked against __wrap_theFunctionToBeOverriden, and the original function is accessible under the name __real_theFunctionToBeOverriden. See this answer https://stackoverflow.com/a/617606/111160 for details.
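As a concrete illustration, a sketch of wrapping open() this way might look as follows; the single last_opened_path variable is made up purely for demonstration, and a real wrapper would probably keep a per-descriptor table:
/* Sketch: intercept open() with GNU ld's --wrap so the path can be remembered
 * alongside the descriptor. Build (assumption): gcc prog.c -Wl,-wrap,open -o prog */
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>
int __real_open(const char *path, int flags, ...);
static char last_opened_path[4096]; /* illustrative storage only */
int __wrap_open(const char *path, int flags, ...){
    mode_t mode = 0;
    if (flags & O_CREAT) { /* the mode argument exists only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }
    int fd = __real_open(path, flags, mode);
    if (fd >= 0) { /* remember the most recently opened path */
        strncpy(last_opened_path, path, sizeof(last_opened_path) - 1);
        last_opened_path[sizeof(last_opened_path) - 1] = '\0';
    }
    return fd;
}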
But, back to the answer not involving linkage rerouting: you can do something like
char fd_path[100];
snprintf(fd_path, sizeof(fd_path), "/proc/%d/fd/%d",
         (int)getpid(), (int)fi->fh);
You should now use this /proc/... path (it is a symbolic link) rather than the path it links to.
You can call readlink to find the actual path in the filesystem. However, doing so introduces a security vulnerability and I suggest against using the path readlink returns.
When the file the descriptor points at is deleted (unlinked), you can still access it through the /proc/... path. However, when you call readlink on it, you get the original pathname with ' (deleted)' appended.
If your file was /tmp/a.txt and it gets deleted, readlink on the /proc/... path returns /tmp/a.txt (deleted). If a file with that literal name exists, you will end up accessing it, even though you wanted a different file (/tmp/a.txt). An attacker may be able to plant hostile contents in a file named /tmp/a.txt (deleted).
On the other hand, if you just access the file through the /proc/... path, you will access the correct (unlinked but still alive) file, even if the path claims to be a link to something else.
I have a text file that I want to edit by rewriting it to a temp file and then overwriting the original. This code doesn't do all of that, as it's simplified, but it does include the problem I have. On Windows, the EXAMPLE.TXT file will disappear after a seemingly random number of runs when the rename function fails. I don't know why, but so far it has worked fine on Linux. Why does this happen, and how can I solve it, perhaps by going in an entirely different direction, such as overwriting the original file from within the program without renaming?
Furthermore, what other, better methods exist? This method has other flaws on Windows, such as the program being closed by a user just after remove is called but before rename, which would not be a problem on Linux (after getting rid of remove).
#include <stdio.h>
#include <assert.h>
int main(int argc, char *argv[]) {
    unsigned int i=0;
    FILE *fileStream, *tempStream;
    char fileName[] = "EXAMPLE.TXT";
    char *tempName = tmpnam(NULL);
    while(1) {
        printf("%u\n",i++);
        assert(fileStream = fopen(fileName, "r+"));
        assert(tempStream = fopen(tempName, "w"));
        fprintf(tempStream,"LINE\n");
        fflush(tempStream); /* fclose alone is enough on linux, but windows will sometimes not fully flush when closing! */
        assert(fclose(tempStream) == 0);
        assert(fclose(fileStream) == 0);
        assert(remove(fileName) == 0); /* windows fails if the file already exists, linux overwrites */
        assert(rename(tempName,fileName) == 0);
    }
}
Doing it this way is indeed likely to cause trouble. There are four possible outcomes of your code on Windows:
deletes fine, rename works, no problem
deletes fine, but another process had the file open with delete sharing (common for malware scanners and file content indexers), which ensures that the file actually gets deleted when the last handle on the file is closed. Problem is, the rename fails because the file still exists
doesn't delete because the file is locked, your assert fires
nothing at all happens because assert() is a no-op when you build the release version.
Good odds for the last bullet, btw; it certainly explains repeatable failure. You'll need a more defensive strategy to deal with the 2nd bullet:
delete filename.bak, report error if that failed
rename fileName to filename.bak, report error if that failed
rename tempName to filename, report error and rename filename.bak back if that failed
delete filename.bak, don't report error
This is such a common scenario that the winapi has a function for it, ReplaceFile(). Be sure to use the backup file option for maximum bang for the buck.
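In portable C (i.e. without ReplaceFile()), that defensive sequence might be sketched roughly as below; the .bak name and the helper's signature are just illustrations:
#include <stdio.h>
/* Sketch of the defensive sequence above; returns 0 on success, -1 on failure. */
static int replace_file(const char *tempName, const char *fileName, const char *backupName){
    remove(backupName); /* step 1: drop any stale backup (a fuller version would
                           report failures other than "no such file") */
    if (rename(fileName, backupName) != 0) { /* step 2: move the original aside */
        fprintf(stderr, "could not move %s aside\n", fileName);
        return -1;
    }
    if (rename(tempName, fileName) != 0) { /* step 3: install the new file */
        fprintf(stderr, "could not install %s\n", tempName);
        rename(backupName, fileName); /* try to put the original back */
        return -1;
    }
    remove(backupName); /* step 4: best effort, no error report */
    return 0;
}
A call like replace_file(tempName, fileName, "EXAMPLE.TXT.bak") would then take the place of the remove/rename pair in the loop.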
Sometimes antivirus software can cause such a problem by scanning a file at an inconvenient moment.
If the remove fails, try sleeping for a short time and then retrying.
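Such a retry loop might be sketched like this; the 100 ms delay and the number of attempts are arbitrary choices:
#include <stdio.h>
#ifdef _WIN32
#include <windows.h> /* Sleep */
#else
#include <unistd.h> /* usleep */
#endif
/* Sketch: retry remove() a few times to ride out a transient lock,
 * e.g. an antivirus scanner briefly holding the file open. */
static int remove_with_retry(const char *path, int attempts){
    for (int i = 0; i < attempts; ++i) {
        if (remove(path) == 0)
            return 0; /* removed successfully */
#ifdef _WIN32
        Sleep(100); /* wait 100 ms before the next try */
#else
        usleep(100 * 1000);
#endif
    }
    return -1; /* still failing after all attempts */
}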