Sending a string through a thread function to be opened with fopen() - c

I'm taking a string (char *) from a FIFO named pipe and passing it into a thread. Once I pass the string (char *) to the thread function, I can print it out just fine. However, if I do
FILE *fp;
fp = fopen(string, "wb");
if(fp){
    //it never reaches here
}
The relevant parts of the function are basically as follows:
void *threadFunction(void *stringBuf){
    char *someString;
    someString = (char *) stringBuf;
    printf("%s\n", someString); //prints fine
    FILE *fp;
    fp = fopen(someString, "wb");
    if (fp) {
        //do stuff, but it never reaches here
    }
What am I doing wrong here?

There is nothing obvious in the given code that is specifically thread related that would cause consistent failures.
How is stringBuf allocated in the calling thread? If it is on the stack it could be overwritten by the calling thread between the printf and the fopen, but you'd expect to see that as an intermittent failure.
Allocate the buffer with malloc() to eliminate this as a possibility.
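For example, here is a minimal sketch of passing a heap-allocated copy of the string to the thread (the startWorker() wrapper and the pthread_create() usage are assumptions for illustration, not taken from the original code):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

void *threadFunction(void *stringBuf);   /* as defined in the question */

/* Hypothetical launcher: copy the FIFO string into heap memory so the
 * caller's buffer cannot be overwritten or reused while the thread runs. */
static int startWorker(const char *pathFromFifo)
{
    char *copy = malloc(strlen(pathFromFifo) + 1);
    if (copy == NULL)
        return -1;
    strcpy(copy, pathFromFifo);

    pthread_t tid;
    if (pthread_create(&tid, NULL, threadFunction, copy) != 0) {
        free(copy);
        return -1;
    }
    return pthread_detach(tid);
}

The thread that receives the pointer then owns it and should free() it when it is done.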

Sending exec output from function to main method

I have a function, called from the main method, that executes ls -l on a certain directory. I want it to execute and send the result back to main as a string.
My current flawed code:
char *lsl(){
    char *stringts=malloc(1024);
    chdir("/Users/file/path");
    char * lsargs[] = { "/bin/ls" , "-l", NULL};
    stringts="The result of ls-l in the created directory is:"+ execv(lsargs[0], lsargs);
    return stringts;
}
Currently I am only getting the exec output on the screen. I understand why this is happening (exec gets called before the return point is reached), but I don't know how I could do what I want, or whether it's actually doable.
I was thinking of using pipes and dup2() so the exec'd program doesn't write to stdout, but I don't know if it would be possible to put the output into a string.
As Jonathan Leffler already pointed out in comments, there is no '+' operator for concatenating strings in C.
One possibility for dynamically extending strings is to use realloc together with strcat.
For each chunk of bytes you read from the pipe, you can check the remaining capacity of the originally allocated memory for the string and, if it is not enough, reallocate with twice the size.
You have to keep track of the size of the current string yourself. You could do this with a variable of type size_t.
If you combine this with the popen handling, it could look something like this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    FILE *fp;
    if ((fp = popen("ls -l", "r")) == NULL) {
        perror("popen failed");
        return EXIT_FAILURE;
    }

    size_t str_size = 1024;
    char *stringts = malloc(str_size);
    if (!stringts) {
        perror("stringts allocation failed");
        return EXIT_FAILURE;
    }
    stringts[0] = '\0';

    char buf[128];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf) - 1, fp)) > 0) {
        buf[n] = '\0';
        size_t capacity = str_size - strlen(stringts) - 1;
        while (n > capacity) {
            str_size *= 2;
            stringts = realloc(stringts, str_size);
            if (!stringts) {
                perror("stringts reallocation failed");
                return EXIT_FAILURE;
            }
            capacity = str_size - strlen(stringts) - 1;
        }
        strcat(stringts, buf);
    }

    printf("%s\n", stringts);
    free(stringts);

    if (pclose(fp) != 0) {
        perror("pclose failed");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}
You have several flaws in your code:
char *lsl(){
    char *stringts=malloc(1024);
    chdir("/Users/file/path");
    char * lsargs[] = { "/bin/ls" , "-l", NULL};
    stringts="The result of ls-l in the created directory is:"+ execv(lsargs[0], lsargs);
    return stringts;
}
You malloc(3) a 1024-byte buffer into the stringts pointer, but then you assign a different value to that pointer, so your buffer is lost (leaked) in the immensity of your RAM.
When you call execv(2), the kernel discards the entire memory image of your process and replaces it with an execution of the command ls -l; you get the output on the standard output of the process, and then you get the shell prompt back. This makes the rest of your program useless: once you exec, there is no way back, and your program is unloaded and freed.
You can add (+) an integer to a pointer value, and that is what happens here: the return value of execv is added to the address of the string "The result of the ls -l...". If execv succeeds it never returns, so you get nothing; if it fails it returns -1, and you get a pointer to the character just before that string literal, which is a valid expression in C but makes your program behave erratically (undefined behaviour). Use strcpy(3), strcat(3), or snprintf(3), depending on the exact text you want to copy into the buffer you allocated (a small snprintf sketch follows below).
You return an invalid address as a result. The problem here is that, if execv(2) works, it doesn't return; only if it fails do you get back a pointer, and one you cannot use (for the reason above), and of course ls -l has not been executed. You don't say what output you actually got, so it is difficult to guess whether you exec()d the program or not.
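For instance, a minimal sketch of building the message with snprintf(3); the output_text parameter is a hypothetical string holding the captured command output, not something from the original code:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: output_text holds the captured output of the command. */
char *build_message(const char *output_text)
{
    char *stringts = malloc(1024);
    if (stringts != NULL)
        snprintf(stringts, 1024,
                 "The result of ls-l in the created directory is: %s",
                 output_text);
    return stringts;
}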
On the other hand, there is the popen(3) library function, which lets you execute a subprogram and read its output from a file stream. (I also recommend that you do not chdir() gratuitously in your program, as that is a global change to your process environment; IMHO it is better to pass ls(1) the directory you want to list as an argument.)
#include <stdio.h>

FILE *lsl() {
    /* The call creates a FILE * stream that you can use to read the
     * output of the ls command.  It is a bad use of resources to try to
     * read it all into a string and return the string; better to read as
     * much as you can/need and then pclose() the stream. */
    return popen("/bin/ls -l /Users/file/path", "r");
}
and then you can read from it (the output can be very long, and you probably don't have enough buffer space to hold it all in memory if the directory is huge):
FILE *dir = lsl();
if (dir) {
    char buffer[1024];
    while (fgets(buffer, sizeof buffer, dir)) {
        process_line_of_lsl(buffer);
    }
    pclose(dir); /* you have to use pclose(3) with popen(3) */
}
If you don't want to use popen(3), then you cannot use execv(2) alone: you have to fork(2) first to create a new process, set up the redirection yourself, and then exec() in the child process. Read a good introduction to fork()/exec() and how to redirect I/O between fork() and exec(), as it is too long and detailed to reproduce here (again).
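For illustration only, here is a minimal sketch of that fork()/pipe()/dup2()/execv() approach (error handling kept short; the command and path are the ones from the question, the rest is assumed):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                        /* child: send stdout into the pipe */
        close(pipefd[0]);                  /* child does not read */
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[1]);
        char *lsargs[] = { "/bin/ls", "-l", "/Users/file/path", NULL };
        execv(lsargs[0], lsargs);
        perror("execv");                   /* only reached if execv fails */
        _exit(EXIT_FAILURE);
    }

    /* parent: read the child's output from the pipe */
    close(pipefd[1]);
    char buf[1024];
    ssize_t n;
    while ((n = read(pipefd[0], buf, sizeof buf)) > 0) {
        fwrite(buf, 1, (size_t)n, stdout); /* or append to a growing string */
    }
    close(pipefd[0]);
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}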

Segmentation fault SIGSEGV error on BeagleBone

I still have a problem with a segmentation fault in the C code. When I call the function current_live_read(ainpath); for the 8th time, I get the error: No source available for "_int_malloc() at 0x25be2"
The main function looks like this:
void current_read(void)
{
    system(AINinit);
    char *ainpath;
    ainpath = init_current();
    int *current;
    float avgcurr = 0;
    float allcurr = 0;
    int i = 0;
    while(1)
    {
        //sleep(1);
        i++;
        current = current_live_read(ainpath);
        allcurr = allcurr + *current;
        avgcurr = allcurr / i;
        printf("\n Current: %d AVG: %f", *current, avgcurr);
        //free(current);
    }
}
}
The current_live_read(ainpath) function looks like this:
int *current_live_read(char *ainpath)
{
    //ainpath=init_current();
    int curr;
    FILE *file = fopen(ainpath, "r");
    //free(ainpath);
    if(!file)
    {
        printf("Error opening file: %s\n", strerror(errno));
    }
    else
    {
        fscanf(file, "%4d", curr);
        fclose(file);
        //*current=curr;
    }
    free(file);
    return curr;
}
I know that something could be wrong with the pointers, but I don't know which one and what I can do about it.
You must not free() the FILE * pointer after closing it. From the manpage:
Flushes a stream, and then closes the file associated with that stream. Afterwards, the function releases any buffers associated with the stream. To flush means that unwritten buffered data is written to the file, and unread buffered data is discarded.
So fclose() already does the cleaning up as needed to prevent a memory leak. If you call free() on that pointer, you are likely to corrupt your heap. So just remove the free(file);
Furthermore, you have to pass a pointer to fscanf() like this:
fscanf(file, "%4d", &curr);
Otherwise you write to a (pseudo)random memory address. It is usually a good idea to check the return value of fscanf() to see if the conversion succeeded and handle the error case appropriately.
This should eliminate the problem.
So I changed int *current_live_read(char *ainpath); to int current_live_read(char *ainpath), i.e. without the pointer return type.
Inside the function:
int curr;
fscanf(file, "%x", &curr);
And in the main function, current is now just an integer:
int current;
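Putting the pieces together, a corrected version might look something like this (a sketch only; it returns a plain int as in the follow-up above and keeps the %4d conversion from the original code):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int current_live_read(char *ainpath)
{
    int curr = 0;                                /* default value if the read fails */
    FILE *file = fopen(ainpath, "r");

    if (!file) {
        printf("Error opening file: %s\n", strerror(errno));
    } else {
        if (fscanf(file, "%4d", &curr) != 1) {   /* pass a pointer and check the result */
            printf("Error reading a value from %s\n", ainpath);
        }
        fclose(file);                            /* fclose() releases the FILE, no free() */
    }
    return curr;
}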

Should I recycle pointer and references or create new ones?

I have a question about dealing with pointers in C.
During my program's execution I must open and close a configuration file, but since a different program updates its contents, I must do this several times while the program runs. So I made a little function which reads the file and closes it, and I call it every time I need it.
int readConfigFile()
{
    FILE* pFile = fopen("path to file", "r");
    fseek(pFile, 0, SEEK_END);
    long lSize = ftell(pFile);
    rewind(pFile);
    char* text = (char*)malloc(sizeof(char) * (lSize + 1));
    if (text == NULL) {
        fputs("Memory error", stderr);
        return (0);
    }
    if (fread(text, 1, lSize, pFile) <= 0) {
        printf("Nothing read");
        return 0;
    }
    text[lSize] = '\0';
    //Do the actual reading
}
If I am not mistaken, this function creates a new FILE* every time it runs. I was wondering whether using a FILE* with a larger scope and recycling it every time the function runs would be better. Meaning:
FILE* globalVariable;

int readConfigFile(FILE* pFile)
{
    pFile = fopen("path to file", "r");
    //Do some stuff here
}
I am worried that, since there is no garbage collector in C++ as far as I know, the first function will create too many FILE* pointers if it runs for long enough and overflow my memory. On the other hand, having a global pointer that is opened and closed in the context of a single function seems rather wasteful and not very modular, at least from my point of view.
So should I recycle my pointer, or any reference for that matter, or is it okay to declare new ones as needed?
Thanks in advance.
You could use freopen().
http://en.cppreference.com/w/cpp/io/c/freopen
There is no guarantee that it will actually reuse dynamic memory allocated for the "old" file. This function may as well be a wrapper for fclose() and fopen() - it's all implementation defined.
From what I see, newlib (a popular libc for embedded targets) reuses the FILE storage, so there's a small gain here. Whether or not it makes a noticeable difference is another matter.
https://sourceware.org/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=newlib/libc/stdio/freopen.c;h=fb1f6c4db3f294f672d7e6998e8c69215b7482c1;hb=HEAD
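For illustration, here is a minimal sketch of re-reading a configuration file through freopen() (the file name and the re-read loop are assumptions, not taken from the question):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Open the configuration file once. */
    FILE *cfg = fopen("config.txt", "r");
    if (cfg == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    for (int pass = 0; pass < 3; pass++) {
        char line[256];
        while (fgets(line, sizeof line, cfg)) {
            /* parse the configuration line here */
        }

        /* Re-open the same stream object from the beginning; the C library
         * may reuse the FILE storage, but that is implementation defined,
         * as noted above. */
        cfg = freopen("config.txt", "r", cfg);
        if (cfg == NULL) {
            perror("freopen");
            return EXIT_FAILURE;
        }
    }

    fclose(cfg);
    return EXIT_SUCCESS;
}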

fopen() causing segfault before it's called

I am having a very weird error. I would try to run valgrind, but I am on OS X Yosemite, so this is not possible. I am getting a segfault with an fopen, seemingly before the fopen is ever even called. I have a function called format:
void format(uint16_t sector_size, uint16_t cluster_size, uint16_t disk_size)
{
    FILE *fp;
    fp = fopen(diskName, "wb");
    if(fp != NULL)
    {
        printf("Disk successfully initialized at: %s", diskName);
    }
    else
    {
        printf("There was an error creating the disk.");
        return;
    }
    for(int i = 0; i < disk_size; i++)
    {
        fwrite(0, sizeof(sector_size), cluster_size, fp);
    }
}
diskName is declared globally at the top of the file:
char diskName[32];
Here is my main:
int main(int argc, char *argv[]) {
    strcpy(diskName, "test.bin");
    printf("%s", diskName);
    format(128, 8, 1000);
}
The weird part is that this code segfaults before it ever prints the disk name:
Run Command: line 1: 16016 Segmentation fault: 11
I have no idea how this is possible, and I've tried a wide array of solutions, but it all boils down to an error with fopen. When fopen is commented out, the code runs. Any idea why this would happen?
printf will buffer its output until the stream is flushed. This can be done either by printing a newline or by calling fflush(stdout). That is why you never see the disk name: the output is still sitting in the buffer when the crash happens, not because the crash occurs before the printf.
In any case, your error is here:
fwrite(0, sizeof(sector_size), cluster_size, fp);
You may not see your program crash when you comment out the fopen call because the fwrite call will fail earlier. fwrite's signature expects a pointer to the data to write as the first argument, where you have provided zero. This will cause fwrite to attempt to dereference a NULL pointer and thus crash.
You can either allocate a buffer, set it all to zero, then write that to the file using fwrite, e.g.
char* buf = calloc(cluster_size, sector_size); // Remember, calloc initialises all elements to zero!
fwrite(buf, sector_size, cluster_size, fp);
Or just call fputc in a loop
for(int i = 0; i < sector_size * cluster_size; i++)
    fputc(0, fp);
Also, sizeof(sector_size) will always return 2 in your example, as you're taking the size of the type. Are you sure this is correct?
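For reference, a corrected format() could look something like this (a sketch combining the calloc()/fwrite() suggestion above; the global diskName and the headers of the original file are assumed, and whether sector_size rather than sizeof(sector_size) is intended is the author's call):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

extern char diskName[32];   /* declared globally in the original file */

void format(uint16_t sector_size, uint16_t cluster_size, uint16_t disk_size)
{
    FILE *fp = fopen(diskName, "wb");
    if (fp == NULL) {
        printf("There was an error creating the disk.\n");
        return;
    }
    printf("Disk successfully initialized at: %s\n", diskName);

    /* One zero-filled cluster, reused for every write. */
    char *buf = calloc(cluster_size, sector_size);
    if (buf == NULL) {
        fclose(fp);
        return;
    }
    for (int i = 0; i < disk_size; i++) {
        fwrite(buf, sector_size, cluster_size, fp);
    }

    free(buf);
    fclose(fp);
}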

problem using fprintf

I'm trying to print numerous variables to a text file, yet it doesn't work.
I checked and verified that I wrote it with the correct syntax.
I also checked the return value and it's positive, so I know it did write to the file; however, when I open the file it's empty.
I would be happy for some help.
This is the code:
I initialize DynsaleDayPtr in main:
FILE* DynsaleDayPtr = CreateTextFiles("sale_day.txt");
Create function:
FILE* CreateTextFiles (char* fileName)
{
    FILE* saleFilePtr = NULL;
    if((saleFilePtr = fopen(fileName, "a+")) == NULL)
        printf("File couldn't be opened\n");
    return saleFilePtr;
}
The call to the function TextAddSale is done from a function that is called in the main:
TextAddSale(DynSaleDayPtr,dynNumOfRecords);
Bool TextAddSale (FILE* DynsaleDayPtr, int* dynNumOfRecords)
{
    char id[6];
    char name[50];
    char priceChar[20];
    char* tmp = NULL;
    int price = -1;
    DynamicRecord *newRec = NULL;
    scanf("%s%s%s", id, name, priceChar);
    newRec = (DynamicRecord *)malloc(sizeof(DynamicRecord));
    if (newRec == NULL)
        return False;
    tmp = (char*)malloc(strlen(name) + 1);
    if (tmp == NULL)
    {
        free(newRec);
        return False;
    }
    strcpy(tmp, name);
    newRec->productName = tmp;
    strcpy(newRec->productId, id);
    newRec->productPrice = atoi(priceChar);
    if (fprintf(DynsaleDayPtr, "%d %s %s %d", strlen(newRec->productName),
                newRec->productId, newRec->productName, newRec->productPrice) > 0)
    {
        *dynNumOfRecords = (*dynNumOfRecords) + 1;
        return True;
    }
}
thanks!
You need to flush the stream.
fflush(FILE*);
Of course, you have to close the stream when you are done with it.
fclose(FILE*);
Agree with #pmg - try something like this:
FILE *pFile = fopen("foo.txt", "w");
if (pFile == NULL)
    bad();
fprintf(pFile, "Hello world\n");
fclose(pFile);
Make that work first, then fix whatever's wrong in the big app.
A thought:
scanf("%s%s%s", id, name, priceChar);
The above statement is a bit dodgy, since you haven't said how many bytes should go in each string. It is better to use fgets() and then parse the string to retrieve the individual values, or to use a better format specifier (see the sketch below). If the above statement causes a memory overwrite, the rest of your program could malfunction, causing things like what you describe.
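For example, a minimal sketch of the safer input handling (the helper name is hypothetical; buffer sizes and field widths match the declarations in the question):

#include <stdio.h>

/* Hypothetical helper: read one sale line safely.  The width specifiers
 * stop sscanf from overflowing the buffers: at most 5, 49 and 19
 * characters plus the terminating '\0'. */
int readSaleLine(char id[6], char name[50], char priceChar[20])
{
    char line[128];

    if (fgets(line, sizeof line, stdin) == NULL)
        return 0;                               /* no input */

    if (sscanf(line, "%5s %49s %19s", id, name, priceChar) != 3)
        return 0;                               /* malformed input */

    return 1;
}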
fprintf() most likely uses buffered output, so you should fflush() the DynsaleDayPtr stream. Printing a newline after each record is also a good idea, as it makes the file contents actually readable, but note that for a stream attached to a regular file a newline does not by itself force a flush.
Also, don't forget to fclose() the stream when you're finished writing. Closing flushes it as well, which renders a separate fflush() unnecessary.
