Writing to a global file with threads in C

I'm having an issue writing to a file that I've declared globally, opened successfully in main, and am writing to from a function used by multiple threads (on Linux).
#includes
FILE *f;

main(){
    // Create threads successfully
    f = fopen("fileName.txt", "w");
    // Make sure the file was able to be created
    if(f = NULL){
        printf("Unable to create file");
        exit(1);
    }
    // This much works, the check indicates the file was created
    // successfully when I run it
    while(1){
        // loops for a while, getting input from user to direct threads
        // When end is determined, waits for all the threads to finish,
        // clears allocated memory, and closes file then returns
        fclose(f);
        return;
    }
}

void *threadProcess(){
    // Do stuff
    // This printf works fine using the values I give the function, as is here
    // The values are determined in 'Do stuff'
    printf("%d trying to write \"%d BAL %d TIME %d.%06d %d.%06d\" to the file\n", cid, tmp->reqNum, balance, tmp->seconds, tmp->useconds, endTime.tv_sec, endTime.tv_usec);
    fflush(stdout);
    // There appears to be a Segmentation fault here
    fprintf(f, "%d BAL %d TIME %d.%06d %d.%06d\n", tmp->reqNum, balance, tmp->seconds, tmp->useconds, endTime.tv_sec, endTime.tv_usec);
    // Never gets here
}
What am I doing wrong here? As I said, the printf statement right before the fprintf statement works and outputs the correct values.
Am I wrong to assume that ensures I don't have any pointer issues for fprintf?
Thanks

It was in my if(reqLog = NULL) check... I was assigning, not comparing. Sorry to have wasted your time haha. – tompon
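For completeness, here is the corrected check from the snippet above. With if(f = NULL), the assignment sets f to NULL and evaluates to false, so the error branch never runs and every later fprintf(f, ...) dereferences a null FILE*. The comparison form:

    f = fopen("fileName.txt", "w");
    if (f == NULL) {            /* compare, don't assign */
        printf("Unable to create file");
        exit(1);
    }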

Related

Pipe's write overwrites an allocated space of memory

My program is pretty big, so I'll highlight the main problem and add some details about it.
First part of my code:
int myPipe[2]; //A global variable, so I don't have to pass it to future functions

int main(int argc, char *args[])
{
    mode_t pUmask = umask(0000);   //Irrelevant to my problem
    errno = 0;                     //Irrelevant to my problem
    char pattern[2500] = "Test1";  //Irrelevant to my problem
    int p = 0;                     //DEFAULT NUMBER OF PROCESSES
    int deep = 0;                  //Irrelevant to my problem
    int n = 1;                     //Irrelevant to my problem

    if(pipe(myPipe))
    {
        perror("Pipe Error: ");
        exit(-1);
    }
    if( (write(myPipe[1], &p, (sizeof(int)*3))) == -1) //First write works
    {
        perror("write: ");
        exit(-1);
    }
    //Then a bunch of code related to file reading
}
Second part:
{
    //in another function
    //The part where I create fileName
    char* fileName = calloc(strlen(fileData->d_name)+4, sizeof(char));
    strcpy(fileName, fileData->d_name);
}
Third part:
//in yet another function
if(S_ISREG(data.st_mode))
{
    printf("\tfileName: %s\n", fileName);  //Regular print of the right fileName
    printf("\t\tOh boy! It's a regular.\n");
    printf("\tfileName: %s\n", fileName);  //Regular print of the right fileName
    if((read(myPipe[0], &p, (sizeof(int)*3))) == -1)  //First time I read
    {
        perror("\t\t read: ");
        exit(-1);
    }
    printf("fileName: %s", fileName);  //SEGMENTATION FAULT
There is a bunch of code in between, but it doesn't affect fileName at all (in fact, up until the read, fileName was printed flawlessly), and after the read a SEGMENTATION FAULT happens.
At one point, by moving the printfs around, I was able to print fileName AFTER the read: it was basically the fileName value ("File1") followed by the p integer value (0), giving the new corrupted fileName ("File10").
So what's happening? I reserved the space for fileName, I passed the fileName pointer to the following functions up to that read, and supposedly the fd should have its own address space as well. HELP.
P.S. If you need more info, I'm willing to give it to you, even the full code, but it's REALLY complicated, and I think I gave you enough proof that fileName doesn't get corrupted at all until the read part. THANK YOU.
P.P.S.
I never close either end of myPipe, since I have to use them multiple times; I wanted to close them at the end of the program.
The statements that write and read the pipe are causing undefined behavior. p is declared:
int p;
But when you write and read it through the pipe, you use sizeof(int)*3, so you're accessing outside the object.
Change those statements to use just sizeof p.
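A minimal sketch of the corrected calls, assuming p stays a single int on both ends:

    int p = 0;

    /* Write exactly one int's worth of bytes... */
    if (write(myPipe[1], &p, sizeof p) == -1) {
        perror("write");
        exit(-1);
    }

    /* ...and later read exactly the same amount back. */
    if (read(myPipe[0], &p, sizeof p) == -1) {
        perror("read");
        exit(-1);
    }

Reading sizeof(int)*3 bytes into &p wrote past the end of p, which is presumably how unrelated memory such as the fileName buffer ended up being overwritten.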

Multithreaded Chat Program in C using Pipes

For a class assignment I am required to develop a single program that, when opened in two separate windows, allows the user to type in one window and have the entered text appear in the other, while the other window can also type and have its text appear in the first window.
This was first implemented as two separate programs: one read input from stdin, wrote it to a pipe, and called fflush on the pipe, then got data from the pipe and wrote it to stdout before calling fflush on stdout; the other did basically the exact opposite.
I'm here because I'm struggling to make the single program version work and I'm not sure if I understand threading correctly.
Here's my main function:
int main()
{
    pthread_t threadID[2];
    pthread_mutex_init(&globalLock, NULL);
    pthread_create(&threadID[0], NULL, InputToPipe, NULL);
    pthread_create(&threadID[1], NULL, PipeToOutput, NULL);
    pthread_join(threadID[0], NULL);
    pthread_join(threadID[1], NULL);
    pthread_mutex_destroy(&globalLock);
    return 0;
}
As I understand it, this will initialize a global lock (which I'm not sure if I need or not) and then create two threads. The first thread will call InputToPipe and the second will call PipeToOutput. Both should enter their respective functions nearly simultaneously. Once in InputToPipe (which looks like this)
void *InputToPipe()
{
    pthread_mutex_lock(&globalLock);
    char buffer[100];
    FILE *output = fopen("my.pipe2", "w");
    if (output == NULL)
    {
        printf("Error opening pipe\n");
        return NULL;
    }
    while (fgets(buffer, sizeof(buffer), stdin))
    {
        fputs(buffer, output);
        fflush(output);
    }
    pthread_mutex_unlock(&globalLock);
    return NULL;
}
A lock is set, which was intended to keep the second instance of the program from accessing the first function. I thought that this would cause the second instance of the program to only run the PipeToOutput function (shown below),
void *PipeToOutput()
{
    char buffer[100];
    FILE *input = fopen("my.pipe", "r");
    if (input == NULL)
    {
        printf("Error opening pipe\n");
        return NULL;
    }
    while (fgets(buffer, sizeof(buffer), input))
    {
        fputs(buffer, stdout);
        fflush(stdout);
    }
    return NULL;
}
but instead I think it would block the second program from doing anything, since the first thread is joined first, and that cannot complete until the lock is released by the first program, which will not happen before the program is terminated. Needless to say, I am confused and pretty sure that most of my logic is off, but I was unable to find examples or explanations about using two threads to run two different functions in two different console windows (unless I completely misunderstood the assignment somehow and it's just one function running twice, but I don't think that's the case). I would appreciate either some help fixing this program so I can use it as an example to understand what the threads are doing, or just an explanation of how I would implement the threads and why. I know it's probably stupidly simple, but I am having trouble with the concept. Thanks in advance.
If the idea is that my.pipe is used to transfer messages from the first program to the second, and my.pipe2 is used to transfer messages in the opposite direction, then it appears that both programs should be identical except swapping my.pipe and my.pipe2.
It sounds like they want each program to have two threads, one of which is responsible for reading from the first pipe and writing to stdout, and the other responsible for reading from stdin and writing to the second pipe.
That being the case, your existing functions look correct with the exception that you don't need the lock at all - all it's doing is stopping both threads from running at once, and your threads aren't sharing any state at all. Your second copy of the program would be the same except swapping my.pipe and my.pipe2 around.
Note that your InputToPipe and PipeToOutput functions contain an identical loop that differs only in the FILE * variables used, so you could separate that out into its own function:
void FileToFile(FILE *output, FILE *input)
{
    char buffer[100];
    while (fgets(buffer, sizeof(buffer), input))
    {
        fputs(buffer, output);
        fflush(output);
    }
}

void *InputToPipe()
{
    FILE *output = fopen("my.pipe2", "w");
    if (output == NULL)
    {
        printf("Error opening pipe\n");
        return NULL;
    }
    FileToFile(output, stdin);
    fclose(output);
    return NULL;
}

void *PipeToOutput()
{
    FILE *input = fopen("my.pipe", "r");
    if (input == NULL)
    {
        printf("Error opening pipe\n");
        return NULL;
    }
    FileToFile(stdout, input);
    fclose(input);
    return NULL;
}
In fact you could go further than this: if you opened the pipes in the main() process before starting the threads, you could just pass the pair of FILE *input and FILE *output variables to the thread functions, which would let you use the same thread function (with different parameters) for each thread.
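Building on that, here is a rough sketch of a single parameterized thread function. It keeps the fopen calls inside the threads (opening a FIFO blocks until the other end is opened, so doing both opens one after the other in main can stall) and passes everything else through a small struct; the struct and its field names are illustrative, not part of the original assignment.

#include <pthread.h>
#include <stdio.h>

/* Illustrative parameter block: which named pipe to open, in which mode,
   and which already-open stream to pair it with. */
typedef struct {
    const char *pipe_name;
    const char *mode;   /* "r" or "w" */
    FILE *other;        /* stdin or stdout */
} PumpArgs;

static void *Pump(void *arg)
{
    PumpArgs *a = arg;
    FILE *pipe_stream = fopen(a->pipe_name, a->mode);
    if (pipe_stream == NULL)
    {
        printf("Error opening pipe %s\n", a->pipe_name);
        return NULL;
    }
    FILE *in  = (a->mode[0] == 'r') ? pipe_stream : a->other;
    FILE *out = (a->mode[0] == 'r') ? a->other    : pipe_stream;
    char buffer[100];
    while (fgets(buffer, sizeof(buffer), in))
    {
        fputs(buffer, out);
        fflush(out);
    }
    fclose(pipe_stream);
    return NULL;
}

int main()
{
    /* Assumes the FIFOs my.pipe and my.pipe2 already exist (e.g. from mkfifo).
       The second copy of the program uses the same code with the names swapped. */
    PumpArgs to_pipe   = { "my.pipe2", "w", stdin  };
    PumpArgs from_pipe = { "my.pipe",  "r", stdout };
    pthread_t threadID[2];
    pthread_create(&threadID[0], NULL, Pump, &to_pipe);
    pthread_create(&threadID[1], NULL, Pump, &from_pipe);
    pthread_join(threadID[0], NULL);
    pthread_join(threadID[1], NULL);
    return 0;
}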

C remove() function not working

I'm trying to implement a program I wrote on my Mac on Windows systems, and I'm running into a lot of trouble with this last function:
/* clearFiles */
// deletes the section files after the program has executed
// skips any sections listed in the table that don't appear in the body (b/c no file was written for these)
// also deletes the table & body files
void clearFiles(section_t *tableArray, int *skipArray, char *tableName, char *bodyName)
{
    int i, status, index;
    char str[SECTNAME];
    char command[SECTNAME];
    index = 0;
    // clear section files
    for(i=1; tableArray[i].count!=0; i++)
    {
        if(i!=skipArray[index])
        {
            strcpy(str, tableArray[i].shortName);
            strcat(str, ".txt");
            status = remove(str);
            if(status!=0)
                printf("Warning! File %s not deleted.\n", str);
        } else index++;
    }
    // clear table file
    status = remove(tableName);
    if(status!=0)
        printf("Warning! File %s not deleted.\n", tableName);
    // clear body file
    status = remove(bodyName);
    if(status!=0)
        printf("Warning! File %s not deleted.\n", bodyName);
}
The program takes a very large file and first splits it into a table of contents file and a body file. It then takes the body and splits it into hundreds of individual section files, from which it performs the actual task of the program. At the end, I want it to delete all of these extra files, because they just clutter up the directory.
It works perfectly on my Unix Mac environment, but when I try to run it in Command Prompt on a PC, my remove() call returns -1 for every section file and doesn't delete it (it does, however, successfully delete the table & body files). I also tried a more brute force method by using system(del fileName), but this didn't work either, because it says that the file is being used by another process.
I can't figure out why these files might be open, as every time fopen() appears, I follow it up with an fclose(). The exception is when checking if the files are open, I use
if(fopen(fileName,"r")!=NULL){}
Could this be the problem? Is there either a way to check if a file is open without actually opening it, or is there a way to close a file that was checked this way? I tried assigning a dummy pointer to it and coding:
dummy = fopen(fileName, "r");
if(dummy!=NULL){}
fclose(dummy);
But this didn't work either. Is it possible to pass just a file path to the fclose() function (for example, something like fclose("C:\users\USER\desktop\fileName.txt"))? Also, I know that the program is attempting to delete the correct fileName, because my error message prints the correct names to the command prompt.
Any input is greatly appreciated!!! Thanks.
NOTE:
The tableArray starts at 1 because of a search function implemented in the program that returns an index if found and 0 if not found. In hindsight, it would have been better to return -1 if not found and start the index at zero, but that is a separate issue
UPDATE:
Below is the code used to create the section files:
if(fopen(word, "r")==NULL){
    ofile = fopen(word, "w");
    fprintf(ofile, "SECTION %s ", section);
    //go until end of file or until found the next section
    // bug fix: check the section after that, too (in case the next section isn't there)
    while(fscanf(spec, "%s", word)!=EOF && !cease)
    {
        if(strcmp(word,"SECTION")!=0){
            fprintf(ofile, "%s ", word);
        }
        else{
            fscanf(spec, "%s", word);
            choice = testNumber(spec,word);
            for(j=i+1; tableArray[j].count!=0; j++)
                if(strcmp(word,tableArray[j].shortName)==0)
                    cease = 1;
        }
    }
    fclose(ofile);
}
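For what it's worth, the existence check at the top of that block is a likely culprit on Windows: when fopen(word, "r") succeeds, the returned FILE* is never closed, so any section file that already exists stays open for the rest of the run, and Windows won't delete an open file. A sketch of a probe that always closes the handle it opens (and never calls fclose on a NULL pointer, which the dummy-pointer attempt above would do when the file doesn't exist):

    FILE *probe = fopen(word, "r");
    if (probe == NULL) {
        /* file does not exist yet: create and fill the section file as before */
        ofile = fopen(word, "w");
        /* ... write the section, then fclose(ofile); ... */
    } else {
        /* file already exists: close the probe so the file isn't left open */
        fclose(probe);
    }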

How to make several threads read several files without interference?

I am studying mutexes and I am stuck in an exercise. For each file in a given directory, I have to create a thread to read it and display its contents (no problem if order is not correct).
So far, the threads are running this function:
void * reader_thread (void * arg)
{
    char * file_path = (char*)arg;
    FILE * f;
    char temp[20];
    int value;

    f = fopen(file_path, "r");
    printf("Opened %s.\n", file_path);
    while (fscanf(f, "%s", temp)!=EOF)
        if (!get_number (temp, &value))  /*Gets int value from given string (if numeric)*/
            printf("Thread %lu -> %s: %d\n", pthread_self(), file_path, value);
    fclose(f);
    pthread_exit(NULL);
}
It is called by a function that receives a DIR pointer previously created by opendir().
(I have omitted some error checking here to make it cleaner, but I get no error at all.)
int readfiles (DIR * dir, char * path)
{
    struct dirent * temp = NULL;
    char * file_path;
    pthread_t thList [MAX_THREADS];
    int nThreads=0, i;

    memset(thList, 0, sizeof(pthread_t)*MAX_THREADS);
    file_path = malloc((257+strlen(path))*sizeof(char));

    while((temp = readdir (dir))!=NULL && nThreads<MAX_THREADS) /*Reads files from dir*/
    {
        if (temp->d_name[0] != '.') /*Ignores the ones beginning with '.'*/
        {
            get_file_path(path, temp->d_name, file_path); /*Computes the path (overwritten every iteration)*/
            printf("Got %s.\n", file_path);
            pthread_create(&thList[nThreads], NULL, reader_thread, (void *)file_path);
            nThreads++;
        }
    }
    printf("readdir: %s\n", strerror(errno)); /*Just in case*/

    for (i=0; i<nThreads; i++)
        pthread_join(thList[i], NULL);

    if (file_path)
        free(file_path);
    return 0;
}
My problem here is that, although paths are computed perfectly, the threads don't seem to receive the correct argument. They all read the same file. This is the output I get:
Got test/testB.
Got test/testA.
readdir: Success
Opened test/testA.
Thread 139976911939328 -> test/testA: 3536
Thread 139976911939328 -> test/testA: 37
Thread 139976911939328 -> test/testA: -38
Thread 139976911939328 -> test/testA: -985
Opened test/testA.
Thread 139976903546624 -> test/testA: 3536
Thread 139976903546624 -> test/testA: 37
Thread 139976903546624 -> test/testA: -38
Thread 139976903546624 -> test/testA: -985
If I join each thread before creating the next one, it works OK. So I assume there is a critical section somewhere, but I don't really know how to find it. I have tried mutexing the whole thread function:
void * reader_thread (void * arg)
{
    pthread_mutex_lock(&mutex_file);
    /*...*/
    pthread_mutex_unlock(&mutex_file);
}
I have also tried mutexing the while loop in the second function, and even both at the same time, but it doesn't work either way. By the way, mutex_file is a global variable, which is init'd by pthread_mutex_init() in main().
I would really appreciate a piece of advice on this, as I don't really know what I'm doing wrong. I would also appreciate a good reference or book, as mutexes and System V semaphores still feel a bit difficult to me.
Thank you very much.
Well, you are passing exactly the same pointer as the file path to both threads. As a result, they read the file name from the same string and end up reading the same file. Actually, you get a little bit lucky here, because in reality you have a race condition: you update the contents of the string pointed to by file_path while firing up threads that read from that pointer, so you may end up with a thread reading that memory while it is being changed. What you have to do is allocate an argument for each thread separately (i.e. call malloc and the related logic in your while loop), and then free those arguments once each thread has exited.
Looks like you're using the same file_path buffer for all threads, just loading it over and over again with the next name. You need to allocate a new string for each thread, and have each thread free the string after using it.
edit
Since you already have an array of threads, you could just make a parallel array of char[], each holding the filename for the corresponding thread. This would avoid malloc/free.
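A sketch of the malloc-per-thread version both answers describe: each loop iteration allocates its own copy of the path, and the thread frees that copy when it is done (names as in the question's code).

    /* In readfiles(): allocate a fresh path for every thread. */
    while((temp = readdir(dir))!=NULL && nThreads<MAX_THREADS)
    {
        if (temp->d_name[0] != '.')
        {
            char *thread_path = malloc(257+strlen(path));
            get_file_path(path, temp->d_name, thread_path);
            pthread_create(&thList[nThreads], NULL, reader_thread, (void *)thread_path);
            nThreads++;
        }
    }

    /* In reader_thread(): free the private copy before exiting. */
    void * reader_thread (void * arg)
    {
        char * file_path = (char*)arg;
        /* ... open, read and print the file as before ... */
        free(file_path);
        pthread_exit(NULL);
    }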

Problem with fprintf

I am running a simulation in C, and need to store 3 100x100 matrices ~1000 times. My program runs just fine when I'm not writing the data to file, but when I run it and write the data, I get a segmentation fault after 250 time steps or so, and I don't understand why.
My save function looks like this
void saveData(Simulation* sim, int number) {
    sprintf(pathname_vx, "data/xvel%d.dat", number);
    sprintf(pathname_vy, "data/yvel%d.dat", number);
    sprintf(pathname_rho, "data/rho%d.dat", number);

    FILE* vx_File = fopen(pathname_vx, "w");
    FILE* vy_File = fopen(pathname_vy, "w");
    FILE* rho_File = fopen(pathname_rho, "w");

    int iX, iY;
    double ux, uy, rho;
    for (iY=0; iY<sim->ly; ++iY) {
        for (iX=0; iX<sim->lx; ++iX) {
            computeMacros(sim->lattice[iX][iY].fPop, &rho, &ux, &uy);
            fprintf(vx_File, "%f ", ux);
            fprintf(vy_File, "%f ", uy);
            fprintf(rho_File, "%f ", rho);
        }
        fprintf(vx_File, "\n");
        fprintf(vy_File, "\n");
        fprintf(rho_File, "\n");
    }
    fclose(vx_File);
    fclose(vx_File);
    fclose(vy_File);
}
where 'Simulation' is a struct containing a lattice (100x100 matrix) with 3 different variables 'rho', 'ux', 'uy'. The 'number' argument is just a counting variable to name the files correctly.
gdb says the following, but it doesn't help me much.
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000010
0x00007fff87c6ebec in __vfprintf ()
I'm not that experienced in programming, so I guess there are better ways to write data to a file. Any attempt to clarify why my approach doesn't work is highly appreciated.
Thanks
jon
Looks like you're closing vx_File twice, and not closing rho_File at all. This means that you're leaving rho_File open on each call, and thus using up a file descriptor each time through.
I'd guess the program fails because you're running out of file descriptors. (Since this happens around the 250th iteration, I'd guess your limit is 256.) Once you're out of file descriptors, one of the fopen() calls will return NULL. Since you don't check the return value of fopen(), the crash occurs when you attempt to fprintf to a NULL handle.
Looks like a NULL pointer dereference. You need to check the result of fopen() to make sure it succeeded (non-NULL result).
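A minimal sketch that addresses both points: check each fopen result, and close each stream exactly once. The error handling here is illustrative; in practice you would also want to close whichever streams did open before returning on failure.

    FILE* vx_File  = fopen(pathname_vx,  "w");
    FILE* vy_File  = fopen(pathname_vy,  "w");
    FILE* rho_File = fopen(pathname_rho, "w");
    if (vx_File == NULL || vy_File == NULL || rho_File == NULL) {
        fprintf(stderr, "saveData: could not open output files for step %d\n", number);
        return;
    }

    /* ... the two nested loops writing ux, uy and rho, unchanged ... */

    fclose(vx_File);   /* each stream closed exactly once */
    fclose(vy_File);
    fclose(rho_File);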
Maybe you run out of memory when you are creating those thousand 100x100 matrices (or whatever is exactly happening). You could then end up with an incomplete sim->lattice that might contain NULL pointers.
Do you check whether your malloc() calls succeed? If they can't allocate memory, they return NULL.
