Multithreading in C - read several files in the same path

I'm new to multithreading and I'm trying to simulate banking transactions on the same current account using multithreading.
Each thread reads the actions to perform from a file. Each line of a file contains one operation, consisting of an integer. The main program has to create as many threads as there are files in the path.
int main(int argc, char *argv[]) {
    DIR *buff;
    struct dirent *dptr = NULL;
    pthread_t hiloaux[MAX_THREADS];
    int i = 0, j = 0, nthreads = 0;
    char *pathaux;

    memset(hiloaux, 0, sizeof(pthread_t) * MAX_THREADS);
    diraux = malloc((267 + strlen(argv[1])) * sizeof(char));
    buff = opendir(argv[1]);
    while ((dptr = readdir(buff)) != NULL && nthreads < MAX_THREADS) // read files in the path
    {
        if (dptr->d_name[0] != '.') {
            pthread_mutex_lock(&mutex_path); // mutual exclusion
            strcpy(pathaux, argv[1]);
            strcat(pathaux, "/");
            strcat(pathaux, dptr->d_name); // makes the route (ex: path/a.txt)
            pthread_create(&hiloaux[nthreads], NULL, readfile, (void *)pathaux);
            // creates a thread for each file in the path
            nthreads++;
        }
    }
    for (j = 0; j < nthreads; j++) {
        pthread_join(hiloaux[j], NULL);
    }
    closedir(buff);
    return 0;
}
My problem is that the threads don't seem to receive the correct path argument. Even though I have placed a mutex (mutex_path), they all read the same file. I unlock this mutex inside the function readfile().
void *readfile(void *arg) {
    FILE *fichero;
    int x = 0, n = 0;
    int suma = 0;
    int cuenta2 = 0;
    char *file_path = (char *)arg;

    n = rand() % 5 + 1; // random number used to sleep the thread after each line read
    pthread_mutex_unlock(&mutex_path); // unlock the mutex
    fichero = fopen(file_path, "r");
    while (fscanf(fichero, "%d", &x) != EOF) {
        pthread_mutex_lock(&mutex2); // mutual exclusion to protect variables (x, cuenta, cuenta2)
        cuenta2 += x;
        if (cuenta2 < 0) {
            printf("count discarded\n");
        }
        else cuenta = cuenta2;
        pthread_mutex_unlock(&mutex2);
        printf("sum= %d\n", cuenta);
        sleep(n); // each time I read a line, I sleep this thread and let another thread read a line
    }
    fclose(fichero);
    pthread_exit(NULL);
}
When I run the program I get this output:
alberto@ubuntu:~/Escritorio/practica3$ ./ejercicio3 path
read file-> path/fb
read -> 2
sum= 2
read file-> path/fb
read -> 2
sum= 2
read file-> path/fb
read -> 4
sum= 6
read file-> path/fb
read -> 4
sum= 6
read file-> path/fb
read -> 6
sum= 12
read file-> path/fb
read -> 6
sum= 12
It seems to work well: it reads a line and sleeps for a time, and during this time another thread does its work. But the problem is that both threads open the same file (path/fb).
As I said before, I think the problem is in the path argument; it is as if mutex_path did not do its job.
I would really appreciate a little help with this, as I don't really know what's wrong.
Thank you very much.

In your "readfile" function,
the line
char * file_path = (char*)arg;
just copies a pointer to the string memory, not the memory itself.
So the string can (and will) still be altered by the main thread while the worker thread continues.
Make a copy of the memory there.
Or, even better, store the arguments for your threads in distinct memory in the main thread already; then you won't need the first mutex at all. A sketch of that follows.
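For illustration, here is a minimal compilable sketch of that "distinct memory per thread" idea, reusing the names from the question (the body of readfile is reduced to a placeholder, and error checking is kept minimal):

#include <dirent.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_THREADS 64

/* Each thread receives its own heap copy of the path and frees it itself,
   so no mutex is needed for the argument. */
static void *readfile(void *arg)
{
    char *file_path = arg;
    printf("read file-> %s\n", file_path);
    /* ... open and process file_path here ... */
    free(file_path); /* this thread is the sole owner of the copy */
    return NULL;
}

int main(int argc, char *argv[])
{
    DIR *buff;
    struct dirent *dptr;
    pthread_t hiloaux[MAX_THREADS];
    int nthreads = 0, j;

    if (argc < 2 || (buff = opendir(argv[1])) == NULL)
        return 1;
    while ((dptr = readdir(buff)) != NULL && nthreads < MAX_THREADS) {
        if (dptr->d_name[0] == '.')
            continue;
        /* A fresh allocation per thread: nothing is shared or overwritten. */
        char *path = malloc(strlen(argv[1]) + strlen(dptr->d_name) + 2);
        sprintf(path, "%s/%s", argv[1], dptr->d_name);
        pthread_create(&hiloaux[nthreads], NULL, readfile, path);
        nthreads++;
    }
    for (j = 0; j < nthreads; j++)
        pthread_join(hiloaux[j], NULL);
    closedir(buff);
    return 0;
}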

First, I do not see where you allocate memory for pathaux. I am surprised that strcpy and strcat work at all rather than causing a segmentation fault. Try compiling with a C++ compiler and it may complain.
As for the problem: you are passing the same pointer, so every thread points to the same location.
The correct approach would be, inside the readdir loop:
1. Allocate memory and copy the path into it (note: you want to allocate fresh memory on every iteration of the loop).
2. Pass this memory to the new thread.
If you do it this way:
a. you do not have to use mutex_path;
b. just call free at the end of the readfile function.

Related

How to make several threads read several files without interference?

I am studying mutexes and I am stuck on an exercise. For each file in a given directory, I have to create a thread that reads it and displays its contents (no problem if the order is not correct).
So far, the threads are running this function:
void *reader_thread(void *arg)
{
    char *file_path = (char *)arg;
    FILE *f;
    char temp[20];
    int value;

    f = fopen(file_path, "r");
    printf("Opened %s.\n", file_path);
    while (fscanf(f, "%s", temp) != EOF)
        if (!get_number(temp, &value)) /* Gets int value from given string (if numeric) */
            printf("Thread %lu -> %s: %d\n", pthread_self(), file_path, value);
    fclose(f);
    pthread_exit(NULL);
}
It is called by a function that receives a DIR pointer previously created by opendir().
(I have omitted some error checking here to keep it clean, but I get no errors at all.)
int readfiles(DIR *dir, char *path)
{
    struct dirent *temp = NULL;
    char *file_path;
    pthread_t thList[MAX_THREADS];
    int nThreads = 0, i;

    memset(thList, 0, sizeof(pthread_t) * MAX_THREADS);
    file_path = malloc((257 + strlen(path)) * sizeof(char));
    while ((temp = readdir(dir)) != NULL && nThreads < MAX_THREADS) /* Reads files from dir */
    {
        if (temp->d_name[0] != '.') /* Ignores the ones beginning with '.' */
        {
            get_file_path(path, temp->d_name, file_path); /* Computes route (overwritten every iteration) */
            printf("Got %s.\n", file_path);
            pthread_create(&thList[nThreads], NULL, reader_thread, (void *)file_path);
            nThreads++;
        }
    }
    printf("readdir: %s\n", strerror(errno)); /* Just in case */
    for (i = 0; i < nThreads; i++)
        pthread_join(thList[i], NULL);
    if (file_path)
        free(file_path);
    return 0;
}
My problem here is that, although paths are computed perfectly, the threads don't seem to receive the correct argument. They all read the same file. This is the output I get:
Got test/testB.
Got test/testA.
readdir: Success
Opened test/testA.
Thread 139976911939328 -> test/testA: 3536
Thread 139976911939328 -> test/testA: 37
Thread 139976911939328 -> test/testA: -38
Thread 139976911939328 -> test/testA: -985
Opened test/testA.
Thread 139976903546624 -> test/testA: 3536
Thread 139976903546624 -> test/testA: 37
Thread 139976903546624 -> test/testA: -38
Thread 139976903546624 -> test/testA: -985
If I join each thread before the next one is created, it works OK, so I assume there is a critical section somewhere, but I don't really know how to find it. I have tried placing a mutex around the whole thread function:
void *reader_thread(void *arg)
{
    pthread_mutex_lock(&mutex_file);
    /*...*/
    pthread_mutex_unlock(&mutex_file);
}
I have also tried placing a mutex around the while loop in the second function, and even both at the same time, but it doesn't work either way. By the way, mutex_file is a global variable, initialized with pthread_mutex_init() in main().
I would really appreciate some advice on this, as I don't really know what I'm doing wrong. I would also appreciate a good reference or book, as mutexes and System V semaphores are feeling a bit difficult to me.
Thank you very much.
Well, you are passing exactly the same pointer as the file path to both threads. As a result, they read the file name from the same string and end up reading the same file. Actually, you got a little bit lucky here, because in reality you have a race condition: you update the contents of the string pointed to by file_path while firing up threads that read from that pointer, so you may end up with a thread reading that memory while it is being changed. What you have to do is allocate an argument for each thread separately (i.e. call malloc and the related logic inside your while loop), and then free each argument once its thread has exited.
Looks like you're using the same file_path buffer for all threads, just loading it over and over again with the next name. You need to allocate a new string for each thread, and have each thread free its string after using it.
Edit:
Since you already have an array of threads, you could just make a parallel array of char[], each element holding the filename for the corresponding thread. This would avoid the malloc/free altogether, as sketched below.
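A minimal sketch of that parallel-array idea, reusing reader_thread and MAX_THREADS from the question (the 512-byte path buffer is an assumed maximum, and snprintf stands in for the question's get_file_path):

int readfiles(DIR *dir, char *path)
{
    struct dirent *temp = NULL;
    pthread_t thList[MAX_THREADS];
    static char pathList[MAX_THREADS][512]; /* one private buffer per thread slot */
    int nThreads = 0, i;

    while ((temp = readdir(dir)) != NULL && nThreads < MAX_THREADS)
    {
        if (temp->d_name[0] != '.')
        {
            snprintf(pathList[nThreads], sizeof pathList[nThreads],
                     "%s/%s", path, temp->d_name);
            pthread_create(&thList[nThreads], NULL, reader_thread,
                           (void *)pathList[nThreads]); /* no sharing, no race */
            nThreads++;
        }
    }
    for (i = 0; i < nThreads; i++)
        pthread_join(thList[i], NULL);
    return 0;
}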

How to use the same semaphore between program executions?

I have a simple program that uses only one process (each time it's executed), creates a semaphore whose key is derived from the file's name (via the ftok() function), and then writes a line to a file. The thing is, the semaphores (in this case, 2) have to do two things: one has to guarantee that no two programs write to the file at the same time, and the other has to verify that at most 10 lines have been written to the file. So if I execute the program and the file already has 10 lines of text, it won't write anything to it.
This is my code:
#include "semaphores.h"
#include "semaphores.h"

int main() {
    int semaphoreLines = create_semaphore(ftok("Ex5.c", 0), 10);
    int semaphoreWrite = create_semaphore(ftok("Ex5.c", 1), 1);
    FILE *file;
    int ret_val = down(semaphoreLines, 1);

    if (ret_val != 0) {
        printf("No more lines can be written to the file!\n");
        exit(-1);
    }
    down(semaphoreWrite, 1);
    file = fopen("Ex5.txt", "a");
    fprintf(file, "This is process %d\n", getpid());
    fclose(file);
    up(semaphoreWrite, 1);
    return 0;
}
When I execute it the first time, semaphoreLines goes down to 9 (as intended), semaphoreWrite locks at 0 (so no other process can write to the file), then the process writes, brings semaphoreWrite back up to 1, and terminates. I then manually run it again from the terminal. semaphoreLines should still be 9, so when I down() it, it should go to 8, and so forth. The issue is, it gets back up to 10 again. I don't want this.
Maybe it's because I'm fairly new to semaphore programming, but I thought semaphores were public as long as they are not created with key 0. With ftok(), I wanted the semaphore to be public, so that if I run the program again it decrements the value if possible and writes, and otherwise it prints the error message and terminates. I mean, the semaphore doesn't get removed, so the second time the program is executed it should see that the semaphore value is 9, right...?
I don't really want to fork 10 processes and have them write one by one to the file in the same program...or is that the only way to do it?
P.S. The create_semaphore() function is part of my semaphores.h header file, which contains 4 simple functions I wrote so it's easier to use semaphores instead of running all that semget, semop, and semctl stuff every time I want to work with them.
The issue is, it gets back up to 10 again. I don't want this.
If you don't want this, then don't do it. You yourself are setting the semaphore value to 10 in create_semaphore(). Instead, pass IPC_EXCL in addition to IPC_CREAT to semget(), and if that yields errno EEXIST, just return from create_semaphore() and skip the semctl(SETVAL).
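For illustration, a minimal sketch of that create-or-attach logic; the signature of create_semaphore comes from the question's own semaphores.h, so the details here are assumptions:

#include <errno.h>
#include <sys/ipc.h>
#include <sys/sem.h>
#include <sys/types.h>

/* On Linux, the caller must define this union for semctl(). */
union semun { int val; struct semid_ds *buf; unsigned short *array; };

int create_semaphore(key_t key, int initial_value)
{
    /* Try to create the semaphore exclusively. */
    int semid = semget(key, 1, IPC_CREAT | IPC_EXCL | 0666);
    if (semid >= 0) {
        /* We are the creator: set the initial value exactly once. */
        union semun arg;
        arg.val = initial_value;
        if (semctl(semid, 0, SETVAL, arg) == -1)
            return -1;
        return semid;
    }
    if (errno == EEXIST)
        /* It already exists: attach without touching its current value. */
        return semget(key, 1, 0666);
    return -1;
}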

C: multi-process stdio in append mode

I wrote this code in C:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

void random_seed() {
    struct timeval tim;
    gettimeofday(&tim, NULL);
    double t1 = tim.tv_sec + (tim.tv_usec / 1000000.0);
    srand(t1);
}

int main() {
    FILE *f;
    int i;
    int size = 100;
    char *buf = (char *)malloc(size);

    f = fopen("output.txt", "a");
    setvbuf(f, buf, _IOFBF, size);
    random_seed();
    for (i = 0; i < 200; i++) {
        fprintf(f, "[ xx - %d - 012345678901234567890123456789 - %d]\n", rand() % 10, getpid());
        fflush(f);
    }
    fclose(f);
    free(buf);
    return 0;
}
This code opens a file in append mode and appends a string to it 200 times.
I set buf to size 100, which can contain the full string.
Then I created multi processes running this code by using this bash script:
#!/bin/bash
gcc source.c
rm output.txt
for i in `seq 1 100`;
do
./a.out &
done
I expected that the strings in the output would never be mixed up, since I read that when a file is opened with the O_APPEND flag the file offset is set to the end of the file prior to each write, and I'm using a fully buffered stream. But the first line from each process gets mixed up with others, like this:
[ xx - [ xx - 7 - 012345678901234567890123456789 - 22545]
and some lines later
2 - 012345678901234567890123456789 - 22589]
It looks like the write gets interrupted by the call to the rand function.
So... why do these lines appear?
Is the only way to prevent this to use file locks, even though I'm opening in append mode?
Thanks in advance!
You will need to implement some form of concurrency control yourself, POSIX makes no guarantees with respect to concurrent writes from multiple processes. You get some guarantees for pipes, but not for regular files written to from different processes.
Quoting POSIX write():
This volume of POSIX.1-2008 does not specify behavior of concurrent writes to a file from multiple processes. Applications should use some form of concurrency control.
(At the end of the Rationale section.)
You open the file in fully buffered mode. That means that the output first goes into the buffer, and when the buffer overflows it gets flushed to the file regardless of whether it contains incomplete lines. That causes chunks of output from different processes writing to the same file concurrently to be interleaved.
An easy fix would be to open the file in line buffered mode _IOLBF, so that the buffer gets flushed on each complete line. Just make sure that the buffer size is at least as big as your longest line, otherwise it will end up writing incomplete lines. The buffer is normally flushed with a single write() system call, so lines from different processes won't interleave with each other.
There is no guarantee that the write() system call is atomic for different filesystems though, but it normally works as expected, because write() normally locks the file descriptor in the kernel with a mutex before proceeding.
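For illustration, the question's program with just that change applied (keeping the 100-byte buffer, which is longer than the ~50-byte lines; the seeding and the now-unnecessary fflush() are dropped for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    int size = 100; /* must be at least as big as one full line */
    char *buf = malloc(size);
    FILE *f = fopen("output.txt", "a");
    int i;

    /* Line buffered: the buffer is flushed at each complete '\n'-terminated
       line, normally with a single write(), so concurrent writers do not
       interleave mid-line. */
    setvbuf(f, buf, _IOLBF, size);

    for (i = 0; i < 200; i++)
        fprintf(f, "[ xx - %d - 012345678901234567890123456789 - %d]\n",
                rand() % 10, getpid());

    fclose(f);
    free(buf);
    return 0;
}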

Problem with fprintf

I am running a simulation in C, and need to store 3 100x100 matrices ~1000 times. My program runs just fine when I'm not writing the data to file, but when I run it and write the data, I get a segmentation fault after 250 time steps or so, and I don't understand why.
My save function looks like this:
void saveData(Simulation* sim, int number) {
    sprintf(pathname_vx, "data/xvel%d.dat", number);
    sprintf(pathname_vy, "data/yvel%d.dat", number);
    sprintf(pathname_rho, "data/rho%d.dat", number);

    FILE* vx_File = fopen(pathname_vx, "w");
    FILE* vy_File = fopen(pathname_vy, "w");
    FILE* rho_File = fopen(pathname_rho, "w");

    int iX, iY;
    double ux, uy, rho;
    for (iY = 0; iY < sim->ly; ++iY) {
        for (iX = 0; iX < sim->lx; ++iX) {
            computeMacros(sim->lattice[iX][iY].fPop, &rho, &ux, &uy);
            fprintf(vx_File, "%f ", ux);
            fprintf(vy_File, "%f ", uy);
            fprintf(rho_File, "%f ", rho);
        }
        fprintf(vx_File, "\n");
        fprintf(vy_File, "\n");
        fprintf(rho_File, "\n");
    }
    fclose(vx_File);
    fclose(vx_File);
    fclose(vy_File);
}
where 'Simulation' is a struct containing a lattice (a 100x100 matrix) with 3 different variables: 'rho', 'ux', and 'uy'. The 'number' argument is just a counting variable used to name the files correctly.
gdb says the following, but it doesn't help me much.
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000010
0x00007fff87c6ebec in __vfprintf ()
I'm not that experienced in programming, so I guess there are better ways to write data to a file. Any attempt to clarify why my approach doesn't work is highly appreciated.
Thanks
jon
Looks like you're closing vx_File twice and not closing rho_File at all. This means you're leaving rho_File open on every call, and thus using up a file descriptor each time through.
I'd guess the program fails because you're running out of file descriptors. (Since this happens around the 250th iteration, I'd guess your limit is 256.) Once you're out of file descriptors, one of the fopen() calls will return NULL. Since you don't check the return value of fopen(), the crash occurs when you attempt to fprintf to a NULL stream.
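A sketch of both fixes inside saveData() (the error handling style here is illustrative, not from the original answer):

FILE* vx_File = fopen(pathname_vx, "w");
FILE* vy_File = fopen(pathname_vy, "w");
FILE* rho_File = fopen(pathname_rho, "w");
if (vx_File == NULL || vy_File == NULL || rho_File == NULL) {
    perror("fopen"); /* running out of descriptors shows up as EMFILE here */
    exit(EXIT_FAILURE);
}
/* ... write the matrices as before ... */
fclose(vx_File); /* close each stream exactly once */
fclose(vy_File);
fclose(rho_File);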
Looks like a NULL pointer dereference. You need to check the result of fopen() to make sure it succeeded (non-NULL result).
Maybe you run out of memory when you are creating those thousand 100x100 matrices (or whatever exactly is happening). You could then end up with an incomplete sim->lattice that might contain NULL pointers.
Do you check whether your malloc() calls succeed? If they can't allocate memory, they return NULL.

Asynchronous I/O in C using the Windows API: which method should I use, and why does my code execute synchronously?

I have a C application which generates a lot of output and for which speed is critical. The program is basically a loop over a large (8-12 GB) binary input file which must be read sequentially. In each iteration the bytes read are processed and output is generated and written to multiple files, but never to multiple files at the same time. So if you are at the point where output is generated and there are 4 output files, you write to either file 0, 1, 2, or 3. At the end of the iteration I currently write the output using fwrite(), thereby waiting for the write operation to finish. The total number of output operations is large, up to 4 million per file, and output file sizes range from 100 MB to 3.5 GB. The program runs on a basic multicore processor.
I want to write the output in a separate thread, and I know this can be done with:
1. Asynchronous I/O
2. Creating threads
3. I/O completion ports
I have 2 types of questions, namely conceptual and code-specific.
Conceptual Question
What would be the best approach? Note that the application should be portable to Linux; however, I don't see why that would matter much for my choice between 1-3, since I would write a wrapper around anything kernel/API specific. For me the most important criterion is speed. I have read that option 1 is not that likely to increase the performance of the program, and that the kernel in any case creates new threads for the I/O operation, so why not use option 2 immediately, with the advantage that it seems easier to program (also since I did not succeed with option 1; see the code issues below)?
Note that I read https://stackoverflow.com/questions/3689759/how-can-i-run-a-specific-function-of-thread-asynchronously-in-c-c, but I don't see a recommendation there on what to use based on the nature of the application. So I hope somebody could give me some advice on what would be best in my situation. Also, from the book "Windows System Programming" by Johnson M. Hart, I know that the recommendation is to use threads, mainly because of their simplicity. However, will it also be the fastest?
Code Question
This question concerns the attempts I have made so far to get asynchronous I/O working. I understand that it's a big piece of code, so it's not that easy to look into. In any case, I would really appreciate any attempt.
To decrease execution time, I try to write the output by means of a new thread, using the Win32 API via CreateFile() with FILE_FLAG_OVERLAPPED and an OVERLAPPED structure. I have created a sample program in which I try to get this to work. However, I encountered 2 problems:
The file is only opened in overlapped mode when I delete an already existing file (I have tried calling CreateFile in different modes (CREATE_ALWAYS, CREATE_NEW, OPEN_EXISTING), but this does not help).
Only the first WriteFile is executed asynchronously; the remaining WriteFile calls are synchronous. For this problem I have already consulted http://support.microsoft.com/kb/156932. It seems that my problem is related to the fact that "any write operation to a file that extends its length will be synchronous". I've already tried to solve this by increasing the file size/valid data size (see the commented region in the code), but I still cannot get it to work. I'm aware that to get the most out of asynchronous I/O I should perhaps open the file with FILE_FLAG_NO_BUFFERING, but I cannot get that to work either.
Please note that the program creates a file of about 120 MB in the path of execution. Also note that the "not ok" print statements are not desirable; I would like to see "can do work in background" appear on my screen... What goes wrong here?
#include <windows.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ASYNC // remove this definition to run synchronously (i.e. using fwrite)

#ifdef ASYNC
struct _OVERLAPPED *pOverlapped;
HANDLE pEventH;
HANDLE pFile;
#else
FILE *pFile;
#endif

#define DIM_X 100
#define DIM_Y 150000

#define _PRINTERROR(msgs)\
    {printf("file: %s, line: %d, %s",__FILE__,__LINE__,msgs);\
    fflush(stdout);\
    return 0;}

#define _PRINTF(msgs)\
    {printf(msgs);\
    fflush(stdout);}

#define _START_TIMER \
    time_t time1,time2; \
    clock_t clock1; \
    time(&time1); \
    printf("start time: %s",ctime(&time1)); \
    fflush(stdout);

#define _END_TIMER\
    time(&time2);\
    clock1 = clock();\
    printf("end time: %s",ctime(&time2));\
    printf("elapsed processor time: %.2f\n",(((float)clock1)/CLOCKS_PER_SEC));\
    fflush(stdout);

double aio_dat[DIM_Y] = {0};
double do_compute(double A, double B, int arr_len);
int main()
{
    _START_TIMER;
    const char *pName = "test1.bin";
    DWORD dwBytesToWrite;
    BOOL bErrorFlag = FALSE;
    int j = 0;
    int i = 0;
    int fOverlapped = 0;

#ifdef ASYNC
    // create / open the file
    pFile = CreateFile(pName,
                       GENERIC_WRITE,        // open for writing
                       0,                    // share write access
                       NULL,                 // default security
                       CREATE_ALWAYS,        // create new/overwrite existing
                       FILE_FLAG_OVERLAPPED, // | FILE_FLAG_NO_BUFFERING, // overlapped file
                       NULL);                // no attr. template
    // check whether file opening was ok
    if (pFile == INVALID_HANDLE_VALUE) {
        printf("%x\n", GetLastError());
        _PRINTERROR("file not opened properly\n");
    }
    // make the overlapped structure
    pOverlapped = calloc(1, sizeof(struct _OVERLAPPED));
    pOverlapped->Offset = 0;
    pOverlapped->OffsetHigh = 0;
    // put event handle in overlapped structure
    if (!(pOverlapped->hEvent = CreateEvent(NULL, TRUE, FALSE, NULL))) {
        printf("%x\n", GetLastError());
        _PRINTERROR("error in createevent\n");
    }
#else
    pFile = fopen(pName, "wb");
#endif

    // create some output
    for (j = 0; j < DIM_Y; j++) {
        aio_dat[j] = do_compute(i, j, DIM_X);
    }
    // determine how many bytes should be written
    dwBytesToWrite = (DWORD)sizeof(aio_dat);

    for (i = 0; i < DIM_X; i++) { // do this DIM_X times
#ifdef ASYNC
        //if(i>0){
        //    SetFilePointer(pFile,dwBytesToWrite,NULL,FILE_CURRENT);
        //    if(!(SetEndOfFile(pFile))){
        //        printf("%i\n",pFile);
        //        _PRINTERROR("error in set end of file\n");
        //    }
        //    SetFilePointer(pFile,-dwBytesToWrite,NULL,FILE_CURRENT);
        //}
        // write the bytes
        if (!(bErrorFlag = WriteFile(pFile, aio_dat, dwBytesToWrite, NULL, pOverlapped))) {
            // check whether io pending or some other error
            if (GetLastError() != ERROR_IO_PENDING) {
                printf("lasterror: %x\n", GetLastError());
                _PRINTERROR("error while writing file\n");
            }
            else {
                fOverlapped = 1;
            }
        }
        else {
            // if you get here output got immediately written; bad!
            fOverlapped = 0;
        }
        if (fOverlapped) {
            // do background work, this msg is what I want to see
            for (j = 0; j < DIM_Y; j++) {
                aio_dat[j] = do_compute(i, j, DIM_X);
            }
            for (j = 0; j < DIM_Y; j++) {
                aio_dat[j] = do_compute(i, j, DIM_X);
            }
            _PRINTF("can do work in background\n");
        }
        else {
            // not overlapped, this message is bad
            _PRINTF("not ok\n");
        }
        // wait to continue
        if ((WaitForSingleObject(pOverlapped->hEvent, INFINITE)) != WAIT_OBJECT_0) {
            _PRINTERROR("waiting did not succeed\n");
        }
        // reset event structure
        if (!(ResetEvent(pOverlapped->hEvent))) {
            printf("%x\n", GetLastError());
            _PRINTERROR("error in resetevent\n");
        }
        pOverlapped->Offset += dwBytesToWrite;
#else
        fwrite(aio_dat, sizeof(double), DIM_Y, pFile);
        for (j = 0; j < DIM_Y; j++) {
            aio_dat[j] = do_compute(i, j, DIM_X);
        }
        for (j = 0; j < DIM_Y; j++) {
            aio_dat[j] = do_compute(i, j, DIM_X);
        }
#endif
    }

#ifdef ASYNC
    CloseHandle(pFile);
    free(pOverlapped);
#else
    fclose(pFile);
#endif
    _END_TIMER;
    return 1;
}
double do_compute(double A, double B, int arr_len)
{
    int i;
    double res = 0;
    double *xA = malloc(arr_len * sizeof(double));
    double *xB = malloc(arr_len * sizeof(double));
    if (!xA || !xB)
        abort();
    for (i = 0; i < arr_len; i++) {
        xA[i] = sin(A);
        xB[i] = cos(B);
        res = res + xA[i] * xA[i];
    }
    free(xA);
    free(xB);
    return res;
}
Useful links
http://software.intel.com/sites/products/documentation/studio/composer/en-us/2011/compiler_c/cref_cls/common/cppref_asynchioC_aio_read_write_eg.htm
http://www.ibm.com/developerworks/linux/library/l-async/?ca=dgr-lnxw02aUsingPOISIXAIOAPI
http://www.flounder.com/asynchexplorer.htm#Asynchronous%20I/O
I know this is a big question and I would like to thank everybody in advance who takes the trouble reading it and perhaps even respond!
You should be able to get this to work using the OVERLAPPED structure.
You're on the right track: the system is preventing you from writing asynchronously because every WriteFile extends the size of the file. However, you're doing the file size extension wrong. Simply calling SetEndOfFile will not actually reserve space in the MFT. Use the SetFileValidData function. This will allocate the clusters for your file (note that they will contain whatever garbage the disk had there), and you should then be able to execute WriteFile and your computation in parallel.
I would stay away from FILE_FLAG_NO_BUFFERING. You're after more performance with parallelism, I presume? Don't prevent the cache from doing its job.
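As a sketch, the pre-extension could look like this, using the handle and sizes from the question. Note that SetFileValidData only succeeds if the process has the SE_MANAGE_VOLUME_NAME privilege enabled, which requires extra setup not shown here:

/* Pre-extend the file once, before the write loop, so that no
   WriteFile call grows the file (growth forces synchronous I/O). */
LARGE_INTEGER totalSize;
totalSize.QuadPart = (LONGLONG)dwBytesToWrite * DIM_X;

if (!SetFilePointerEx(pFile, totalSize, NULL, FILE_BEGIN) ||
    !SetEndOfFile(pFile) ||                       /* logical file size */
    !SetFileValidData(pFile, totalSize.QuadPart)) /* valid data length */
{
    printf("preallocation failed: %lx\n", (unsigned long)GetLastError());
}
/* The writes then go to explicit offsets via pOverlapped->Offset, as before. */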
Another option that you did not consider is a memory-mapped file. Those are available on Windows and Linux, and there is a handy Boost abstraction that you could use.
With a memory-mapped file, every thread in your process could write its output to the file on its own time, assuming that the record sizes are known and each thread has its own output area.
The operating system will take care of writing the mapped pages to disk when needed, or when it gets around to it, or at the latest when you close the file. Some operating systems may require that you call msync to guarantee it.
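For reference, a minimal POSIX sketch of that memory-mapped approach (on Windows the equivalents are CreateFileMapping/MapViewOfFile, with FlushViewOfFile in place of msync); the function name and sizes are illustrative:

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int write_mapped(const char *path, const char *data, size_t len)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, len) < 0) /* size the file up front */
        return -1;

    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return -1;

    memcpy(map, data, len);   /* threads could each fill their own region */
    msync(map, len, MS_SYNC); /* force the dirty pages to disk */
    munmap(map, len);
    close(fd);
    return 0;
}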
I don't see why you would want to write asynchronously. Doing things in parallel does not make them faster in all cases: if you write two files at the same time to the same disk, it will almost always be a lot slower. If that is your case, just write them one after another.
If you have some fancy drive like an SSD or a virtual RAM drive, parallel writing could be faster. You then have to create the file at full size and do your parallel magic.
Asynchronous writing is nice, but any OS does it anyway via its write cache. The potential gain for you is that you can do things other than writing to disk, like displaying a progress bar. This is where multi-threading can help you.
So imho you should use serial writing, or parallel writing to multiple disks.
hth
