Small regex NFA matches while possibly larger matches are running - C

I'm creating my own lexical analyzer generator, similar to (f)lex. My plan was to make a tool like grep first and go from there. Right now I've been following Russ Cox's articles on implementing regular expression matching as a virtual machine: https://swtch.com/~rsc/regexp/regexp2.html
The issue I've run into is keeping track of small matches while a larger match is running. An example where this could happen is ".*d|ab" on the input string "abcabcabcabcabcabcd".
Currently, I have an NFA running that is practically the same as the one Cox made. It uses Thompson's algorithm to compute DFA state sets on the fly from the NFA. The threads are stored in a sparse set (Preston Briggs and Linda Torczon's 1993 paper, “An Efficient Representation for Sparse Sets”).
Threads that appear earlier in this list have higher priority and eliminate later threads when they accept.
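For reference, a minimal sketch of the constant-time sparse-set operations from that paper, keeping the threads in priority order; apart from the dense and size members, which appear in the code further down, the names and details here are assumptions, not the actual implementation:
#include <stdint.h>

/* Illustrative sketch of the Briggs/Torczon sparse set holding the threads.
   Capacity checks are omitted to keep the sketch short. */
struct Thread {
    uint16_t pc;                /* program counter into the opcode array */
};

struct SparseSet {
    struct Thread *dense;       /* dense[0..size) holds live threads in priority order */
    uint16_t      *sparse;      /* sparse[pc] = index into dense for that pc           */
    unsigned int   size;
};

static int contains_sparseset(const struct SparseSet *s, uint16_t pc)
{
    unsigned int i = s->sparse[pc];
    return i < s->size && s->dense[i].pc == pc;   /* O(1) membership test */
}

static void add_sparseset(struct SparseSet *s, const struct Thread *t)
{
    if (contains_sparseset(s, t->pc))             /* keep only the highest-priority thread per pc */
        return;
    s->sparse[t->pc] = (uint16_t)s->size;
    s->dense[s->size++] = *t;
}

static void clear_sparseset(struct SparseSet *s)
{
    s->size = 0;                                  /* O(1) clear; nothing needs zeroing */
}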
My current implementation can find a word matching a regex, but it cannot continuously keep matching. To show what currently happens, let's look at the example regex and string above.
The first step creates three threads: two for the left side of the union, one for the right side. Only the threads taking the wildcard and 'a' advance into the next list.
Step two again advances the wildcard thread, and this time the thread taking a 'b'. It also adds a new thread to try to match with this character as the starting point.
Step three advances the wildcard and puts the thread running "ab" into an accepting state. Now the newer (lower-priority) threads are removed, and new threads are added to start matching from this character.
This is where the problem starts: the NFA cannot output a match on "ab" yet; it has to wait for ".*d" to finish. This means that for a large input file, a lot of matches have to be buffered until ".*d" terminates (which is at the last character, to get the largest potential match).
Thus, the actual question is: what is an efficient way to store these matches, or is there another way to avoid potentially buffering the whole input file?
As a side note on the code: it is so similar to Cox's article that any questions about how the code functions can be answered by the article. Also, I've downloaded and tested Cox's code, and it had the same issue as described above.
What I hope to get from a solution to this question is for my NFA implementation to match text the exact same way grep or lex would: with the longest leftmost matches.
As requested, code fragments:
VM opcodes representing ".*d|ab":
0 SPLIT 2, 5
1 ANY
2 SPLIT 1, 3
3 CHAR 'd'
4 JMP 7
5 CHAR 'a'
6 CHAR 'b'
7 ACCEPT
CHAR asserts that the input character matches its first operand.
SPLIT creates two new threads starting at the first and second operand.
ANY matches any input character.
JMP sets the program counter to the first operand.
ACCEPT puts the current thread in an accepting state.
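The fragments below rely on a few types and opcode constants; a rough sketch of what they could look like (struct Thread and struct SparseSet are sketched above, and the exact operand encoding here is an assumption):
/* Illustrative sketch of the remaining types used by the code below. */
enum {
    OPCODE_CHAR,     /* operand: the character to match             */
    OPCODE_ANY,
    OPCODE_SPLIT,    /* operands: two alternative program counters  */
    OPCODE_JMP,      /* operand: the target program counter         */
    OPCODE_ACCEPT
};

struct NFA {
    uint16_t         *code;      /* flattened stream of opcodes and operands */
    struct SparseSet *cthreads;  /* threads live for the current character   */
    struct SparseSet *nthreads;  /* threads collected for the next character */
};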
An add function that recursively follows opcodes until it reaches a CHAR or ACCEPT opcode, then adds the thread to the next thread list:
static void recursive_add(struct NFA *nfa, struct Thread *thread)
{
    switch(nfa->code[thread->pc])
    {
        case OPCODE_JMP:
            printf(" | jmp\n");
            thread->pc = nfa->code[thread->pc + 1];
            recursive_add(nfa, thread);
            break;
        case OPCODE_SPLIT:
        {
            uint16_t pc = thread->pc;
            printf(" | split: %d op1: %d op2: %d\n", thread->pc, nfa->code[thread->pc + 1], nfa->code[thread->pc + 2]);
            thread->pc = nfa->code[thread->pc + 1];
            recursive_add(nfa, thread);
            thread->pc = nfa->code[pc + 2];
            recursive_add(nfa, thread);
            break;
        }
        case OPCODE_CHAR: case OPCODE_ACCEPT:
            printf(" | char/accept %d\n", thread->pc);
            add_sparseset(nfa->nthreads, thread);
            return;
        default:
            fprintf(stderr, "Unexpected opcode %x at program counter %d\n", nfa->code[thread->pc], thread->pc);
            exit(1);
    }
}
And the main loop; it's currently a bit messy, but it should give a better idea of what the code does:
void execute_nfa(struct NFA *nfa)
{
    struct Thread thread;
    struct SparseSet *swap;
    int c; /* int rather than char so EOF can be detected */
    for(;;)
    {
        c = getchar();
        if(c == EOF)
            break;
        printf("Input char %c\n", c);
        //add new starting thread for every character
        thread.pc = 0;
        recursive_add(nfa, &thread);
        clear_sparseset(nfa->cthreads);
        swap = nfa->cthreads;
        nfa->cthreads = nfa->nthreads;
        nfa->nthreads = swap;
        for(unsigned int i = 0; i < nfa->cthreads->size; i++)
        {
            thread = *(((struct Thread *)nfa->cthreads->dense) + i);
            printf(" thread pc: %d\n", thread.pc);
            switch(nfa->code[thread.pc])
            {
                case OPCODE_CHAR:
                    if(nfa->code[thread.pc + 1] == c)
                    {
                        printf(" Add to next\n");
                        thread.pc += 2;
                        recursive_add(nfa, &thread);
                    }
                    break;
                case OPCODE_ACCEPT:
                    printf(" accept (still need to do shit)\n");
                    break;
                default:
                    fprintf(stderr, "Unexpected opcode %x at program counter %d\n", nfa->code[thread.pc], thread.pc);
                    exit(1);
            }
        }
    }
}

Related

How do I create a "twirly" in a C program task?

Hey guys, I have created a program in C that tests all numbers between 1 and 10000 to check if they are perfect, using a function that determines whether a number is perfect. Once it finds them it prints them to the user: they are 6, 28, 496 and 8128. After this the program then prints out all the factors of each perfect number to the user. This is all fine. Here is my problem.
The final part of my task asks me to:
"Use a "twirly" to indicate that your program is happily working away. A "twirly" is the following characters printed over the top of each other in the following order: '|' '/' '-' '\'. This has the effect of producing a spinning wheel - ie a "twirly". Hint: to do this you can use \r (instead of \n) in printf to give a carriage return only (instead of a carriage return linefeed). (Note: this may not work on some systems - you do not have to do it this way.)"
I have no idea what a twirly is or how to implement one. My tutor said it has something to do with the sleep and delay functions, which I also don't know how to use. Can anyone help me with this last stage? It sucks that all my coding is complete but I can't get this "twirly" thing to work.
If you want to simultaneously perform the tasks of testing the numbers and displaying the twirly on screen while the process goes on, then you should look into using threads. Using POSIX threads, you can run the computation on one thread while the other thread displays the twirly to the user on the terminal.
#include <stdlib.h>
#include <pthread.h>

int Test();
void Display();

int main(){
    // create threads each for both tasks test and Display
    // call threads
    // wait for Test thread to finish
    // terminate display thread after Test thread completes
    // exit code
}
Refer to chapter 12 of the Beginning Linux Programming ebook for threads.
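A filled-in sketch of that two-thread idea (the is_perfect() helper, the usleep() pacing, and the shared flag used instead of forcibly terminating the display thread are all assumptions for illustration; a real program would use an atomic flag or a mutex rather than volatile):
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static volatile int done = 0;           /* set by the test thread when it finishes */

static int is_perfect(unsigned n)
{
    unsigned sum = 0;
    for (unsigned i = 1; i <= n / 2; i++)
        if (n % i == 0)
            sum += i;
    return n > 0 && sum == n;
}

static void *test_thread(void *arg)
{
    (void)arg;
    for (unsigned n = 1; n <= 10000; n++)
        if (is_perfect(n))
            printf("%u\n", n);
    done = 1;                           /* tell the display thread to stop */
    return NULL;
}

static void *twirly_thread(void *arg)
{
    const char *twirly = "|/-\\";
    (void)arg;
    for (int i = 0; !done; i = (i + 1) % 4) {
        printf("%c\r", twirly[i]);
        fflush(stdout);                 /* \r alone does not flush the line */
        usleep(100000);                 /* spin roughly ten times a second  */
    }
    return NULL;
}

int main(void)
{
    pthread_t test, twirly;
    pthread_create(&test, NULL, test_thread, NULL);
    pthread_create(&twirly, NULL, twirly_thread, NULL);
    pthread_join(test, NULL);           /* wait for the computation to finish   */
    pthread_join(twirly, NULL);         /* the flag makes the twirly exit too   */
    return 0;
}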
Given the program upon which the user is "waiting", I believe the problem as stated and the solutions using sleep() or threads are misguided.
To produce all the perfect numbers below 10,000 using C on a modern personal computer takes about 1/10 of a second. So any device to show the computer is "happily working away" would either never be seen or would significantly interfere with the time it takes to get the job done.
But let's make a working twirly for perfect number search anyway. I've left off printing the factors to keep this simple. Since 10,000 is too low to see the twirly in action, I've upped the limit to 100,000:
#include <stdio.h>
#include <string.h>

int main()
{
    const char *twirly = "|/-\\";
    for (unsigned x = 1; x <= 100000; x++)
    {
        unsigned sum = 0;
        for (unsigned i = 1; i <= x / 2; i++)
        {
            if (x % i == 0)
            {
                sum += i;
            }
        }
        if (sum == x)
        {
            printf("%u\n", x);
        }
        printf("%c\r", twirly[x / 2500 % strlen(twirly)]);
    }
    return 0;
}
No need for sleep() or threads, just key it into the complexity of the problem itself and have it update at reasonable intervals.
Now here's the catch: although the above works, the user will never see a fifth perfect number pop out with a 100,000 limit, and even with a 100,000,000 limit, which should produce one more, they'll likely give up, as this is a bad (slow) algorithm for finding them. But they'll have a twirly to watch.
i as integer
p as integer, set p = 0
str as character pointer, set str = "|/-\\"
len as integer, set len = 4
loop i: 1 to 10000
    sum as integer, set sum = 0
    loop j: 1 to i/2
        if i % j == 0
            sum += j
    if sum == i
        print i
    if i % 100 == 0
        print str[p] using "%c\r" as format specifier
        increment p and assign its modulo by len to p

Create an array of values from different text files in C

I'm working in C on 64-bit Ubuntu 14.04.
I have a number of .txt files, each containing lines of floating point values (1 value per line). The lines represent parts of a complex sample, and they're stored as real(a1) \n imag(a1) \n real(a2) \n imag(a2), if that makes sense.
In a specific scenario there are 4 text files each containing 32768 samples (thus 65536 values), but I need to make the final version dynamic to accommodate up to 32 files (the maximum samples per file would not exceed 32768 though). I'll only be reading the first 19800 samples (depending on other things) though, since the entire signal is contained in those 39600 points (19800 samples).
A common abstraction is to represent the files / samples as a matrix, where columns represent return signals and rows represent the value of each signal at a sampling instant, up until the maximum duration.
What I'm trying to do is take the first sample from each return signal and move it into an array of double-precision floating point values to do some work on, then move on to the second sample for each signal (which will overwrite the previous array) and do some work on them, and so forth, until the last row of samples has been processed.
Is there a way in which I can dynamically open files for each signal (depending on the number of pulses I'm using in that particular instance), read the first sample from each file into a buffer, and ship that off to be processed? On the next iteration the file pointers will all be aligned to the second sample; it would then move those into an array and ship it off again, until the desired number of samples (19800 in our hypothetical case) has been reached.
I can read samples just fine from the files using fscanf:
rx_length = 19800;
int x;
float buf;
double *range_samples = calloc(num_pulses, 2 * sizeof(double));
for (i = 0; i < 2 * rx_length; i++){
    x = fscanf(pulse_file, "%f", &buf);
    *(range_samples) = buf;
}
All that needs to happen (in my mind) is that I need to cycle both sample# and pulse# (in that order), so when finished with one pulse it would move on to the next set of samples for the next pulse, and so forth. What I don't know how to do is to somehow declare file pointers for all the return signal files when the number of them can vary between calls (e.g. do the whole thing for 4 pulses, and on the next call it could be 16 or 64).
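One common pattern for a variable number of file pointers is simply a heap-allocated array of FILE *. A rough sketch, where the function and parameter names are placeholders rather than anything from the real program:
#include <stdio.h>
#include <stdlib.h>

/* Rough sketch: open a run-time-determined number of files and read one
   complex sample (two float values) from each per iteration, processing
   the samples row by row. */
static void process_rows(const char **file_names, int num_pulses, int rx_length)
{
    FILE **pulse_files = malloc(num_pulses * sizeof *pulse_files);
    double *range_samples = calloc(num_pulses, 2 * sizeof(double));

    for (int p = 0; p < num_pulses; p++) {
        pulse_files[p] = fopen(file_names[p], "r");
        if (pulse_files[p] == NULL) { perror(file_names[p]); exit(1); }
    }

    for (int s = 0; s < rx_length; s++) {            /* one sampling instant    */
        for (int p = 0; p < num_pulses; p++) {       /* one value pair per file */
            float re = 0.0f, im = 0.0f;
            fscanf(pulse_files[p], "%f %f", &re, &im);
            range_samples[2 * p]     = re;           /* real part      */
            range_samples[2 * p + 1] = im;           /* imaginary part */
        }
        /* ... hand range_samples off for processing here ... */
    }

    for (int p = 0; p < num_pulses; p++)
        fclose(pulse_files[p]);
    free(pulse_files);
    free(range_samples);
}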
If there are any ideas / comments / suggestions I would love to hear them.
Thanks.
I would make the code you posted a function that takes an array of file names as an argument:
void doPulse( const char **file_names, const int size )
{
    FILE *file = 0;
    // declare your other variables
    for ( int i = 0; i < size; ++i )
    {
        file = fopen( file_names[i], "r" );
        // make sure file is open
        // do the work on that file
        fclose( file );
        file = 0;
    }
}
What you need is a generator. It would be reasonably easy in C++, but as you tagged C, I can imagine a function taking a custom struct (the state of the object) as a parameter. It could be something like this (pseudo code):
struct GtorState {
    char *files[];
    int filesIndex;
    FILE *currentFile;
};

void gtorInit(GtorState *state, char **files) {
    // loads the array of files into state, sets index to 0, and opens the first file
}

int nextValue(GtorState *state, double *real, double *imag) {
    // read 2 values from currentFile and assign them to real and imag
    // if eof, close currentFile and open files[++filesIndex]
    // if real and imag were found return 0, else 1 if eof on last file, 2 if error
}
Then your main program could contain:
GtorState state;
// initialize the list of files to process
gtorInit(&state, files);

double real, imag;
int cr;
while (0 == (cr = nextValue(&state, &real, &imag))) {
    // process (real, imag)
}
if (cr == 2) {
    // process (at least display) the error
}
Alternatively, your main program could iterate over the values of the different files and call a processing function with a state struct analogous to the generator above, then at the end use the state of that processing function to get the results.
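For illustration, a rough sketch of what the generator could look like when filled in; the struct is adjusted slightly (a plain pointer plus an explicit file count) so the sketch stands on its own, and every detail here is an assumption:
#include <stdio.h>

struct GtorState {
    char **files;
    int    nFiles;
    int    filesIndex;
    FILE  *currentFile;
};

/* returns 0 on success, 2 if the first file cannot be opened */
int gtorInit(struct GtorState *state, char **files, int nFiles) {
    state->files = files;
    state->nFiles = nFiles;
    state->filesIndex = 0;
    state->currentFile = fopen(files[0], "r");
    return state->currentFile != NULL ? 0 : 2;
}

/* 0: a value pair was read, 1: clean end of the last file, 2: error */
int nextValue(struct GtorState *state, double *real, double *imag) {
    while (state->currentFile != NULL) {
        if (fscanf(state->currentFile, "%lf %lf", real, imag) == 2)
            return 0;                              /* got a complex sample     */
        if (!feof(state->currentFile))
            return 2;                              /* malformed input          */
        fclose(state->currentFile);                /* end of this file: advance */
        if (++state->filesIndex >= state->nFiles)
            return 1;                              /* that was the last file   */
        state->currentFile = fopen(state->files[state->filesIndex], "r");
    }
    return 2;                                      /* an fopen failed          */
}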
Tried a slightly different approach and it's working really well.
Instead of reading from the different files each time I want to do something, I read the entire contents of each file into a 2D array range_phase_data[sample_number][pulse_number], and then access different parts of the array depending on which range bin I'm currently working on.
Here's an excerpt:
#define REAL(z,i) ((z)[2*(i)])
#define IMAG(z,i) ((z)[2*(i)+1])

for (i=0; i<rx_length; i++){
    printf("\t[%s] Range bin %i. Samples %i to %i.\n", __FUNCTION__, i, 2*i, 2*i+1);
    for (j=0; j<num_pulses; j++){
        REAL(fft_buf, j) = range_phase_data[2*i][j];
        IMAG(fft_buf, j) = range_phase_data[2*i+1][j];
    }
    printf("\t[%s] Range bin %i done, ready to FFT.\n", __FUNCTION__, i);
    // do stuff with the data
}
This alleviates the need to dynamically allocate file pointers and instead just opens the files one at a time and writes the data to the corresponding column in the matrix.
Cheers.

Informative "if" statement in "for" loop

Normally when I have a big for loop I put messages to inform me in which part of the process my program is, for example:
for(i = 0; i < large_n; i++) {
    if( i % (large_n/1000) == 0) {
        printf("We are at %ld \n", i);
    }
    // Do some other stuff
}
I was wondering whether this hurts performance too much (a priori) and, if so, whether there is a smarter alternative. Thanks in advance.
Maybe you can split the large loop so that the condition is only checked occasionally, but I don't know if this will really save time; that depends more on your "other stuff".
int T = ...; // times to check the condition, make sure large_n % T == 0
for(int t = 0; t < T; ++t)
{
    for(int i = large_n/T * t; i < large_n/T * (t+1); ++i)
    {
        // other stuff
    }
    printf("We are at %ld \n", large_n/T * (t+1));
}
Regardless of what is in your loop, I wouldn't be leaving statements like printf in unless it's essential to the application/user, nor would I use what are effectively redundant if statements, for the same reason.
Both of these are examples of trace level debugging. They're totally valid and in some cases very useful, but generally not ultimately so in the end application. In this respect, a usual thing to do is to only include them in the build when you actually want to use the information they provide. In this case, you might do something like this:
#define DEBUG

for(i = 0; i < large_n; i++)
{
#ifdef DEBUG
    if( i % (large_n/1000) == 0)
    {
        printf("We are at %ld \n", i);
    }
#endif
}
Regarding the performance cost of including these debug outputs all the time, it will totally depend on the system you're running, the efficiency of whatever "printing" statement you're using to output the data, the check/s you're performing and, of course, how often you're trying to perform output.
Your mod test probably doesn't hurt performance, but if you want a very quick test and you're prepared to use powers of two, then consider a bitwise AND test:
if ( ( i & 0xFF ) == 0 ) {
/* this gets printed every 256 iterations */
...
}
or
if ( ( i & 0xFFFF ) == 0 ) {
/* this gets printed every 65536 iterations */
...
}
By placing a print statement inside of the for loop, you are sacrificing some performance.
Because the program needs to do a system call to write output to the screen every time the message is printed, it takes CPU time away from the program itself.
You can see the difference in performance between these two loops:
int i;

printf("Start Loop A\n");
for(i = 0; i < 100000; i++) {
    printf("%d ", i);
}
printf("Done with Loop A\n");

printf("Start Loop B\n");
for(i = 0; i < 100000; i++) {
    // Do Nothing
}
printf("Done with Loop B\n");
I would include timing code, but I am in the middle of work and can update it later over lunch.
If the difference isn't noticeable, you can increase 100000 to a larger number (although too large a number would cause the first loop to take WAY too long to complete).
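In the meantime, a rough way to time the two loops with clock() (an illustrative sketch, not the promised measurements; note that an optimizing compiler may remove the empty loop entirely, and that clock() measures CPU time rather than wall-clock time):
#include <stdio.h>
#include <time.h>

int main(void)
{
    int i;
    clock_t start, end;

    start = clock();
    for (i = 0; i < 100000; i++) {
        printf("%d ", i);              /* loop A: output on every iteration */
    }
    end = clock();
    double loop_a = (double)(end - start) / CLOCKS_PER_SEC;

    start = clock();
    for (i = 0; i < 100000; i++) {
        /* loop B: do nothing */
    }
    end = clock();
    double loop_b = (double)(end - start) / CLOCKS_PER_SEC;

    fprintf(stderr, "\nLoop A: %f s, Loop B: %f s\n", loop_a, loop_b);
    return 0;
}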
Whoops, forgot to finish my answer.
To cut down on the number of system calls your program needs to make, you could check a condition first, and only print if that condition is true.
For example, if you were counting up as in my example code, you could only print out every 100th number by using %:
int i;
for(i = 0; i < 100000; i++) {
    if(i%100 == 0)
        printf("%d", i);
}
That will reduce the number of syscalls from ~100000 to ~1000, which in turn would increase the performance of the loop.
The problem is that the I/O operation printf takes much more time than the processor spends calculating. You can reduce the time if you accumulate the output and print it all at the end.
Notation:
Tp = total time spent executing the progress statements.
Tn = total time spent doing the other normal stuff.
>> = Much greater than
If performance is your main criterion, you want Tn >> Tp. This strongly suggests that the code should be profiled so that you can pick appropriate values. The routine 'printf()' is considered a slow routine (much slower than the % test) and is a blocking routine (that is, the thread that calls it may pend waiting for a resource used by it).
Personally, I like to abstract away the progress indicator. It can be a logging mechanism,
a printf, a progress box, .... Heck, it may be updating a structure that is read by another thread/task/process.
id = progressRegister (<some predefined type of progress update mechanism>);
for(i = 0; i < large_n; i++) {
    progressUpdate (id, <string>, i, large_n);
    // Do some other stuff
}
progressUnregister(id);
Yes, there is some overhead in calling the routine 'progressUpdate()' on each iteration, but again, as long as Tn >> Tp, it usually is not that important.
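A minimal sketch of what such an abstraction could look like, with the cheap test run every iteration and the slow I/O only occasionally (the names and signatures here are made up for illustration and differ slightly from the pseudo-API above):
#include <stdio.h>

/* Illustrative sketch of a progress-update abstraction; not an existing API. */
struct Progress {
    long every;        /* report once per this many iterations */
    FILE *out;
};

static struct Progress progressRegister(long total, long reports)
{
    struct Progress p = { total / reports > 0 ? total / reports : 1, stderr };
    return p;
}

static void progressUpdate(const struct Progress *p, const char *what, long i, long total)
{
    if (i % p->every == 0)                                    /* cheap test every time */
        fprintf(p->out, "%s: %ld of %ld\r", what, i, total);  /* slow I/O rarely       */
}

int main(void)
{
    long large_n = 1000000;
    struct Progress prog = progressRegister(large_n, 100);
    for (long i = 0; i < large_n; i++) {
        progressUpdate(&prog, "working", i, large_n);
        /* do some other stuff */
    }
    fputc('\n', stderr);
    return 0;
}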
Hope this helps.

C Primer 5th - Task 14-6

A text file holds information about a softball team. Each line has data arranged as follows:
4 Jessie Joybat 5 2 1 1
The first item is the player's number, conveniently in the range 0–18. The second item is the player's first name, and the third is the player's last name. Each name is a single word. The next item is the player's official times at bat, followed by the number of hits, walks, and runs batted in (RBIs). The file may contain data for more than one game, so the same player may have more than one line of data, and there may be data for other players between those lines. Write a program that stores the data into an array of structures. The structure should have members to represent the first and last names, the at bats, hits, walks, and RBIs (runs batted in), and the batting average (to be calculated later). You can use the player number as an array index. The program should read to end-of-file, and it should keep cumulative totals for each player.
The world of baseball statistics is an involved one. For example, a walk or reaching base on an error doesn't count as an at-bat but could possibly produce an RBI. But all this program has to do is read and process the data file, as described next, without worrying about how realistic the data is.
The simplest way for the program to proceed is to initialize the structure contents to zeros, read the file data into temporary variables, and then add them to the contents of the corresponding structure. After the program has finished reading the file, it should then calculate the batting average for each player and store it in the corresponding structure member. The batting average is calculated by dividing the cumulative number of hits for a player by the cumulative number of at-bats; it should be a floating-point calculation. The program should then display the cumulative data for each player along with a line showing the combined statistics for the entire team.
team.txt (text file I'm working with):
4 Jessie Joybat 5 2 1 1
4 Jessie Joybat 7 3 5 3
7 Jack Donner 6 3 1 2
11 Martin Garder 4 3 2 1
15 Jaime Curtis 7 4 1 2
2 Curtis Michel 3 2 2 3
9 Gillan Morthim 9 6 6 7
12 Brett Tyler 8 7 4 3
8 Hans Gunner 7 7 2 3
14 Jessie James 11 2 3 4
12 Brett Tyler 4 3 1 3
Since I'm a beginner in C, either I misinterpreted the task from what was asked originally or it's unfairly complex (I believe the former is the case). I'm so lost that I can't figure out how to fill in every piece of data indexed by player number, keep track of whether a player has more than one game, calculate and fetch the batting average, and then print it all.
What I have so far is:
#define LGT 30

struct profile {
    int pl_num;
    char name[LGT];
    char lname[LGT];
    int atbat[LGT/3];
    int hits[LGT/3];
    int walks[LGT/3];
    int runs[LGT/3];
    float batavg;
};
//It's wrong obviously but it's a starting point
int main(void)
{
    FILE *flx;
    int i, jc, flow = 0;
    struct profile stat[LGT] = {{0}};

    if((flx = fopen("team.txt","r")) == NULL) {
        fprintf(stderr,"Can't read file team!\n");
        exit(1);
    }
    for( jc = 0; jc < 11; jc++) {
        fscanf(flx,"%d",&i);
        stat[i].pl_num = i;
        fscanf(flx,"%s",stat[i].name);
        fscanf(flx,"%s",stat[i].lname);
        fscanf(flx,"%d",&stat[i].atbat[flow]);
        fscanf(flx,"%d",&stat[i].hits[flow]);
        fscanf(flx,"%d",&stat[i].walks[flow]);
        fscanf(flx,"%d",&stat[i].runs[flow]);
        flow++;
    }
}
Advice 1: don't declare arrays like atbat[LGT/3].
Advice 2: Instead of multiple fscanf you could read the whole line in a shot.
Advice 3: Since the number of players is limited and the player number has a good range (0-18), using that player number as an index into the struct array is a good idea.
Advice 4: Since you need cumulative data for each player (no need to store his history points), then you don't need arrays of integers, just an integer to represent the total.
So:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PLAYERS_NO 19

typedef struct
{
    char name[20+1];
    char lastName[25+1];
    int atbat;
    int hits;
    int walks;
    int runs;
    float batavg;
} Profile;

int main(int argc, char** argv)
{
    Profile stats[PLAYERS_NO];
    int i;
    FILE* dataFile;
    int playerNo;
    Profile tmpProfile;
    int games = 0;

    for(i=0; i<PLAYERS_NO; ++i)
    {
        stats[i].name[0] = '\0';
        stats[i].lastName[0] = '\0';
        stats[i].atbat = 0;
        stats[i].hits = 0;
        stats[i].walks = 0;
        stats[i].runs = 0;
    }

    dataFile = fopen("team.txt", "r");
    if ( dataFile == NULL )
    {
        fprintf(stderr, "Can't read file team!\n");
        exit(1);
    }

    for(i=0; i<PLAYERS_NO && !feof(dataFile); ++i, ++games)
    {
        if ( fscanf(dataFile, "%d", &playerNo) != 1 )
            break;
        if ( playerNo < 0 || playerNo >= PLAYERS_NO )
        {
            fprintf(stderr, "Player number out of range\n");
            continue;
        }
        fscanf(dataFile, "%s %s %d %d %d %d",
               tmpProfile.name,
               tmpProfile.lastName,
               &tmpProfile.atbat,
               &tmpProfile.hits,
               &tmpProfile.walks,
               &tmpProfile.runs);
        printf("READ: %d %s %s %d %d %d %d\n",
               playerNo,
               tmpProfile.name,
               tmpProfile.lastName,
               tmpProfile.atbat,
               tmpProfile.hits,
               tmpProfile.walks,
               tmpProfile.runs);
        strcpy(stats[playerNo].name, tmpProfile.name);
        strcpy(stats[playerNo].lastName, tmpProfile.lastName);
        stats[playerNo].atbat += tmpProfile.atbat;
        stats[playerNo].hits += tmpProfile.hits;
        stats[playerNo].walks += tmpProfile.walks;
        stats[playerNo].runs += tmpProfile.runs;
    }
    /* exercise: compute the average */
    fclose(dataFile);

    for(i=0; i<PLAYERS_NO; ++i)
    {
        if ( stats[i].name[0] == '\0' )
            continue;
        printf("%d %s %s %d %d %d %d\n",
               i,
               stats[i].name,
               stats[i].lastName,
               stats[i].atbat,
               stats[i].hits,
               stats[i].walks,
               stats[i].runs);
    }
    return 0;
}
The first rule of programming: Divide and conquer.
So you need to identify individual operations. One such operation is "load one row of input", another is "look up a player". If you have some of those operations (more will come up as you go), you can start building your program:
while( more_input ) {
    row = load_one_row()
    player = find_player( row.name )
    if( !player ) {
        player = create_player( row.name )
        add_player( player )
    }
    ... do something with row and player ...
}
when you have that, you can start to write all the functions.
An important point here is to write test cases. Start with a simple input and test the code to read a row. Do you get the correct results?
If so, test the code to find/create players.
The test cases make sure that you can forget about code that already works.
Use a framework like Check for this.
If I were doing this, I'd start with a structure that only held one "set" of data, then create an array of those structs:
struct profile {
    char name[NAMELEN];
    char lname[NAMELEN];
    int atbat;
    int hits;
    int walks;
    int runs;
    float batavg;
};
Since you're using the player's number as the index into an array, you don't need to store it into the structure too.
I think that will simplify the problem a little bit. You don't need to store multiple data items for a single player -- when you get a duplicate, you just ignore some of the new data (like the names, which should be identical) and sum up the others (e.g., at-bats, hits).
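Using that simplified structure, the read-and-accumulate step could look roughly like this (a sketch: the function name, the %29s field widths, and the assumption that NAMELEN is defined as at least 30 are all illustrative):
#include <stdio.h>
#include <string.h>

/* Sketch of reading the file into the simplified structs above;
   duplicate lines for a player overwrite the names and sum the counts. */
static void accumulate(FILE *fp, struct profile team[19])
{
    int num, atbat, hits, walks, runs;
    char first[NAMELEN], last[NAMELEN];

    while (fscanf(fp, "%d %29s %29s %d %d %d %d",
                  &num, first, last, &atbat, &hits, &walks, &runs) == 7) {
        if (num < 0 || num > 18)
            continue;                       /* out-of-range player number */
        strcpy(team[num].name, first);      /* names just overwrite       */
        strcpy(team[num].lname, last);
        team[num].atbat += atbat;           /* counting stats accumulate  */
        team[num].hits  += hits;
        team[num].walks += walks;
        team[num].runs  += runs;
    }
    for (int n = 0; n < 19; n++)
        if (team[n].atbat > 0)
            team[n].batavg = (float)team[n].hits / team[n].atbat;
}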

Recursive Fib with Threads, Segmentation Fault?

Any ideas why it works fine for values like 0, 1, 2, 3, 4... and seg faults for values like >15?
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *fib(void *fibToFind);

main(){
    pthread_t mainthread;
    long fibToFind = 15;
    long finalFib;

    pthread_create(&mainthread,NULL,fib,(void*) fibToFind);
    pthread_join(mainthread,(void*)&finalFib);
    printf("The number is: %d\n",finalFib);
}
void *fib(void *fibToFind){
    long retval;
    long newFibToFind = ((long)fibToFind);
    long returnMinusOne;
    long returnMinustwo;
    pthread_t minusone;
    pthread_t minustwo;

    if(newFibToFind == 0 || newFibToFind == 1)
        return newFibToFind;
    else{
        long newFibToFind1 = ((long)fibToFind) - 1;
        long newFibToFind2 = ((long)fibToFind) - 2;
        pthread_create(&minusone,NULL,fib,(void*) newFibToFind1);
        pthread_create(&minustwo,NULL,fib,(void*) newFibToFind2);
        pthread_join(minusone,(void*)&returnMinusOne);
        pthread_join(minustwo,(void*)&returnMinustwo);
        return returnMinusOne + returnMinustwo;
    }
}
Runs out of memory (out of space for stacks), or valid thread handles?
You're asking for an awful lot of threads, which require lots of stack/context.
Windows (and Linux) have a stupid "big [contiguous] stack" idea.
From the documentation on pthreads_create:
"On Linux/x86-32, the default stack size for a new thread is 2 megabytes."
If you manufacture 10,000 threads, you need 20 GB of RAM.
I built a version of OP's program, and it bombed with some 3500 (p)threads
on Windows XP64.
See this SO thread for more details on why big stacks are a really bad idea:
Why are stack overflows still a problem?
If you give up on big stacks, and implement a parallel language with heap allocation
for activation records
(our PARLANSE is
one of these) the problem goes away.
Here's the first (sequential) program we wrote in PARLANSE:
(define fibonacci_argument 45)
(define fibonacci
(lambda(function natural natural )function
`Given n, computes nth fibonacci number'
(ifthenelse (<= ? 1)
?
(+ (fibonacci (-- ?))
(fibonacci (- ? 2))
)+
)ifthenelse
)lambda
)define
Here's an execution run on an i7:
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonaccisequential
Starting Sequential Fibonacci(45)...Runtime: 33.752067 seconds
Result: 1134903170
Here's the second, which is parallel:
(define coarse_grain_threshold 30) ; technology constant: tune to amortize fork overhead across lots of work
(define parallel_fibonacci
(lambda (function natural natural )function
`Given n, computes nth fibonacci number'
(ifthenelse (<= ? coarse_grain_threshold)
(fibonacci ?)
(let (;; [n natural ] [m natural ] )
(value (|| (= m (parallel_fibonacci (-- ?)) )=
(= n (parallel_fibonacci (- ? 2)) )=
)||
(+ m n)
)value
)let
)ifthenelse
)lambda
)define
Making the parallelism explicit makes the programs a lot easier to write, too.
The parallel version we test by calling (parallel_fibonacci 45). Here
is the execution run on the same i7 (which arguably has 8 processors,
but it is really 4 processors hyperthreaded so it really isn't quite 8
equivalent CPUs):
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonacciparallelcoarse
Parallel Coarse-grain Fibonacci(45) with cutoff 30...Runtime: 5.511126 seconds
Result: 1134903170
A speedup near 6+, not bad for not-quite-8 processors. One of the other
answers to this question ran the pthreads version; it took "a few seconds"
(to blow up) computing Fib(18), and this is 5.5 seconds for Fib(45).
This tells you pthreads
is a fundamentally bad way to do lots of fine grain parallelism, because
it has really, really high forking overhead. (PARLANSE is designed to
minimize that forking overhead).
Here's what happens if you set the technology constant to zero (forks on every call
to fib):
C:\DMS\Domains\PARLANSE\Tools\PerformanceTest>run fibonacciparallel
Starting Parallel Fibonacci(45)...Runtime: 15.578779 seconds
Result: 1134903170
You can see that amortizing fork overhead is a good idea, even if you have fast forks.
Fib(45) produces a lot of grains. Heap allocation
of activation records solves the OP's first-order problem (thousands of pthreads each
with 1Mb of stack burns gigabytes of RAM).
But there's a second order problem: 2^45 PARLANSE "grains" will burn all your memory too
just keeping track of the grains even if your grain control block is tiny.
So it helps to have a scheduler that throttles forks once you have "a lot"
(for some definition of "a lot" significantly less that 2^45) grains to prevent the
explosion of parallelism from swamping the machine with "grain" tracking data structures.
It has to unthrottle forks when the number of grains falls below a threshold
too, to make sure there is always lots of logical, parallel work for the physical
CPUs to do.
You are not checking for errors - in particular, from pthread_create(). When pthread_create() fails, the pthread_t variable is left undefined, and the subsequent pthread_join() may crash.
If you do check for errors, you will find that pthread_create() is failing. This is because you are trying to generate almost 2000 threads - with default settings, this would require 16GB of thread stacks to be allocated alone.
You should revise your algorithm so that it does not generate so many threads.
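One way to keep some parallelism while bounding the thread count is to stop forking below a cutoff and fall back to a plain recursive call, similar in spirit to the coarse-grain threshold in the answer above. A sketch (all names and the cutoff value are illustrative, not the asker's code):
#include <stdio.h>
#include <pthread.h>

/* Sketch: only fork threads while the argument is above a cutoff, so the
   total thread count stays small; below the cutoff, recurse sequentially. */
#define CUTOFF 30

static long fib_seq(long n)
{
    return n < 2 ? n : fib_seq(n - 1) + fib_seq(n - 2);
}

struct FibArg { long n; long result; };

static void *fib_thread(void *p)
{
    struct FibArg *arg = p;
    if (arg->n <= CUTOFF) {                      /* small: no more threads   */
        arg->result = fib_seq(arg->n);
        return NULL;
    }
    struct FibArg a = { arg->n - 1, 0 };
    struct FibArg b = { arg->n - 2, 0 };
    pthread_t ta, tb;
    if (pthread_create(&ta, NULL, fib_thread, &a) != 0) {
        arg->result = fib_seq(arg->n);           /* no thread: do it all here */
        return NULL;
    }
    if (pthread_create(&tb, NULL, fib_thread, &b) != 0)
        b.result = fib_seq(b.n);                 /* second fork failed: compute inline */
    else
        pthread_join(tb, NULL);
    pthread_join(ta, NULL);
    arg->result = a.result + b.result;
    return NULL;
}

int main(void)
{
    struct FibArg arg = { 40, 0 };
    pthread_t t;
    pthread_create(&t, NULL, fib_thread, &arg);
    pthread_join(t, NULL);
    printf("fib(%ld) = %ld\n", arg.n, arg.result);
    return 0;
}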
I tried to run your code, and came across several surprises:
printf("The number is: %d\n", finalFib);
This line has a small error: %d means printf expects an int, but is passed a long int. On most platforms this is the same, or will have the same behavior anyways, but pedantically speaking (or if you just want to stop the warning from coming up, which is a very noble ideal too), you should use %ld instead, which will expect a long int.
Your fib function, on the other hand, seems non-functional. Testing it on my machine, it doesn't crash, but it yields 1047, which is not a Fibonacci number. Looking closer, it seems your program is incorrect on several aspects:
void *fib(void *fibToFind)
{
    long retval; // retval is never used
    long newFibToFind = ((long)fibToFind);
    long returnMinusOne; // variable is read but never initialized
    long returnMinustwo; // variable is read but never initialized
    pthread_t minusone; // variable is never used (?)
    pthread_t minustwo; // variable is never used

    if(newFibToFind == 0 || newFibToFind == 1)
        // you miss a cast here (but you really shouldn't do it this way)
        return newFibToFind;
    else{
        long newFibToFind1 = ((long)fibToFind) - 1; // variable is never used
        long newFibToFind2 = ((long)fibToFind) - 2; // variable is never used
        // reading undefined variables (and missing a cast)
        return returnMinusOne + returnMinustwo;
    }
}
Always take care of compiler warnings: when you get one, usually, you really are doing something fishy.
Maybe you should revise the algorithm a little: right now, all your function does is returning the sum of two undefined values, hence the 1047 I got earlier.
Implementing the Fibonacci sequence using a recursive algorithm means you need to call the function again. As others noted, it's quite an inefficient way of doing it, but it's easy, so I guess all computer science teachers use it as an example.
The regular recursive algorithm looks like this:
int fibonacci(int iteration)
{
    if (iteration == 0 || iteration == 1)
        return 1;
    return fibonacci(iteration - 1) + fibonacci(iteration - 2);
}
I don't know to which extent you were supposed to use threads—just run the algorithm on a secondary thread, or create new threads for each call? Let's assume the first for now, since it's a lot more straightforward.
Casting integers to pointers and vice-versa is a bad practice because if you try to look at things at a higher level, they should be widely different. Integers do maths, and pointers resolve memory addresses. It happens to work because they're represented the same way, but really, you shouldn't do this. Instead, you might notice that the function called to run your new thread accepts a void* argument: we can use it to convey both where the input is, and where the output will be.
So building upon my previous fibonacci function, you could use this code as the thread main routine:
void* fibonacci_offshored(void* pointer)
{
    int* pointer_to_number = pointer;
    int input = *pointer_to_number;
    *pointer_to_number = fibonacci(input);
    return NULL;
}
It expects a pointer to an integer, takes its input from it, then writes its output there.¹ You would then create the thread like this:
int main()
{
    int value = 15;
    pthread_t thread;

    // on input, value should contain the number of iterations;
    // after the end of the function, it will contain the result of
    // the fibonacci function
    int result = pthread_create(&thread, NULL, fibonacci_offshored, &value);
    // error checking is important! try to crash gracefully at the very least
    if (result != 0)
    {
        perror("pthread_create");
        return 1;
    }
    if (pthread_join(thread, NULL) != 0)
    {
        perror("pthread_join");
        return 1;
    }
    // now, value contains the output of the fibonacci function
    // (note that value is an int, so just %d is fine)
    printf("The value is %d\n", value);
    return 0;
}
If you need to call the Fibonacci function from new distinct threads (please note: that's not what I'd advise, and others seem to agree with me; it will just blow up for a sufficiently large amount of iterations), you'll first need to merge the fibonacci function with the fibonacci_offshored function. It will considerably bulk it up, because dealing with threads is heavier than dealing with regular functions.
void* threaded_fibonacci(void* pointer)
{
    int* pointer_to_number = pointer;
    int input = *pointer_to_number;

    if (input == 0 || input == 1)
    {
        *pointer_to_number = 1;
        return NULL;
    }

    // we need one argument per thread
    int minus_one_number = input - 1;
    int minus_two_number = input - 2;

    pthread_t minus_one;
    pthread_t minus_two;

    // don't forget to check! especially that in a recursive function where the
    // recursion set actually grows instead of shrinking, you're bound to fail
    // at some point
    if (pthread_create(&minus_one, NULL, threaded_fibonacci, &minus_one_number) != 0)
    {
        perror("pthread_create");
        *pointer_to_number = 0;
        return NULL;
    }
    if (pthread_create(&minus_two, NULL, threaded_fibonacci, &minus_two_number) != 0)
    {
        perror("pthread_create");
        *pointer_to_number = 0;
        return NULL;
    }

    if (pthread_join(minus_one, NULL) != 0)
    {
        perror("pthread_join");
        *pointer_to_number = 0;
        return NULL;
    }
    if (pthread_join(minus_two, NULL) != 0)
    {
        perror("pthread_join");
        *pointer_to_number = 0;
        return NULL;
    }

    *pointer_to_number = minus_one_number + minus_two_number;
    return NULL;
}
Now that you have this bulky function, adjustments to your main function are going to be quite easy: just change the reference to fibonacci_offshored to threaded_fibonacci.
int main()
{
    int value = 15;
    pthread_t thread;
    int result = pthread_create(&thread, NULL, threaded_fibonacci, &value);
    if (result != 0)
    {
        perror("pthread_create");
        return 1;
    }
    pthread_join(thread, NULL);
    printf("The value is %d\n", value);
    return 0;
}
You might have been told that threads speed up parallel processes, but there's a limit somewhere where it's more expensive to set up the thread than run its contents. This is a very good example of such a situation: the threaded version of the program runs much, much slower than the non-threaded one.
For educational purposes, this program runs out of threads on my machine when the number of desired iterations is 18, and takes a few seconds to run. By comparison, using an iterative implementation, we never run out of threads, and we have our answer in a matter of milliseconds. It's also considerably simpler. This would be a great example of how using a better algorithm fixes many problems.
Also, out of curiosity, it would be interesting to see if it crashes on your machine, and where/how.
1. Usually, you should try to avoid to change the meaning of a variable between its value on input and its value after the return of the function. For instance, here, on input, the variable is the number of iterations we want; on output, it's the result of the function. Those are two very different meanings, and that's not really a good practice. I didn't feel like using dynamic allocations to return a value through the void* return value.
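For completeness, the dynamic-allocation alternative mentioned in that note could look roughly like this (a sketch, not part of the original answer; the caller owns and frees the result returned through the void* return value):
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

static int fibonacci(int n)
{
    return (n == 0 || n == 1) ? 1 : fibonacci(n - 1) + fibonacci(n - 2);
}

static void *fibonacci_allocating(void *pointer)
{
    int input = *(int *)pointer;
    int *result = malloc(sizeof *result);   /* result lives past the thread's end */
    if (result != NULL)
        *result = fibonacci(input);
    return result;                          /* NULL signals the allocation failure */
}

int main(void)
{
    int value = 15;
    pthread_t thread;
    void *returned;

    if (pthread_create(&thread, NULL, fibonacci_allocating, &value) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(thread, &returned);
    if (returned != NULL) {
        printf("The value is %d\n", *(int *)returned);
        free(returned);                     /* the caller releases the result */
    }
    return 0;
}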
