print out a large number of integers rapidly in C
I have to print 1,000,000 four-digit numbers. I used printf for this purpose:
for(i=0;i<1000000;i++)
{
printf("%d\n", students[i]);
}
and it turns out to be too slow. Is there a faster way to print them?
You could create an array, fill it with the output data and then print out that array at once. Or if there is a memory problem, break that array into smaller chunks and print them one by one.
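For illustration, a minimal sketch of that idea, assuming every number really is four digits as the question states (so each line is exactly five bytes; numbers below 1000 would come out zero-padded):

#include <stdio.h>

void print_all(const int *students, size_t count)
{
    /* 4 digits + '\n' per line; sized for the stated 1,000,000 numbers */
    static char buffer[1000000 * 5];
    char *p = buffer;
    for (size_t i = 0; i < count; i++) {
        int n = students[i];
        p[0] = '0' + n / 1000;
        p[1] = '0' + n / 100 % 10;
        p[2] = '0' + n / 10 % 10;
        p[3] = '0' + n % 10;
        p[4] = '\n';
        p += 5;
    }
    fwrite(buffer, 1, (size_t)(p - buffer), stdout); /* one big write */
}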
Here is my attempt replacing printf and stdio stream buffering with straightforward special-case code:
int print_numbers(const char *filename, const unsigned int *input, size_t len) {
    enum {
        // Maximum digits per number. The input numbers must not be greater
        // than this!
# if 1
        DIGITS = 4,
# else
        // Alternative safe upper bound on the digits per integer
        // (log10(2) < 28/93)
        DIGITS = (sizeof *input * CHAR_BIT * 28UL + 92) / 93,
# endif
        // Maximum lines to be held in the buffer. Tune this to your system,
        // though something on the order of 32 kB should be reasonable
        LINES = 5000
    };
    // Write the output in binary to avoid extra processing by the CRT. If
    // necessary, add the expected "\r\n" line endings or whatever else is
    // required for the platform manually.
    FILE *file = fopen(filename, "wb");
    if(!file)
        return EOF;
    // Disable automatic file buffering in favor of our own
    setbuf(file, NULL);
    while(len) {
        // Set up a write pointer for a buffer going back-to-front. This
        // simplifies the reverse order of digit extraction
        char buffer[(DIGITS + 1 /* for the newline */) * LINES];
        char *tail = &buffer[sizeof buffer];
        char *head = tail;
        // Grab the largest set of lines still remaining to be printed which
        // will safely fit in our buffer
        size_t chunk = len > LINES ? LINES : len;
        const unsigned int *input_chunk;
        len -= chunk;
        input += chunk;
        input_chunk = input;
        do {
            // Convert each number by extracting least-significant digits
            // until all have been written
            unsigned int number = *--input_chunk;
            *--head = '\n';
            do {
# if 1
                char digit = '0' + number % 10;
                number /= 10;
# else
                // Alternative in case the compiler is unable to merge the
                // division/modulo and perform reciprocal multiplication
                char digit = '0' + number;
                number = number * 0xCCCDUL >> 19;
                digit -= number * 10;
# endif
                *--head = digit;
            } while(number);
        } while(--chunk);
        // Dump everything written to the present buffer
        fwrite(head, tail - head, 1, file);
    }
    return fclose(file);
}
I fear this won't buy you much more than a fairly small constant factor over your original (by avoiding some printf format parsing, per-character buffering, locale handling, multithreading locks, etc.).
Beyond this you may want to consider processing the input and writing the output on the fly instead of reading/processing/writing as separate stages. Of course, whether or not this is possible depends entirely on the operation to be performed.
Oh, and don't forget to enable compiler optimizations when building the application. A run through with a profiler couldn't hurt either.
Related
Writing an array of integers into a file using C [duplicate]
This question already has answers here: How to write an array to file in C (3 answers). Closed 3 years ago.
I would like to write an array of integers into a file using C. However, I get some gibberish in the file. The code is about a function that converts a decimal number into binary, then stores it in a file.

int * decToBinary(int n) // function to transform the decimal numbers to binary
{
    static int binaryNum[16]; // array to store binary number
    int i = 0;                // counter for binary array
    while (n > 0) {
        binaryNum[i] = n % 2; // storing remainder in binary array
        n = n / 2;
        i++;
    }
    return binaryNum;
}

int main()
{
    FILE *infile;
    int i;
    int *p;
    int decimal = 2000;
    int written = 0;
    infile = fopen("myfile.txt", "w");
    p = decToBinary(decimal);
    written = fwrite(p, sizeof(int), sizeof(p), infile);
    if (written == 0) {
        printf("Error during writing to file !");
    }
    fclose(infile);
    return 0;
}

This is what I get in my file: [screenshot of gibberish in the original post]
This is what I get when I write a text as a test; it does not have any problem with the text, but it does with the array:

char str[] = "test text --------- \n";
infile = fopen("myfile.txt", "wb");
p = decToBinary(decimal);
fwrite(str, 1, sizeof(str), infile);
written = fwrite(p, sizeof(int), sizeof(p), infile);

And this is what I get when I make this change:

written = fwrite(&p, sizeof(int), sizeof(p), infile);
First, be aware that there are two interpretations for 'binary':

int n = 1012;
fwrite(&n, sizeof(n), 1, file);

This writes out the data just as is; as it is represented in the form of bits, the output is considered "binary" (a binary file).
Your question and the code you provided, though, rather imply that you actually want a file containing the numbers in binary text format, i.e. 7 being represented by the string "111".
Then first, be aware that 0 and 1 do not represent the characters '0' and '1' in most, if not all, encodings. Assuming ASCII or compatible, '0' is represented by value 48, '1' by value 49. As the C standard requires the digits [0..9] to be consecutive characters (this does not apply to any other characters!), you can safely do:

binaryNum[i] = '0' + n % 2;

Be aware that, as you want strings, you chose the wrong data type; you need a character array:

static char binaryNum[X];

X??? We need to talk about the required size! If we create strings, we need to null-terminate them. So we need room for the terminating 0-character (really value 0, not 48 for the character '0'), which means at least one character more.
Currently, due to the comparison n > 0, you treat negative values as equal to 0. Do you really intend this? If so, you might consider unsigned int as the data type; otherwise, leave a comment and I'll cover handling negative values later on.
With the restriction to positive values, 16 + 1 as size is fine, assuming int has 32 bits on your system! However, the C standard allows int to be smaller or larger as well. If you want to be portable, use CHAR_BIT * sizeof(int) / 2 (CHAR_BIT is defined in <limits.h>; drop the division by 2 if you switch to unsigned int).
There is one special case not covered: the integer value 0 won't enter the loop at all, so you'd end up with an empty string. Catch this case separately:

if(n == 0)
{
    binaryNum[i++] = '0';
}
else
{
    while (n > 0) { /* ... */ }
}
// now the important part:
// terminate the string!
binaryNum[i] = 0;

Now you can simply do (assuming you changed p to char*):

written = fprintf(file, "%s\n", p);
//                        ^^ only if you want each number on a separate line;
// you can replace it with a space or drop it entirely, if desired

Be aware that the algorithm, as is, prints out the least significant bits first! You might want it the other way round; then you'd either have to reverse the string or (which I would prefer) start by writing the terminating 0 at the end of the buffer and fill up the digits one by one towards the front, returning a pointer to the last digit (the most significant one) written instead of always the start of the buffer.
One word about your original version:

written = fwrite(p, sizeof(int), sizeof(p), infile);

sizeof(p) gives you the size of a pointer; this is system dependent, but will always be the same on the same system, most likely 8 on yours (if modern 64-bit hardware), possibly 4 (on a typical 32-bit CPU); other values on less common systems are possible as well. You'd need to return the number of characters printed separately (and no, sizeof(binaryNum) won't be suitable, as it always returns 17, assuming 32-bit int and all the changes shown above applied).
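Pulling those points together, a corrected version might look like this (a sketch following the back-to-front approach the answer prefers; the buffer size uses the portable bound discussed above):

#include <stdio.h>
#include <limits.h>

// Fills the buffer back-to-front and returns a pointer to the most
// significant digit, so no reversal is needed.
char *decToBinary(unsigned int n)
{
    static char binaryNum[CHAR_BIT * sizeof(unsigned int) + 1];
    char *p = &binaryNum[sizeof binaryNum - 1];
    *p = '\0';                  // terminate the string first
    do {
        *--p = '0' + n % 2;     // the character '0' or '1', not the value
        n /= 2;
    } while (n > 0);            // do/while also handles n == 0 -> "0"
    return p;
}

int main(void)
{
    FILE *file = fopen("myfile.txt", "w");
    if (!file)
        return 1;
    fprintf(file, "%s\n", decToBinary(2000));
    fclose(file);
    return 0;
}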
You probably want this:

...
int main()
{
    int decimal = 2000;
    int *p = decToBinary(decimal);
    for (int i = 0; i < 16; i++)
    {
        printf("%d", p[i]);
    }
    return 0;
}

The output goes to the terminal instead of into a file. For writing into a file, use fopen as in your code, and use fprintf instead of printf. Concerning decToBinary there is still room for improvement; in particular, you could transform the number directly into an array of char containing only the characters '0' and '1' using the << and & operators.
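A sketch of that bit-operator variant (a hypothetical helper, assuming 16 bits suffice as in the original code):

// Extracts each of the 16 bits with shift and mask, most significant first.
void decToBinaryChars(int n, char out[17])
{
    for (int i = 0; i < 16; i++) {
        out[i] = ((n >> (15 - i)) & 1) + '0'; // bit -> character '0'/'1'
    }
    out[16] = '\0';
}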
Use fscanf to read in a variable number of integers
I have over 100,000 csv files in the below format:

1,1,5,1,1,1,0,0,6,6,1,1,1,0,1,0,13,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,1,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,2,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,3,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,4,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,5,6,4,1,0,1,0,1,0,4,8,18,20,,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,6,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,7,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,1,0,8,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,
1,1,5,1,1,2,0,0,12,12,1,2,4,1,1,0,13,4,7,8,18,20,21,25,27,29,31,32,,,,,,,,,,,,,,,,

All I need is field 10 and field 17 onward. Field 10 is the counter indicating how many integers are stored starting from field 17, i.e. what I need is:

6,13,4,7,8,18,20
5,4,7,8,18,20
5,4,7,8,18,20
5,13,4,7,8,20
5,13,4,7,8,20
4,4,8,18,20
5,4,7,8,18,20
5,13,4,7,8,20
5,13,4,7,8,20
12,13,4,7,8,18,20,21,25,27,29,31,32

The max number of integers to read is 28. I can easily achieve this with getline in C++; however, from my previous experience, since I need to handle over 100,000 such files and each file may have 300,000~400,000 such lines, using getline to read in the data and build a vector<vector<int>> may have serious performance issues for me. I tried to use fscanf to achieve this:

while (!feof(stream)){
    fscanf(fstream,"%*d,%*d,%*d,%*d,%*d,%*d,%*d,%*d,%*d,%d",&MyCounter);
    fscanf(fstream,"%*d,%*d,%*d,%*d,%*d,%*d"); // skip to column 17
    for (int i=0;i<MyCounter;i++){
        fscanf(fstream,"%d",&MyIntArr[i]);
    }
    fscanf(fstream,"%*s"); // to finish the line
}

However, this will call fscanf multiple times and may also create performance issues. Is there any way to read in a variable number of integers in one call with fscanf? Or do I need to read into a string and then strsep/stoi it? Compared to fscanf, which is better from a performance point of view?
So, there are at most 43 numbers per line. Even at 64 bits, each number is limited to 21 digits, so 1024 bytes is plenty for the max 946 bytes that a line could be (so long as there is no whitespace).

char line[1024];
while (fgets(line, sizeof(line), stdin) != NULL) {
    //...
}

A helper function to skip to the desired column:

const char *find_nth_comma(const char *s, int n) {
    const char *p = s;
    if (p && n)
        while (*p) {
            if (*p == ',') {
                if (--n == 0)
                    break;
            }
            ++p;
        }
    return p;
}

So, inside your loop, skip to column 10 to find the first number of interest, and then skip to column 17 to start reading in the rest of the numbers. The completed loop looks like:

while (fgets(line, sizeof(line), stdin) != NULL) {
    const char *p = find_nth_comma(line, 9);
    char *end;
    assert(p && *p);
    MyCounter = strtol(p+1, &end, 10);
    assert(*end == ',');
    p = find_nth_comma(end+1, 6);
    assert(p && *p);
    for (int i = 0; i < MyCounter; ++i, p = end) {
        MyIntArray[i] = strtol(p+1, &end, 10);
        assert((*end == ',') ||
               (i == MyCounter-1) && (*end == '\0' || isspace(*end & 0xFF)));
    }
}

This approach will work with a mmap solution as well. The fgets would be replaced with a function that points to the next line to be processed in the file. The find_nth_comma would need a modification to detect end of line/end of file rather than rely on a NUL terminated string. strtol would be changed with a custom function that again detects end of line or end of file. (The purpose of such changes is to remove any code that would require copying the data, which would be the motivation for a mmap approach.)
With parallel processing, it is possible to parse multiple parts of the file simultaneously. But, it is probably sufficient to have different threads process different files, and then collate the results after all files have been processed.
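For the mmap variant mentioned above, a length-bounded helper along these lines could replace the NUL-terminated scan (a hypothetical sketch, not from the original answer):

// Like find_nth_comma, but bounded by [p, end) instead of a NUL terminator.
const char *find_nth_comma_n(const char *p, const char *end, int n) {
    for (; p < end; ++p) {
        if (*p == ',' && --n == 0)
            return p;
        if (*p == '\n')
            break; // stop at end of record
    }
    return NULL; // not found within this record
}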
Eventually I used a memory mapped file to solve my problem (this solution is a side product of my previous problem, a performance issue when reading a big CSV file): read in large CSV file performance issue in C++
Since I work on MS Windows, I use Stephan Brumme's "Portable Memory Mapping C++ Class": http://create.stephan-brumme.com/portable-memory-mapping/
Since I don't need to deal with file(s) > 2 GB, my implementation is simpler. For over-2GB files, visit the web to see how to handle them. Below please find my piece of code:

// may try RandomAccess/SequentialScan
MemoryMapped MemFile(FilterBase.BaseFileName, MemoryMapped::WholeFile, MemoryMapped::RandomAccess);

// point to start of memory file
char* start = (char*)MemFile.getData();
// dummy in my case
char* tmpBuffer = start;
// looping counter
uint64_t i = 0;
// pre-allocate result vector
MyVector.resize(300000);
// line counter
int LnCnt = 0;
// no. of fields
int NumOfField = 43;
// delimiter count, num of fields + 1 since the leading and trailing delimiters are virtual
int DelimCnt = NumOfField + 1;
// Delimiter positions. May use new to allocate at run time
// or even use a vector of integers.
// This stores the delimiter positions in each line;
// since the position is relative to the start of the file, if the file is
// extremely large one may need to change from int to unsigned, long or even
// unsigned long long
static int DelimPos[DelimCnt];
// Max number of fields needed, usually equal to NumOfField; can be smaller,
// e.g. in my case I only need 4 fields from the first 15, in which case one
// can assign 15 to MaxFieldNeed
int MaxFieldNeed = NumOfField;
// keep track of how many delimiters have been read on each line
int DelimCounter = 0;
// define field and line separators
char FieldDelim = ',';
char LineSep = '\n';

// 1st field, "virtual delimiter" position
DelimPos[DelimCounter] = -1;
DelimCounter++;

// loop through the whole memory-mapped file, once and only once
for (i = 0; i < MemFile.size(); i++) {
    // grab the position of each delimiter in the line
    if ((MemFile[i] == FieldDelim) && (DelimCounter <= MaxFieldNeed)) {
        DelimPos[DelimCounter] = i;
        DelimCounter++;
    }
    // grab all values when end of line is hit
    if (MemFile[i] == LineSep) {
        // no need for if (DelimCounter == NumOfField); just assign anyway.
        // Wastes a little bit of memory in the integer array but gains performance
        DelimPos[DelimCounter] = i;
        // I know exactly what the format is and what field(s) I want.
        // A more general approach (as a CSV reader) may put all fields
        // into a vector of vector of string.
        // With *EFFORT* one may modify this piece of code so that it can parse
        // different formats at run time, e.g. similar to:
        // fscanf(fstream,"%d,%f....
        // Also, this piece of code cannot handle complex CSV, e.g.
        // Peter,28,157CM
        // John,26,167CM
        // "Mary,Brown",25,150CM
        MyVector.StrField = string(start + DelimPos[0] + 1, start + DelimPos[1] - 1);
        MyVector.IntField = strtol(start + DelimPos[3] + 1, &tmpBuffer, 10);
        MyVector.IntField2 = strtol(start + DelimPos[8] + 1, &tmpBuffer, 10);
        MyVector.FloatField = strtof(start + DelimPos[14] + 1, &tmpBuffer);
        // reset delimiter counter for each line
        DelimCounter = 0;
        // the previous line separator is treated as the first delimiter of the next line
        DelimPos[DelimCounter] = i;
        DelimCounter++;
        LnCnt++;
    }
}
MyVector.resize(LnCnt);
MyVector.shrink_to_fit();
MemFile.close();

I can code whatever I want inside:

if (MemFile[i] == LineSep) {
}

e.g. handle empty fields, perform calculations, etc. With this piece of code, I handle 2100 files (6.3 GB) in 57 seconds!!! (I hard-coded the CSV format in it and only grab 4 values in my previous case.)
Later I will change this code to handle this issue. Thanks to all who helped me with it.
In order to maximize performance, you should map the files into memory with mmap or equivalent and parse the file with ad hoc code, typically scanning one character at a time with a pointer, checking for '\n' and/or '\r' for end of record and converting the numbers on the fly for storage into your arrays. The tricky parts are:
- how do you allocate or otherwise handle the destination arrays?
- are the fields all numeric? integral?
- is the last record terminated by a newline? You can easily check this condition after the mmap call. The advantage is you then only need to check for end of file when you encounter a newline sequence.
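As a rough illustration of that approach, here is a minimal POSIX sketch (error handling trimmed; handle_number is a hypothetical callback for storing each parsed value):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void parse_file(const char *path, void (*handle_number)(long))
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    const char *p = base, *end = base + st.st_size;
    while (p < end) {
        if (*p >= '0' && *p <= '9') {          // start of a number
            long val = 0;
            while (p < end && *p >= '0' && *p <= '9')
                val = val * 10 + (*p++ - '0'); // convert on the fly
            handle_number(val);
        } else {
            p++;                               // ',', '\n' or '\r'
        }
    }
    munmap(base, st.st_size);
    close(fd);
}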
Probably the easiest way to read a run-time determined number of integers is to point into the right part of a longer format string. In other words, we can have a format string with 28 %d, specifiers, but point to the nth one before the end of the string and pass that pointer as the format string for scanf().
As a simple example, consider accepting 3 integers from a maximum of 6:

"%d,%d,%d,%d,%d,%d,"
          ^

The arrow shows the string pointer to use as the pattern argument.
Here's a full worked example; its runtime is about 8 seconds for 1 million iterations (10 million lines) when built with gcc -O3. It's slightly complicated by the mechanics to update the input string pointer, which is obviously not necessary when reading from a file stream. I've skipped the checking that nfields <= 28, but that's easily added.

char const *const input =
    "1,1,5,1,1,1,0,0,6,6,1,1,1,0,1,0,13,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,1,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,2,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,3,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,4,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,5,6,4,1,0,1,0,1,0,4,8,18,20,,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,6,6,5,1,1,1,0,1,0,4,7,8,18,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,7,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,1,0,8,6,5,1,1,1,0,1,0,13,4,7,8,20,,,,,,,,,,,,,,,,,,,,,,,\n"
    "1,1,5,1,1,2,0,0,12,12,1,2,4,1,1,0,13,4,7,8,18,20,21,25,27,29,31,32,,,,,,,,,,,,,,,,\n";

#include <stdio.h>

#define SKIP_FIELD "%*[^,],"
#define DECIMAL_FIELD "%d,"

int read()
{
    int n;       /* bytes read - not needed for file or stdin */
    int sum = 0; /* just to make sure results are used */
    for (char const *s = input; *s; ) {
        int nfields;
        int array[28];
        int m = sscanf(s,
                       /* field 0 is missing */
                       SKIP_FIELD SKIP_FIELD SKIP_FIELD
                       SKIP_FIELD SKIP_FIELD SKIP_FIELD
                       SKIP_FIELD SKIP_FIELD SKIP_FIELD
                       DECIMAL_FIELD /* field 10 */
                       SKIP_FIELD SKIP_FIELD SKIP_FIELD
                       SKIP_FIELD SKIP_FIELD SKIP_FIELD
                       "%n",
                       &nfields, &n);
        if (m != 1) {
            return -1;
        }
        s += n;
        static const char fieldchars[] = DECIMAL_FIELD;
        static const size_t fieldsize = sizeof fieldchars - 1; /* ignore terminating null */
        static const char *const parse_entries =
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD DECIMAL_FIELD
            "[^\n] ";
        const char *const line_parse = parse_entries + (28-nfields) * fieldsize;
        /* now read nfields (max 28) */
        m = sscanf(s, line_parse,
                   &array[0], &array[1], &array[2], &array[3],
                   &array[4], &array[5], &array[6], &array[7],
                   &array[8], &array[9], &array[10], &array[11],
                   &array[12], &array[13], &array[14], &array[15],
                   &array[16], &array[17], &array[18], &array[19],
                   &array[20], &array[21], &array[22], &array[23],
                   &array[24], &array[25], &array[26], &array[27]);
        if (m != nfields) {
            return -1;
        }
        /* advance stream position */
        sscanf(s, "%*[^\n] %n", &n);
        s += n;
        /* use the results */
        for (int i = 0; i < nfields; ++i) {
            sum += array[i];
        }
    }
    return sum;
}

#undef SKIP_FIELD
#undef DECIMAL_FIELD

int main()
{
    int sum = 0;
    for (int i = 0; i < 1000000; ++i) {
        sum += read() * (i&1 ? 1 : -1); /* alternate add and subtract */
    }
    return sum != 0;
}
How do I read and parse a text file with numbers, fast (in C)?
The latest update: my classmate uses fread() to read about one third of the whole file into a string; this avoids running out of memory. Then he processes this string, separating it into his data structure. Notice, you need to take care of one problem: at the end of this string, the last several characters may not form one whole number. Think about a way to detect this situation so you can connect these characters with the first several characters of the next string. (A sketch of this boundary handling follows below.)
Each number corresponds to a different variable in your data structure. Your data structure should be very simple, because inserting data into a data structure is where most of the time is spent. Therefore, the fastest way to process the data is: use fread() to read the file into a string, then separate the string into different one-dimensional arrays.
For example (just an example, not from my project), I have a text file like:

72 24 20
22 14 30
23 35 40
42 29 50
19 22 60
18 64 70
.
.
.

Each row is one person's information. The first column is the person's age, the second column is his deposit, the third is his wife's age.
Then we use fread() to read this text file into a string, and I use strtok() to separate it (you can use a faster way to separate it). Don't use a data structure to store the separated data! I mean, don't do this:

struct person
{
    int age;
    int deposit;
    int wife_age;
};
struct person *my_data_store;
my_data_store = malloc(sizeof(struct person) * length_of_this_array);
// then insert separated data into my_data_store

Don't use a data structure to store data! The fastest way to store your data is like this:

int *age;
int *deposit;
int *wife_age;

age = (int*)malloc(sizeof(int) * age_array_length);
deposit = (int*)malloc(sizeof(int) * deposit_array_length);
wife_age = (int*)malloc(sizeof(int) * wife_array_length);
// the values of age_array_length, deposit_array_length and wife_array_length
// can be obtained with `wc -l`. You can invoke wc -l from your C program.
// Then you can insert the separated data into these arrays as you use
// strtok() to separate them.

The second update: the best way is to use fread() to read part of the file into a string, then separate the string into your data structure. By the way, don't use any standard library function which converts a string into an integer; that's too slow, like fscanf() or atoi(). We should write our own function to transform a string into an integer. Not only that, we should design a simpler data structure to store the data. By the way, my classmate can read this 1.7G file within 7 seconds. There is a way to do this. That way is much better than using multithreading. I haven't seen his code; after I see his code, I will update a third time to tell you how he did it. That will be two months later, after our course is finished.
Update: I used multithreading to solve this problem!! It works! Notice: don't use clock() to measure the time when using multithreading; that's why I thought the time of execution increased.
One thing I want to clarify is that the time of reading the file without storing the values into my structure is about 20 seconds. The time of storing the values into my structure is about 60 seconds. The definition of "time of reading the file" includes the time to read the whole file and store the values into my structure: the time of reading the file = scanning the file + storing the values into my structure. Therefore, does anyone have suggestions for storing the values faster?
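A sketch of the chunk-boundary handling described in that first update (parse() is a hypothetical callback; assumes every token is far shorter than the chunk):

#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define CHUNK (64 * 1024 * 1024)

void process(FILE *f, void (*parse)(const char *, size_t))
{
    static char buf[CHUNK + 32]; // slack for the carried-over partial token
    size_t carry = 0, got;
    while ((got = fread(buf + carry, 1, CHUNK, f)) > 0) {
        size_t n = carry + got;
        size_t end = n;
        // back up to the last whitespace so no number is split in half
        while (end > 0 && !isspace((unsigned char)buf[end - 1]))
            end--;
        parse(buf, end);
        carry = n - end;                // partial trailing number, if any
        memmove(buf, buf + end, carry); // move it to the front of the buffer
    }
    if (carry)
        parse(buf, carry);              // last partial token at EOF
}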
(By the way, I don't have control over the input file; it is generated by our professor. I am trying to use multithreading to solve this problem; if it works, I will tell you the result.)
I have a file whose size is 1.7 GB. It looks like:

1 1427826
1 1427827
1 1750238
1 2
2 3
2 4
3 5
3 6
10 7
11 794106
.
.

and so on. It has about ten million lines. Now I need to read this file and store these numbers in my data structure within 15 seconds. I have tried to use fread() to read the whole file and then use strtok() to separate each number, but it still needs 80 seconds. If I use fscanf(), it is even slower. How do I speed it up? Maybe we cannot make it less than 15 seconds, but 80 seconds is too long. How do I read it as fast as possible?
Here is part of my reading code:

int Read_File(FILE *fd, int round)
{
    clock_t start_read = clock();
    int first, second;
    first = 0;
    second = 0;
    fseek(fd, 0, SEEK_END);
    long int fileSize = ftell(fd);
    fseek(fd, 0, SEEK_SET);
    char *buffer = (char *)malloc(sizeof(char) * fileSize);
    char *string_first;
    long int newFileSize = fread(buffer, 1, fileSize, fd);
    char *string_second;
    string_first = strtok(buffer, " \t\n");
    while (string_first != NULL)
    {
        first = atoi(string_first);
        string_second = strtok(NULL, " \t\n");
        second = atoi(string_second);
        string_first = strtok(NULL, " \t\n");
        max_num = first > max_num ? first : max_num;
        max_num = second > max_num ? second : max_num;
        root_level = first / NUM_OF_EACH_LEVEL;
        leaf_addr = first % NUM_OF_EACH_LEVEL;
        if (root_addr[root_level][leaf_addr].node_value != first)
        {
            root_addr[root_level][leaf_addr].node_value = first;
            root_addr[root_level][leaf_addr].head = (Neighbor *)malloc(sizeof(Neighbor));
            root_addr[root_level][leaf_addr].tail = (Neighbor *)malloc(sizeof(Neighbor));
            root_addr[root_level][leaf_addr].g_credit[0] = 1;
            root_addr[root_level][leaf_addr].head->neighbor_value = second;
            root_addr[root_level][leaf_addr].head->next = NULL;
            root_addr[root_level][leaf_addr].tail = root_addr[root_level][leaf_addr].head;
            root_addr[root_level][leaf_addr].degree = 1;
        }
        else
        {
            // insert its new neighbor
            Neighbor *newNeighbor;
            newNeighbor = (Neighbor *)malloc(sizeof(Neighbor));
            newNeighbor->neighbor_value = second;
            root_addr[root_level][leaf_addr].tail->next = newNeighbor;
            root_addr[root_level][leaf_addr].tail = newNeighbor;
            root_addr[root_level][leaf_addr].degree++;
        }
        root_level = second / NUM_OF_EACH_LEVEL;
        leaf_addr = second % NUM_OF_EACH_LEVEL;
        if (root_addr[root_level][leaf_addr].node_value != second)
        {
            root_addr[root_level][leaf_addr].node_value = second;
            root_addr[root_level][leaf_addr].head = (Neighbor *)malloc(sizeof(Neighbor));
            root_addr[root_level][leaf_addr].tail = (Neighbor *)malloc(sizeof(Neighbor));
            root_addr[root_level][leaf_addr].head->neighbor_value = first;
            root_addr[root_level][leaf_addr].head->next = NULL;
            root_addr[root_level][leaf_addr].tail = root_addr[root_level][leaf_addr].head;
            root_addr[root_level][leaf_addr].degree = 1;
            root_addr[root_level][leaf_addr].g_credit[0] = 1;
        }
        else
        {
            // insert its new neighbor
            Neighbor *newNeighbor;
            newNeighbor = (Neighbor *)malloc(sizeof(Neighbor));
            newNeighbor->neighbor_value = first;
            root_addr[root_level][leaf_addr].tail->next = newNeighbor;
            root_addr[root_level][leaf_addr].tail = newNeighbor;
            root_addr[root_level][leaf_addr].degree++;
        }
    }
}
Some suggestions:
a) Consider converting (or pre-processing) the file into a binary format, with the aim to minimise the file size and also drastically reduce the cost of parsing. I don't know the ranges for your values, but various techniques (e.g. using one bit to tell if the number is small or large and storing the number as either a 7-bit integer or a 31-bit integer) could halve the file IO (and double the speed of reading the file from disk) and slash parsing costs down to almost nothing. Note: for maximum effect you'd modify whatever software created the file in the first place.
b) Reading the entire file into memory before you parse it is a mistake. It doubles the amount of RAM required (and the cost of allocating/freeing) and has disadvantages for CPU caches. Instead read a small amount of the file (e.g. 16 KiB) and process it, then read the next piece and process it, and so on; so that you're constantly reusing the same small buffer.
c) Use parallelism for file IO. It shouldn't be hard to read the next piece of the file while you're processing the previous piece (either by using 2 threads or by using asynchronous IO).
d) Pre-allocate memory for the "neighbour" structures and remove most/all malloc() calls from your loop. The best possible case is to use a statically allocated array as a pool - e.g. Neighbor myPool[MAX_NEIGHBORS]; where malloc() can be replaced with &myPool[nextEntry++];. This reduces/removes the overhead of malloc() while also improving cache locality for the data itself. (A sketch follows below.)
e) Use parallelism for storing values. For example, you could have multiple threads where the first thread handles all the cases where root_level % NUM_THREADS == 0, the second thread handles all cases where root_level % NUM_THREADS == 1, etc.
With all of the above (assuming a modern 4-core CPU), I think you can get the total time (for reading and storing) down to less than 15 seconds.
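A minimal sketch of the static pool from suggestion (d); MAX_NEIGHBORS and struct Neighbor are placeholders standing in for the question's own types and sizes:

#include <stddef.h>

#define MAX_NEIGHBORS (20 * 1000 * 1000) /* placeholder: size to the expected record count */

typedef struct Neighbor {
    int neighbor_value;
    struct Neighbor *next;
} Neighbor;

static Neighbor myPool[MAX_NEIGHBORS];
static size_t nextEntry = 0;

static Neighbor *alloc_neighbor(void)
{
    /* no per-node free() needed; the whole pool lives for the program's run */
    return &myPool[nextEntry++];
}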
My suggestion would be to form a processing pipeline and thread it. Reading the file is an I/O-bound task and parsing it is CPU-bound. They can be done at the same time in parallel.
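A minimal sketch of such a pipeline, with one reader thread and a parsing main thread sharing a double buffer (names and sizes are illustrative; synchronization via POSIX semaphores, error handling and chunk-boundary handling omitted):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define CHUNK (1 << 20)

static char buf[2][CHUNK];
static size_t len[2];
static sem_t filled[2], emptied[2];

static void *reader(void *arg)
{
    FILE *f = arg;
    for (int i = 0; ; i ^= 1) {
        sem_wait(&emptied[i]);               /* wait until parser released buffer i */
        len[i] = fread(buf[i], 1, CHUNK, f); /* I/O-bound work */
        sem_post(&filled[i]);
        if (len[i] == 0)
            return NULL;                     /* EOF signalled by a zero-length chunk */
    }
}

int main(void)
{
    FILE *f = fopen("input.txt", "rb");
    for (int i = 0; i < 2; i++) {
        sem_init(&filled[i], 0, 0);
        sem_init(&emptied[i], 0, 1);
    }
    pthread_t t;
    pthread_create(&t, NULL, reader, f);
    for (int i = 0; ; i ^= 1) {
        sem_wait(&filled[i]);                /* wait for a full buffer */
        if (len[i] == 0)
            break;
        /* ... CPU-bound parsing of buf[i][0 .. len[i]) goes here ... */
        sem_post(&emptied[i]);               /* hand the buffer back to the reader */
    }
    pthread_join(t, NULL);
    fclose(f);
    return 0;
}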
There are several possibilities. You'll have to experiment.
Exploit what your OS gives you. If Windows, check out overlapped IO. This lets your computation proceed with parsing one buffer full of data while the Windows kernel fills another. Then switch buffers and continue. This is related to what @Neal suggested, but has less overhead for buffering: Windows is depositing data directly in your buffer through the DMA channel. No copying.
If Linux, check out memory mapped files. Here the OS is using the virtual memory hardware to do more-or-less what Windows does with overlapping.
Code your own integer conversion. This is likely to be a bit faster than making a clib call per integer. Here's example code. You want to absolutely limit the number of comparisons.

// Process one input buffer.
*end_buf = ' '; // add a sentinel at the end of the buffer
for (char *p = buf; p < end_buf; p++) {
    // somewhat unsafe (but fast) reliance on unsigned wrapping
    unsigned val = *p - '0';
    if (val <= 9) {
        // Found the start of an integer. Accumulate the remaining digits.
        for (;;) {
            unsigned digit_val = *++p - '0';
            if (digit_val > 9)
                break;
            val = 10 * val + digit_val;
        }
        // ... do something with val
    }
}

Don't call malloc once per record. You should allocate blocks of many structs at a time.
Experiment with buffer sizes.
Crank up compiler optimizations. This is the kind of code that benefits greatly from excellent code generation.
Yes, standard library conversion functions are surprisingly slow. If portability is not a problem, I'd memory-map the file. Then, something like the following C99 code (untested) could be used to parse the entire memory map:

#include <stdlib.h>
#include <errno.h>

struct pair {
    unsigned long key;
    unsigned long value;
};

typedef struct {
    size_t size;   /* Maximum number of items */
    size_t used;   /* Number of items used */
    struct pair item[];
} items;

/* Initial number of items to allocate for */
#ifndef ITEM_ALLOC_SIZE
#define ITEM_ALLOC_SIZE 8388608
#endif

/* Adjustment to new size (parameter is old number of items) */
#ifndef ITEM_REALLOC_SIZE
#define ITEM_REALLOC_SIZE(from) (((from) | 1048575) + 1048577)
#endif

items *parse_items(const void *const data, const size_t length)
{
    const unsigned char *ptr = (const unsigned char *)data;
    const unsigned char *const end = (const unsigned char *)data + length;
    items *result;
    size_t size = ITEM_ALLOC_SIZE;
    size_t used = 0;
    unsigned long val1, val2;

    result = malloc(sizeof (items) + size * sizeof (struct pair));
    if (!result) {
        errno = ENOMEM;
        return NULL;
    }

    while (ptr < end) {

        /* Skip newlines and whitespace. */
        while (ptr < end && (*ptr == '\0' || *ptr == '\t' ||
                             *ptr == '\n' || *ptr == '\v' ||
                             *ptr == '\f' || *ptr == '\r' ||
                             *ptr == ' '))
            ptr++;

        /* End of data? */
        if (ptr >= end)
            break;

        /* Parse first number. */
        if (*ptr >= '0' && *ptr <= '9')
            val1 = *(ptr++) - '0';
        else {
            free(result);
            errno = ECOMM; /* Bad data! */
            return NULL;
        }
        while (ptr < end && *ptr >= '0' && *ptr <= '9') {
            const unsigned long old = val1;
            val1 = 10UL * val1 + (*(ptr++) - '0');
            if (val1 < old) {
                free(result);
                errno = EDOM; /* Overflow! */
                return NULL;
            }
        }

        /* Skip whitespace. */
        while (ptr < end && (*ptr == '\t' || *ptr == '\v' ||
                             *ptr == '\f' || *ptr == ' '))
            ptr++;
        if (ptr >= end) {
            free(result);
            errno = ECOMM; /* Bad data! */
            return NULL;
        }

        /* Parse second number. */
        if (*ptr >= '0' && *ptr <= '9')
            val2 = *(ptr++) - '0';
        else {
            free(result);
            errno = ECOMM; /* Bad data! */
            return NULL;
        }
        while (ptr < end && *ptr >= '0' && *ptr <= '9') {
            const unsigned long old = val2;
            val2 = 10UL * val2 + (*(ptr++) - '0');
            if (val2 < old) {
                free(result);
                errno = EDOM; /* Overflow! */
                return NULL;
            }
        }

        if (ptr < end) {
            /* Error unless whitespace or newline. */
            if (*ptr != '\0' && *ptr != '\t' && *ptr != '\n' &&
                *ptr != '\v' && *ptr != '\f' && *ptr != '\r' &&
                *ptr != ' ') {
                free(result);
                errno = ECOMM; /* Bad data! */
                return NULL;
            }
            /* Skip the rest of this line. */
            while (ptr < end && *ptr != '\n' && *ptr != '\r')
                ptr++;
        }

        /* Need to grow result? */
        if (used >= size) {
            items *const old = result;
            size = ITEM_REALLOC_SIZE(used);
            result = realloc(result, sizeof (items) + size * sizeof (struct pair));
            if (!result) {
                free(old);
                errno = ENOMEM;
                return NULL;
            }
        }

        result->item[used].key = val1;
        result->item[used].value = val2;
        used++;
    }

    /* Note: we could reallocate result here,
     * if memory use is an issue. */
    result->size = size;
    result->used = used;
    errno = 0;
    return result;
}

I've used a similar approach to load molecular data for visualization. Such data contains floating-point values, but precision is typically only about seven significant digits; no multiprecision math is needed. A custom routine to parse such data beats the standard functions by at least an order of magnitude in speed. At least the Linux kernel is pretty good at observing memory/file access patterns; using madvise() also helps.
If you cannot use a memory map, then the parsing function would be a bit different: it would append to an existing result, and if the final line in the buffer is partial, it would indicate so (and the number of chars not parsed), so that the caller can memmove() the buffer, read more data, and continue parsing. (Use 16-byte aligned addresses for reading new data, to maximize copy speeds. You don't necessarily need to move the unread data to the exact beginning of the buffer, you see; just keep the current position in the buffered data.) Questions?
First, what's your disk hardware? A single SATA drive is likely to be topped out at 100 MB/sec, and probably more like 50-70 MB/sec. If you're already moving data off the drive(s) as fast as you can, all the software tuning you do is going to be wasted.
If your hardware CAN support reading faster? First, your read pattern - read the whole file into memory once - is the perfect use-case for direct IO. Open your file using open("/file/name", O_RDONLY | O_DIRECT);. Read to page-aligned buffers (see the man page for valloc()) in page-sized chunks. Using direct IO will cause your data to bypass double buffering in the kernel page cache, which is useless when you're reading that much data that fast and not re-reading the same data pages over and over.
If you're running on a true high-performance file system, you can read asynchronously and likely faster with lio_listio() or aio_read(). Or you can just use multiple threads to read - and use pread() so you don't waste time seeking - because when you read using multiple threads, a seek on an open file affects all threads trying to read from it.
And do not try to read fast into a newly-malloc'd chunk of memory - memset() it first. Because truly fast disk systems can pump data into the CPU faster than the virtual memory manager can create virtual pages for a process.
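A rough sketch of that direct-IO read loop (Linux-specific, error handling trimmed; posix_memalign stands in for valloc as the modern way to get a page-aligned buffer):

#define _GNU_SOURCE /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

enum { CHUNK = 1 << 20 }; /* 1 MiB, a multiple of the page size */

void read_direct(const char *path)
{
    int fd = open(path, O_RDONLY | O_DIRECT);
    void *buf = NULL;
    posix_memalign(&buf, 4096, CHUNK); /* page-aligned buffer, as O_DIRECT requires */
    ssize_t got;
    while ((got = read(fd, buf, CHUNK)) > 0) {
        /* ... parse got bytes from buf ... */
    }
    free(buf);
    close(fd);
}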
Division alg C to get floating point number
Hi, I got this division alg to display the integer and floating point values. How can I get MAX_REM? It is supposed to be the size of the buffer where our characters are going to be stored, so the size has to be the number of digits, but I don't know how to get that. Thanks!

void divisionAlg(unsigned int value)
{
    int MAX_BASE = 10;
    const char *charTable = {"0123456789ABCDEF"}; // lookup table for converting remainders
    char rembuf[MAX_REM + 1]; // holds remainder(s) and provision null at the end
    int index;
    int i;                    // loop variable
    unsigned int rem;         // remainder
    unsigned int base;        // we'll be using base 10
    ssize_t numWritten;       // holds number of bytes written from write() system call

    base = 10;

    // validate base
    if (base < 2 || base > MAX_BASE)
        err_sys("oops, the base is wrong");

    // For some reason, every time this method is called after the initial call, rembuf
    // is magically filled with a bunch of garbage; this just sets everything to null.
    // NOTE: memset() wasn't working either, so I have to use a stupid for-loop
    for (i = 0; i < MAX_REM; i++)
        rembuf[i] = '\0';
    rembuf[MAX_REM] = 0; // set last element to zero

    index = MAX_REM; // start at the end of rembuf when adding in the remainders
    do {
        // calculate remainder and divide value by the base
        rem = value % base;
        value /= base;

        // convert remainder into ASCII value via lookup table and store in buffer
        index--;
        rembuf[index] = charTable[rem];
    } while (value != 0);

    // display value
    if ((numWritten = write(STDOUT_FILENO, rembuf, MAX_REM + 1)) == -1)
        err_sys("something went wrong with the write");
} // end of divisionAlg()
The calculation for figuring out how many digits a number takes is:

digits = floor(log(number)/log(base)) + 1;

However, in this case I'd probably just assume the worst case, since it's no more than 32 (for base 2 with a 32-bit value), and calculating it would be "expensive". So just #define MAX_REM 32, and then keep track of how many digits you actually put into rembuf (you already have index for that, so it's no extra cost really). You'll obviously need to calculate the number of bytes to write out as well, but that shouldn't require any special math.
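A sketch of the question's routine with that suggestion applied (fixed worst-case buffer; only the digits actually produced get written):

#include <unistd.h>

#define MAX_REM 32 /* worst case: 32 digits for a 32-bit value in base 2 */

void divisionAlg(unsigned int value)
{
    const char *charTable = "0123456789ABCDEF";
    char rembuf[MAX_REM + 1];
    int index = MAX_REM;
    unsigned int base = 10;

    rembuf[MAX_REM] = '\n'; /* end the output with a newline */
    do {
        rembuf[--index] = charTable[value % base];
        value /= base;
    } while (value != 0);

    /* index now points at the first digit, so write only what was produced */
    write(STDOUT_FILENO, rembuf + index, MAX_REM + 1 - index);
}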
searching an integer from a text file in c
I have a simulation program written in C and I need to create random numbers and write them to a txt file. The program only stops
- when a random number already generated is generated again, or
- when 1 billion random numbers are generated (no repetition).
My problem is that I could not search for the generated long int random number in the txt file! The text file format is:

9875
764
19827
2332
...

Any help is appreciated.

FILE *out;

int checkNumber(long int num){
    char line[512];
    long int number;
    int result = 0;
    if((out = fopen("out.txt","r")) == NULL){
        result = 1;
    }
    char buf[10];
    itoa(num, buf, 10);
    while(fgets(line, 512, out) != NULL)
    {
        if((strstr(line, buf)) != NULL){
            result = 0;
        }
    }
    if(out) {
        fclose(out);
    }
    return result;
}

int main(){
    int seed;
    long int nRNs = 0;
    long int numberGenerated;
    out = fopen("out.txt","w");
    nRNs = 0;
    seed = 12345;
    srand(seed);
    fprintf(out,"%d\n",numberGenerated);
    while( nRNs != 1000000000 )
    {
        numberGenerated = rand();
        nRNs++;
        if(checkNumber(numberGenerated) == 0){
            fclose(out);
            break;
            system("pause");
        }
        else{
            fprintf(out,"%d\n",numberGenerated);
        }
    }
    fclose(out);
}
If the text file only contains randomly generated numbers separated by whitespace, then you need the strtok() function (google its usage) and to throw the numbers into a binary tree structure as mentioned by @jacekmigacz. But in any circumstance, you will have to search the whole file at least once. Then ftell() the value to record how far into the file you've searched. When another number is generated you can use fseek() to jump back to the latest position. Remember to get the data line by line with fgets(). Take care of the memory requirements and use malloc() judiciously.
Try a tree (data structure).
Searching linearly through the text file every time is gonna take forever with so many numbers. You could hold every number generated so far sorted in a data structure so that you can do a binary search for a duplicate. This is going to need a lot of RAM though. For 1 billion integers that's already 4GB on a system with 32-bit integers, and you'll need several more for the data structure overhead. My estimate is around 16GB in the worst case scenario (where you actually get to 1 billion unique integers.) If you don't have a memory monster machine, you should instead write the data structure to a binary file and do the binary search there. Though that's still gonna be quite slow.
This may work, or you can approach it like this (slow, but it will work):

int new_rand = rand();
static int counter = 0;
FILE *fptr = fopen("txt", "a+");
int c, i, j = 0;
char buf[12];

while((c = getc(fptr)) != EOF)
{
    buf[j++] = c;
    if(c == ' ')
    {
        buf[--j] = '\0';
        i = atoi(buf);
        if(i == new_rand)
            return;
        j = 0;
    }
}
if(counter < 1000000)
{
    fprintf(fptr, "%d ", new_rand); // write as text so the loop above can parse it back
    counter++;
}
Don't open and scan your file in checkNumber(). You'll be waiting forever. Instead, keep your generated numbers in memory using a bit set data structure and refer to that. Your bit set will need to be large enough to indicate every 32-bit integer, so it'll consume 2^32 / 8 bytes (or 512 MiB) of memory. This may seem like a lot, but it's much smaller than 32 bits * 1,000,000,000 (4 GB). Also, both checking and updating will be done in constant time.
Edit: The wikipedia link doesn't do much to explain how to code one, so here's a rough sample. (There are faster ways of writing this, e.g. using bit shifts instead of division, but this should be easier to understand.)

int checkNumberOrUpdate(unsigned char *bitSet, unsigned long num){
    unsigned char b = 1u << (num % 8);
    unsigned long w = num / 8;  // byte index; must not be a char, or it overflows
    if (bitSet[w] & b) {        // test whether the bit is already set
        return 1;
    }
    bitSet[w] |= b;
    return 0;
}

Note, bitSet needs to be calloc()'d to the right size from your main function.
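A sketch of the corresponding setup and loop, mirroring the question's main() (sizes as computed above; variable names are illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned char *bitSet = calloc(1UL << 29, 1); /* 2^32 bits = 512 MiB, zeroed */
    if (!bitSet)
        return 1;
    srand(12345);
    long nRNs = 0;
    for (;;) {
        unsigned long numberGenerated = (unsigned long)rand();
        nRNs++;
        if (checkNumberOrUpdate(bitSet, numberGenerated))
            break; /* duplicate found: stop generating */
        printf("%lu\n", numberGenerated);
    }
    free(bitSet);
    return 0;
}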