I have a certain txt file (for instance, dic.txt) in which words appear in this order:
hello - ola - hiya \n
chips - fries - frenchfries \n
I need to read the contents of the file into an array of string arrays:
for instance:
array[0] : [hello,ola,hiya]
array[1] : [chips,fries,frenchfries]
I was thinking of using strtok in order to split each line in the file into a string (after copying the entire file into a string and calculating the number of lines), but I could not figure out how to split each line ("hello - ola - hiya \n") into its words, and how to store each of those word arrays in the outer array (an array of string arrays).
I was considering using malloc in order to allocate memory for each line of words, and storing the pointer to that line's array in the outer array, but I will be glad to receive any suggestions.
The straightforward way is to read lines from the file with fgets and then split each line into tokens with strtok:
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
// Check for arguments and file pointer omitted
FILE *f = fopen(argv[1], "r");
for (;;) {
char line[80];
char *token;
if (fgets(line, 80, f) == NULL) break;
token = strtok(line, " -\n");
while (token) {
// Do something with token, for example:
printf("'%s' ", token);
token = strtok(NULL, " -\n");
}
}
fclose(f);
return 0;
}
This approach is fine as long as all the lines in your file are shorter than 80 characters. It works for variable numbers of tokens per line.
You have mentioned the issue of handling memory for the lines. The example above assumes that the memory handling is done by the data structure for each word. (It's not part of the example, which just prints the tokens.)
You can malloc memory for each line, which is more flexible than a rigid character limit per line, but you'll end up with a lot of allocations. The benefit is that your words don't need extra memory, they can just be pointers into the lines, but you'll have to take care of properly allocating memory for the lines - and freeing it afterwards.
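If you go the per-line route, the POSIX getline function does that allocation (and resizing) for you. A minimal sketch of the idea, assuming a POSIX system, since getline is not part of standard C:

// Sketch of per-line allocation with POSIX getline() (not ISO C)
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    // argument and fopen error checks kept minimal for the sketch
    FILE *f = fopen(argv[1], "r");
    if (f == NULL) return 1;

    char *line = NULL;   // getline() allocates and grows this buffer
    size_t cap = 0;

    while (getline(&line, &cap, f) != -1) {
        // the tokens point into 'line', so copy them (e.g. with strdup)
        // if they must outlive this iteration
        for (char *tok = strtok(line, " -\n"); tok; tok = strtok(NULL, " -\n"))
            printf("'%s' ", tok);
        printf("\n");
    }
    free(line);
    fclose(f);
    return 0;
}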
If you read the whole text file to a contiguous chunk of memory, you're basically done with memory storage, as long as you keep that chunk "alive" as long as your words live:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *slurp(const char *filename, int *psize)
{
char *buffer;
int size;
FILE *f;
f = fopen(filename, "r");
if (f == NULL) return NULL;
fseek(f, 0, SEEK_END);
size = ftell(f);
fseek(f, 0, SEEK_SET);
buffer = malloc(size + 1);
if (buffer) {
if (fread(buffer, 1, size, f) < (size_t)size) {
free(buffer);
buffer = NULL; // don't return a pointer to freed memory
} else {
buffer[size] = '\0';
if (psize) *psize = size;
}
}
fclose(f);
return buffer;
}
With that chunk of memory, you can first look for lines by looking for the next newline, and then use strtok as above:
int main(int argc, char *argv[])
{
char *buffer; // contiguous memory chunk
char *next; // pointer to next line or NULL for last line
buffer = slurp(argv[1], NULL);
if (buffer == NULL) return 0;
next = buffer;
while (next) {
char *token;
char *p = next;
// Find beginning of the next line,
// i.e. the char after the next newline
next = strchr(p, '\n');
if (next) {
*next = '\0'; // Null-terminate line
next = next + 1; // Advance past newline
}
token = strtok(p, " -\n");
while (token) {
// Do something with token, for example:
printf("'%s' ", token);
token = strtok(NULL, " -\n");
}
}
free(buffer); // ... and invalidate your words
return 0;
}
If you use fscanf, you always copy the found tokens to a temporary buffer, and when you store them away in your dictionary structure, you have to copy them again with strcpy. That's a lot of copying. Here, you read and allocate once and then work with pointers into the chunk. strtok null-terminates the tokens, so your chunk is a chain of C strings.
Reading the whole file into memory is usually not a good solution, but in this case, where the file basically is the data, it makes sense.
(Note: All this discussion about memory does not affect the memory needed for your dictionary structure, the nodes in trees and linked lists or whatever. It is just about storing the strings proper.)
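To build the array-of-word-arrays from your question on top of the chunk approach, you can store the token pointers directly, since they already point into the chunk. A rough sketch of the loop above, with fixed upper bounds assumed only to keep it short (a real version would grow them with realloc):

#define MAX_LINES 1000   // assumed fixed limits, only for this sketch
#define MAX_WORDS 16

char *words[MAX_LINES][MAX_WORDS]; // words[i][j] points into the chunk
int word_count[MAX_LINES];
int line_count = 0;

char *next = buffer;               // buffer as returned by slurp()
while (next && line_count < MAX_LINES) {
    char *p = next;
    next = strchr(p, '\n');        // find the end of this line
    if (next)
        *next++ = '\0';            // terminate the line, advance past the newline

    int n = 0;
    for (char *tok = strtok(p, " -\n"); tok && n < MAX_WORDS; tok = strtok(NULL, " -\n"))
        words[line_count][n++] = tok;
    word_count[line_count++] = n;
}
// words[0][0] is "hello", words[0][1] is "ola", ... valid until free(buffer)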
using fgets:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int eol(int c, FILE *stream) // given the char just read and the stream, report whether it ends a line (handles \r\n too)
{
if (c == '\n')
return 1;
if (c == '\r') {
if ((c = getc(stream)) != '\n')
ungetc(c, stream);
return 1;
}
return 0;
}
int charsNumInLine(FILE *stream)
{
int position = ftell(stream);
int c, num_of_chars=0;
while ((c = getc(stream)) != EOF && !eol(c, stream))
num_of_chars++;
fseek(stream,position,SEEK_SET); //get file pointer to where it was before this function call
return num_of_chars;
}
int main(void)
{
//...
char *buffer;
int size;
while (1)
{
size = charsNumInLine(stream);
buffer = malloc(size + 3); // room for "\r\n" and the terminating '\0'
if (buffer == NULL)
break;
if (fgets(buffer, size + 3, stream) == NULL) { // EOF or read error; note sizeof(buffer) would only be the size of a pointer
free(buffer);
break;
}
// use strtok to separate words...
free(buffer);
}
//...
}
Another way is to use fscanf(file, "%s", buff) to read words and then use the eol function above to see when we get to a newline.
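A rough sketch of that variant, reusing the stream and the eol() helper from above to peek at what follows each word (the 63 in %63s is just an assumed word-length limit):

char buff[64];
while (fscanf(stream, "%63s", buff) == 1) {
    // store or tokenize buff here ...

    int c;
    while ((c = getc(stream)) == ' ' || c == '\t')
        ;                          // skip blanks after the word
    if (c == EOF)
        break;
    if (eol(c, stream)) {
        // end of line reached: start a new row in your array
    } else {
        ungetc(c, stream);         // first char of the next word
    }
}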
I have a hard time understanding how to process ASCII files in C. I have no problem opening files and closing them, or reading files with one value on each line. However, when the data is separated with characters, I really don't understand what the code is doing at a lower level.
Example: I have a file containing names separated with commas that looks like this:
"MARY","PATRICIA","LINDA","BARBARA","ELIZABETH","JENNIFER"
I have created an array to store them:
char names[6000][20];
And now, my code to process it is while (fscanf(data, "\"%s\",", names[index]) != EOF) { index++; }
The code executes for the 1st iteration and names[0] contains the whole file.
How can I separate all the names?
Here is the full code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main() {
char names[6000][20]; // an array to store 6k names of max length 19
FILE * data = fopen("./022names.txt", "r");
int index = 0;
int nbNames;
while (fscanf(data, "\"%s\",", names[index]) != EOF) {
index++;
}
nbNames = index;
fclose(data);
printf("%d\n", index);
for (index=0; index<nbNames; index++) {
printf("%s \n", names[index]);
}
printf("\n");
return 0;
}
PS: I am thinking this might also be because of the data structure of my array.
If you want a simple solution, you can read the file character by character using fgetc. Since there are no newlines in the file, just ignore quotation marks and move to the next index when you find a comma.
char names[6000][20]; // an array to store 6k names of max length 19
FILE * data = fopen("./022names.txt", "r");
int name_count = 0, current_name_ind = 0;
int c;
while ((c = fgetc(data)) != EOF) {
if (c == ',') {
names[name_count][current_name_ind] = '\0';
current_name_ind = 0;
++name_count;
} else if (c != '"') {
names[name_count][current_name_ind] = c;
++current_name_ind;
}
}
names[name_count][current_name_ind] = '\0';
fclose(data);
"The code executes for the 1st iteration and names[0] contains the whole file...., How can I separate all the names?"
Regarding the first few statements:
char names[6000][20]; // an array to store 6k names of max length 19
FILE * data = fopen("./022names.txt", "r");
What if there are 6001 names, or one of the names has more than 19 characters?
Or what if there are far fewer than 6000 names?
The point is that with some effort to enumerate the tasks you have listed, and some time mapping out what information is needed to create code that matches your criteria, you can create a better product. The following is derived from your post:
Process ascii files in c
Read file content that is separated by characters
input is a comma separated file, with other delimiters as well
Choose a method best suited to parse a file of variable size
As mentioned in the comments under your question, there are ways to create your algorithms in such a way as to flexibly allow for extra-long names, or for a variable number of names. This can be done using a few C standard functions commonly used in parsing files. (Although fscanf() has its place, it is not the best option for parsing file contents into array elements.)
The following approach performs these steps to accomplish the needs enumerated above:
Read the file to determine the number of elements and the longest element
Create an array sized to the exact contents of the file, as a variable length array (VLA), using the count of elements and the longest element
Create a function to parse the file contents into the array (using the technique of passing a VLA as a function argument)
Following is a complete example of how to implement each of these, while breaking the tasks into functions when appropriate...
Note: the code below was tested using the following input file:
names.txt
"MARY","PATRICIA","LINDA","BARBARA","ELIZABETH","JENNIFER",
"Joseph","Bart","Daniel","Stephan","Karen","Beth","Marcia",
"Calmazzothoulumus"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

//Prototypes
int count_names(const char *filename, size_t *count);
size_t filesize(const char *fn);
void populateNames(const char *fn, int longest, char arr[][longest]);
char *filename = ".\\names.txt";
int main(void)
{
size_t count = 0;
int longest = count_names(filename, &count);
char names[count][longest+1];//VLA - See linked info
// +1 is room for null termination
memset(names, 0, sizeof names);
populateNames(filename, longest+1, names);
return 0;
}
//populate VLA with names in file
void populateNames(const char *fn, int longest, char names[][longest])
{
char line[80] = {0};
char *delim = "\",\n ";
char *tok = NULL;
FILE * fp = fopen(fn, "r");
if(fp)
{
int i=0;
while(fgets(line, sizeof line, fp))
{
tok = strtok(line, delim);
while(tok)
{
strcpy(names[i], tok);
tok = strtok(NULL, delim);
i++;
}
}
fclose(fp);
}
}
//passes back count of tokens in file, and returns the length of the longest token
int count_names(const char *filename, size_t *count)
{
int len=0, lenKeep = 0;
FILE *fp = fopen(filename, "r");
if(fp)
{
char *tok = NULL;
char *delim = "\",\n ";
int cnt = 0;
size_t fSize = filesize(filename);
char *buf = calloc(fSize, 1);
while(fgets(buf, fSize, fp)) //goes to newline for each get
{
tok = strtok(buf, delim);
while(tok)
{
cnt++;
len = strlen(tok);
if(lenKeep < len) lenKeep = len;
tok = strtok(NULL, delim);
}
}
*count = cnt;
fclose(fp);
free(buf);
}
return lenKeep;
}
//return file size in bytes (binary read)
size_t filesize(const char *fn)
{
size_t size = 0;
FILE*fp = fopen(fn, "rb");
if(fp)
{
fseek(fp, 0, SEEK_END);
size = ftell(fp);
fseek(fp, 0, SEEK_SET);
fclose(fp);
}
return size;
}
You can use the built-in strtok() function, which is easy to use.
I have used tok+1 instead of tok to skip the leading " and strlen(tok) - 2 to drop the trailing ".
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main() {
char names[6000][20]; // an array to store 6k names of max length 19
FILE * data = fopen("./022names.txt", "r");
int index = 0;
int nbNames;
char *str = (char*)malloc(120000*sizeof(char));
while (fscanf(data, "%s", str) != EOF) {
char *tok = strtok(str, ",");
while(tok != 0){
size_t len = strlen(tok) - 2; // length without the surrounding quotes
strncpy(names[index], tok+1, len);
names[index][len] = '\0'; // strncpy does not add the terminator here
index++;
tok = strtok(0, ",");
}
}
nbNames = index;
fclose(data);
free(str); // just to free the memory occupied by the str variable in the heap.
printf("%d\n", index);
for (index=0; index<nbNames; index++) {
printf("%s \n", names[index]);
}
printf("\n");
return 0;
}
Also, the parameter 120000 is just the maximum number of characters that can be in the file. It is just 6000 * 20 as you mentioned.
I am new to C and am getting very frustrated with learning this language. Currently I'm trying to write a program that reads in a program text file and prints all the string literals and tokens, each on a separate line. I have most of it except for one snag: within the text file there is a line such as (..text..), and I need to be able to search for, read, and print all the text inside the parentheses on its own line. Here is the idea I have so far:
#define KEY 32
#define BUFFER_SIZE 500
FILE *fp, *fp2;
int main()
{
char ch, buffer[BUFFER_SIZE], operators[] = "+-*%=", separators[] = "(){}[]<>,";
char *pus;
char source[200 + 1];
int i, j = 0, k = 0;
char *words = NULL, *word = NULL, c;
fp = fopen("main.txt", "r");
fp2 = fopen ("mynewfile.txt","w") ;
while ((ch = fgetc(fp)) != EOF)
{
// pus[k++] = ch;
if( ch == '(')
{
for (k = 0; k < 20; k++) {
buffer[k] = ch;
buffer[k] = '\0';
}
printf("%s\n", buffer)
}
....
The textfile is this:
#include <stdio.h>
int main(int argc, char **argv)
{
for (int i = 0; i < argc; ++i)
{
printf("argv[%d]: %s\n", i, argv[i]);
}
}
So far I've been able to read char by char and place it into a buffer. But this idea just isn't working, and I'm stumped. I've tried dabbling with strcpy() and strtok, but they all take char arrays. Any ideas would be appreciated, thank you.
Most likely the best way would be to use fgets() with a file to read in each line as a string (char array) and then delimit that string. See the short example below:
char buffer[BUFFER_SIZE];
int current_line = 0;
//Continually read in lines until nothing is left...
while(fgets(buffer, BUFFER_SIZE - 1, fp) != NULL)
{
//Line from file is now in buffer. We can delimit it.
char copy[BUFFER_SIZE];
//Copy as strtok will overwrite a string.
strcpy(copy, buffer);
printf("Line: %d - %s", current_line, buffer); //Print the line.
char * found = strtok(copy, separators); //Will delimit based on the separators.
while(found != NULL)
{
printf("%s", found);
found = strtok(NULL, separators);
}
current_line++;
}
strtok will return a char pointer to where the first occurrence of a delimiter is. It will replace the delimiter with the null terminator, thereby making a "new" string. We can pass NULL to strtok to tell it to continue where it left off. Using this, we can parse line by line from a file based on multiple delimiters. You could save these individual strings or evaluate them further.
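A tiny standalone illustration of that behavior; the input string and separator set here are just made up for the demo:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[] = "(int argc, char **argv)"; // must be writable: strtok modifies it
    const char *separators = "(){}[]<>, ";

    for (char *found = strtok(line, separators);
         found != NULL;
         found = strtok(NULL, separators))
    {
        // each 'found' points inside 'line'; the separator after it
        // has been overwritten with '\0'
        printf("token: %s\n", found);
    }
    return 0;
}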
Let's say we have a string of words that are delimited by a comma.
I want to write a code in C to store these words in a variable.
Example
amazon, google, facebook, twitter, salesforce, sfb
We do not know how many words are present.
If I were to do this in C, I thought I would need to do 2 iterations.
First iteration, I count how many words are present.
Then, in the next iteration, I store each words.
Step 1: 1st loop -- count number of words
....
....
//End 1st loop. num_words is set.
Step 2:
// Do malloc using num_words.
char **array = (char**)malloc(num_words* sizeof(char*));
Step 3: 2nd loop -- Store each word.
// First, walk until the delimiter and determine the length of the word
// Once len_word is determined, do malloc
*array= (char*)malloc(len_word * sizeof(char));
// And then store the word to it
// Do this for all words and then the 2nd loop terminates
Can this be done more efficiently?
I do not like having 2 loops. I think there must be a way to do it in 1 loop with just basic pointers.
The only restriction is that this needs to be done in C (due to constraints that are not in my control)
You don't need to do a separate pass to count the words. You can use realloc to enlarge the array on the fly as you read in the data on a single pass.
To parse an input line buffer, you can use strtok to tokenize the individual words.
When saving the parsed words into the word list array, you can use strdup to create a copy of the tokenized word. This is necessary for the word to persist. That is, whatever you were pointing to in the line buffer on the first line will get clobbered when you read the second line (and so on ...)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
char **words;
size_t wordmax;
size_t wordcount;
int
main(int argc,char **argv)
{
char *cp;
char *bp;
FILE *fi;
char buf[5000];
--argc;
++argv;
// get input file name
cp = *argv;
if (cp == NULL) {
printf("no file specified\n");
exit(1);
}
// open input file
fi = fopen(cp,"r");
if (fi == NULL) {
printf("unable to open file '%s' -- %s\n",cp,strerror(errno));
exit(1);
}
while (1) {
// read in next line -- bug out if EOF
cp = fgets(buf,sizeof(buf),fi);
if (cp == NULL)
break;
bp = buf;
while (1) {
// tokenize the word
cp = strtok(bp," \t,\n");
if (cp == NULL)
break;
bp = NULL;
// expand the space allocated for the word list [if necessary]
if (wordcount >= wordmax) {
// this is an expensive operation so don't do it too often
wordmax += 100;
words = realloc(words,(wordmax + 1) * sizeof(char *));
if (words == NULL) {
printf("out of memory\n");
exit(1);
}
}
// get a persistent copy of the word text
cp = strdup(cp);
if (cp == NULL) {
printf("out of memory\n");
exit(1);
}
// save the word into the word array
words[wordcount++] = cp;
}
}
// close the input file
fclose(fi);
// add a null terminator
words[wordcount] = NULL;
// trim the array to exactly what we need/used
char **trimmed = realloc(words,(wordcount + 1) * sizeof(char *));
if (trimmed != NULL)
words = trimmed;
// NOTE: because we added the terminator, _either_ of these loops will
// print the word list
#if 1
for (size_t idx = 0; idx < wordcount; ++idx)
printf("%s\n",words[idx]);
#else
for (char **word = words; *word != NULL; ++word)
printf("%s\n",*word);
#endif
return 0;
}
What you're looking for is
http://manpagesfr.free.fr/man/man3/strtok.3.html
(From man page)
The strtok() function parses a string into a sequence of tokens. On the first call to strtok() the string to be parsed should be specified in str. In each subsequent call that should parse the same string, str should be NULL.
But this thread looks like a duplicate of Split string with delimiters in C.
Unless you are forced to produce your own implementation ...
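For the comma-separated list in the question, that pattern boils down to something like this; note that strtok modifies the string, so it must live in a writable array, and each token is a pointer into it (copy with strdup if it has to persist):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char input[] = "amazon, google, facebook, twitter, salesforce, sfb";

    // first call gets the string, every later call gets NULL
    for (char *word = strtok(input, ", "); word != NULL; word = strtok(NULL, ", "))
        printf("%s\n", word);

    return 0;
}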
We do not know how many words are present.
We know num_words <= strlen(string) + 1, so only 1 "loop" is needed. The cheat here is a quick pass down s via strlen().
// *alloc() out-of-memory checking omitted for brevity
char **parse_csv(const char *s) {
size_t slen = strlen(s);
size_t num_words = 0;
char **words = malloc(sizeof *words * (slen + 1));
// find, allocate, copy the words
while (*s) {
size_t len = strcspn(s, ",");
words[num_words] = malloc(len + 1);
memcpy(words[num_words], s, len);
words[num_words][len] = '\0';
num_words++;
s += len; // skip word
if (*s) s++; // skip ,
}
// Only 1 realloc() needed.
char **tmp = realloc(words, sizeof *words * num_words); // right-size words list
if (tmp) words = tmp; // keep the original block if realloc fails
return words;
}
It makes sense to NULL-terminate the list, so:
char **words = malloc(sizeof *words * (slen + 1 + 1));
...
words[num_words++] = NULL;
char **tmp = realloc(words, sizeof *words * num_words);
if (tmp) words = tmp;
return words;
In considering the worst case for the initial char **words = malloc(...);, take a string like ",,,": its 3 ',' would make for 4 words "", "", "", "". Adjust the code as needed for such pathological cases.
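Usage of the NULL-terminated variant sketched just above would look something like this:

#include <stdio.h>
#include <stdlib.h>

// parse_csv() as defined above, with the NULL terminator added

int main(void)
{
    char **words = parse_csv("amazon,google,facebook,twitter,salesforce,sfb");
    for (size_t i = 0; words[i] != NULL; i++) {
        puts(words[i]);
        free(words[i]);   // each word was malloc'd individually
    }
    free(words);          // then the pointer list itself
    return 0;
}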
I need to remove punctuation from a given string or word. Here's my code:
void remove_punc(char* *str)
{
char* ps = *str;
char* nstr;
// should be nstr = malloc(sizeof(char) * (1 + strlen(*str)))
nstr = (char *)malloc(sizeof(char) * strlen(*str));
if (nstr == NULL) {
perror("Memory Error in remove_punc function");
exit(1);
}
// should be memset(nstr, 0, sizeof(char) * (1 + strlen(*str)))
memset(nstr, 0, sizeof(char) * strlen(*str));
while(*ps) {
if(! ispunct(*ps)) {
strncat(nstr, ps, 1);
}
++ps;
}
*str = strdup(nstr);
free(nstr);
}
If my main function is the simple one:
int main(void) {
char* str = "Hello, World!:)";
remove_punc(&str);
printf("%s\n", str);
return 0;
}
It works! The output is Hello World.
Now I want to read in a big file and remove punctuation from the file, then output to another file.
Here's another main function:
int main(void) {
FILE* fp = fopen("book.txt", "r");
FILE* fout = fopen("newbook.txt", "w");
char* str = (char *)malloc(sizeof(char) * 1024);
if (str == NULL) {
perror("Error -- allocating memory");
exit(1);
}
memset(str, 0, sizeof(char) * 1024);
while(1) {
if (fscanf(fp, "%s", str) != 1)
break;
remove_punc(&str);
fprintf(fout, "%s ", str);
}
return 0;
}
When I rerun the program in Visual C++, it reports a
Debug Error! DAMAGE: after Normal Block(#54)0x00550B08,
and the program is aborted.
So, I have to debug the code. Everything works until the statement free(nstr) is executed.
I am confused. Can anyone help me?
You forgot to malloc space for the null terminator. Change
nstr = (char *)malloc(sizeof(char) * strlen(*str));
to
nstr = malloc( strlen(*str) + 1 );
Note that casting malloc is a bad idea, and if you are going to malloc and then memset to zero, you could use calloc instead which does just that.
There is another bug later in your program. The remove_punc function changes str to point to a freshly allocated buffer that is just big enough for the string with no punctuation. However, you then loop back up to fscanf(fp, "%s", str). This is no longer reading into a 1024-byte buffer; it is reading into a buffer only as big as the previous punctuation-free string.
So unless your file contains words all in descending order of length (after punctuation removal), you will cause a buffer overflow here. You'll need to rethink your design of this loop. For example, perhaps you could have remove_punc leave the input unchanged and return a pointer to the freshly allocated string, which you would free after printing.
If you go with this solution, then use %1023s to avoid a buffer overflow with fscanf (unfortunately there's no simple way to take a variable here instead of hardcoding the length). Using a scanf function with a bare "%s" is just as dangerous as gets.
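A sketch of that rearrangement; the function name remove_punc_copy and the 1024-byte word buffer are just illustrative choices, and the fopen error checks are omitted as in the question:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>

char *remove_punc_copy(const char *in)     // returns a freshly allocated copy
{
    char *out = malloc(strlen(in) + 1);    // worst case: nothing is removed
    if (out == NULL) { perror("malloc"); exit(1); }
    char *w = out;
    for (; *in; ++in)
        if (!ispunct((unsigned char)*in))
            *w++ = *in;
    *w = '\0';
    return out;
}

int main(void)
{
    FILE *fp = fopen("book.txt", "r");
    FILE *fout = fopen("newbook.txt", "w");
    char word[1024];                       // fscanf never writes past this now

    while (fscanf(fp, "%1023s", word) == 1) {
        char *clean = remove_punc_copy(word);
        fprintf(fout, "%s ", clean);
        free(clean);                       // freed after printing, as suggested
    }
    fclose(fp);
    fclose(fout);
    return 0;
}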
The answer by @MatMcNabb explains the causes of your problems. I'm going to suggest a couple of ways you can simplify your code and make it less susceptible to memory problems.
If performance is not an issue, read the file character by character and discard the punctuation characters.
#include <stdio.h>
#include <ctype.h>

int main(void)
{
FILE* fp = fopen("book.txt", "r");
FILE* fout = fopen("newbook.txt", "w");
int c; // int, not char, so that EOF is detected correctly
while ( (c = fgetc(fp)) != EOF )
{
if ( !ispunct(c) )
{
fputc(c, fout);
}
}
fclose(fout);
fclose(fp);
return 0;
}
Minimize the number of calls to malloc and free by passing in the input string as well as the output string to remove_punc.
void remove_punc(char* inStr, char* outStr)
{
char* ps = inStr;
int index = 0;
while(*ps)
{
if(! ispunct(*ps))
{
outStr[index++] = *ps;
}
++ps;
}
outStr[index] = '\0';
}
and change the way you use remove_punc in main.
int main(void)
{
FILE* fp = fopen("book.txt", "r");
FILE* fout = fopen("newbook.txt", "w");
char inStr[1024];
char outStr[1024];
while (fgets(inStr, 1024, fp) != NULL )
{
remove_punc(inStr, outStr);
fprintf(fout, "%s", outStr);
}
fclose(fout);
fclose(fp);
return 0;
}
In your main you have the following
char* str = (char *)malloc(sizeof(char) * 1024);
...
remove_punc(&str);
...
Your remove_punc() function takes the address of str but when you do this in your remove_punc function
...
*str = strdup(nstr);
...
you are not copying the new string into the previously allocated 1024-byte buffer, you are reassigning str to point to a new buffer that is only as big as that string! This means that when you read from the file and the next word is longer than the previous one, you will run into trouble.
You should leave the original buffer alone and instead, for example, return the newly allocated buffer containing the new string (i.e. return nstr) and free it when you are done with it. Or, better yet, just copy the original file byte by byte to the new file and exclude any punctuation; that would be far more effective.
I have a homework problem that I need help with. I need to implement a function char *getStrFromFile(FILE*);. I just simply don't understand it. I have attempted to figure out the question.
This function safely reads a complete line of unknown length from the
open file pointed to by fpin. It returns a line that is at most CHUNKSZ-1
characters longer than the minimum needed to hold the line.
It initially allocates an array of DEFLEN characters to hold the string,
and if this space is inadequate to hold the string, it will iteratively
create a new string that is CHUNKSZ larger, copy the old string to it
release the old string, and then read in more characters from the file,
and continue this until the entire line of arbitrary length can be returned.
RETURNS: NULL if no characters are left in fpin, otherwise:
pointer to allocated array at most CHUNKSZ-1 characters longer than
minimum necessary to hold an arbitrarily long line from file fpin
int main(int nargs, char *args[])
{
FILE *fpin;
char *getStrFromFile(FILE*);
if (nargs != 2)
{
fprintf(stderr, "USAGE: %s <file>\n", args[0]);
exit(1);
}
fpin = fopen(args[1], "r");
while(1)
{
char *ln;
ln = getStrFromFile(fpin);
if (!ln)
break;
printf("%s", ln);
free(ln);
}
fclose(fpin);
return(0);
}
That is the main function I have to use. Here is what I have so far:
char *getStrFromFile(FILE *fpin)
{
char string[DEFLEN];
if(fgets(string, CHUNKSZ, fpin) != NULL) {
int l = lstr(string);
if(string[l-1] = '\n') {
return string;
} else {
int size = 1;
int end = 0;
while (string[l-1] != '\n') {
size += CHUNKSZ;
char *s2 = (char*)malloc(sizeof(char)+size);
for(i = 0+end; i < lstr(string); i++) {
s2[i] = string[i];
}
end += lstr(string);
fgets(string, size + end, fpin);
return s2;
This is not correct.
if(string[l-1] = '\n')
it must be
if(string[l-1] == '\n')
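For the rest of the assignment, a minimal sketch of the growing-buffer logic it describes might look like the following. The DEFLEN and CHUNKSZ values here are placeholders (use whatever your header defines), and note that your version returns the local array string, which does not survive the return from the function; the line has to live in malloc'd memory.

#include <stdio.h>
#include <stdlib.h>

#define DEFLEN  16      // placeholder values; use the ones from your assignment
#define CHUNKSZ 8

char *getStrFromFile(FILE *fpin)
{
    size_t cap = DEFLEN, len = 0;
    char *line = malloc(cap);
    int c;

    if (line == NULL)
        return NULL;

    while ((c = fgetc(fpin)) != EOF) {
        if (len + 2 > cap) {                  // need room for c and the '\0'
            char *bigger = malloc(cap + CHUNKSZ);  // new string CHUNKSZ larger
            if (bigger == NULL) { free(line); return NULL; }
            for (size_t i = 0; i < len; i++)  // copy the old string over
                bigger[i] = line[i];
            free(line);                       // release the old string
            line = bigger;
            cap += CHUNKSZ;
        }
        line[len++] = (char)c;
        if (c == '\n')                        // keep the newline, stop at end of line
            break;
    }

    if (len == 0) {                           // no characters left in fpin
        free(line);
        return NULL;
    }
    line[len] = '\0';
    return line;
}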