Using strtok to tokenize html - c

I'm looking to extract text found exactly within an < and >, while also extracting things found between > and <.
For instance:
<html> would just return <html>
<title>This is a title</title> would return <title>, This is a title, </title>
This is a title would return This is a title
And finally <title>This is a weird use of < bracket</title> should return <title>, This is a weird use of < bracket, </title>. My current version recognises it as <title>, This is a weird use of, < bracket, </title>
I'd appreciate any snippets of code, or directions to head in to get to a solution.
tl;dr: grab substrings within <...> and >...< separately, without being stumped by a floating ...>... or ...<....
Edit: not using strtok anymore; I would appreciate any other help or similar problems you may know about. Anything to read would also be greatly beneficial. Note: we aren't trying to parse, simply lex the input string.
Can only use standard libraries for c.

Just trying to build a basic validator for a subset of valid HTML.
You can't, not even a basic one. You will have too many false positives and negatives. Here's a simple example.
<tag attribute=">" />
HTML has many features which do not allow simple parsing. It is...
Balanced, like <tag></tag> and also "quotes".
Nested, like <tag><tag></tag></tag>.
Escaped, like "escaped\"quote".
Has other languages embedded in it, like JavaScript and CSS.
If this is an exercise in tokenization, you could define a very specific subset, but I'd suggest something simpler like JSON which has a well defined grammar. Those are typically parsed using a lexer and parser, but JSON is small enough to be written by hand.
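To make the failure mode concrete, a naive scanner that pairs the first '<' with the next '>' lands on the quoted '>' inside the attribute value, not the end of the tag; a minimal sketch (function name hypothetical):

```c
#include <string.h>

/* Naive scan: return the offset of the first '>' found after the
   first '<', or -1 if there is none.  A correct HTML tokenizer
   would have to skip the quoted ">" inside an attribute value. */
static long first_gt(const char *s)
{
    const char *lt = strchr(s, '<');
    const char *gt = lt ? strchr(lt, '>') : NULL;
    return gt ? (long)(gt - s) : -1;
}
```

On the example above, `<tag attribute=">" />`, this finds the '>' between the quotes (offset 16), so a bracket-splitting tokenizer would emit a truncated "tag" ending mid-attribute.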

My own solution so far is as follows, as suggested by @chqrlie:
void tokenize(char *stringPtr)
{
    char flag[2] = " "; // " " = outside a tag, "<" = inside one
    /* We build tokens up char by char as we iterate the string;
       strtok was not suitable. */
    char tempToken[tokenLength]; // tokenLength is defined elsewhere
    strcpy(tempToken, ""); // Init current token
    size_t len = strlen(stringPtr);
    // Traverse string, catching stuff between <...> and >...< separately.
    for (size_t i = 0; i < len; i++)
    {
        if (stringPtr[i] == '<')
        {
            if (strcmp(flag, " ") == 0)
            {
                putToken(tempToken);
                strcpy(tempToken, ""); // Tag starting; everything before it is a token.
                strcpy(flag, "<");
                strcat(tempToken, flag);
            }
            else // Catches <...<
            {
                presentError(stringPtr);
            }
        }
        else if (stringPtr[i] == '>')
        {
            if (strcmp(flag, "<") == 0)
            {
                strcat(tempToken, ">");
                strcpy(flag, " ");
                putToken(tempToken);
                strcpy(tempToken, "");
            }
            else // Can't have a > unless we saw < already
            {
                presentError(stringPtr);
            }
        }
        else // Manage non-angle-bracket characters
        {
            strncat(tempToken, &stringPtr[i], 1);
        }
    }
    putToken(tempToken); // Catches a line ending in a value, not a tag
    /* Notes
       - Floating <'s and >'s will be errored up.
       - The special case ...<...>..., which is incorrect, will cause
         floating tokens and can be identified.
       - Unclosed tags, i.e. </p, will be tokenized verbatim, thus this
         mistake can be identified.
       - Unopened tags, i.e. p>, will be errored. */
}
Assume that presentError() terminates lexing.
Some improvements can be made; I'm open to suggestions, however this is a first working draft.
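The draft above depends on putToken(), presentError() and tokenLength from elsewhere; as a self-contained illustration of the same two-state scan, here is a compact variant (names and limits are hypothetical) that collects tokens into an array and returns -1 where presentError() would fire:

```c
#include <string.h>

enum { MAXTOK = 16, TOKLEN = 64 };
static char toks[MAXTOK][TOKLEN];
static int  ntoks;

static void put_token(const char *t)
{
    if (*t != '\0' && ntoks < MAXTOK)
        strcpy(toks[ntoks++], t); /* empty tokens are dropped */
}

/* Two states, as with the flag above: ' ' = outside a tag,
   '<' = inside one.  Returns 0 on success, -1 on a floating
   '<' or '>'. */
static int lex_line(const char *s)
{
    char tok[TOKLEN] = "";
    char state = ' ';
    size_t n = 0;

    for (; *s; s++) {
        if (*s == '<') {
            if (state != ' ')
                return -1;          /* catches <...< */
            tok[n] = '\0';
            put_token(tok);         /* text before the tag */
            n = 0;
            tok[n++] = '<';
            state = '<';
        } else if (*s == '>') {
            if (state != '<')
                return -1;          /* > without a preceding < */
            tok[n++] = '>';
            tok[n] = '\0';
            put_token(tok);
            n = 0;
            state = ' ';
        } else if (n < TOKLEN - 2) { /* leave room for '>' + NUL */
            tok[n++] = *s;
        }
    }
    tok[n] = '\0';
    put_token(tok);                 /* line ending in a value */
    return 0;
}
```

Lexing "<title>This is a title</title>" yields the three tokens "<title>", "This is a title" and "</title>", matching the behaviour described in the question.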

Related

Markup matches and escape special characters outside the matches at the same time

I have a search functionality for a treeview that highlights all matches, including a distinction between caseless and case-sensitive search, as well as between regular-expression and literal search. However, I have a problem when the current cell contains special characters that are not part of the matches. Consider the following text inside a treeview cell:
father & mother
Now I want to do for example a search on the whole treeview for the letter 'e'. For highlighting the matches only and not the whole cell, I need to use markup. To achieve this, I use g_regex_replace_eval and its callback function in the way as stated inside the GLib documentation. The resulting new marked up text for the cell would be like this:
fath<span background='yellow' foreground='black'>e</span>r &
moth<span background='yellow' foreground='black'>e</span>r
If there are special characters inside the matches, they are escaped before being added to the hashtable that is used by the eval function. So special characters inside matches are no problem.
But I now have the '&' outside the markup parts, and it has to be changed to &amp;, otherwise the markup won't show up in the cell and a warning
Failed to set text from markup due to error parsing markup: Error on line x: Entity did not end with a semicolon; most likely you used an ampersand character without intending to start an entity - escape ampersand as &amp;
will be shown inside the terminal.
If I use g_markup_escape_text on the new cell text, it will obviously not only escape the '&', but also the '<' and '>' of the markup, so this is no solution.
Is there a reasonable way to put markup around the matches and escape special characters outside the markup at the same time, or with a few steps? Everything I could imagine so far is much too complicated, if it would work at all.
Even though I had already considered Philip's suggestion in most of its parts before asking my question, I had not touched yet the subject of utf8, so he gave an important hint for the solution. The following is the core of a working implementation:
gchar *counter_char = original_cell_txt; // counter_char will move through all the characters of original_cell_txt.
gint counter;
gunichar unichar;
gchar utf8_char[6]; // Six bytes is the buffer size needed later by g_unichar_to_utf8 ().
gint utf8_length;
gchar *utf8_escaped;
enum { START_POS, END_POS };
GArray *positions[2];
positions[START_POS] = g_array_new (FALSE, FALSE, sizeof (gint));
positions[END_POS] = g_array_new (FALSE, FALSE, sizeof (gint));
gint start_position, end_position;

txt_with_markup = g_string_new ("");

g_regex_match (regex, original_cell_txt, 0, &match_info);
while (g_match_info_matches (match_info)) {
    g_match_info_fetch_pos (match_info, 0, &start_position, &end_position);
    g_array_append_val (positions[START_POS], start_position);
    g_array_append_val (positions[END_POS], end_position);
    g_match_info_next (match_info, NULL);
}

do {
    unichar = g_utf8_get_char (counter_char);
    counter = counter_char - original_cell_txt; // pointer arithmetic

    if (counter == g_array_index (positions[END_POS], gint, 0)) {
        txt_with_markup = g_string_append (txt_with_markup, "</span>");
        // It's simpler to always access the first element instead of looping through the whole array.
        g_array_remove_index (positions[END_POS], 0);
    }
    /*
    No "else if" is used here, since if there is a search for a single character going on and
    such a character appears double, as 'm' in "command", between both m's a span tag has to be
    closed and opened at the same position.
    */
    if (counter == g_array_index (positions[START_POS], gint, 0)) {
        txt_with_markup = g_string_append (txt_with_markup, "<span background='yellow' foreground='black'>");
        // See the comment for the similar instruction above.
        g_array_remove_index (positions[START_POS], 0);
    }

    utf8_length = g_unichar_to_utf8 (unichar, utf8_char);
    /*
    Instead of using a switch statement to check whether the current character needs to be escaped,
    for simplicity the character is sent to the escape function regardless of whether there will be
    any escaping done by it or not.
    */
    utf8_escaped = g_markup_escape_text (utf8_char, utf8_length);
    txt_with_markup = g_string_append (txt_with_markup, utf8_escaped);
    g_free (utf8_escaped); // Cleanup

    counter_char = g_utf8_find_next_char (counter_char, NULL);
} while (*counter_char != '\0');

/*
There is a '</span>' to set at the end; because the end position is one position after the string size,
this couldn't be done inside the preceding loop.
*/
if (positions[END_POS]->len) {
    g_string_append (txt_with_markup, "</span>");
}

g_object_set (txt_renderer, "markup", txt_with_markup->str, NULL);

// Cleanup
g_regex_unref (regex);
g_match_info_free (match_info);
g_array_free (positions[START_POS], TRUE);
g_array_free (positions[END_POS], TRUE);
Probably the way to do this is to not use g_regex_replace_eval(), but rather to use g_regex_match_all() to get the list of matches for a string. Then you need to step through the string character-by-character (do this using the g_utf8_*() functions, since this has to be Unicode-aware). If you get to a character which needs to be escaped (<, >, &, ", '), output the escaped entity for it. When you get to a match position, output the correct markup for it.
I'd escape the whole text first using g_markup_escape_text, then escape the text to search and use it in g_regex_replace_eval. This way escaped text can be matched, and text not matched is already escaped.

How to disable parsing for a piece of text in a file?

Structure of my file is :
`pragma TOKEN1_NAME TOKEN1_VALUE
`pragma TOKEN2_NAME TOKEN2_VALUE
`pragma TOKEN3_NAME TOKEN3_VALUE
`pragma TOKEN4_NAME TOKEN4_VALUE
VHDL_TEXT{
// A valid VHDL text goes here.
}
`pragma TOKEN2_NAME TOKEN2_VALUE
VHDL_TEXT{
// VHDL text
}
I need to pass the VHDL text as-is to the output file. I can do that by making a default rule at the end of the lex file:
Rule: . { append_to_buffer(*yytext); }
I also have list of other rules in my Lex file to deal with the tokens.
The problem I am having is how to deal with the situation in which the VHDL text also contains some of the tokens that can be recognized by the lex rules.
In other words, I want to disable detection of further valid tokens once I have found the text I am interested in, and start detection again once it is over.
As rici points out indirectly, you need to be able to distinguish between occurrences of the trailing delimiter '}' for your rule and occurrences of the right curly bracket in a valid VHDL design specification or portion.
See IEEE Std 1076-2008, 15.3 Lexical elements, separators, and delimiters where we find that '{' and '}' are not used as delimiters in VHDL.
They are other special characters (15.2 Character set, using ISO/IEC 8859-1:1998) requiring handling where graphic characters may appear.
graphic_character ::=
basic_graphic_character | lower_case_letter | other_special_character
These include extended identifiers (15.4.3), character literals (15.6), string literals (15.7), bit string literals (15.8), comments (15.9) and tool directives (15.11).
There's a need to identify these lexical elements within the production otherwise identifying '}' as a delimiter for the rule.
Only one tool directive is currently defined (24.1 Protect tool directives) wherein the use of the two curly bracket characters would be contained in VHDL lexical elements. All other uses in lexical elements are directly delimited. (And you could disclaim tool directive support, in VHDL they basically also invoke separate lexical, syntactical and semantic analysis).
Essentially you need to operate a VHDL lexical analyzer while traversing the 'VHDL text', where your rule-delimiter right curly bracket will stand out like a sore thumb (as an exception, serving as the closing delimiter for VHDL text).
And about now you'd get the idea that life would be easier if you could deal with the VHDL by reference instead, if possible. Your mechanism is as complex as including tool directives in VHDL (which can be done with a preprocessor, as could your VHDL text).
This is in response to the vhdl tag added by FUZxxl.
When you have essentially different languages in a source file that you need to deal with that have clear demarcation tokens (like your VHDL_TEXT markers) that can be easily recognized by the lexer, the easiest thing to do is to use flex exclusive start states (%x). In your case, you would do something like:
%{
/* some global vars for holding aux state */
static int brace_depth;
static Buffer vhdl_text;
%}
%x VHDL
%%
.. normal lexer rules for your non-vhdl stuff
VHDL_TEXT[ \t]*\{  { brace_depth = 1;
                     BufferClear(vhdl_text);
                     BEGIN(VHDL); }
<VHDL>"{"          { BufferAppend(vhdl_text, *yytext);
                     brace_depth++; }
<VHDL>"}"          { if (--brace_depth == 0) {
                         BEGIN(INITIAL);
                         yylval.buf = BufferExtract(vhdl_text);
                         return VHDL_TEXT; }
                     BufferAppend(vhdl_text, *yytext); }
<VHDL>--.*\n       { BufferAppendString(vhdl_text, yytext); }
<VHDL>\"[^"\n]*\"  { BufferAppendString(vhdl_text, yytext); }
<VHDL>\\[^\\\n]*\\ { BufferAppendString(vhdl_text, yytext); }
<VHDL>.|\n         { BufferAppend(vhdl_text, *yytext); }
This will gather up everything between the curly braces in VHDL_TEXT {...} and return it to your parser as a single token (matching nested braces properly, if there are any in the VHDL text.) You can do macro substitution-like stuff in the VHDL code by adding a rule like:
<VHDL>{IDENT}      { Macro *mac = lookup_macro_by_name(yytext);
                     if (mac) {
                         BufferAppendString(vhdl_text, mac->replacement);
                     } else {
                         BufferAppendString(vhdl_text, yytext); } }
You also probably want a <VHDL><<EOF>> rule to detect a missing closing } on the vhdl text and give an appropriate error message.
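A sketch of such an EOF rule, assuming a yyerror-style reporting function is available (the message text is illustrative):

```lex
<VHDL><<EOF>>      { yyerror("unterminated VHDL_TEXT block: missing '}'");
                     yyterminate(); }
```

Without it, flex would silently hit end-of-input while still in the VHDL start state and the missing brace would go unreported.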

Checking for a blank line in C - Regex

Goal:
Find if a string contains a blank line. Whether it be '\n\n',
'\r\n\r\n', '\r\n\n', '\n\r\n'
Issues:
I don't think my current regex for finding '\n\n' is right. This is my first time really using regex outside of simple use of * when removing files in the command line.
Is it possible to check for all of these cases (listed above) in one regex? Or do I have to do 4 separate calls to compile_regex?
Code:
int checkForBlankLine(char *reader) {
    regex_t r;
    compile_regex(&r, "*\n\n");
    match_regex(&r, reader);
    return 0;
}

void compile_regex(regex_t *r, char *matchText) {
    int status;
    regcomp(r, matchText, 0);
}

int match_regex(regex_t *r, char *reader) {
    regmatch_t match[1];
    int nomatch = regexec(r, reader, 1, match, 0);
    if (nomatch) {
        printf("No matches.\n");
    } else {
        printf("MATCH!\n");
    }
    return 0;
}
Notes:
I only need to worry about finding one blank line, that's why my regmatch_t match[1] is only one item long
reader is the char array containing the text I am checking for a blank line.
I have seen other examples and tried to base the code off of those examples, but I still seem to be missing something.
Thank you kindly for the help/advice.
If anything needs to be clarified please let me know.
It seems that you have to compile the regex as extended:
regcomp(&re, "\r?\n\r?\n", REG_EXTENDED);
The first atom, \r? is probably unnecessary, because it doesn't add to the blank-line condition if you don't capture the result.
In the above, blank line really means empty line. If you want blank line to mean a line that has no characters except for white space, you can use:
regcomp(&re, "\r?\n[ \t]*\r?\n", REG_EXTENDED);
(I don't think you can use the space character pattern, \s here instead of [ \t], because that would include carriage return and new-line.)
As others have already hinted at, the "simple use of * in the command line" is not a regular expression. This wildcard matching is called file globbing and has different semantics.
Check what the * in a regex means. It's not like the "anything" wildcard in the command line. The * means that the previous component can appear any number of times. The wildcard in a regex is the dot (.). So if you want to say "match anything" you can write .*, which means anything, any number of times.
So in your case you can do .*\n\n.* which would match anything that has \n\n.
Finally, you can use or in a regex and ( ) to group stuff. So you can do something like .*(\n\n|\r\n\r\n).* And that would match anything that has a \n\n or a \r\n\r\n.
Hope that helps.
Rather than looking for \r or \n, why not look for a character that is not \r or \n?
Your regex would simply be
'[^\r\n]'
and a match result of false indicates a blank line to your specification.

Search and replace a string as shown below

I am reading a file, say x.c, and I have to search for the string "shared". Once such a string has been found, the following has to be done.
Example:
shared(x,n)
Output has to be
*var = &x;
*var1 = &n;
Pointers can be of any name. Output has to be written to a different file. How can I do this?
I'm developing a source-to-source compiler for concurrent platforms using lex and yacc. This can be a routine written in C or, if you can, using lex and yacc. Can anyone please help?
Thanks.
If, as you state, the arguments can only be variables and not any kind of other expressions, then there are a couple of simple solutions.
One is to use regular expressions, and do a simple search/replace on the whole file using a pretty simple regular expression.
Another is to simply load the entire source file into memory, search using strstr for "shared(", and use e.g. strtok to get the arguments. Copy everything else verbatim to the destination.
Take advantage of the C preprocessor.
Put this at the top of the file
#define shared(x,n) { *var = &(x); *var1 = &(n); }
and run it through cpp. This will also include external resources and replace all macros, but you can simply remove all #something lines from the code, convert using the injected preprocessor rules, and then re-add them.
By the way, why not a simple macro set in a header file for the developer to include?
A doubt: where do var and var1 come from?
EDIT: corrected as shown by johnchen902
When it comes to preprocessor, I'll do this:
#define shared(x,n) (*var=&(x),*var1=&(n))
Why do I think it's better than esseks's answer?
Suppose this situation:
if( someBool )
shared(x,n);
else { /* something else */ }
With esseks's answer it becomes:
if( someBool )
{ *var = &x; *var1 = &n; }; // compile error
else { /* something else */ }
And with my answer it becomes:
if( someBool )
(*var=&(x),*var1=&(n)); // good!
else { /* something else */ }
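The difference can be checked directly; a quick compilable sketch, where var and var1 are hypothetical globals (the question leaves open where they come from):

```c
/* Hypothetical destinations the macro writes through. */
static int *px, *pn;
static int **var  = &px;
static int **var1 = &pn;

/* johnchen902's form: a single comma expression, so the trailing
   semicolon in "shared(x, n);" is harmless before an else. */
#define shared(x, n) (*var = &(x), *var1 = &(n))

static void demo(int someBool, int *x, int *n)
{
    if (someBool)
        shared(*x, *n);        /* no stray ';' to break the else */
    else
        *var = *var1 = (int *)0;
}
```

With the braced-block form of the macro, the same if/else in demo() would not compile, which is exactly the pitfall described above.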

Trying to make match on a rule that uses "recursive" identifier in flex

I have this line:
0, 6 -> W(1) L(#);
or
\# -> #shift_right R W(1) L
I have to parse this line with flex and take every element from each side of the arrow and put it in a list. I know how to match simple things, but I don't know how to match multiple things with the same rule. I'm not allowed to increase the limit for rules. I have a hint: parse the pieces, the pieces will then combine, and I can use states, but I don't know how to do that and I can't find examples on the net. Can someone help me?
So, here an example:
{
a -> W(b) #invert_loop;
b -> W(a) #invert_loop;
-> L(#)
}
When this section begins I have to create a structure for each line, where I put what is on the left of -> in a vector (those are some parameters), and the right side in a list, where each term is kind of another structure. For what is on the right side I wrote rules:
writex W([a-zA-Z0-9.#]) for W(anything).
So I need to parse these lines so that I can put the parameters and the structures into the big structure. Something like this (for the first line):
new bigStruct with param = a and list of struct = W(anything), #invert (it is a notation for a reference to another structure)
So what I need to know is how to parse these lines so that I can create and fill these bigStructs, also using the rules for the simple structures (I have all I need for these structures, but I don't know how to parse so that I can use these methods).
Sorry for my English; I hope this time I was clearer about what I want.
Last-minute edit: I have matched the whole line with a rule and then worked on it with strtok. Is there a way to use the previous rules to see what type of structure I have to create? I mean, not to sit and write lots of ifs, but to use writex W([a-zA-Z0-9.#]) to know that I have to create that kind of structure?
OK, let's see how this snippet works for you:
// these are exclusive rules, so they do not overlap, for inclusive rules, use %s
%x dataStructure
%x addRules
%%
<dataStructure>-> { BEGIN addRules; }
\{ { BEGIN dataStructure; }
<addRules>; { BEGIN dataStructure; }
<dataStructure>\} { BEGIN INITIAL; }
<dataStructure>[^,]+ { ECHO; } // this will output each comma-separated token
<dataStructure>.     { }       // ignore anything else
<dataStructure>\n    { }       // ignore newlines
<addRules>[^ ]+      { ECHO; } // this will output each space-separated rule
<addRules>.          { }       // ignore anything else
<addRules>\n         { }       // ignore newlines
%%
I'm not entirely sure what it is you want. Please edit your original post to include the contents of your comments, with examples, and structure your English better. If you can't explain what you want without contradicting yourself, I can't help you.
