What is the best way to achieve sscanf-like functionality in Perl?
I am looking at the sscanf module.
Which is better:
Going the sscanf way?
The regex way? (I am a beginner when it comes to regexes.)
The Perl documentation includes this tidbit:
scanf
scanf() is C-specific, use <> and regular expressions instead, see perlre.
I would say that learning regexes is well worth it; they are part of the core strength of Perl and are a useful tool elsewhere too.
CPAN has a String::Sscanf module for Perl!
Regex is better, but TIMTOWTDI.
There Is More Than One Way To Do It™.
However, regular expressions are far more versatile than sscanf, and they are not difficult to learn.
In Perl, an attempt to mimic the functionality of sscanf would most likely make heavy use of regular expressions.
You can use regular expressions. If you want to use the sscanf module, you have to download and install it, so it takes up extra space; regular expressions are built in, so there is no extra dependency. In my opinion, regular expressions are the better way.
Answers in C, Python, C++ or Javascript would be very much appreciated.
I've read a few books, done all the examples. Now I'd like to write a simple program.
But, I already ran into the following roadblock:
My intention is to take an equation from the user and save it in a variable,
For example:
-3*X+4 or pow(2,(sin(cos(x))/5)) [in valid C math syntax]
And then calculate the given expression for a certain X-Value.
Something like this:
printf("%g", UserFunction(3.2)) // Input 3.2 for X in User's Function and Print Result
Any ideas? For the life of me, I can't figure this out. Adding to my frustration, the solution is likely a very simple one. Thank you in advance.
There isn't a simple way to do this in C, but I think muParser may be useful to you; it is written in C++ but has C bindings. ExprTk is also an option, but it looks like it is C++ only; on the plus side, it looks much easier to get interesting results with.
Another option may be the expression evaluator that is part of Libav. It is in C, and the eval.h header has some good descriptions of the interface.
In compiled languages like C, C++, or Java there is no easy way to do this: you basically have to write a whole expression parser and interpreter (or use an external library that provides one). This is only trivial in "scripting" languages like Python and Javascript, which have a function (often called eval()) that evaluates expressions at runtime. This function is often dangerous, because it can also do things like call functions with side effects.
Ffmpeg/libav has a nice simple function evaluator you could use.
I am trying to use regular expressions in C/C++ using regex.h.
I am trying to use lookahead options, for example:
(?=>#).*
in order to extract strings after a '#'
For some reason it fails to find any matches.
Does regex.h support lookahead/lookbehind? Is there another library I can use?
I am using regex.h, on linux.
I'm pretty sure NSRegularExpression is just a wrapper for libicu, which does support lookaheads. You have a typo in your example; the right syntax is (?=#).* according to the link.
It doesn't really seem needed in this case, though; why not just #.*?
I suspect it's really lookbehind you're talking about, not lookahead. That would be (?<=#).*, but why make it so complicated? Why not just use #(.*), as some of the other responders suggested, and pull the desired text out of the capturing group?
Also, are you really using NSRegularExpression? That seems unlikely, considering it's an Objective-C class in Apple's iOS/MacOS developer framework.
I'm looking for a shorter way to write a big expression in AS 2.0, which I could accomplish in C by using #define. Is there anything in AS 2.0 that allows me to do that?
Someone found a way to use the cpp preprocessor on ActionScript: here
However, unless you have a very good reason, and have already measured the clean version against the hardcoded one and found the latter significantly faster, I would recommend just putting your expressions in a function (premature optimization is no good).
I am working with a unit-testing suite that hijacks function calls and tests expected output values.
The normal layout requires one block of unit-testing code for each expected value.
Since my code makes use of a large number of enums, I would like to automate the test generation with some for-loop/macro magic, and I'm looking for some advice on writing it.
Here is a block of the test code that I need to duplicate X number of times:
START_TEST("test_CallbackFn");
EXPECTED_CALLS("{{function1(param_type)#default}{function2(param_type)#default}}");
CallbackFn();
END_CALLS();
END_TEST();
Now, here is what I would envision occurring:
for (int i = 0; i < 10; i++)
{
RUN_TEST(i)
}
Now, I would like to define RUN_TEST with the code I mentioned above, except I need to replace the string default with the current value of i. What is throwing me off is the quotes and #'s that are present in the existing EXPECTED_CALLS macro.
I think I would look at using a separate macro processor rather than trying to beat the C preprocessor into submission. The classic example that people point to is m4, but for this, you might do better with awk or perl or python or something similar.
In my experiences, "complex" + "macro" = "don't do it!"
The C preprocessor was not designed to do anything this powerful. While you may be able to do some kung-fu and hack something together that works, it would be much easier to use a scripting language to generate the C code for you (it's also easier to debug since you can read through the generated code and make sure it is correct). Personally, I have used Ruby to do this several times but Python, Perl, bash (etc etc) should also work.
I'm not sure I fully understand the question, but if you want EXPECTED_CALLS to receive a string in which default is replaced with the string value of whatever default expands to, you need to take #default out of the string literal, i.e.
EXPECTED_CALLS("{{function1(param_type)#default}{function2(param_type)#default}}");
should be
EXPECTED_CALLS("{{function1(param_type)"#default"}{function2(param_type)"#default"}}");
It's probably possible: Boost.Preprocessor is quite impressive as it is.
For an enum it may be a bit more difficult, but there are for each loops in Boost.Preprocessor, etc..
The problem with the generative approach using external scripts is that it may require externalizing more than just the tests, unless you plan on implementing a C++ parser, which is known to be tricky at the best of times...
So you would need to generate the enums (store them in JSON, for example) to be able to generate the tests for those enums afterward... and things begin to get hairy :/
I'm building my own language using Flex, but I want to know some things:
Why should I use lexical analyzers?
Are they going to help me with something?
Are they obligatory?
Lexical analysis helps simplify parsing because the lexemes can be treated as abstract entities rather than concrete character sequences.
You'll need more than flex to build your language, though: Lexical analysis is just the first step.
Any time you are converting an input string into space-separated strings and/or numeric values, you are performing lexical analysis. Writing a cascading series of else if (strcmp (..)==0) ... statements counts as lexical analysis. Even such nasty tools as sscanf and strtok are lexical analysis tools.
You'd want to use a tool like flex instead of one of the above for one of several reasons:
The error handling can be made much better.
You can be much more flexible in what different things you recognize with flex. For instance, it is tough to parse a C-format hexadecimal value properly with the scanf routines; scanf pretty much has to know the hex value is coming. Lex can figure it out for you.
Lex scanners are faster. If you are parsing a lot of files, and/or large ones, this could become important.
You would consider using a lexical analyzer because you could use BNF (or EBNF) to describe your language (its grammar) declaratively, then use a parser to parse a program written in your language, get it into a structure in memory, and then manipulate it freely.
It's not obligatory and you can of course write your own, but that depends on how complex the language is and how much time you have to reinvent the wheel.
Also, the fact that you can use a language (BNF) to describe your language without changing the lexical analyzer itself enables you to run many experiments and change the grammar of your language until you have exactly what works for you.