Finding all possible combinations of truth values to well-formed formulas occurring in the given formula in C

I want to write a truth table evaluator for a given formula, like the one on this site:
http://jamie-wong.com/experiments/truthtabler/SLR1/
Operators are:
- (negation)
& (and)
| (or)
> (implication)
= (equivalence)
So far I made this
-(-(a& b) > ( -((a|-s)| c )| d))
Given this formula, my output is:
abdsR
TTTT
TTTF
TTFT
TTFF
TFTT
TFTF
TFFT
TFFF
FTTT
FTTF
FTFT
FTFF
FFTT
FFTF
FFFT
FFFF
I am having difficulties with the evaluating part.
I created an array in which I stored the indices of the parentheses, if that helps, namely
7-3, 17-12, 20-11, 23-9, 24-1
I also checked the code at http://www.stenmorten.com/English/llc/source/turth_tables_ass4.c, but I didn't understand it.

Writing an operator-precedence parser to evaluate infix expressions is not an easy task; however, the shunting-yard algorithm is a good place to start.
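To make this concrete, here is a sketch of that approach in Python (my own illustration, not the poster's C code): shunting-yard conversion to postfix (RPN), then a stack-based evaluation for one truth assignment. The precedence table is an assumption based on the operators listed in the question.

```python
# Precedence (assumed): negation binds tightest, then &, |, >, =.
PREC = {'-': 4, '&': 3, '|': 2, '>': 1, '=': 0}
RIGHT_ASSOC = {'-', '>'}  # negation and implication associate to the right

def to_rpn(formula):
    """Convert an infix formula to a postfix (RPN) token list."""
    out, stack = [], []
    for tok in formula.replace(' ', ''):
        if tok.isalpha():                      # a variable
            out.append(tok)
        elif tok == '(':
            stack.append(tok)
        elif tok == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()                        # discard the '('
        else:                                  # an operator
            while (stack and stack[-1] != '(' and
                   (PREC[stack[-1]] > PREC[tok] or
                    (PREC[stack[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(stack.pop())
            stack.append(tok)
    while stack:
        out.append(stack.pop())
    return out

def evaluate(rpn, env):
    """Evaluate an RPN token list under a {variable: bool} assignment."""
    st = []
    for tok in rpn:
        if tok.isalpha():
            st.append(env[tok])
        elif tok == '-':
            st.append(not st.pop())
        else:
            b, a = st.pop(), st.pop()
            if tok == '&':   st.append(a and b)
            elif tok == '|': st.append(a or b)
            elif tok == '>': st.append((not a) or b)
            elif tok == '=': st.append(a == b)
    return st[0]

rpn = to_rpn('-(-(a& b) > ( -((a|-s)| c )| d))')
print(evaluate(rpn, dict(a=True, b=True, s=True, c=True, d=True)))   # False
```

To fill the R column, you would loop this evaluation over every row of the truth table (e.g. with itertools.product over (True, False) per variable), exactly the enumeration your output already produces.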


Theorem Solution by Resolution Refutation

I have the following problem which I need to solve by resolution method in Artificial Intelligence
I don't understand why the negation of dog(x) is added in the first clause, and why the negation of animal(Y) is added in the fourth clause ...
I mean what is the need of negation there?
Recall that logical implication P → Q is equivalent to ¬P ∨ Q. You can verify this by looking at the truth table:
P Q P → Q
0 0 1
1 0 0
0 1 1
1 1 1
Now clearly dog(X) → animal(X) is equivalent to ¬dog(X) ∨ animal(X) which is a disjunction of literals therefore is a clause.
The same reasoning applies to animal(Y) → die(Y).
As soon as you have got a set of formulas in clausal form that is equivalent to your input knowledge base, you can apply binary resolution to check if your knowledge base is consistent, or to prove a goal.
To prove a goal you add a negation of it to your consistent knowledge base and see if the knowledge base with the added negation of the goal becomes inconsistent.
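The procedure can be sketched in Python for the propositional case (my own illustration; the actual exercise is first-order and would also need unification, which is omitted here, so the predicates below are treated as ground atoms for one fixed individual):

```python
# Clauses are frozensets of string literals; '~p' is the negation of 'p'.

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def refutes(clauses):
    """Saturate under binary resolution; True iff the empty clause appears."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    if not r:
                        return True        # empty clause: inconsistent
                    new.add(r)
        if new <= clauses:
            return False                   # saturated without contradiction
        clauses |= new

# dog -> animal, animal -> die, and the fact dog, all in clausal form:
kb = [{'~dog', 'animal'}, {'~animal', 'die'}, {'dog'}]
print(refutes(kb))              # False: the knowledge base is consistent
print(refutes(kb + [{'~die'}])) # True: adding the negated goal refutes it
```

Note how the implications enter the knowledge base already rewritten as disjunctions with the negated antecedent; that is exactly why the negations appear in the clauses.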

What is the advantage of linspace over the colon ":" operator?

Is there some advantage of writing
t = linspace(0,20,21)
over
t = 0:1:20
?
I understand that the former produces a vector, as the latter does.
Can anyone give me a situation where linspace is more useful than t = 0:1:20?
It's not just usability. Although the documentation says:
The linspace function generates linearly spaced vectors. It is
similar to the colon operator :, but gives direct control over the
number of points.
the two are not the same. The main difference and advantage of linspace is that it generates a vector of integers of the desired length (default 100) and scales it afterwards to the desired range. The colon operator : creates the vector directly, by increments.
Imagine you need to define bin edges for a histogram, and in particular you need the bin edge 0.35 to land in exactly the right place:
edges = [0.05:0.10:.55];
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 0 0 0
does not define the right bin edge, but:
edges = linspace(0.05,0.55,6); %// 6 = (0.55-0.05)/0.1+1
X = edges == 0.35
edges = 0.0500 0.1500 0.2500 0.3500 0.4500 0.5500
X = 0 0 0 1 0 0
does.
Well, it's basically a floating-point issue, which linspace can avoid, as a single division of an integer is not as delicate as a cumulative sum of floating-point numbers. But as Mark Dickinson pointed out in the comments:
You shouldn't rely on any of the computed values being exactly what you expect. That is not what linspace is for. In my opinion it's a matter of how likely you are to run into floating-point issues and how much you can reduce the probability of them, or how small you can set the tolerances. Using linspace can reduce the probability of these issues occurring; it is not a guarantee.
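The effect is easy to reproduce outside MATLAB. Below is a pure-Python sketch of the two strategies (assumed simplifications of what the colon operator and linspace do, respectively), showing why the scaled-integer approach hits round decimal values more often:

```python
def colon_style(start, step, n):
    """Accumulate increments, like repeatedly adding the step."""
    vals, x = [], start
    for _ in range(n):
        vals.append(x)
        x += step
    return vals

def linspace_style(start, stop, n):
    """Scale a vector of integers: one multiply and one divide per element."""
    return [start + i * (stop - start) / (n - 1) for i in range(n)]

accumulated = colon_style(0.0, 0.1, 11)
scaled = linspace_style(0.0, 1.0, 11)

print(0.3 in accumulated)   # False: 0.1 + 0.1 + 0.1 is 0.30000000000000004
print(0.3 in scaled)        # True:  3 * (1.0 - 0.0) / 10 rounds to exactly 0.3
```

The accumulated rounding error of repeated addition is what the single multiply-and-divide avoids; it is still floating point, just with fewer chances to drift.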
That's the code of linspace:
n1 = n-1
c = (d2 - d1).*(n1-1) % opposite signs may cause overflow
if isinf(c)
y = d1 + (d2/n1).*(0:n1) - (d1/n1).*(0:n1)
else
y = d1 + (0:n1).*(d2 - d1)/n1
end
To sum up: linspace and colon are reliable at doing different tasks. linspace tries to ensure (as the name suggests) linear spacing, whereas colon tries to ensure symmetry.
In your special case, as you create a vector of integers, there is no advantage to linspace (apart from usability), but when it comes to floating-point-delicate tasks, there may be.
The answer of Sam Roberts provides some additional information and clarifies further things, including some statements of MathWorks regarding the colon operator.
linspace and the colon operator do different things.
linspace creates a vector of integers of the specified length, and then scales it down to the specified interval with a division. In this way it ensures that the output vector is as linearly spaced as possible.
The colon operator adds increments to the starting point, and subtracts decrements from the end point to reach a middle point. In this way, it ensures that the output vector is as symmetric as possible.
The two methods thus have different aims, and will often give very slightly different answers, e.g.
>> a = 0:pi/1000:10*pi;
>> b = linspace(0,10*pi,10001);
>> all(a==b)
ans =
0
>> max(a-b)
ans =
3.5527e-15
In practice, however, the differences will often have little impact unless you are interested in tiny numerical details. I find linspace more convenient when the number of gaps is easy to express, whereas I find the colon operator more convenient when the increment is easy to express.
See this MathWorks technical note for more detail on the algorithm behind the colon operator. For more detail on linspace, you can just type edit linspace to see exactly what it does.
linspace is useful when you know the number of elements you want rather than the size of the "step" between them. So if I said "make a vector with 360 elements between 0 and 2*pi" (a contrived example), it's either going to be
linspace(0, 2*pi, 360)
or if you just had the colon operator you would have to manually calculate the step size:
0:(2*pi - 0)/(360-1):2*pi
linspace is just more convenient.
For a simple real world application, see this answer where linspace is helpful in creating a custom colour map

matlab division by vector?

Where can I find the documentation for this sort of division and output? Why is the result different from 1./a?
a = [4,5,6,8]
>> 1/a'
ans =
0 0 0 0.1250
For arrays, the operator / is the mrdivide function: the result of B/A is one solution of the linear system x*A = B.
It is completely different from the operator ./, which corresponds to the rdivide (element-wise division) function.
Note that, as stated in the comments, a scalar in Matlab is treated as a 1x1 matrix.
You are asking MATLAB to solve the equation x*[4,5,6,8]' = 1; one possible solution is [0,0,0,0.125].
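To see concretely what each operation computes, here is a small pure-Python check (my own illustration, not MATLAB code) that this vector really solves the system, and what the elementwise version produces instead:

```python
a = [4, 5, 6, 8]
x = [0, 0, 0, 0.125]                 # one solution of x*a' = 1, as 1/a' returns

# The row-vector-times-column-vector product x*a' is just a dot product:
dot = sum(xi * ai for xi, ai in zip(x, a))
print(dot)                           # 1.0, so x is indeed a solution

elementwise = [1 / ai for ai in a]   # what MATLAB's 1./a computes instead
print(elementwise)                   # [0.25, 0.2, 0.1666..., 0.125]
```

The system is underdetermined (one equation, four unknowns), which is why mrdivide returns just one of infinitely many solutions, while ./ applies the division to every element independently.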

Are leftshift and OR operators commutative in C?

Example:
Operation 1:
d= c | y | z | a<<3 | b <<3 | x;
Operation 2:
m = c|y|z|x;
d = m | a<<3 | b<<3;
does operation 1 and operation 2 yield same results in C?
To answer the question in your title:
Are leftshift and OR operators commutative in C?
the | bitwise-or operator is commutative, but the << operator is not (a<<3 and 3<<a are quite different).
That doesn't appear to be what you meant to ask, though. To answer the body of your question, since << has higher precedence than | (i.e., << binds more tightly), you can think of a<<3 and b<<3 as if they were primary or parenthesized expressions. In effect, you have multiple subexpressions joined by | operators. Rearranging them should have no effect; your two code snippets should behave identically (except that the second one stores a value in m, which doesn't exist in your first snippet).
This assumes that all the variables you're using are of the same type. If they're not, then storing the intermediate value in m might involve a conversion which could alter the results. This probably doesn't apply in your case, but since you didn't show us any declarations it's impossible to be sure of that.
In this case, it should provide the same results since (a) there are no side effects (e.g. built-in pre- or post-increments or decrements), and (b) the << operator has higher precedence than |.
So, the << operations will occur before the | operations.
It's not a question of commutativity but of operator precedence, although it does help that | itself is commutative, since your rearrangement changes the order in which the subexpressions are or'ed together.
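The point is easy to check with a quick sketch. Python's << and | have the same relative precedence as in C, so the expressions parse the same way (the variable values below are arbitrary examples):

```python
a, b, c, x, y, z = 1, 2, 4, 8, 16, 32

d1 = c | y | z | a << 3 | b << 3 | x    # operation 1: << binds before |
m = c | y | z | x
d2 = m | a << 3 | b << 3                # operation 2: same terms, regrouped

print(d1 == d2)                         # True: | is commutative (and associative)
print((a << 3) != (3 << a))             # True: << itself is not (8 vs 6)
```

As long as all variables have the same type and none of the subexpressions has side effects, regrouping the |-joined terms through an intermediate variable cannot change the result.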

C: Convert A ? B : C into if (A) B else C

I was looking for a tool that can convert C code expressions for the form:
a = (A) ? B : C;
into the 'default' syntax with if/else statements:
if (A)
a = B
else
a = C
Does someone know a tool that's capable to do such a transformation?
I work with GCC 4.4.2 and create a preprocessed file with -E, but I do not want such structures in it.
Edit:
Following code should be transformed, too:
a = ((A) ? B : C)->b;
Coccinelle can do this quite easily.
Coccinelle is a program matching and
transformation engine which provides
the language SmPL (Semantic Patch
Language) for specifying desired
matches and transformations in C code.
Coccinelle was initially targeted
towards performing collateral
evolutions in Linux. Such evolutions
comprise the changes that are needed
in client code in response to
evolutions in library APIs, and may
include modifications such as renaming
a function, adding a function argument
whose value is somehow
context-dependent, and reorganizing a
data structure. Beyond collateral
evolutions, Coccinelle is successfully
used (by us and others) for finding
and fixing bugs in systems code.
EDIT:
An example of semantic patch:
@@ expression E; constant C; @@
(
!E & !C
|
- !E & C
+ !(E & C)
)
From the documentation:
The pattern !x&y. An expression of this form is almost always meaningless, because it combines a boolean operator with a bit operator. In particular, if the rightmost bit of y is 0, the result will always be 0. This semantic patch focuses on the case where y is a constant.
You have a good set of examples here.
The mailing list is really active and helpful.
The following semantic patch for Coccinelle will do the transformation.
@@
expression E1, E2, E3, E4;
@@
- E1 = E2 ? E3 : E4;
+ if (E2)
+ E1 = E3;
+ else
+ E1 = E4;
@@
type T;
identifier E5;
T *E3;
T *E4;
expression E1, E2;
@@
- E1 = ((E2) ? (E3) : (E4))->E5;
+ if (E2)
+ E1 = E3->E5;
+ else
+ E1 = E4->E5;
@@
type T;
identifier E5;
T E3;
T E4;
expression E1, E2;
@@
- E1 = ((E2) ? (E3) : (E4)).E5;
+ if (E2)
+ E1 = (E3).E5;
+ else
+ E1 = (E4).E5;
The DMS Software Reengineering Toolkit can do this, by applying program transformations.
A specific DMS transformation to match your specific example:
domain C.
rule ifthenelseize_conditional_expression(a:lvalue,A:condition,B:term,C:term):
stmt -> stmt
= " \a = \A ? \B : \C; "
-> " if (\A) \a = \B; else \a=\C ; ".
You'd need another rule to handle your other case, but it is equally easy to express.
The transformations operate on source code structures rather than text, so layout and comments won't affect recognition or application. The quotation marks in the rule are not traditional string quotes, but rather metalinguistic quotes that separate the rule-syntax language from the pattern language used to specify the concrete syntax to be changed.
There are some issues with preprocessing directives if you intend to retain them. Since you apparently are willing to work with preprocessor-expanded code, you can ask DMS to do the preprocessing as part of the transformation step; it has full GCC4 and GCC4-compatible preprocessors built right in.
As others have observed, this is a rather easy case because you specified it work at the level of a full statement. If you want to rid the code of any assignment that looks similar to this statement, with such assignments embedded in various contexts (initializers, etc.) you may need a larger set of transforms to handle the various set of special cases, and you may need to manufacture other code structures (e.g., temp variables of appropriate type). The good thing about a tool like DMS is that it can explicitly compute a symbolic type for an arbitrary expression (thus the type declaration of any needed temps) and that you can write such a larger set rather straightforwardly and apply all of them.
All that said, I'm not sure of the real value of doing your ternary-conditional-expression elimination operation. Once the compiler gets hold of the result, you may get similar object code as if you had not done the transformations at all. After all, the compiler can apply equivalence-preserving transformations, too.
There is obviously value in making regular changes in general, though.
(DMS can apply source-to-source program transformations to many languages, including C, C++, Java, C# and PHP.)
I am not aware of such a tool, since the ternary operator is built into the language specification as a shortcut for if/else logic. The only way I can think of doing this is to manually look for those lines and rewrite them into the form where if is used. As a general rule, the ternary operator works like this:
expr ? exec_if_expr_is_TRUE : exec_if_expr_is_FALSE;
If the expression evaluates to true, the part between ? and : is executed; otherwise, the part between : and ; is executed.
If the statements are very regular like this why not run your files through a little Perl script? The core logic to do the find-and-transform is simple for your example line. Here's a bare bones approach:
use strict;
while(<>) {
my $line = $_;
chomp($line);
if ( $line =~ m/(\S+)\s*=\s*\((\s*\S+\s*)\)\s*\?\s*(\S+)\s*:\s*(\S+)\s*;/ ) {
print "if(" . $2 . ")\n\t" . $1 . " = " . $3 . "\nelse\n\t" . $1 . " = " . $4 . "\n";
} else {
print $line . "\n";
}
}
exit(0);
You'd run it like so:
perl transformer.pl < foo.c > foo.c.new
Of course it gets harder and harder if the text pattern isn't as regular as the one you posted. But free, quick and easy to try.
