How is a primitive procedure tagged in SICP's evaluator (chapter 4)?

I am reading chapter 4 of SICP. In the eval procedure there is a case for procedure application, which leads to a check of whether the procedure object is tagged with the symbol 'primitive or 'procedure.
I can see where the symbol 'procedure is added (it happens when evaluating a lambda expression).
I am not able to find where the tag 'primitive is added. Clearly, when I supply a program to the evaluator, I supply (+ 1 2) and not ('primitive + 1 2). I would guess the 'primitive tag is added somewhere (like 'procedure), but I cannot find where.

Take a look at the primitive-procedure-objects procedure; that's where the 'primitive tag is added to the elements of the primitive-procedures list, which contains the primitive operations available to the interpreter.
In turn, primitive-procedure-objects is called inside setup-environment, which is used for creating the initial environment for the interpreter.
When evaluating an expression such as (+ 1 2), the evaluator simply goes all the way down the case analysis of eval until it matches the application? predicate. That case calls (eval (operator exp) env) on the first element of the expression and hands the result to apply. Evaluating the operator in turn matches variable? in the case analysis, which calls lookup-variable-value and returns the procedure object we tagged with 'primitive in setup-environment. Whew!
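For reference, these are essentially the definitions from section 4.1.4 of the book (with the primitive-procedures list abridged); the map inside primitive-procedure-objects is where each underlying Scheme procedure gets wrapped in a list tagged with 'primitive:

(define primitive-procedures
  (list (list 'car car)
        (list 'cdr cdr)
        (list 'cons cons)
        (list 'null? null?)
        ;; ... and so on for the other primitives
        ))

(define (primitive-procedure-names)
  (map car primitive-procedures))

(define (primitive-procedure-objects)
  ;; wrap every implementation in a list tagged with 'primitive
  (map (lambda (proc) (list 'primitive (cadr proc)))
       primitive-procedures))

(define (setup-environment)
  ;; bind each primitive name to its tagged object in the initial environment
  (let ((initial-env
         (extend-environment (primitive-procedure-names)
                             (primitive-procedure-objects)
                             the-empty-environment)))
    (define-variable! 'true true initial-env)
    (define-variable! 'false false initial-env)
    initial-env))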

Related

How can I use a variable in a T-SQL partition function?

I'm confused about the definition for boundary_values in the Microsoft documentation as related to constants and expressions.
I have created partitioned tables in MSSQL, so I know how to set the values for the "partition functions". I also have a stored procedure that updates the boundary values for two partition functions every month, so I know how to programmatically create the functions.
But I recently re-read the documentation for CREATE PARTITION FUNCTION. The syntax is:
CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] )
The explanation has these sentences that baffle me:
boundary_value is a constant expression that can reference variables. This includes user-defined type variables, or functions and user-defined functions. It cannot reference Transact-SQL expressions.
How can a constant expression "reference" a variable? According to the doc on expressions, a constant expression is "a symbol that represents a single, specific data value. For more information, see Constants (Transact-SQL)." That link says that a character constant, for example, is just some text between quote marks, and a numeric constant is just a single number. That reads as if variables can't be used in a constant expression -- using one would make it a Transact-SQL expression.
And the "functions and user-defined functions" phrase also baffles me.
There are lots of examples of how to programmatically build boundary values, especially when working with dates, but they just build one large SQL string for CREATE PARTITION FUNCTION, appending the boundary values to it (usually in a loop). That's not setting the boundary_value to a constant expression that "references" a "variable", and it's not using functions. I have seen no examples that would illuminate how to reference a variable, or use a function, in the boundary_value.
Does anyone understand what this is trying to say?
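For concreteness, this is the kind of statement that sentence seems to describe, as far as I can tell. It is an untested sketch of my reading of the docs (all names made up), not something I have confirmed SQL Server accepts:

-- Untested sketch: boundary values supplied through a variable and a
-- function call, both of which would have to resolve to constants at
-- CREATE time.
DECLARE @cutoff date = '2024-01-01';

CREATE PARTITION FUNCTION pf_example (date)
AS RANGE RIGHT
FOR VALUES (@cutoff, DATEADD(month, 1, @cutoff));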

Looping through a list which is a subarray

I'm using ECLiPSe 6.1. I have an array of variables of dimension N x N, let's call it Vars. Now I call a procedure with, say, my_procedure(Vars[1..N,1..2]).
Inside the procedure (my_procedure(List) :- ...), something like (foreach(X, List) do ...) is used.
This doesn't work. I have to write something like L is List inside the procedure before looping over L (instead of List) to make it work.
Why is this? And how can I address it? Because later on I try to call the procedure with flatten(Vars[1..N,1..2]) and then it gets even worse.
I started using collection_to_list/2 (with flatten) to resolve the issue, but I was wondering if there's an elegant way to address it.
Let me elaborate a bit, because your question highlights a feature of Prolog/ECLiPSe that regularly surprises users coming from other programming languages:
Every term/expression is by default just a symbolic structure with no inherent meaning
Any interpretation/evaluation of such a symbolic structure only happens in particular contexts, or when requested explicitly
Maybe the most blatant example is with what looks like an "arithmetic expression":
?- writeln(3+4).
3 + 4
Prolog takes the argument 3+4 simply as the symbolic term +(3,4) and passes it to writeln/1, uninterpreted. Passing a term as an argument to a user-defined predicate doesn't change this; there is no implicit evaluation at call time:
p(X) :- writeln(received(X)).
?- p(3+4).
received(3 + 4)
If we want to interpret the argument as an arithmetic expression and evaluate it, we have to request this explicitly:
parith(Expr) :- Num is Expr, writeln(evaluated_to(Num)).
?- parith(3 + 4).
evaluated_to(7)
Array access expressions in ECLiPSe behave in the same way. They are just symbolic expressions until explicitly evaluated by a predicate that understands them:
?- Array = [](11,22,33), p(Array[2]).
received([](11,22,33)[2])
?- Array = [](11,22,33), parith(Array[2]).
evaluated_to(22)
So, to finally come back to your original problem: when you call my_procedure(Vars[1..N,1..2]), the argument that gets passed is the symbolic expression Vars[1..N,1..2], and this is what my_procedure/1 receives. To turn that into the flat list that you want, it has to be interpreted as an expression that yields a list, and collection_to_list/2 (or, starting from ECLiPSe 7.0, eval_to_list/2) does exactly that:
plist(Expr) :- eval_to_list(Expr, List), writeln(evaluated_to(List)).
?- A = [](11, 22, 33), p(A[2 .. 3]).
received([](11, 22, 33)[2 .. 3])
?- A = [](11, 22, 33), plist(A[2 .. 3]).
evaluated_to([22, 33])
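Applied to your original predicate, a minimal sketch for ECLiPSe 6.1 (using collection_to_list/2, since eval_to_list/2 only arrived in 7.0; the writeln body is just a placeholder) might look like the following, and could then be called as my_procedure(flatten(Vars[1..N,1..2])) to work on a flat list:

% Coll may be a plain list, an array expression such as Vars[1..N,1..2],
% or flatten(...) around one; collection_to_list/2 evaluates it to an
% ordinary list before we loop over it.
my_procedure(Coll) :-
    collection_to_list(Coll, List),
    ( foreach(X, List) do
        writeln(X)
    ).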

Deep copy always fails in workbench system

I have found one case that does not make sense.
I have the following feature:
test_array_deep_copy: BOOLEAN
    local
        imp, old_imp: ARRAY[STRING]
    do
        comment("Test of a deep copy.")
        create {ARRAY[STRING]} imp.make_empty
        imp.force ("Alan", 1)
        imp.force ("Mark", 2)
        imp.force ("Tom", 3)
        old_imp := imp.deep_twin
        imp[2] := "Jim"
        Result :=
            across
                1 |..| imp.count as j
            all
                j.item /= 2 implies imp [j.item] = old_imp [j.item]
            end
        check not Result end
    end
Since it is a deep copy, the addresses of imp and old_imp are different, and the items they hold at each index also refer to different addresses.
So Result after the across loop should be False, because the addresses in imp and old_imp at the same index are different.
And indeed, when I debug this code, it says Result is set to False after the across loop finishes.
The problem is that "check not Result" does not seem to turn that False into a True check.
If I run the workbench system, it says the following:
I do not know why. The "not" before "Result" in the "check not Result" statement should make the whole check True, so the workbench system should say "PASSED", but it fails.
Why is that?
Your reasoning is correct and the test query as written should return False. Most probably, the system reports FAILED as soon as the query returns False. So, what needs to be done is to fix the query itself.
The items of the array are cloned and therefore the equality = used in the loop gives False for all elements. To get True from the loop, a different equality operator has to be used: ~. It compares objects rather than references. After that change the query gives True and the test should pass.
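That is, the comparison inside the loop becomes:
j.item /= 2 implies imp [j.item] ~ old_imp [j.item]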
An alternative would be to replace the equality operator with a call to the feature is_deep_equal:
imp [j.item].is_deep_equal (old_imp [j.item])
Unlike the operator ~, which uses the user-defined feature is_equal to compare objects – strings in the example – is_deep_equal performs a "deep" equality test by traversing the whole object tree. The test should pass in this case as well, but deep equality is rarely used in practice.

Why required "=" before ANY function with array as param, in postgres procedure?

I was answering a postgres question yesterday, and also came across a postgres thread (here) where they describe the following error:
ERROR: operator does not exist: text = text[]
HINT: No operator matches the given name and argument type(s). You
might need to add explicit type casts.
The error seems to appear whenever a string ARRAY is fed to ANY without using = ANY. This seems completely strange, since based on language, logic, and SQL conventions you usually have (e.g. with IN):
variable FUNCTION(set)
instead of
variable = FUNCTION(set), unless of course the operator is a summation/count operation returning one result :)
It would make more sense to have variable ANY(Set/Array) instead of variable = ANY(Set/Array). A similar example is the IN construct.
Can anyone explain what is going on here?
IN (...) is basically equivalent to = ANY (ARRAY[...])
Crucially, ANY is not a function. It's syntax defined by the SQL standard, and is no more a function than GROUP BY or the OVER clause in a window function.
The reason that = is required before ANY is that ANY can apply to other operators too. What it means is "Test the operator to the left against every element in the array on the right, and return true if the test is true for at least one element."
You can use > ANY (ARRAY[...]) or whatever. It's a general-purpose construct that isn't restricted to =. It is notably useful for LIKE ANY (albeit with somewhat bad performance).
There is ALL too, which does much the same thing but returns true only if all results are true.
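A few standalone queries that can be pasted into psql to see the behaviour (expected results in the comments):

SELECT 3 = ANY (ARRAY[1, 2, 3]);              -- true: 3 equals one element
SELECT 3 IN (1, 2, 3);                        -- true: equivalent IN form
SELECT 5 > ANY (ARRAY[1, 10]);                -- true: 5 > 1
SELECT 5 > ALL (ARRAY[1, 10]);                -- false: 5 is not > 10
SELECT 'foo' LIKE ANY (ARRAY['f%', 'x%']);    -- true: matches the first pattern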

Distinguish between function calls and indexed arrays using ANTLR4

The syntax of a language is ambiguous in the sense that function names and indexed identifiers are written in the same way:
var = function(5) => function call where 5 is a parameter
var = array(5) => element 5 of the array
To be able to make the distinction I need to make a first pass and create a symbol table. After that I want to use predicates to do something like:
reference
: {isFunction(getCurrentToken().getText())}? ident (argumentList?)
| {!isFunction(getCurrentToken().getText())}? ident (subscriptionList)?
;
But several questions remain:
Do I have to "extend/inherit" the parser to add the code of "isFunction"? Or do I have to put it in the .g4 file itself?
Are predicates the best way here, or is there a better way to achieve all this?
How to run the parser twice? How to handle the "first" run? (in that case isFunction will always return false as the symbol table is not yet constructed)
Somehow I feel there must be an easy, clean way to handle the above issue...
This is not directly the answer you might have been looking for, but I recommend doing it all in code after parsing, rather than parsing the file twice or making the parsing dependent on the symbol table.
This can be done by allowing both function calls and array accesses to appear wherever either of them would be allowed, as sketched below.
When you transform the rules into an internal representation later on, you can distinguish the two based on the knowledge of the symbol table.
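A minimal sketch of such a merged rule, as a self-contained toy grammar (all rule and token names here are made up, not taken from your grammar):

grammar RefSketch;

// Hypothetical mini-grammar: one parse rule covers both "function(5)"
// and "array(5)"; a later pass consults the symbol table to decide
// which one an ident actually is.
start
    : reference EOF
    ;

reference
    : IDENT ('(' argOrIndexList? ')')?
    ;

argOrIndexList
    : expr (',' expr)*
    ;

expr
    : NUMBER
    | reference
    ;

IDENT  : [a-zA-Z_] [a-zA-Z_0-9]* ;
NUMBER : [0-9]+ ;
WS     : [ \t\r\n]+ -> skip ;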
