Answer set of a program - why is the empty set not an answer set? - logic-programming

I'm a little confused by the definition of an answer set.
S is an answer set of P if S is the least model of P.
When I have a program
b :- a
a.
Then I know my answer set has to be {a,b}, because a is a fact.
What happens if I have something like
a :- b
In the slides I found, they state that {a} is an answer set. But to my understanding, the rule is satisfied when b = true implies a = true.
So if I set a = false and b = false, the rule is satisfied as well.
Why is the empty set not the answer set? (It would be a subset of {a}.)

The empty set is an answer set of a :- b. Try running your example online: https://potassco.org/clingo/run/
clingo version 5.3.0
Reading from stdin
-:1:6-7: info: atom does not occur in any rule head:
b
Solving...
Answer: 1

SATISFIABLE
Models : 1
(note empty line between "Answer: 1" and "SATISFIABLE" -> empty set)
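One way to see why (using the standard reduct-based definition of answer sets): the program contains no negation, so its reduct with respect to the empty set is the program itself; the least model of {a :- b} is the empty set, so {} is the unique answer set, while {a} is a model but not a minimal one. If it helps, this is the minimal input you would paste into the online editor (just the rule from the question):
% single rule; b never occurs in any rule head, so b can never be true
a :- b.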

Related

I cannot seem to create a DFA for this language

{w ∈ {a,b}* | w has baba as a substring}
I am confused by this. How can I capture the following inputs, for example?
aaababa
abbbaba
ababbba
ababaaaa
It seems baba must always appear at the start, in the middle, or at the end. Must I decide whether to always put it at the start, the middle, or the end?
Thanks
By the Myhill-Nerode theorem, the states are:
Already matched 'baba' (accepting state)
Immediately after 'bab', and haven't matched 'baba'
Immediately after 'ba', and haven't matched 'baba'
Immediately after 'b', and none of the above
None of the above (start state)
It's essentially the same DFA created by the Knuth-Morris-Pratt search algorithm.
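For reference, a sketch of the transition table for these five states (the labels q0-q4 are mine, matching the list above: q0 = none of the above/start, q1 = after 'b', q2 = after 'ba', q3 = after 'bab', q4 = matched 'baba', accepting and absorbing):
State   on 'a'   on 'b'
q0      q0       q1
q1      q2       q1
q2      q0       q3
q3      q4       q1
q4      q4       q4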

Eiffel: how to use old keyword in ensure contract clause with detached

How do I use the old keyword in the ensure clause of a feature? The current statement doesn't seem to be valid at runtime.
relationships: CHAIN -- any chain

some_feature
    do
        (...)
    ensure
        relationship_added: attached relationships as l_rel implies
            attached old relationships as l_old_rel and then
                l_rel.count = l_old_rel.count
    end
For the whole case, two screenshots demonstrate it (captions: "start of routine execution", "end of routine execution").
There are 2 cases to consider: when the original reference is void, and when it is not:
attached relationships as r implies r.count =
old (if attached relationships as o then o.count + 1 else 1 end)
The intuition behind the assertion is as follows:
If relationships is Void on exit, we do not care about count.
If relationships is attached on exit, it has one more item compared to the old value. The old value may be attached or Void:
if the old value is attached, the new number of items is the old number plus 1;
if the old value is Void, the new number of items is 1.
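Put together, a minimal sketch of how the corrected postcondition could sit inside the original some_feature (the body stays elided, as in the question):

some_feature
        -- Body elided, as in the question.
    do
        -- (...)
    ensure
        relationship_added: attached relationships as r implies r.count =
            old (if attached relationships as o then o.count + 1 else 1 end)
    end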

Power Query M loop table / lookup via a self-join

First of all, I'm new to Power Query, so I'm taking my first steps. But I need to try to deliver something at work soon so I can gain some breathing time to learn.
I have the following table (example):
Orig_Item  Alt_Item
5.7        5.10
79.19      79.60
79.60      79.86
10.10
And I need to create a column that will loop the table and display the final Alt_Item. So the result would be the following:
Orig_Item  Alt_Item  Final_Item
5.7        5.10      5.10
79.19      79.60     79.86
79.60      79.86     79.86
10.10
Many thanks
Actually, this is far too complicated for a first Power Query experience.
If that's what you've got to do, then so be it, but you should be aware that you are starting with a quite difficult task.
Small detail: I would expect the last Final_Item to be 10.10. According to the example, the Final_Item will be null if Alt_Item is null. If that is not correct, well that would be a nice first step for you to adjust the code below accordingly.
You can create a new blank query, copy and paste this code in the Advanced Editor (replacing the default code) and adjust the Source to your table name.
let
    Source = Table.Buffer(Table1),
    AddedFinal_Item =
        Table.AddColumn(
            Source,
            "Final_Item",
            each if [Alt_Item] = null
            then null
            else List.Last(
                List.Generate(
                    () => [Final_Item = [Alt_Item], Continue = true],
                    each [Continue],
                    each [Final_Item =
                              Table.First(
                                  Table.SelectRows(
                                      Source,
                                      (x) => x[Orig_Item] = [Final_Item]),
                                  [Alt_Item = "not found"]
                              )[Alt_Item],
                          Continue = Final_Item <> "not found"],
                    each [Final_Item])))
in
    AddedFinal_Item
This code uses function List.Generate to perform the looping.
For performance reasons, the table should always be buffered in memory (Table.Buffer), before invoking List.Generate.
List.Generate is one of the most complex Power Query functions.
It requires 4 arguments, each of which is a function in itself.
In this case the first argument starts with () and the other 3 with each (it should be clear from the outline above: they are aligned).
Argument 1 defines the initial values: a record with fields Final_Item and Continue.
Argument 2 is the condition to continue: if an item is found.
Argument 3 is the actual transformation in each iteration: the Source table is searched (with Table.SelectRows) for an Orig_Item equal to Alt_Item. This is wrapped in Table.First, which returns the first record (if any is found) and accepts a default value if nothing is found, in this case a record with field Alt_Item and value "not found". From this result the value of record field [Alt_Item] is returned, which is either the value from the first record found, or "not found" from the default value.
If the value is "not found", then Continue becomes false and the iterations will stop.
Argument 4 is the value that will be returned: Final_Item.
List.Generate returns a list of all values from each iteration. Only the last value is required, so List.Generate is wrapped in List.Last.
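If it helps, here is a smaller, self-contained sketch of the same four-argument pattern (unrelated to the table above); it just generates the numbers 1 to 5:
let
    Numbers = List.Generate(
        () => [i = 1],       // argument 1: initial state record
        each [i] <= 5,       // argument 2: condition to keep iterating
        each [i = [i] + 1],  // argument 3: next state record
        each [i])            // argument 4: value returned per iteration
in
    Numbers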
Final remark: actual looping is rarely required in Power Query and I think it should be avoided as much as possible. In this case, however, it is a feasible solution as you don't know in advance how many Alt_Items will be encountered.
An alternative for List.Generate is using a recursive function.
Also List.Accumulate is close to looping, but that has a fixed number of iterations.
This can be solved simply with a self-join, the open question is how many layers of indirection you'll be expected to support.
Assuming just one level of indirection, no duplicates on Orig_Item, the solution is:
let
    Source = #"Input Table",
    SelfJoin1 = Table.NestedJoin( Source, {"Alt_Item"}, Source, {"Orig_Item"}, "_tmp_" ),
    Expand1 = Table.ExpandTableColumn( SelfJoin1, "_tmp_", {"Alt_Item"}, {"_lkp_"} ),
    ChkJoin1 = Table.AddColumn( Expand1, "Final_Item", each (if [_lkp_] = null then [Alt_Item] else [_lkp_]), type number)
in
    ChkJoin1
This is doable with the regular UI, using Merge Queries, then Expand Column and adding a custom column.
If you want to support more than one level of indirection, turn it into a function to be called X times. For data-driven levels of indirection, you wrap the calls in a List.Generate that drops the intermediate tables in a structured column, though that's a much more advanced level of PQ.
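As a rough sketch only (assuming the same column names as above, and a hypothetical function name ResolveOnce), the single-level step could be factored into a function that rewrites Alt_Item, so it can be applied once per extra level of indirection:
let
    // ResolveOnce: replace Alt_Item with the next Alt_Item in the chain, where one exists
    ResolveOnce = (t as table) as table =>
        let
            SelfJoin = Table.NestedJoin( t, {"Alt_Item"}, t, {"Orig_Item"}, "_tmp_" ),
            Expand   = Table.ExpandTableColumn( SelfJoin, "_tmp_", {"Alt_Item"}, {"_lkp_"} ),
            Replaced = Table.AddColumn( Expand, "New_Alt", each if [_lkp_] = null then [Alt_Item] else [_lkp_] ),
            Cleaned  = Table.RemoveColumns( Replaced, {"Alt_Item", "_lkp_"} ),
            Renamed  = Table.RenameColumns( Cleaned, {{"New_Alt", "Alt_Item"}} )
        in
            Renamed,
    Source    = #"Input Table",
    // apply twice to resolve up to two levels of indirection
    TwoLevels = ResolveOnce( ResolveOnce( Source ) )
in
    TwoLevels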

prolog avoiding duplicate predicates

I was wondering whether it is possible to test whether a predicate already exists (with the same information) to then avoid the user being able to input the same information again.
I have already managed to do it for a single predicate:
:- dynamic(test/2).

test(a,b).

top(X,Y) :-
    (   test(X,Y),
        write('Yes'), !
    ;   write('No'), !
    ).
This version works just fine, returning 'Yes' if the information already exists and 'No' if it doesn't.
I was wondering whether it would be possible to do this for multiple predicates, not just for 'test/2'.
I have tried to replace the predicate 'test' with a variable Pred but unfortunately I get a syntax error when I try to compile it.
Here is my attempt:
main(Pred,X,Y) :-
    (   Pred(X,Y),
        write('Yes'), !
    ;   write('No'), !
    ).
Is it even possible to do something like this and if it is how would it be possible?
Btw I am using GNU Prolog if it helps.
Thank you very much for your help :D !!
You want the call/N family (here call/3), to call a goal built at runtime with extra arguments. In your case, it would be call(Pred,X,Y):
main(Pred,X,Y) :-
    (   call(Pred,X,Y),
        write('Yes'), !
    ;
        write('No'), !
    ).
Do note that Pred/2 must resolve to an actual predicate at runtime, and you will need to build a different rule for each number of arguments.
@Tomas-By's answer, using (=..)/2, lets you create a single rule with a list of args, but with the same caveats regarding predicates existing, albeit with an extra line:
main(Pred, L) :-    % where L is a list of args [X,Y|...]
    Term =.. [Pred | L],
    (   Term,
        write('Yes'), !
    ;
        write('No'), !
    ).
And, as pointed out in the comments by @lurker, in either instance, using (->)/2:
(call(Pred,X,Y) -> write('Yes') ; write('No'))
or
(Term -> write('Yes') ; write('No'))
may be preferable as the destruction of choice points is limited to the if->then;else structure.
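Putting the pieces together, a sketch of a generic "add only if not already known" predicate, using call/3 to test and (=..)/2 plus assertz/1 to add (the name add_fact/3 is just an example, not from the question):

% Assumes the predicate being checked (e.g. test/2) has been declared dynamic.
% add_fact(+Pred, +X, +Y): assert Pred(X,Y) only if it is not already known.
add_fact(Pred, X, Y) :-
    (   call(Pred, X, Y)
    ->  write('Yes')              % already known, nothing added
    ;   Goal =.. [Pred, X, Y],
        assertz(Goal),            % not known yet, add it now
        write('No')
    ).

For example, add_fact(test, a, b) would print Yes (the fact already exists), while add_fact(test, b, c) would print No and add test(b,c).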
There is an operator =.. ("univ") for constructing terms, as in:
Term =.. [Op,V1,V2]
(=..)/2 is part of the ISO standard, so it is also available in GNU Prolog.
Using Sicstus:
xxx(1,2).
check(Pred,X,Y) :-
Term =.. [Pred,X,Y],
( Term ->
write('Yes')
; write('No') ).
and after loading the file:
| ?- check(xxx,1,2).
Yes

Reduced Survey Frequency - Salesforce Workflow

Hoping you can help me review the logic below for errors. I am looking to create a workflow that will send a survey out to end users on a reduced frequency. Basically, it will check the Account object of the Case for a field, 'Reduced Survey Frequency', which contains a number, and it will not send a survey until that number of days has passed since the date set in the Contact field 'Last Survey Date'. Please review the code and let me know any recommended changes!
AND(
    OR(ISPICKVAL(Status,"Closed"), ISPICKVAL(Status,"PM Sent")),
    OR(CONTAINS(RecordType.Name,"Portal Case"),
       CONTAINS(RecordType.Name,"Standard Case"),
       CONTAINS(RecordType.Name,"Portal Closed"),
       CONTAINS(RecordType.Name,"Standard Closed")),
    NOT( Don_t_sent_survey__c ),
    OR((TODAY() - Contact.Last_Survey_Date__c) >= Account.Reduced_Survey_Frequency__c,
       Account.Reduced_Survey_Frequency__c == 0,
       ISBLANK(Account.Reduced_Survey_Frequency__c),
       ISBLANK(Contact.Last_Survey_Date__c)
    )
)
Thanks,
Brian H.
Personally I prefer the syntax where && and || are used instead of the AND() and OR() functions. It just reads a bit nicer to me: no need to trace so many commas or keep track of indentation in the more complex logic... But if you're more used to this Excel-like flow, go for it. In the end it has to be readable for YOU.
Also I'd consider reordering this a bit - simple checks, most likely to fail first.
The first part - irrelevant to your question
Don't use RecordType.Name because these Names can be translated to say French and it will screw your logic up for users who will select non-English as their preferred language. Use RecordType.DeveloperName, it's safer.
CONTAINS - do you really have so many record types that share this part in their name? What's wrong with normal = comparison? You could check if the formula would be more readable with CASE() statement. Or maybe flip the logic if there are say 6 rec types and you've explicitly listed 4 (this might have to be reviewed though when you add new rec. type). If you find yourself copy-pasting this block of 4 checks frequently - consider making a helper formula field with it...
The second part
The ISBLANK checks could be skipped if you properly use the "treat nulls as blanks / as zeroes" setting at the bottom of the formula editor. Because you're making a check like
OR(...,
Account.Reduced_Survey_Frequency__c==0,
ISBLANK(Account.Reduced_Survey_Frequency__c),
...
)
which is essentially what this setting was designed for. I'd flip it to "treat nulls as zeroes" (but that means the ISBLANK check will never "fire"). If you're not comfortable with that, you can also "safely compare or subtract" by using
BLANKVALUE(Account.Reduced_Survey_Frequency__c,0)
which will have a similar "treat null as zero" effect, but only in this one place.
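For example (a sketch only), the date check from your formula would then read:
TODAY() - Contact.Last_Survey_Date__c >= BLANKVALUE(Account.Reduced_Survey_Frequency__c, 0)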
So... I'd end up with something like this:
(ISPICKVAL(Status,'Closed') || ISPICKVAL(Status, 'PM Sent')) &&
(RecordType.DeveloperName = 'Portal_Case' ||
RecordType.DeveloperName = 'Standard_Case' ||
RecordType.DeveloperName = 'Portal_Closed' ||
RecordType.DeveloperName = 'Standard_Closed'
) &&
NOT(Don_t_sent_survey__c) &&
(Contact.Last_Survey_Date__c + Account.Reduced_Survey_Frequency__c < TODAY())
No promises though ;)
You can easily test it by enabling debug logs. You'll see the workflow formula there, together with the values used to evaluate it.
Another option is to make a temporary formula field with the same logic and observe (in a report?) where it goes true/false, for a mass spot check.
