I am trying to figure out a question, but I do not know how to solve it; most of the terms in the question are unfamiliar to me. Here is the question:
Three transactions T1, T2 and T3 and a schedule S1 are given below. Please draw the precedence (serializability) graph of S1 and specify whether the schedule S1 is serializable. If possible, write at least one equivalent serial schedule. r ==> read, w ==> write
T1: r1(X);r1(Z);w1(X);
T2: r2(Z);r2(Y);w2(Z);w2(Y);
T3: r3(X);r3(Y);w3(Y);
S1: r1(X);r2(Z);r1(Z);r3(Y);r3(Y);w1(X);w3(Y);r2(Y);w2(Z);w2(Y);
I do not have any idea how to solve this question; I need a detailed description. Which resources should I look at? Thanks in advance.
There are various ways to test for serializability. The objective of serializability is to find nonserial schedules that allow transactions to execute concurrently without interfering with one another.
First we do a conflict-equivalence test. This will tell us whether the schedule is conflict-serializable.
To do this, we must define some rules (R = read, W = write; i and j denote transactions).
We cannot swap the order of two actions if they form one of the following pairs:
1. Ri(x), Wi(y) - conflict (two operations of the same transaction are never reordered)
2. Wi(x), Wj(x) - conflict (two writes of the same item by different transactions)
3. Ri(x), Wj(x) - conflict (a read and a later write of the same item by different transactions)
4. Wi(x), Rj(x) - conflict (a write and a later read of the same item by different transactions)
But these are perfectly valid:
Ri(x), Rj(y) - No conflict (two reads never conflict)
Ri(x), Wj(y) - No conflict (working on different items)
Wi(x), Rj(y) - No conflict (same as above)
Wi(x), Wj(y) - No conflict (same as above)
So, applying the rules above, we can derive the equivalent ordering step by step (this was done in Excel for simplicity).
From the result, we can clearly see that we managed to derive a serial relation (i.e. the schedule above is conflict-equivalent to the serial schedule S(T1, T3, T2)).
Now that we have that candidate serial schedule, we do the conflict-serializability test:
The simplest way to do this is to use the same rules as in the conflict-equivalence test and look for any pairs of operations that conflict.
r1(x); r2(z); r1(z); r3(y); r3(y); w1(x); w3(y); r2(y); w2(z); w2(y);
----------------------------------------------------------------------
r1(z) w2(z)
r3(y) w2(y)
w3(y) r2(y)
w3(y) w2(y)
Using the rules above, we end up with the table above (e.g. we know that reading Z in one transaction and then writing Z in another transaction causes a conflict; see rule 3).
Given the table, reading from left to right, we can create a precedence graph with these edges:
T1 -> T2
T3 -> T2 (only 1 arrow per combination)
Thus we end up with a graph whose nodes are T1, T2 and T3 and whose only edges are the two listed above.
From the graph, since it is acyclic (no cycle), we can conclude that the schedule is conflict-serializable. Furthermore, it is also view-serializable, since every schedule that is conflict-serializable is also view-serializable. We could run the view-serializability test to prove this, but it is rather involved.
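If you want to double-check the hand-derived result mechanically, here is a small sketch in Python (not part of the original exercise): it extracts the conflicting pairs, builds the precedence graph and topologically sorts it to recover a serial order.

# Sketch: extract the conflicting pairs of S1, build the precedence graph
# and topologically sort it (Kahn's algorithm). An acyclic graph means the
# schedule is conflict-serializable, and the sort order is a serial schedule.

# S1 as (operation, transaction, item) triples.
S1 = [("r", 1, "X"), ("r", 2, "Z"), ("r", 1, "Z"), ("r", 3, "Y"), ("r", 3, "Y"),
      ("w", 1, "X"), ("w", 3, "Y"), ("r", 2, "Y"), ("w", 2, "Z"), ("w", 2, "Y")]

def conflicts(op1, op2):
    """Different transactions, same item, at least one write (rules 2-4 above)."""
    (a, t1, x), (b, t2, y) = op1, op2
    return t1 != t2 and x == y and "w" in (a, b)

edges = set()
for i, op1 in enumerate(S1):
    for op2 in S1[i + 1:]:
        if conflicts(op1, op2):
            edges.add((op1[1], op2[1]))      # edge Ti -> Tj

print("precedence edges:", sorted(edges))    # [(1, 2), (3, 2)]

# Kahn's algorithm: repeatedly output a node that has no remaining incoming edges.
nodes = {1, 2, 3}
indegree = {n: sum(1 for (_, v) in edges if v == n) for n in nodes}
order, ready = [], sorted(n for n in nodes if indegree[n] == 0)
while ready:
    n = ready.pop(0)
    order.append(n)
    for (u, v) in edges:
        if u == n:
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)

if len(order) == len(nodes):
    print("conflict-serializable; serial order:", order)   # [1, 3, 2], i.e. T1, T3, T2
else:
    print("cycle found: not conflict-serializable")

Running it prints the two edges above and the serial order T1, T3, T2.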
Regarding sources to learn this material, I recommend:
"Database Systems: A practical Approach To design, implementation and management: International Edition" by Thomas Connolly; Carolyn Begg - (It is rather expensive so I suggest looking for a cheaper, pdf copy)
Good luck!
Update
I've developed a little tool which will do all of the above for you (including the graph). It's pretty simple to use, and I've also added some examples.
I have a problem with finding "correct" associations within production data.
The data looks like this:
A;B;C;D;E;F;G
1;0;1;0;0;0;0
0;1;0;0;0;0;0
0;0;0;1;0;0;0
0;0;1;0;1;0;0
1;0;0;0;0;0;0
0;0;0;0;0;1;0
0;0;0;0;0;0;1
1;0;1;0;0;0;0
(Of course I have a lot more steps and rows)
where A, B, C, etc. are production steps. 0 means that a worker did not perform this production step and 1 means that this step was performed by a worker. For example, the first row - 1;0;1;0;0;0;0 - means that steps A & C were performed at the same time by a worker. And the second row - 0;1;0;0;0;0;0 - means that (perhaps another) worker performed only production step B.
So it happens that some of the production steps are usually performed simultaneously by the same worker, just like steps A & C in the example above (2 out of 3 times they occur together). In order to find which steps tend to be performed together, I applied the Apriori algorithm.
I hoped to receive answers like "if there is a 1 in column A, it is likely that a 1 will appear in column C". But instead, the Apriori algorithm found these "cool" rules which basically say that there are a lot of 0s in the table. The rules found were like "if there is a 0 in columns A and G, it is likely that there is a 0 in column E" - thanks, Sherlock.
I need the algorithm to focus on rules connected to where the 1s are in the table, not the 0s. Basically, any rule that looks at 0s can be ignored. I just want rules that look at 1s, because I want to know which production steps tend to be performed together; I don't care which production steps are not performed together (the 0s), because obviously the majority of steps are not performed simultaneously.
Does anybody have some idea how to find associations between 1s instead of 0s?
I use Weka software to do the data mining.
Apriori has no notion of what the labels represent; they are just strings.
Have you tried the -Z option, which treats the first label in each attribute as missing?
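If you would like a second opinion outside Weka, here is a minimal sketch in Python (it assumes the pandas and mlxtend libraries are installed; the support/confidence thresholds are made up for the toy data). mlxtend's apriori treats only True cells as items, so the resulting rules can only talk about steps that were performed:

# Sketch: mine association rules in which only the 1s (performed steps) are items,
# so rules about 0s cannot appear by construction.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

rows = [
    [1, 0, 1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [1, 0, 1, 0, 0, 0, 0],
]
df = pd.DataFrame(rows, columns=list("ABCDEFG")).astype(bool)

frequent = apriori(df, min_support=0.2, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])

On the example rows this yields rules like {A} -> {C} and {C} -> {A}, which is the kind of statement you are after.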
We are using GraphDB 8.4.0 as a triple store for a large data integration project. We want to make use of the reasoning capabilities, and are trying to decide between using HORST (pD*) and OWL2-RL.
However, the literature describing these is quite dense. I think I understand that OWL2-RL is more "powerful", but I am unsure how so. Can anyone provide some examples of the differences between the two reasoners? What kinds of statements are inferred by OWL2-RL, but not HORST (and vice versa)?
Inside GraphDB there is a single rule engine, which supports different reasoning profiles depending on the rule-set that was selected. The predefined rule-sets are part of the distribution - look at the PIE files in the folder configs/rules. One can also take one of the existing profiles and tailor it to their needs (e.g. remove a rule which is not necessary).
The fundamental difference between OWL 2 RL and what we call OWL-Horst (pD*) is that OWL 2 RL pushes the limits of which OWL constructs can be supported with this type of entailment rules. OWL-Horst is limited to RDFS (subClassOf, subPropertyOf, domain and range) plus what was popular in the past as OWL Lite: sameAs, equivalentClass, equivalentProperty, SymmetricProperty, TransitiveProperty, inverseOf, FunctionalProperty, InverseFunctionalProperty. There is also partial support for: intersectionOf, someValuesFrom, hasValue, allValuesFrom.
What OWL 2 RL adds on top is support for AsymmetricProperty, IrreflexiveProperty, propertyChainAxiom, AllDisjointProperties, hasKey, unionOf, complementOf, oneOf, differentFrom, AllDisjointClasses and all the property cardinality primitives. It also adds more complete support for intersectionOf, someValuesFrom, hasValue, allValuesFrom. Be aware that there are limitations to the inference supported by OWL 2 RL for some of these constructs, e.g. which types of inferences should or should not be done for specific class expressions (OWL restrictions). If you choose OWL 2 RL, check Tables 5-8 in the spec, https://www.w3.org/TR/owl2-profiles/#OWL_2_RL. GraphDB's owl-2-rl rule-set is fully compliant with it. GraphDB is the only major triplestore with full OWL 2 RL compliance - see this table (https://www.w3.org/2001/sw/wiki/OWL/Implementations), where it appears under its former name, OWLIM.
My suggestion would be to go with OWL-Horst for a large dataset, as reasoning with OWL 2 RL could be much slower. It depends on your ontology and data patterns, but as a rule of thumb you can expect loading/updates to be about 2 times slower with OWL 2 RL, even if you don't make extensive use of its "expensive" primitives (e.g. property chains). See the difference between loading speeds with RDFS+ and OWL 2 RL benchmarked here: http://graphdb.ontotext.com/documentation/standard/benchmark.html
Finally, I would recommend using the "optimized" versions of the predefined rule-sets. These versions exclude some RDFS reasoning rules which are not useful for most applications but add substantial reasoning overhead, e.g. the one that infers that the subject, the predicate and the object of a statement are instances of rdfs:Resource:
Id: rdf1_rdfs4a_4b
x a y
-------------------------------
x <rdf:type> <rdfs:Resource>
a <rdf:type> <rdfs:Resource>
y <rdf:type> <rdfs:Resource>
If you want to stay 100% compliant with the W3C spec, you should stay with the non-optimized versions.
If you need further assistance, please write to support@ontotext.com.
In addition to what Atanas (our CEO) wrote and your excellent example, http://graphdb.ontotext.com/documentation/enterprise/rules-optimisations.html provides some ideas on how to optimize your rule-sets to make them faster.
Two of the ideas are:
ptop:transitiveOver is faster than owl:TransitiveProperty: quadratic vs cubic complexity over the length of transitive chains
ptop:PropChain (a 2-place chain) is faster than general owl:propertyChainAxiom (n-place chain) because it doesn't need to unroll the rdf:List underlying the representation of owl:propertyChainAxiom.
Under some conditions you can translate the standard OWL constructs to these custom constructs, to have both standards compliance and faster speed:
use the rule TransitiveUsingStep if every TransitiveProperty p (e.g. skos:broaderTransitive) is defined over a step property s (e.g. skos:broader) and you don't insert p directly;
if you use only 2-step owl:propertyChainAxiom, then translate each of them to the custom construct using the following rule, and infer using the rule ptop_PropChain:
Id: ptop_PropChain_from_propertyChainAxiom
q <owl:propertyChainAxiom> l1
l1 <rdf:first> p1
l1 <rdf:rest> l2
l2 <rdf:first> p2
l2 <rdf:rest> <rdf:nil>
----------------------
t <ptop:premise1> p1
t <ptop:premise2> p2
t <ptop:conclusion> q
t <rdf:type> <ptop:PropChain>
http://rawgit2.com/VladimirAlexiev/my/master/pubs/extending-owl2/index.html describes further ideas for extended property constructs, and has illustrations.
Let us know if we can help further.
After thinking about this for a bit, I came up with a concrete example. The Oral Health and Disease Ontology (http://purl.obolibrary.org/obo/ohd.owl) contains three interrelated entities:
a restored tooth
a restored tooth surface that is part of the restored tooth
a tooth restoration procedure that creates the restored tooth surface (e.g., when you have a filling placed in your tooth)
The axioms that define these entities are (using pseudo Manchester syntax):
restored tooth equivalent to Tooth and has part some dental restoration material
restored tooth surface subclass of part of restored tooth
filling procedure has specified output some restored tooth surface
The has specified output relation is a subproperty of the has participant relation, and the has participant relation contains the property chain:
has_specified_input o 'is part of'
The reason for this property chain is for the reasoner to infer that if a tooth surface participates in a procedure, then the whole tooth that the surface is part of also participates in the procedure, and, furthermore, the patient that the tooth is part of participates in the procedure as well (due to the transitivity of part of).
As a concrete example, let us define some individuals (using pseudo-RDF):
:patient#1 a :patient .
:tooth#1 a :tooth; :part-of :patient#1 .
:restored-occlusal#1 a :restored-occlusal-surface; :part-of :tooth#1 .
:procedure#1 :has-specified-output :restored-occlusal#1 .
Suppose you want to query for all restored teeth:
select ?tooth where {?tooth a :restored-tooth}
RDFS-Plus reasoning will not find any restored teeth because it doesn't reason over equivalent classes. But OWL-Horst and OWL2-RL will find such teeth.
Now, suppose you want to find all patients that underwent (i.e. participated in) a tooth restoration procedure:
select ?patient where {
?patient a patient: .
?proc a tooth_restoration_procedure:; has_participant: ?patient .
}
Again, RDFS-Plus will not find any patients, and neither will OWL-Horst, because this inference requires reasoning over the has participant property chain. OWL2-RL is needed in order to make this inference.
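For completeness, here is a minimal sketch of how one might run the patient query against a GraphDB repository's SPARQL endpoint from Python (it assumes the SPARQLWrapper library; the repository name and the prefix IRI below are placeholders, not the real OHD identifiers):

# Sketch: send the patient query to a GraphDB repository's SPARQL endpoint.
# The repository name ("ohd") and the prefix IRI are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:7200/repositories/ohd")
sparql.setQuery("""
    PREFIX ohd: <http://example.org/ohd#>   # placeholder, replace with the real namespace
    SELECT ?patient WHERE {
        ?patient a ohd:patient .
        ?proc a ohd:tooth_restoration_procedure ;
              ohd:has_participant ?patient .
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["patient"]["value"])

Whether any patients come back then depends, as described above, on the rule-set the repository was created with.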
I hope this example is helpful for the interested reader :).
I welcome comments to improve the example. Please keep any comments within the scope of trying to make the example clearer. Its purpose is to give insight into what these different levels of reasoning do and not to give advice about which reasoner is most appropriate.
[exam question] Given the schedule:
S: r3(X); r1(Y); w3(Z); w3(X); w1(Z); w1(X); r2(X); w2(Z); w2(Y); r1(X);
and also
T1: r1(Y); w1(Z); w1(X); r1(X);
T2: r2(X); w2(Z); w2(Y);
T3: r3(X); w3(Z); w3(X);
Indicate after each operation which locks are held. I really don't understand it and I need your help. Thank you.
This is the given answer (including the timestamp ordering for this question), but I have no idea how it's done.
Note: this should be a comment, but I don't have enough reputation for that.
I don't think you should be asking for help with homework, though I can point out that if S is the sequence of operations (it looks like it is), then the answer is pretty straightforward.
EDIT:
In Chapter 16, Section 4, Figure 16.15 of Distributed Systems: Concepts and Design, 5th edition, you have the lock compatibility table.
Also, Figure 16.16 has everything you need to know to execute the 2PL.
As for the commits (the c1, c2, c3), you just need to look at the last operation of each transaction.
Thus, first build the table with the operations and the commit events and then, for each object (X, Y, Z) determine who has the locks by using the references I just gave you.
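If it helps to see the bookkeeping done mechanically, here is a small sketch in Python (not part of the exercise). It assumes strict two-phase locking: shared locks for reads, exclusive locks for writes, and all of a transaction's locks released at its commit point (right after its last operation). Where a request conflicts with a held lock, the sketch only reports the wait instead of reordering the schedule; the answer key for your exercise may use a different locking variant.

# Sketch: strict-2PL lock bookkeeping for the schedule S.
SCHEDULE = [("r", 3, "X"), ("r", 1, "Y"), ("w", 3, "Z"), ("w", 3, "X"),
            ("w", 1, "Z"), ("w", 1, "X"), ("r", 2, "X"), ("w", 2, "Z"),
            ("w", 2, "Y"), ("r", 1, "X")]
COMMIT_AFTER = {3: 3, 2: 8, 1: 9}        # index of each transaction's last operation

def compatible(held, requested):
    """Standard S/X compatibility: only two shared locks are compatible."""
    return held == "S" and requested == "S"

locks = {}                                # item -> {transaction: lock mode}
for i, (op, t, item) in enumerate(SCHEDULE):
    mode = "S" if op == "r" else "X"
    held = locks.setdefault(item, {})
    blockers = [tt for tt, m in held.items() if tt != t and not compatible(m, mode)]
    if blockers:
        waiting_on = ", ".join(f"T{b}" for b in blockers)
        print(f"{op}{t}({item}): T{t} would have to wait for {waiting_on}")
    else:
        if held.get(t) != "X":            # acquire, or upgrade S -> X; never downgrade
            held[t] = mode
        print(f"{op}{t}({item}): locks now {locks}")
    if i == COMMIT_AFTER[t]:              # commit point: release everything T t holds
        for h in locks.values():
            h.pop(t, None)
        print(f"c{t}: T{t} releases all its locks")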
I'm not sure what exactly I'm trying to ask. I want to be able to make some code that can easily take an initial and final state and some rules, and determine paths/choices to get there.
So think, for example, of a game like Starcraft. To build a factory I need to have a barracks and a command center already built. So if I have nothing and I want a factory I might say -> Command Center -> Barracks -> Factory. Each thing takes time and resources, and that should be noted and considered in the path. If I want my factory at 5 minutes there are fewer options than if I want it at 10.
Also, the engine should be able to calculate available resources and utilize them effectively. Those three buildings might cost 600 total minerals, but the engine should plan the Command Center when it would have 200 (or whatever it costs).
This would ultimately have requirements similar to 10 marines @ 5 minutes, infantry weapons upgrade at 6:30, 30 marines at 10 minutes, Factory @ 11, etc.
So, how do I go about doing something like this? My first thought was to use some procedural language and make all the decisions from the ground up. I could simulate the system, branching and making different choices. Ultimately, some choices are quickly going to make it impossible to reach later goals. (If I build 20 Supply Depots I'm probably not going to make that factory on time.)
So then I thought, weren't functional languages designed for this? I tried to write some Prolog, but I've been having trouble with things like time and distance calculations. And I'm not sure of the best way to return the "plan".
I was thinking I could write:
depends_on(factory, barracks).
depends_on(barracks, command_center).
builds_from(marine, barracks).
build_time(command_center, 60).
build_time(barracks, 45).
build_time(factory, 30).
minerals(command_center, 400).
...
build(X) :-
depends_on(X, Y),
build_time(X, T),
minerals(X, M),
...
Here's where I get confused. I'm not sure how to construct this predicate and a query to get anything even close to what I want. I would have to somehow account for the rate at which minerals are gathered during the time spent building, and for other possible paths with extra resources. If I only want 1 marine in 10 minutes, I would want the engine to generate lots of plans, because there are lots of ways to end up with 1 marine at 10 minutes (maybe cut it off after so many; I'm not sure how you do that in Prolog).
I'm looking for advice on how to continue down this path, or advice about other options. I haven't been able to find anything more useful than Towers of Hanoi and ancestry examples for AI, so even some good articles explaining how to use Prolog to DO REAL THINGS would be amazing. And if I somehow can get these rules set up in a useful way, how do I get the "plans" Prolog came up with (ways to solve the query) other than writing to stdout like all the Towers of Hanoi examples do? Or is that the preferred way?
My other question: my main code is in Ruby (and potentially other languages), and the options for communicating with Prolog are calling my Prolog program from within Ruby, accessing a virtual file system from within Prolog, or some kind of database structure (unlikely). I'm using SWI-Prolog at the moment. Would I be better off doing this procedurally in Ruby, or would constructing this in a language like Prolog or Haskell be worth the extra effort of integrating it?
I'm sorry if this is unclear, I appreciate any attempt to help, and I'll re-word things that are unclear.
Your question is typical and very common for users of procedural languages who first try Prolog. It is very easy to solve: you need to think in terms of relations between successive states of your world. A state of your world consists, for example, of the time elapsed, the minerals available, the things you have already built etc. Such a state can easily be represented with a Prolog term, and could look for example like time_minerals_buildings(10, 10000, [barracks,factory]). Given such a state, you need to describe what the state's possible successor states look like. For example:
state_successor(State0, State) :-
State0 = time_minerals_buildings(Time0, Minerals0, Buildings0),
Time is Time0 + 1,
can_build_new_building(Buildings0, Building),
building_minerals(Building, MB),
Minerals is Minerals0 - MB,
Minerals >= 0,
State = time_minerals_buildings(Time, Minerals, [Building|Buildings0]).
I am using the explicit naming convention (State0 -> State) to make clear that we are talking about successive states. You can of course also pull the unifications into the clause head. The example code is purely hypothetical and could look rather different in your final application. In this case, I am describing that the new state's elapsed time is the old state's time + 1, that the new amount of minerals decreases by the amount required to build Building, and that I have a predicate can_build_new_building(Bs, B), which is true when a new building B can be built assuming that the buildings given in Bs are already built. I assume it is a non-deterministic predicate in general, and will yield all possible answers (= new buildings that can be built) on backtracking, and I leave it as an exercise for you to define such a predicate.
Given such a predicate state_successor/2, which relates a state of the world to its direct possible successors, you can easily define a path of states that lead to a desired final state. In its simplest form, it will look similar to the following DCG that describes a list of successive states:
states(State0) -->
( { final_state(State0) } -> []
; [State0],
{ state_successor(State0, State1) },
states(State1)
).
You can then use for example iterative deepening to search for solutions:
?- initial_state(S0), length(Path, _), phrase(states(S0), Path).
Also, you can keep track of states you already considered and avoid re-exploring them etc.
The reason you get confused with the example code you posted is essentially that build/1 does not have enough arguments to describe what you want. You need at least two arguments: One is the current state of the world, and the other is a possible successor to this given state. Given such a relation, everything else you need can be described easily. I hope this answers your question.
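Since the question also mentions doing this procedurally in Ruby: the same state/successor idea carries over directly to any procedural language. Here is a small sketch in Python (the structure translates one-to-one to Ruby); the build times, costs and income rate are made up for illustration:

# Sketch: a state, a successor function, and a breadth-first search for a plan
# that reaches a goal building before a deadline. All numbers are invented.
from collections import deque

BUILD = {                                  # building -> (build time, mineral cost, prerequisite)
    "command_center": (60, 400, None),
    "barracks":       (45, 150, "command_center"),
    "factory":        (30, 200, "barracks"),
}
INCOME_PER_SECOND = 8                      # toy mineral income rate

def successors(state):
    time, minerals, built = state
    for b, (t, cost, prereq) in BUILD.items():
        if b not in built and (prereq is None or prereq in built):
            new_minerals = minerals + t * INCOME_PER_SECOND - cost
            if new_minerals >= 0:
                yield (time + t, new_minerals, built | {b})

def plan(goal, deadline):
    start = (0, 50, frozenset())           # elapsed time, minerals, buildings
    queue = deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if goal in state[2] and state[0] <= deadline:
            return path                    # the list of successive states
        for nxt in successors(state):
            if nxt[0] <= deadline:
                queue.append((nxt, path + [nxt]))
    return None

print(plan("factory", deadline=300))

The ingredients are exactly the ones from the Prolog version above: a state, a successor relation, and a search over paths of states.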
Caveat: my Prolog is rusty and shallow, so this may be off base
Perhaps a 'difference engine' approach would be appropriate:
given a goal like 'build factory',
backwards-chaining relations would check for has-barracks and tell you first to build-barracks,
which would check for has-command-center and tell you to build-command-center,
and so on,
accumulating a plan (and costs) along the way
If this is practical, it may be more flexible than a state-based approach... or it may be the same thing wearing a different t-shirt!
I am reading a lot about logic programming - ASP (Answer Set Programming) is one example of this. Logic programs are usually of the form:
[Program 1]
Rule1: a <- a1, a2, ..., am, not am+1, ..., not an;
Rule2: ...
This set of rules is called the logic program, and the so-called model is the result of such a computation - some kind of assignment of True/False values to each of a1, a2, ...
There is a lot of research going on - e.g. how such programs (rules) can be integrated with (semantic web) ontologies to build knowledge bases that contain both rules and ontologies (some kind of constraints/behaviour and data); there is also a lot of research about ASP itself - parallel extensions, extensions for probabilistic logic, for temporal logic and so on.
My question is: is there some kind of research, and maybe some proof-of-concept projects, where this analysis is extended from Boolean variables to variables with integer and maybe even float domains? Currently I have not found any research that could address programs like the following:
[Program 2]
Rule1 a1:=5 <- a2=5, a3=7, a4<8, ...
Rule2 ...
...
[the final assignment of values to a1, a2, etc., is the solution of this program]
Currently - as I understand it - if one would like to perform some kind of analysis on Program 2 (e.g. to find out whether the program is correct in some sense - whether it satisfies some properties, whether it terminates, which domains are allowed without violating some properties, and so on), then he or she must restate Program 2 in terms of Program 1 and then proceed in a way which seems to be completely unexplored - to my knowledge (and I don't believe it is really unexplored; I simply don't know the sources or trends). There is constraint logic programming, which allows statements with inequalities in Program 1, but it is too focused on Boolean variables as well. Actually, Program 2 is of a kind that is fairly common in business rules systems, which was the cause of my interest in logic programming.
So - my question has some history. My practical experience has led me to appreciate business rules systems/engines, especially the JBoss project Drools, and it was my intention to do some research on the theory underlying so-called production rule systems (I was and still am planning to do my thesis about them - if I can spot what can be done here), but I can say that there is little to go on. After going over the literature (e.g. http://www.computer.org/csdl/trans/tk/2010/11/index.html was an excellent IEEE TKDE special issue with some articles about them, one of which was written by the Drools lead), one can see that there are technical improvements to the decades-old Rete algorithm, but there is no theory of Drools or other production rule systems that could help with formal analysis of them. So the other question is: is there a theory of production rule systems (for rule engines like Drools, Jess, CLIPS and so on), is there a practical need for such a theory, and what practical issues of using Drools and other systems could be addressed by such a theory?
P.S. I know all of these are questions that should be directed to a thesis advisor, but my current position is that there is (to my knowledge) no one in the department where I am enrolled who could answer them, so I am reading journals and also conference proceedings (there are nice conference series in Lecture Notes in Computer Science - RuleML and RR)...
Thanks for any hint in advance!
In a sense, the Boolean systems already do what you suggest.
To ensure A=5 is part of your solution, consider these rules (I forget my ASP syntax, so bear with me):
integer(1..100).            % the integers 1 to 100 exist
1 { a(X) : integer(X) } 1.  % there is exactly one a(X) that is true, where X is an integer
a(5).                       % a(5) is true
and I think your clause would require:
integer(1..100).            % the integers 1 to 100 exist
1 { a(X) : integer(X) } 1.  % a1 can take only one value and must take a value
1 { b(X) : integer(X) } 1.  % a2 likewise
1 { c(X) : integer(X) } 1.  % a3 likewise
1 { d(X) : integer(X) } 1.  % a4 likewise
a(5) :- b(5), c(7), d(8).   % a2=5, a3=7, a4=8 ==> a1=5
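If you want to actually run an encoding like this and read off the assignment, here is a minimal sketch using the clingo Python API (it assumes clingo 5.x with its Python module is installed; the facts b(5), c(7), d(8) are added only so that the demo has a single intended answer set):

# Sketch: ground and solve the encoding above with clingo's Python API.
import clingo

PROGRAM = """
integer(1..100).
1 { a(X) : integer(X) } 1.
1 { b(X) : integer(X) } 1.
1 { c(X) : integer(X) } 1.
1 { d(X) : integer(X) } 1.
b(5). c(7). d(8).              % demo facts
a(5) :- b(5), c(7), d(8).      % the rule from the question
#show a/1. #show b/1. #show c/1. #show d/1.
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print("Answer set:", model))

This prints a single answer set containing a(5), b(5), c(7) and d(8), i.e. the assignment a1=5, a2=5, a3=7, a4=8.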
I hope I've understood the question correctly.
Recent versions of Clojure core.logic (since 0.8) include exactly this kind of support, based on cKanren
See an example here: https://gist.github.com/4229449