Suppose stream B and stream C are child streams of stream A, seeded from the same baseline of A. After a period of parallel development in both B and C, B needs to obtain the code from the latest baseline bl_C_5 of C. Is there any functional (merging of elements) or non-functional (performance etc.) difference between the following operations?
B rebases from baseline bl_C_5 of C
C delivers baseline bl_C_5 to B
Simple:
"B rebases from baseline bl_C_5 of C" is impossible: a rebase can only merge a baseline produced on the parent stream A, not on C.
You could deliver C to A, put a baseline on A, and rebase B onto that baseline.
The difference with delivering C directly to B (which is possible) would be that the rebase would also bring in any other changes already delivered to A.
If no other merge (deliver/rebase) has taken place on A, delivering C to A and then rebasing B gives the same result as delivering directly to B. The only difference is that all views on A would then see C's contributions.
I'm currently working on transforming logical queries for database systems from DNF into CNF, focused on queries of a form similar to
(a and b and c and d and e1) or (a and b and c and d and e2) or (a and b and c and d and e3),
into
a and b and c and d and (e1 or e2 or e3)
I expected there to already be an algorithm for that, but I am currently working with z3, and the only way I found to produce a CNF was the Tseytin transformation or similar, which introduces extra variables; these are a problem, as they can't be used in queries properly.
Using z3 for this purpose is probably not the best option. While there's an internal converter, it's tuned to be used in conjunction with the rest of the solver. If you don't have a need to find a satisfying model, you shouldn't be using a SAT/SMT solver.
Conversion from DNF to CNF can produce exponentially large formulas, which is why most practical applications use the Tseytin transformation to avoid this blow-up; it adds extra variables, however. If you don't want that, you can use the Quine-McCluskey algorithm, which itself has worst-case exponential behavior. But if your formulas are small enough to start with, it may not matter.
You can code this algorithm yourself, or, if you're open to using SymPy, there's an existing implementation (with optional simplification). See https://docs.sympy.org/latest/modules/logic.html?highlight=to_cnf#sympy.logic.boolalg.to_cnf
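For the example formula in the question, a minimal sketch of the SymPy route (variable names chosen to match the question) could look like this:

from sympy import symbols
from sympy.logic.boolalg import to_cnf

a, b, c, d, e1, e2, e3 = symbols("a b c d e1 e2 e3")
dnf = (a & b & c & d & e1) | (a & b & c & d & e2) | (a & b & c & d & e3)

# simplify=True runs SymPy's Quine-McCluskey-based minimiser;
# the expected result is: a & b & c & d & (e1 | e2 | e3)
print(to_cnf(dnf, simplify=True))

Without simplify=True, to_cnf applies plain distribution, which is where the exponential blow-up mentioned above comes from.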
Each atomic object has its own associated modification order, which is a total order of modifications made to that object. If, from some thread's point of view, modification A of some atomic M happens-before modification B of the same atomic M, then in the modification order of M, A occurs before B.
Note that although each atomic object has its own modification order, it is not a total order; different threads may observe modifications to different atomic objects in different orders.
Aren't the two bold statements contradictory? I found them on https://en.cppreference.com/w/c/language/atomic and was wondering what exactly is going on here - is it a total order or not? And what exactly is guaranteed, and what isn't?
This is indeed a bad choice of words by cppreference. The important sentence is in fact the last one: "different threads may observe modifications to different atomic objects in different orders".
So if atomic object 1 has the totally ordered sequence of modifications A B C, and atomic object 2 has the totally ordered sequence D E F, then all threads will see A before C and D before F, but threads may disagree on whether A comes before D. Therefore, the set of all modifications {A, B, C, D, E, F} has no total order.
But all threads that agree that B comes before E will also agree that A comes before F. Partial orders still give some guarantees.
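To make the partial-order point concrete, here is a toy enumeration (plain Python, not real threading code; the names A..F match the example above) that lists every global ordering consistent with the two per-object modification orders:

from itertools import permutations

# Modifications of atomic object 1 form the chain A < B < C,
# those of atomic object 2 form the chain D < E < F.
chain1 = ["A", "B", "C"]
chain2 = ["D", "E", "F"]

def respects(order, chain):
    pos = {m: i for i, m in enumerate(order)}
    return all(pos[x] < pos[y] for x, y in zip(chain, chain[1:]))

# All global orderings of the six modifications that respect both chains.
consistent = [p for p in permutations(chain1 + chain2)
              if respects(p, chain1) and respects(p, chain2)]

a_first = sum(1 for p in consistent if p.index("A") < p.index("D"))
print(len(consistent), a_first, len(consistent) - a_first)  # 20 10 10

Ten of the twenty consistent orderings put A before D and ten put D before A, which is exactly the freedom the standard leaves to different threads.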
I need to solve a linear system (Ax = b) with a large sparse A and with different b's, multiple times. Thus, an LU (or Cholesky, if A is symmetric) factorisation is highly preferable.
I'm using the armadillo library and I've heard that one can use the function:
spsolve(x, A, b, "superlu");
in order to solve such a system. I'm not very concerned about retrieving the L and U matrices. However, it is of prime importance that L and U aren't recomputed every time I call spsolve.
Does spsolve(x, A, b, "superlu") store the LU decomposition and if not, is there a way to retrieve said matrices?
The fact is that armadillo is essentially a wrapper around software developed by other teams; this is particularly true for the SuperLU sparse solver. The feature you are asking for (solving a series of systems of equations with one and the same matrix but different right-hand sides) may not exist in armadillo at all. You should probably use a sparse linear solver directly (not necessarily SuperLU) that has that feature built in. In case your system is so large that a factorisation-based solver cannot cope with it, an iterative solver may do, and there is an option to consider in that case: since modern CPUs are multi-core, several independent solution processes can be run in parallel. One such iterative solver is described in the following blog (you may ask questions and/or participate in the discussion there): http://comecau.blogspot.com/2018_09_05_archive.html
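For illustration of the factorise-once pattern the question asks for, here is a sketch using SciPy, whose splu routine wraps SuperLU and returns a factor object that can be reused for many right-hand sides (the toy matrix stands in for the real A):

import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Toy 3x3 sparse system; in practice A comes from your application.
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))

lu = splu(A)  # factorise once; L and U are stored inside the object

# Reuse the stored factorisation for several right-hand sides.
for b in (np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])):
    print(lu.solve(b))

Any direct sparse solver with a comparable "factor object" interface gives you the same behaviour.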
Let's say we have relation R(A,B,C,D,E)
The FDs are:
A -> C
AB -> D
CD -> E
I know that the key is AB.
My question is:
Is E fully or partially dependent on AB?
I think it's fully dependent, since AB determines E if I use the FD inference rules. But my coworker says that E is partially dependent on AB, since C is partially dependent on AB. Which one is correct?
Thanks for your help
It's fully dependent on AB because you don't have a dependency (even a transitive one) A -> E or B -> E. In practical terms it means: if I tell you the value of A alone, or of B alone, you couldn't tell me the value of E - you need both values to deduce it. Take a look at this short page on functional dependencies:
Y is fully functionally dependent on X if there is no Z -> Y, where Z is a proper subset of X.
In this context, A and B would be the proper subsets.
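You can verify this mechanically with an attribute-closure computation; a minimal sketch (the encoding of the FDs as (LHS, RHS) pairs is just for this example):

# Functional dependencies of R(A,B,C,D,E).
fds = [({"A"}, {"C"}),
       ({"A", "B"}, {"D"}),
       ({"C", "D"}, {"E"})]

def closure(attrs, fds):
    # Repeatedly fire FDs whose left-hand side is already covered.
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= closed and not rhs <= closed:
                closed |= rhs
                changed = True
    return closed

print(closure({"A", "B"}, fds))  # {A, B, C, D, E}: AB determines E
print(closure({"A"}, fds))       # {A, C}: A alone does not reach E
print(closure({"B"}, fds))       # {B}: B alone determines nothing new

Since E is in the closure of AB but not in the closure of A or of B alone, E is fully dependent on AB.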
Looking at your whole relation in the bigger picture, it seems like a theoretical construct. At the very least, it would be poorly designed:
CD -> E breaks the 3rd normal form
A -> C breaks the 2nd normal form - C IS partially dependent on AB
I tried two simple programs in ASP (Answer Set Programming) and then used the answer set solver DLV to find the answer sets (also referred to as stable models); the programs are:
P1:
b :- c.
c.
P2:
b :- c.
f.
In P1, DLV finds {c, b} as the answer set; in P2, the answer set found is {f}. I can't understand why the answer set is {c, b} in P1 but only {f} in P2; isn't {c} enough in P1 as a (minimal) model?
Thank you
That's because {c} is not a model of P1.
In the case of ground (no variables) positive (all bodies positive) programs, the constraints are pretty simple:
For an interpretation to be a model of a ground positive program, every applicable rule must also be applied, where:
applicable: for ground positive programs, all literals in the body are contained in the interpretation,
applied: the head atom is contained in the interpretation.
For a model to be an answer set, it has to be minimal among the set of all models of this program.
So, in P1 you have this rule:
b :- c.
which, for the interpretation {c}, is applicable (since c is in the interpretation), but not applied (because b isn't).
As for P2, we have the fact f., implying you have to have f in any answer set (since f. is shorthand for a rule with an empty body, which is always applicable). However, f does not render the rule b :- c. applicable, and there are no other rules, so {f} is a model of P2, and obviously also a minimal one - an answer set.
Because of that, e.g. {c,b,f} is not an answer set of P2, since it's not minimal when compared with {f}.
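For tiny ground positive programs like these, the definitions above can be checked by brute force; a sketch (the encoding of rules as (head, body) pairs is just for this example):

from itertools import combinations

# Ground positive programs as (head, body) pairs; facts have an empty body.
P1 = [("b", {"c"}), ("c", set())]
P2 = [("b", {"c"}), ("f", set())]

def atoms_of(program):
    return set().union(*({head} | body for head, body in program))

def is_model(interp, program):
    # Every applicable rule (body contained in interp) must also be
    # applied (head contained in interp).
    return all(head in interp for head, body in program if body <= interp)

def answer_sets(program):
    atoms = sorted(atoms_of(program))
    interps = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    models = [i for i in interps if is_model(i, program)]
    # For positive programs, the answer sets are the minimal models.
    return [m for m in models if not any(n < m for n in models)]

print(answer_sets(P1))  # exactly one answer set: {b, c}; {c} is not a model
print(answer_sets(P2))  # exactly one answer set: {f}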