How to express xRy & yR-z & x!=z? - owl

I'm building an ontology about genealogy, and I found out that I can use property chains to express some relations concisely. I have two properties, hasChild and hasParent (one is the inverse of the other).
I want to create a relation hasSibling that says:
forall x, y, z: hasParent(x, y) and hasChild(y, z) -> hasSibling(x, z)
So I created the property hasSibling and defined it with a property chain:
hasParent o hasChild -> hasSibling
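In Turtle, that axiom would look like this (assuming an example prefix for my ontology):

@prefix : <http://example.org/genealogy#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

:hasSibling owl:propertyChainAxiom ( :hasParent :hasChild ) .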
The problem is that, with this axiom, every person is their own sibling. I know this is expected, since I never said that x != z. So my question is: how can I express that constraint in Protégé?
Please do not hesitate to correct my question if necessary.


Does blockwise allow iteration over out-of-core arrays?

The blockwise docs mention that with concatenate=False:
In the case of a contraction the passed function should expect an iterable of blocks on any array that holds that index.
My question then is whether or not there is a fundamental limitation that would prohibit this "iterable of blocks" from loading the blocks one at a time rather than keeping them all in a list (i.e. in memory). Is this possible? It does not look like blockwise works this way now, but I am wondering if it could:
import dask.array as da
import numpy as np

# Create an array, write it to disk, and read it back lazily
x = da.random.random(size=(10, 6), chunks=(5, 3))
da.to_zarr(x, '/tmp/x.zarr', overwrite=True)
x = da.from_zarr('/tmp/x.zarr')
y = x.T

def fn(x, y):
    # With concatenate=False, each contracted argument arrives as a list of blocks
    print(type(x), type(x[0]))
    x = np.concatenate(x, axis=1)
    y = np.concatenate(y, axis=0)
    return np.matmul(x, y)

da.blockwise(fn, 'ik', x, 'ij', y, 'jk', concatenate=False, dtype='float').compute(scheduler='single-threaded')
# <class 'list'> <class 'numpy.ndarray'>
Is it possible for these lists to be generators instead?
This was true very early on in Dask, but we eventually switched to concrete lists. Today a task does not start until all of its dependency tasks are available in memory.
Given the context of your question I'm guessing that you're running up against memory issues with tensordot style applications. The memory use of tensordot style applications depends heavily on chunk structure. I encourage you to look at this issue, and especially at the talk referenced in the first post: https://github.com/dask/dask/issues/2225
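As a rough illustration of how chunk structure matters (the numbers here are invented): with square blocks, every output block of x @ x.T must gather all the blocks along the contracted axis from both inputs; rechunking the contracted axis into a single chunk trades many small blocks per task for one larger block per input.

import dask.array as da

# 10 blocks along the contracted axis of x @ x.T per output block
x = da.random.random(size=(10_000, 10_000), chunks=(1_000, 1_000))

# One (larger) block per input per task instead of ten smaller ones
x = x.rechunk((1_000, -1))
result = (x @ x.T).sum().compute()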

OWL-API: making a set of individuals equivalent to owl:Thing

I'm trying to add an equivalent axiom of the following form:
owl:Thing EquivalentTo {individual1, individual2, ... individualN}
Below is how I'm trying to add the axiom:
String individualSet = "{a, b, c, d}";
OWLAxiom a = df.getOWLEquivalentClassesAxiom(df.getOWLClass(individualSet), df.getOWLThing());
manager.addAxiom(ontology, a);
The problem is that this actually creates an extra class named "{a, b, c, d}", which prevents a reasoner from drawing the conclusions I intended.
In Protégé, I can add this type of Equivalent To axiom without an extra class being created... How can I do the same with the OWL-API?
I figured it out. I had to use OWLObjectOneOf to compose a set of individuals and make that equivalent to owl:Thing.
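In code, that looks something like this (a sketch: df, manager, and ontology are as in the question, and the individual IRIs are invented for illustration):

// Build the enumeration {a, b, c, d} as a class expression, not a class name
OWLNamedIndividual a = df.getOWLNamedIndividual(IRI.create("http://example.org/ont#a"));
OWLNamedIndividual b = df.getOWLNamedIndividual(IRI.create("http://example.org/ont#b"));
OWLNamedIndividual c = df.getOWLNamedIndividual(IRI.create("http://example.org/ont#c"));
OWLNamedIndividual d = df.getOWLNamedIndividual(IRI.create("http://example.org/ont#d"));
OWLClassExpression oneOf = df.getOWLObjectOneOf(a, b, c, d);
OWLAxiom axiom = df.getOWLEquivalentClassesAxiom(df.getOWLThing(), oneOf);
manager.addAxiom(ontology, axiom);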

Is there any way to handle Google Datastore Kind Property names with spaces in Golang?

I'm running into a nasty issue with Datastore that doesn't appear to have any workaround.
I'm using the Google Appengine Datastore package to pull back projection query results into Appengine memory for manipulation, and that is being accomplished with representing each entity as a Struct, with each Struct field corresponding to a Property name, like so:
type Row struct {
    Prop1 string
    Prop2 int
}
This works great, but I've extended my queries to read other property names that contain spaces. The query itself runs fine, but the results can't be pulled back into structs, because the loader looks for a struct field matching the property name, and I get this kind of error:
datastore: cannot load field "Viewed Registration Page" into a "main.Row": no such struct field
Obviously Go cannot represent a struct field with a space in its name. There is a field of the relevant type, but no obvious way to tell the query to place the value there.
What would be the best solution here?
Cheers
Actually Go supports mapping entity property names to different struct field names using tags (see this answer for details: What are the use(s) for tags in Go?).
For example:
type Row struct {
    Prop1 string `datastore:"Prop1InDs"`
    Prop2 int    `datastore:"p2"`
}
But the Go implementation of the datastore package panics if you attempt to use a property name which contains a space.
To sum it up: you can't map property names having spaces to struct fields in Go (this is an implementation restriction which may change in the future).
But the good news is that you can still load these entities, just not into struct values.
You may load them into a variable of type datastore.PropertyList. datastore.PropertyList is basically a slice of datastore.Property, where Property is a struct which holds the name of the property, its value and other info.
This is how it can be done:
k := datastore.NewKey(ctx, "YourEntityName", "", 1, nil) // Create the key
e := datastore.PropertyList{}
if err := datastore.Get(ctx, k, &e); err != nil {
    panic(err) // Handle error
}

// Now e holds your entity, let's list all its properties.
// PropertyList is a slice, so we can simply "range" over it:
for _, p := range e {
    ctx.Infof("Property %q = %q", p.Name, p.Value)
}
If your entity has a property "Have space" with value "the_value", you will see for example:
2016-05-05 18:33:47,372 INFO: Property "Have space" = "the_value"
Note that you could implement the datastore.PropertyLoadSaver interface on your struct and handle this under the hood, so you could still load such entities into struct values, but you have to implement it yourself.
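A minimal sketch of that approach, assuming the slice-based PropertyLoadSaver interface of google.golang.org/appengine/datastore (the older appengine/datastore package used channels instead), and using the "Viewed Registration Page" property from the error above:

// Row maps the spaced property name onto a valid Go field by implementing
// datastore.PropertyLoadSaver by hand.
type Row struct {
    Prop1  string
    Viewed string // backed by the "Viewed Registration Page" property
}

// Load routes each raw property to the right field, whatever its name.
func (r *Row) Load(ps []datastore.Property) error {
    for _, p := range ps {
        switch p.Name {
        case "Prop1":
            r.Prop1, _ = p.Value.(string)
        case "Viewed Registration Page":
            r.Viewed, _ = p.Value.(string)
        }
    }
    return nil
}

// Save writes the fields back under their original property names.
func (r *Row) Save() ([]datastore.Property, error) {
    return []datastore.Property{
        {Name: "Prop1", Value: r.Prop1},
        {Name: "Viewed Registration Page", Value: r.Viewed},
    }, nil
}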
But fight for entity and property names without spaces. You will make your life needlessly hard if you allow them.
All programming languages that I know treat a space as the end of a variable/constant name. The obvious solution is to avoid using spaces in property names.
I would also note that a property name becomes a part of every entity and every index entry. I don't know if Google somehow compresses them, but I tend to use short property names in any case.
You can use struct tags to rename properties.
From the docs:
// A and B are renamed to a and b.
// A, C and J are not indexed.
// D's tag is equivalent to having no tag at all (E).
// I is ignored entirely by the datastore.
// J has tag information for both the datastore and json packages.
type TaggedStruct struct {
    A int `datastore:"a,noindex"`
    B int `datastore:"b"`
    C int `datastore:",noindex"`
    D int `datastore:""`
    E int
    I int `datastore:"-"`
    J int `datastore:",noindex" json:"j"`
}

Dealing with database access in transformer stacks

This question is about groundhog or persistent, because I believe both share the same problem.
Say I have a transformer Tr m a that provides some functionality f :: Int -> Tr m (). This functionality requires database access. There are a few options I can use here, and none are satisfactory.
I could put a DbPersist transformer somewhere inside of Tr. Actually, I'd need to put it at the top because there are no PersistBackend instances for standard transformers AND I'd still need to write an instance for my Tr newtype. This already sucks because the class is far from minimal. I could also lift every db action I do.
The other option is changing the signature of f to PersistBackend m => Int -> Tr m (). This would again either require a PersistBackend instance on my Tr newtype, or lifting.
Now here's the real issue. How do I run Tr inside of a context that already has a PersistBackend constraint? There's no way to share it with Tr.
I can either take the first option and run the actual DbPersist transformer inside of Tr with a new connection pool (as far as I can tell there's no way to get the pool out of the PersistBackend context I'm already in), or take the second option and make the run function runTr :: PersistBackend m => Tr m a -> m a. The second option would actually be completely fine, but the problem is that the DbPersist that eventually has to sit somewhere in the stack is now under the Tr transformer, and there are no PersistBackend instances for the standard transformers Tr is made of.
What's the correct approach here? At the moment it seems the best option is to go with a separate ReaderT somewhere in the stack that provides the connection pool on request, and then call runDbConn with that pool everywhere I want to access the DB. Seeing how DbPersist basically already is just a ReaderT, I don't see the sense in having to do that.
groundhog
I recommend using the latest groundhog from its master branch. Even though the change I'm about to describe appears to have been implemented in Sept. 2015, no release containing it has made it to Hackage, but the authors seem to have tackled this very problem.
At the tip of master, PersistBackend is a much simpler class to implement, much reduced from the dozens-of-methods-long behemoth it once was:
class (Monad m, Applicative m, Functor m, MonadIO m,
       ConnectionManager (Conn m), PersistBackendConn (Conn m))
    => PersistBackend m where
    type Conn m
    getConnection :: m (Conn m)

instance (Monad m, Applicative m, Functor m, MonadIO m, PersistBackendConn conn)
    => PersistBackend (ReaderT conn m) where
    type Conn (ReaderT conn m) = conn
    getConnection = ask
They wrote an instance for ReaderT conn m (DbPersist has been deprecated and aliased to ReaderT conn), and you could just as easily write one for Tr (ReaderT conn) if you choose to put ReaderT inside rather than outside. It's not quite an mtl monad transformer, since you would have to instance Tr m instead of Tr, but this and the associated type trick they're using should let you use a custom monad stack without too much fuss.
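For example, here is a sketch of forwarding an instance through a custom newtype stack, based on the class quoted above (the module name and exact class are assumed from the master branch; Tr here is a stand-in for whatever your stack wraps):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE TypeFamilies #-}

import Control.Monad.IO.Class (MonadIO)
import Control.Monad.State (StateT)
import Control.Monad.Trans.Class (lift)
import Database.Groundhog.Core (PersistBackend (..))

-- A hypothetical app transformer (here a newtype over StateT) that forwards
-- getConnection to the underlying PersistBackend monad.
newtype Tr m a = Tr { unTr :: StateT Int m a }
    deriving (Functor, Applicative, Monad, MonadIO)

instance PersistBackend m => PersistBackend (Tr m) where
    type Conn (Tr m) = Conn m
    getConnection = Tr (lift getConnection)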
Either option you choose will probably require some lifting. In my personal opinion I would stick ReaderT conn on the very outside of the stack. That way, the mtl helpers can still lift through most of your stack and you can glue on an additional lift to take it home. And, if you were to stick with the version on Hackage, this seems to be the only reasonable option since otherwise you would have the (old) monolithic PersistBackend class.
persistent
Persistent is a little more straightforward: as long as the monad transformer stack contains ReaderT SqlBackend and terminates in IO, you can lift a call to runSqlPool :: MonadBaseControl IO m => ReaderT SqlBackend m a -> Pool SqlBackend -> m a. All Persistent operations are defined to return something of type ReaderT backend m a, so the design sort of just works out.
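A minimal sketch of that shape (pingDb and runWithPool are illustrative names; rawExecute stands in for any persistent operation):

{-# LANGUAGE OverloadedStrings #-}

import Control.Monad.Reader (ReaderT)
import Data.Pool (Pool)
import Database.Persist.Sql (SqlBackend, rawExecute, runSqlPool)

-- Every persistent operation is a ReaderT SqlBackend m a, so any stack that
-- contains ReaderT SqlBackend and bottoms out in IO can run it via runSqlPool.
pingDb :: ReaderT SqlBackend IO ()
pingDb = rawExecute "SELECT 1" []

runWithPool :: Pool SqlBackend -> IO ()
runWithPool pool = runSqlPool pingDb pool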

Prolog: Finding the Nth Element in a List

I am attempting to locate the nth element of a List in Prolog. Here is the code I am attempting to use:
Cells = [OK, _, _, _, _, _] .
...
next_safe(_) :-
    facing(CurrentDirection),
    delta(CurrentDirection, Delta),
    in_cell(OldLoc),
    NewLoc is OldLoc + Delta,
    nth1(NewLoc, Cells, SafetyIdentifier),
    SafetyIdentifier = OK.
Basically, I am trying to check to see if a given cell is "OK" to move into. Am I missing something?
There is a predefined predicate called nth0:
5 ?- nth0(1,[1,2,3],X).
X = 2.
6 ?- listing(nth0).
lists:nth0(A, B, C) :-
    integer(A), !,
    A >= 0,
    nth0_det(A, B, C).
lists:nth0(A, B, C) :-
    var(A), !,
    nth_gen(B, C, 0, A).

true.
Note that nth0 indexes from 0, while nth1 indexes from 1.
Hope this helps.
Louis, I'm not entirely clear on what you're aiming to do with this code, but a couple of comments that might hopefully help.
Things that start with a capital letter in Prolog are variables to be matched against in rules. _ is a special symbol that can be used in place of a variable name to indicate that any value can match.
next_safe(_) is therefore only capable of providing you with a true/false answer if you give it a specific value. One of the major benefits of Prolog is the ability to unify against a variable through backtracking (as ony said). This would mean that when written correctly you could just ask Prolog next_safe(X). and it would return all the possible values (safe moves) that unify with X.
To go back to the first point about capital letters: OK is actually a variable waiting to be matched. It is effectively an empty box that you are trying to match against another empty box. I think what you intended is the atom ok, which is different; you do not assign to variables in Prolog the way you do in other programming styles. Something like the following might be closer to what you are looking for, though I'm still not sure it's right: it looks like you're trying to assign things, and I'm not certain how your nth1 works.
Cells = [ok, _, _, _, _, _] .
...
next_safe(NewLoc) :-
    facing(CurrentDirection),
    delta(CurrentDirection, Delta),
    in_cell(OldLoc),
    NewLoc is OldLoc + Delta,
    nth1(NewLoc, Cells, ok).
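To see this version run end to end, here is a self-contained sketch; facing/1, delta/2, in_cell/1, and cells/1 are stub facts invented for illustration, and the cell list is kept ground so an unbound cell can't accidentally unify with ok:

facing(east).                        % stub: the direction we're facing
delta(east, 1).                      % stub: movement offset for that direction
in_cell(1).                          % stub: the current location
cells([ok, ok, pit, ok, pit, ok]).   % ground cell list: cell 2 is safe

next_safe(NewLoc) :-
    facing(CurrentDirection),
    delta(CurrentDirection, Delta),
    in_cell(OldLoc),
    NewLoc is OldLoc + Delta,
    cells(Cells),
    nth1(NewLoc, Cells, ok).

% ?- next_safe(N).
% N = 2.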
