I have two main questions:
1/ If we speak about OWL 2 semantics in academic manuscripts (e.g. a thesis):
do we include the description provided in this official W3C page, which uses more than one interpretation function,
OR
the one provided in most Description Logic and OWL manuscripts (papers and theses), which uses just one interpretation function?
2/ If we speak about OWL 2 standard reasoning tasks in academic manuscripts (e.g. a thesis):
do we discuss reasoning tasks for object and data properties (e.g. subsumption, satisfiability, ...) besides those for classes? Most academic manuscripts discuss only class reasoning tasks in OWL 2.
Thank you for telling me which of these alternatives, in each question, is more correct and formal.
Strictly speaking, OWL 2 maps to the DL SROIQ(D) extended with DL-safe rules (which provide the semantics for hasKey).
Using one interpretation function is the norm in academic texts.
As AKSW pointed out, the standard reasoning tasks are reducible to concept satisfiability (resp. class satisfiability in OWL), which is why academic texts tend to refer to concept satisfiability.
Role satisfiability (object/data property satisfiability) reduces to concept satisfiability by checking the satisfiability of the concept $\geq 1\, r.\top$. However, there are some limitations when considering object/data properties. See Preventing, Detecting, and Revising Flaws in Object Property Expressions.
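For concreteness, here is a minimal OWL API sketch of that reduction. It assumes you have already loaded an ontology and created an OWLReasoner for it (HermiT, Pellet, etc.); the property passed in is whatever object property you want to test:

import org.semanticweb.owlapi.model.OWLClassExpression;
import org.semanticweb.owlapi.model.OWLDataFactory;
import org.semanticweb.owlapi.model.OWLObjectProperty;
import org.semanticweb.owlapi.reasoner.OWLReasoner;

public class RoleSatisfiability {

    // r is satisfiable iff the class expression (>= 1 r owl:Thing), i.e. >= 1 r.Top, is satisfiable.
    public static boolean isRoleSatisfiable(OWLDataFactory df,
                                            OWLReasoner reasoner,
                                            OWLObjectProperty r) {
        OWLClassExpression atLeastOneR = df.getOWLObjectMinCardinality(1, r);
        return reasoner.isSatisfiable(atLeastOneR);
    }
}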
The trouble is that “OWL 2 Semantics” is ambiguous: OWL is a well-defined interchange format with several incompatible semantic interpretations. If you like you can refer to that particular document, but it’s important to cite it more specifically as the “OWL 2 Direct Semantics”.
In cases where your work doesn't involve data types or punning, the SROIQ logic is actually a much simpler and cleaner mathematical formalism, with the caveat that the SROIQ literature is typically written for an academic audience, so this simpler model is usually described in a denser style.
I want to refer to concepts defined in other ontologies, using only the respective concept's URI, without importing the outer ontology. I think that this is compatible with OWL semantics, using the owl:equivalentTo property.
Can somebody confirm that this is correct? Furthermore, could someone provide me with an example of how to do it (preferably using Protege)?
Assume there is an ontology anOnt: in which there is a term anOnt:Term that you want to reuse in your ontology yourOnt:. You may import anOnt: and you're done. However, you can also redeclare the term anOnt:Term in your ontology, like this:
yourOnt: a owl:Ontology .
anOnt:Term a owl:Class .
# use anOnt:Term as you wish
But these options are only necessary if you want to comply with OWL 2 DL. OWL also defines OWL Full, and its RDF-based semantics, where terms do not have to be declared at all. So you can just write:
yourOnt:SomeTerm rdfs:subClassOf anOnt:Term .
and that's compatible with OWL semantics, in the sense of the OWL 2 RDF-based semantics.
For more on whether you should use owl:imports, redeclare terms, or just reuse terms, you can read an answer I wrote on answers.semanticweb.com (a now-defunct website). For more on why OWL 2 has two semantics, you can read another answer I wrote on answers.semanticweb.com.
The only way you can refer to concepts in an external ontology is by importing it. After you have imported it, you can use owl:equivalentClass (there is no owl:equivalentTo property in the OWL vocabulary) to assert that, say, the Identity concept in your ontology is equivalent to the external:ID concept of the external ontology.
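The question asks for a Protege example, but in code the same thing can be done with the OWL API. This is only a hedged sketch: the ontology and class IRIs (example.org/yourOnt, example.org/external, Identity, ID) are placeholders, and the external ontology is imported first, as this answer suggests:

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class LinkExternalConcept {
    public static void main(String[] args) throws OWLOntologyCreationException {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory df = manager.getOWLDataFactory();

        // Placeholder IRIs; substitute your ontology's and the external ontology's IRIs.
        IRI yourOntIri = IRI.create("http://example.org/yourOnt");
        IRI externalOntIri = IRI.create("http://example.org/external");
        OWLOntology yourOnt = manager.createOntology(yourOntIri);

        // Import the external ontology.
        manager.applyChange(new AddImport(yourOnt,
                df.getOWLImportsDeclaration(externalOntIri)));

        // Assert that your Identity class is equivalent to the external ID class.
        OWLClass identity = df.getOWLClass(IRI.create(yourOntIri + "#Identity"));
        OWLClass externalId = df.getOWLClass(IRI.create(externalOntIri + "#ID"));
        manager.addAxiom(yourOnt, df.getOWLEquivalentClassesAxiom(identity, externalId));
    }
}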
I am working on a project using RDF data and I am thinking about implementing a data cleanup method which will run against an RDF triples dataset and flag triples which do not match a certain pattern, based on a custom ontology.
For example, I would like to enforce that instances of class http://myontology/A must denote instances of http://myontology/B using the predicate http://myontology/denotes. Any instance of class A which does not denote an instance of class B should be flagged.
I am wondering if a tool such as the OWLReasoner from OWL-API would have the capability to accomplish something like this, if I designed a custom axiom for the Reasoner. I have reviewed the documentation here: http://owlcs.github.io/owlapi/apidocs_4/org/semanticweb/owlapi/reasoner/OWLReasoner.html
It seems to me that the methods available on the Reasoner might not be suited to the purpose I have in mind, but I'm wondering whether anyone has experience using the OWL API for this, or knows another tool which could do the trick.
Generally speaking, OWL reasoning is not well suited to finding information that is missing from the input and flagging it: for example, if you create a class that asserts that every instance of A has exactly one denotes relation to an instance of B, and have an instance of A that does not, then under the open-world assumption the reasoner will just assume that the missing statement is not available, not that you're in violation.
It would be possible to detect incorrect uses of denotes, for instance if, instead of relating to an instance of B, the relation was to an instance of a class disjoint with B. But this seems like a different use case than the one you're after.
You can implement code with the OWL API to do this check, but it likely wouldn't benefit from the ability to reason, and given that you're working at the RDF level I'd think an API like Apache Jena might actually work better for you (you won't need to worry if your input file is not OWL compliant, for example).
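As a sketch of the Jena route, the check can be expressed as a plain SPARQL query that flags every instance of A lacking a denotes link to an instance of B. The input file name and the result handling below are placeholders; only the class and property IRIs come from the question:

import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class DenotesCheck {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("data.ttl");   // placeholder file name

        // Find every instance of :A that has no :denotes link to an instance of :B.
        String query =
            "PREFIX my: <http://myontology/> " +
            "SELECT ?a WHERE { " +
            "  ?a a my:A . " +
            "  FILTER NOT EXISTS { ?a my:denotes ?b . ?b a my:B . } " +
            "}";

        try (QueryExecution qexec = QueryExecutionFactory.create(query, model)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                System.out.println("Flagged: " + results.next().getResource("a"));
            }
        }
    }
}

Note that a SPARQL FILTER NOT EXISTS is evaluated against the data you actually have, so it sidesteps the open-world issue described above.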
I'm fairly new to ontologies and have the following situation:
Given a class definition, I want to automatically generate individuals based on all possible combinations of a given restriction.
For example:
Let's say a "Pizza" class has the property "hasTopping" which is supposed to be linked to an individual of class "Topping". I want to generate an individual of the class Pizza for each individual existing for a Topping. If there are two Topping individuals, Tomato and Cheese, I want to create one Pizza individual with "hasTopping Tomato" and one with "hasTopping Cheese".
Is there any general way to generate individuals in ontologies like this? (As an alternative to implement it myself.)
Is this "violating" the intent/purpose of ontologies in general? Would this usually be handled in a different way? (I'm not completely familiar with ontologies yet.)
There's no standard method to do this, so I think you'll have to implement it yourself. The Lehigh University Benchmark (LUBM) does something similar, so it might provide you with some ideas: http://swat.cse.lehigh.edu/projects/lubm/
I don't think this violates the idea behind ontologies at all; it seems quite straightforward. There is no best practice for it, so however you choose to implement it will probably be adequate.
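If you do roll your own, a minimal OWL API sketch might look like the following. The IRIs (http://example.org/pizza#...) and the naming scheme for the generated individuals are assumptions, and it walks over asserted class assertions rather than using a reasoner, so inferred Topping members would be missed:

import org.semanticweb.owlapi.model.*;

public class PizzaGenerator {

    // For every named individual asserted to be a Topping, create a fresh Pizza
    // individual linked to it via hasTopping.
    public static void generate(OWLOntologyManager manager, OWLOntology ont) {
        OWLDataFactory df = manager.getOWLDataFactory();
        String ns = "http://example.org/pizza#";
        OWLClass pizza = df.getOWLClass(IRI.create(ns + "Pizza"));
        OWLClass topping = df.getOWLClass(IRI.create(ns + "Topping"));
        OWLObjectProperty hasTopping = df.getOWLObjectProperty(IRI.create(ns + "hasTopping"));

        int i = 0;
        for (OWLClassAssertionAxiom ax : ont.getAxioms(AxiomType.CLASS_ASSERTION)) {
            if (ax.getClassExpression().equals(topping)
                    && !ax.getIndividual().isAnonymous()) {
                OWLNamedIndividual t = ax.getIndividual().asOWLNamedIndividual();
                OWLNamedIndividual p = df.getOWLNamedIndividual(
                        IRI.create(ns + "Pizza_" + (i++)));
                manager.addAxiom(ont, df.getOWLClassAssertionAxiom(pizza, p));
                manager.addAxiom(ont, df.getOWLObjectPropertyAssertionAxiom(hasTopping, p, t));
            }
        }
    }
}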
In order to solve symbolic planning problems we write action theories.
Popular languages for writing action theories are STRIPS and ADL.
For describing an action we need to provide:
preconditions
effects
For example, in a robot domain, we have Robot and Object classes, and the closeTo and holding properties.
The action pickUp(?robot, ?object) is possible if closeTo(?robot, ?object) holds, and also forall ?o in Object . not holding(?robot, ?o).
How would one represent preconditions with OWL and/or SWRL?
How about action effects?
The KnowRob project suggests that it is possible to use Qualitative Process Theory (QPT) in combination with the OWL markup language for implementing actions. A possible precondition would be [1]:
rdf_triple(knowrob:'thermicallyConnectedTo', Dough, HeatSource),
But it was never demonstrated that this Prolog spaghetti code will work. OWL is not a real programming language; it's more like a markup language, like JSON. Formalizing processes in a declarative form is an academic exercise but has no relevance for game coding or real robot programming.
I am reading The Pragmatic Programmer: From Journeyman to Master by Andrew Hunt and David Thomas. When I was reading about a term called orthogonality, I thought I was getting it right; I felt I understood it very well. However, at the end of the chapter a few questions were asked to measure the reader's understanding of the subject. While trying to answer those questions for myself, I realized that I hadn't understood it perfectly. So, to clarify my understanding, I am asking those questions here.
C++ supports multiple inheritance, and Java allows a class to
implement multiple interfaces. What impact does using these facilities
have on orthogonality? Is there a difference in impact between using multiple
inheritance and multiple interfaces?
There are actually three questions bundled up here: (1) What is the impact of supporting multiple inheritance on orthogonality? (2) What is the impact of implementing multiple interfaces on orthogonality? (3) What is the difference between the two sorts of impact?
Firstly, let us get to grips with orthogonality. In The Art of Unix Programming, Eric Raymond explains that "In a purely orthogonal design, operations do not have side effects; each action (whether it's an API call, a macro invocation, or a language operation) changes just one thing without affecting others. There is one and only one way to change each property of whatever system you are controlling."
So, now look at question (1). C++ supports multiple inheritance, so a class in C++ could inherit from two classes that have the same operation but with two different effects. This has the potential to be non-orthogonal, but C++ requires you to state explicitly which parent class's operation is to be invoked whenever the call would otherwise be ambiguous. This limits the operation to only one effect, so orthogonality is maintained. See Multiple inheritance.
And question (2). Java does not allow multiple inheritance: a class can only derive from one base class. Interfaces are used to encode similarities which the classes of various types share, but they do not necessarily constitute a class relationship. Java classes can implement multiple interfaces, but there is only one class doing the implementation, so there should only be one effect when a method is invoked. Even if a class implements two interfaces which both declare a method with the same name and signature, a single implementation satisfies both simultaneously, so there is still only one effect. See Java interface.
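To make the Java case concrete, here is a small illustrative example (the interface and class names are made up): two interfaces declare the same method signature, and one class provides the single implementation that satisfies both.

// Two unrelated interfaces that happen to declare the same method signature.
interface Drawable {
    void render();
}

interface Printable {
    void render();
}

// One class, one implementation: calling render() has exactly one effect,
// regardless of whether the caller sees the object as Drawable or Printable.
class Report implements Drawable, Printable {
    @Override
    public void render() {
        System.out.println("Rendering the report");
    }
}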
And finally question (3). The difference is that C++ and Java maintain orthogonality by different mechanisms: C++ by demanding that the parent class be explicitly specified, so there is no ambiguity in the effect; and Java by letting a single implementation satisfy identically declared methods, so there is only one effect.
Irrespective of the number of interfaces/classes you extend, there will be only one implementation inside that class. Let's say your class is X.
Now orthogonality says that one change should affect only one module.
If you change your implementation of one interface in class X, will it affect other modules/classes using your class X? The answer is no, because the other modules/classes code against the interface, not the implementation.
Hence orthogonality is maintained.