Referring to a concept in a non-imported ontology - owl

I want to refer to concepts defined in other ontologies, using only the respective concept's URI, without importing the external ontology. I think that this is compatible with OWL semantics, using the owl:equivalentClass property.
Can somebody confirm that this is correct? Furthermore, could someone provide me with an example of how to do it (preferably using Protégé)?

Assume there is an ontology anOnt: in which there is a term anOnt:Term that you want to reuse in your ontology yourOnt:. You may import anOnt: and you're done. However, you can also redeclare the term anOnt:Term in your ontology, like this:
yourOnt: a owl:Ontology .
anOnt:Term a owl:Class .
# use anOnt:Term as you wish
But these options are only necessary if you want to comply with OWL 2 DL. OWL also defines OWL Full, and its RDF-based semantics, where terms do not have to be declared at all. So you can just write:
yourOnt:SomeTerm rdfs:subClassOf anOnt:Term .
and that's compatible with OWL semantics, in the sense of the OWL 2 RDF-based semantics.
For more on whether you should use owl:imports, redeclare terms, or just reuse terms, you can read an answer I wrote on answers.semanticweb.com (a now-defunct website). For more on why OWL 2 has two semantics, you can read another answer I wrote on answers.semanticweb.com.

The only way you can refer to concepts in an external ontology is by importing it. After you have imported it, you can use owl:equivalentClass to assert that, say, the Identity concept in your ontology is equivalent to the external:ID concept of the external ontology.
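For a concrete example of the question's goal, here is a minimal OWL API sketch (all IRIs are made up for illustration): the external term is referred to purely by its IRI and linked to your own class with an equivalence axiom; the declaration axiom is only needed if you want OWL 2 DL compliance, and no owl:imports statement is involved.

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory df = manager.getOWLDataFactory();
// createOntology throws OWLOntologyCreationException
OWLOntology yourOnt = manager.createOntology(IRI.create("http://example.org/yourOnt"));

// Refer to the external term purely by its IRI; no owl:imports involved
OWLClass externalId = df.getOWLClass(IRI.create("http://example.org/anOnt#ID"));
OWLClass identity = df.getOWLClass(IRI.create("http://example.org/yourOnt#Identity"));

// The local declaration is only needed for OWL 2 DL compliance (see above)
manager.addAxiom(yourOnt, df.getOWLDeclarationAxiom(externalId));
// Assert that your class is equivalent to the external one
manager.addAxiom(yourOnt, df.getOWLEquivalentClassesAxiom(identity, externalId));

In Protégé you can get roughly the same effect by creating a class and renaming it to the external term's full IRI (Refactor > Rename entity), then adding your own class under its 'Equivalent To' section.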

Related

testing framework for OWL ontologies

Is there a testing framework, tool, or Protégé plugin for functional testing of OWL ontologies? E.g. to check the presence or absence of certain axioms in an ontology,
or to apply some test facts to the ontology and analyse the new axioms obtained as a result.
If I understand you correctly, given an ontology you want to figure out
whether the stated axioms are correct, and
whether there are any missing axioms.
(1) OntoComP is a plugin for Protégé 4.x. Given an ontology, it will, by asking the user questions, either add counterexamples or add additional axioms to the ontology. You can read more on this here.
(2) Scenario testing for OWL ontologies defines a method via which you can set up "scenarios" to test that your ontology adheres to functional requirements. This is an approach I defined in my MSc dissertation, and one we have used successfully to validate complex functional requirements for a hotel management group.
(3) You can use the OWL API to test whether the required axioms hold:
import org.semanticweb.HermiT.ReasonerFactory;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.OWLReasonerFactory;

OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory dataFactory = manager.getOWLDataFactory();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(ontologyFileIRI);
// ReasonerFactory is HermiT's OWLReasonerFactory implementation
OWLReasonerFactory reasonerFactory = new ReasonerFactory();
OWLReasoner reasoner = reasonerFactory.createReasoner(ontology);
OWLClass classA = dataFactory.getOWLClass(IRI.create(ontologyIRI + "#A"));
OWLClass classD = dataFactory.getOWLClass(IRI.create(ontologyIRI + "#D"));
OWLAxiom dSubclassOfA = dataFactory.getOWLSubClassOfAxiom(classD, classA);
// Entailment checks are only meaningful if the ontology is consistent
System.out.println("Consistent = " + reasoner.isConsistent());
System.out.println("D subclass of A = " + reasoner.isEntailed(dSubclassOfA));
You can find the complete example code for this here.
So, the challenge is to apply testing procedures (functional tests) to the core of an ontology-based application (i.e. an OWL 2 driven knowledge base).
You can start with the tools mentioned above:
- RDFUnit (https://github.com/AKSW/RDFUnit), though it seems to be very SPARQL-specific.
Other tools also exist:
- evOWLuator (https://github.com/sisinflab-swot/evowluator). It is Python-based, the current version is 0.1.1, and it declares support for ontology classification, consistency, and 'matchmaking' reasoning tasks.
- Scone (https://bitbucket.org/malefort/scone/src/default/). It is quite promising because it allows the use of a controlled natural language; however, it is still at an early stage of development.
There are also some research efforts in this direction.
- Again, you are free to take the DIY route with the OWL API.

Clarification requests about Description Logic and OWL

I have two main questions:
1. If we speak about OWL 2 semantics in academic manuscripts (e.g. a thesis):
do we use the description provided on the official W3C page, which involves more than one interpretation function,
OR
the one provided in most Description Logic and OWL manuscripts (papers and theses), which involves just one interpretation function?
2. If we speak about OWL 2 standard reasoning tasks in academic manuscripts (e.g. a thesis):
do we speak about reasoning tasks for object and data properties (e.g. subsumption, satisfiability, ...) besides those for classes? Most academic manuscripts speak only about class reasoning tasks in OWL 2.
Thank you for telling me which of these alternatives, in each question, is more correct and formal.
Strictly speaking, OWL 2 maps to the DL SROIQ(D) extended with DL-safe rules (which provide the semantics for hasKey).
Using one interpretation function is the norm in academic texts.
As AKSW pointed out, standard reasoning tasks are reducible to concept satisfiability (resp. class satisfiability in OWL), hence the reason academic texts tend to refer to concept satisfiability.
Role satisfiability (object/data property satisfiability) reduces to concept satisfiability by checking satisfiability of the concept $\geq 1\,r.\top$. However, there are some limitations when considering object/data properties. See Preventing, Detecting, and Revising Flaws in Object Property Expressions.
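Spelled out, for a role $r$ and an ontology $\mathcal{O}$, the reduction is the equivalence
$$r \text{ is satisfiable w.r.t. } \mathcal{O} \iff {\geq} 1\,r.\top \text{ is satisfiable w.r.t. } \mathcal{O},$$
since any model interpreting $r$ as a non-empty relation gives $\geq 1\,r.\top$ a non-empty extension, and vice versa.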
The trouble is that “OWL 2 Semantics” is ambiguous: OWL is a well-defined interchange format with several incompatible semantic interpretations. If you like you can refer to that particular document, but it’s important to cite it more specifically as the “OWL 2 Direct Semantics”.
In cases where your work doesn’t involve data types or punning, the SROIQ logic is actually a much simpler and cleaner mathematical formalism...with the caveat that the SROIQ literature is typically written for an academic audience, so this simpler model is usually described in a denser style.

Can I use OWL API to enforce specific subject-predicate-object relationships?

I am working on a project using RDF data and I am thinking about implementing a data cleanup method which will run against an RDF triples dataset and flag triples which do not match a certain pattern, based on a custom ontology.
For example, I would like to enforce that class http://myontology/A must denote http://myontology/B using the predicate http://myontology/denotes. Any instance of Class A which does not denote an instance of Class B should be flagged.
I am wondering if a tool such as the OWLReasoner from OWL-API would have the capability to accomplish something like this, if I designed a custom axiom for the Reasoner. I have reviewed the documentation here: http://owlcs.github.io/owlapi/apidocs_4/org/semanticweb/owlapi/reasoner/OWLReasoner.html
It seems to me that the methods available on the Reasoner might not be suited to the purpose I have in mind, but I'm wondering if anyone has experience using the OWL API for this, or knows of another tool that could do the trick.
Generally speaking, OWL reasoning is not well suited to finding information that's missing from the input and flagging it: for example, if you create a class that asserts that an instance of A has exactly one denotes relation to an instance of B, and have an instance of A that does not, then under the Open World Assumption the reasoner will just assume that the missing statement is unknown, not that you're in violation.
It would be possible to detect incorrect uses of denotes - if, instead of relating to an instance of B, the relation was to an instance of a class disjoint with B. But this seems to be a different use case from the one you're after.
You can implement code with the OWL API to do this check, but it likely wouldn't benefit from the ability to reason, and given that you're working at the RDF level I'd think an API like Apache Jena might actually work better for you (you won't need to worry if your input file is not OWL compliant, for example).
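For illustration, here is a minimal Apache Jena sketch of such a closed-world, RDF-level check (the input file name and the flagging logic are assumptions; the IRIs are the ones from the question):

import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;

public class DenotesCheck {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("data.ttl"); // hypothetical input dataset

        String ns = "http://myontology/";
        Resource classA = model.createResource(ns + "A");
        Resource classB = model.createResource(ns + "B");
        Property denotes = model.createProperty(ns + "denotes");

        // Flag every instance of A that has no denotes link to an instance of B
        ResIterator instancesOfA = model.listResourcesWithProperty(RDF.type, classA);
        while (instancesOfA.hasNext()) {
            Resource a = instancesOfA.nextResource();
            boolean hasValidTarget = false;
            StmtIterator links = a.listProperties(denotes);
            while (links.hasNext()) {
                RDFNode target = links.nextStatement().getObject();
                if (target.isResource() && model.contains(target.asResource(), RDF.type, classB)) {
                    hasValidTarget = true;
                    break;
                }
            }
            if (!hasValidTarget) {
                System.out.println("Flagged: " + a);
            }
        }
    }
}

Unlike a reasoner, this treats the data as complete, which is exactly what a cleanup job wants.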

What is the difference in Protégé 5.0 (OWL) between a defined class and a primitive class?

I read the Protégé 5 manual at http://wiki.opensemanticframework.org/index.php/Adding_an_Ontology_Concept_using_Prot%C3%A9g%C3%A9 but I don't understand it. I am using the METHONTOLOGY methodology for constructing the ontology, and I have some doubts about the implementation in Protégé 5.
Does someone have a better manual for using Protégé 5?
A defined class is a class involved in an equivalence axiom, such as C EquivalentTo: p some D, which states necessary and sufficient conditions for membership. A primitive class is one for which no such definition exists; it has only necessary conditions, stated via subclass axioms.
These concepts are explained in description logic books (e.g., the Description Logic Handbook), and their application in OWL is described in the W3C specs.
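In OWL API terms (the pizza names are made up, echoing the Protégé tutorial), the distinction is between an EquivalentClasses axiom and a plain SubClassOf axiom:

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

OWLDataFactory df = OWLManager.getOWLDataFactory();
String ns = "http://example.org/pizza#"; // hypothetical namespace

OWLClass pizza = df.getOWLClass(IRI.create(ns + "Pizza"));
OWLClass cheesyPizza = df.getOWLClass(IRI.create(ns + "CheesyPizza"));
OWLClass cheeseTopping = df.getOWLClass(IRI.create(ns + "CheeseTopping"));
OWLObjectProperty hasTopping = df.getOWLObjectProperty(IRI.create(ns + "hasTopping"));

// Defined class: necessary AND sufficient conditions, so a reasoner can
// classify anything matching the right-hand side under CheesyPizza
OWLAxiom defined = df.getOWLEquivalentClassesAxiom(cheesyPizza,
        df.getOWLObjectIntersectionOf(pizza,
                df.getOWLObjectSomeValuesFrom(hasTopping, cheeseTopping)));

// Primitive class: only a necessary condition (every pizza has some topping);
// membership in Pizza is never inferred from the condition alone
OWLAxiom primitive = df.getOWLSubClassOfAxiom(pizza,
        df.getOWLObjectSomeValuesFrom(hasTopping, df.getOWLThing()));

In Protégé the same distinction shows up in the class description view: conditions entered under 'Equivalent To' make the class defined, while conditions entered under 'SubClass Of' keep it primitive.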

Writing action theories in OWL + SWRL: possible?

In order to solve symbolic planning problems we write action theories.
Popular languages for writing action theories are STRIPS and ADL.
For describing an action we need to provide:
preconditions
effects
For example, in a robot domain, we have Robot and Object classes, and the closeTo and holding properties.
The action pickUp(?robot, ?object) is possible if closeTo(?robot, ?object) holds, and also forall ?o in Object . not holding(?robot, ?o).
How would one represent preconditions with OWL and/or SWRL?
How about action effects?
The KnowRob project suggests that it is possible to use Qualitative Process Theory (QPT) in combination with the OWL markup language for implementing actions. A possible precondition would be [1]:
rdf_triple(knowrob:'thermicallyConnectedTo', Dough, HeatSource),
But it was never demonstrated that this Prolog spaghetti code works. OWL is not a real programming language; it is more like a markup language, such as JSON. Formalizing processes in a declarative form is an academic exercise, but it has little relevance for game coding or real robot programming.
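To make the precondition part of the question concrete: the positive part of the pickUp precondition (everything except 'not holding anything', which needs a closed-world workaround outside plain SWRL) can be written as a SWRL rule. A sketch using the OWL API, where the canPickUp property and all IRIs are made up:

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

OWLDataFactory df = OWLManager.getOWLDataFactory();
String ns = "http://example.org/robot#"; // hypothetical namespace

// SWRL variables ?r and ?o
SWRLVariable r = df.getSWRLVariable(IRI.create(ns + "r"));
SWRLVariable o = df.getSWRLVariable(IRI.create(ns + "o"));

OWLClass robot = df.getOWLClass(IRI.create(ns + "Robot"));
OWLClass object = df.getOWLClass(IRI.create(ns + "Object"));
OWLObjectProperty closeTo = df.getOWLObjectProperty(IRI.create(ns + "closeTo"));
OWLObjectProperty canPickUp = df.getOWLObjectProperty(IRI.create(ns + "canPickUp")); // made up

// Body: Robot(?r) ^ Object(?o) ^ closeTo(?r, ?o)
Set<SWRLAtom> body = new HashSet<>(Arrays.<SWRLAtom>asList(
        df.getSWRLClassAtom(robot, r),
        df.getSWRLClassAtom(object, o),
        df.getSWRLObjectPropertyAtom(closeTo, r, o)));
// Head: canPickUp(?r, ?o)
Set<SWRLAtom> head = new HashSet<>(Arrays.<SWRLAtom>asList(
        df.getSWRLObjectPropertyAtom(canPickUp, r, o)));

SWRLRule rule = df.getSWRLRule(body, head);

Note that SWRL rules are monotonic: they can infer that an action is applicable, but they cannot retract facts, so action effects (state change) need machinery outside OWL and SWRL.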
