Testing framework for OWL ontologies

Is there a testing framework, tool, or Protege plugin for functional testing of OWL ontologies? E.g. to check the presence or absence of some axioms in an ontology, or to apply some test facts to the ontology and analyse the new axioms obtained as a result.

If I understand you correctly, given an ontology you want to figure out
whether the stated axioms are correct and
whether there are any missing axioms.
(1) OntoComP is a plugin for Protege 4.x. Given an ontology, it will, by asking the user questions, either add counterexamples or add additional axioms to the ontology. You can read more on this here.
(2) Scenario testing for OWL ontologies defines a method via which you can set up "scenarios" to test that your ontology adheres to functional requirements. This is an approach I defined in my MSc dissertation and which we have used successfully to validate complex functional requirements for a hotel management group.
(3) You can use the OWL API to test whether the required axioms hold:
// Assumes a reasoner such as HermiT (org.semanticweb.HermiT.ReasonerFactory) on the classpath,
// and that ontologyFileIRI and ontologyIRI point at your ontology document and namespace.
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLDataFactory dataFactory = manager.getOWLDataFactory();
OWLOntology ontology = manager.loadOntologyFromOntologyDocument(ontologyFileIRI);
OWLReasonerFactory reasonerFactory = new ReasonerFactory();
OWLReasoner reasoner = reasonerFactory.createReasoner(ontology);
OWLClass classA = dataFactory.getOWLClass(IRI.create(ontologyIRI + "#A"));
OWLClass classD = dataFactory.getOWLClass(IRI.create(ontologyIRI + "#D"));
OWLAxiom dSubclassOfA = dataFactory.getOWLSubClassOfAxiom(classD, classA);
// Only query for entailments if the ontology is consistent.
if (reasoner.isConsistent()) {
    System.out.println("D subclass of A = " + reasoner.isEntailed(dSubclassOfA));
}
You can find the complete example code for this here.

So, this is a challenge: applying testing procedures (functional tests) to the core of an ontology-based application (i.e. an OWL 2 driven knowledge base).
You can start with the tools mentioned above:
- RDFUnit (https://github.com/AKSW/RDFUnit), but it seems to be very SPARQL-specific.
Other tools also exist:
- evOWLuator (https://github.com/sisinflab-swot/evowluator). It is Python-based, the current version is 0.1.1, and it declares support for ontology classification, consistency and 'matchmaking' reasoning tasks.
- Scone (https://bitbucket.org/malefort/scone/src/default/). It is quite promising because it allows the use of a controlled natural language, but it is still at an early development stage.
There are also some research efforts in this direction.
- And you are free to go the DIY route with the OWL API.
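If you do go the DIY route, such a functional test ultimately boils down to presence/absence checks over axioms. Here is a toy sketch in plain Java (the axioms are just strings standing in for OWL API OWLAxiom objects, and all names are made up for illustration):

```java
import java.util.Set;

public class OntologyAxiomTest {
    // Toy stand-in for an ontology: the set of its (stated or inferred) axioms.
    static final Set<String> ontologyAxioms = Set.of(
            "SubClassOf(Dog Animal)",
            "SubClassOf(Cat Animal)",
            "DisjointClasses(Dog Cat)");

    // A functional test asserts the presence (or absence) of an axiom.
    static boolean holds(String axiom) {
        return ontologyAxioms.contains(axiom);
    }

    public static void main(String[] args) {
        System.out.println("required axiom present: " + holds("SubClassOf(Dog Animal)"));
        System.out.println("forbidden axiom absent: " + !holds("SubClassOf(Animal Dog)"));
    }
}
```

With the real OWL API, the same shape of test would ask a reasoner whether each required axiom is entailed rather than just stated, as in the entailment-check code in the first answer above.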

Related

Ontologies only built with classes and not class instances

I am wondering why public biomedical ontologies are often organized in such a way that there are no class instances, only classes. I understand it in the sense that all instances are classes, but I do not understand the advantage or purpose of such modelling. Those classes also have only annotation properties. For example, the NCIT ontology: https://bioportal.bioontology.org/ontologies/NCIT/?p=summary.
I would appreciate it if someone could explain the purpose of such a model, and whether it has an advantage over a model where classes have class instances. I am definitely not an expert in the field; I have only worked on modelling 'standard' ontologies with classes and their instances.
TLDR
The reason for preferring classes over individuals (or instances) is that classes allow for sophisticated reasoning which is used to infer classification hierarchies.
The longer answer
The semantics of OWL allows you to make the following types of statements:
1. ClassExpression1 is a subclass of ClassExpression2
2. PropertyExpression1 is a subproperty of PropertyExpression2
3. Individual c1 is an instance of Class1
4. Individual x is related to individual y via property1
Of these 4 options, (1) by far allows the most sophistication. Intuitively it comes down to how much each of these allows you to express, and the reasoning capability to derive inferences from those statements. To get an intuitive feel for this, using the OWL Direct Semantics, consider what ClassExpression1 and ClassExpression2 can be substituted with: not just named classes, but arbitrary class expressions built from intersections, unions, complements, enumerations of individuals, existential and universal property restrictions, hasValue and hasSelf restrictions, and cardinality restrictions.
There is no way that this expressivity can be achieved using individuals.
Individuals vs Classes
In your question you say that all instances (individuals) are classes. This is not exactly true. Rather, classes consist of instances, or instances belong to classes. From a mathematical perspective, classes are sets and individuals are members of sets.
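That "classes are sets, individuals are members" reading can be made concrete with a small sketch in plain Java (not OWL; the class and individual names are made up): being an instance is set membership, and being a subclass is the subset relation.

```java
import java.util.Map;
import java.util.Set;

public class ClassesAsSets {
    // Each class name maps to the set of individuals that are its members.
    static final Map<String, Set<String>> classes = Map.of(
            "Animal", Set.of("fido", "felix", "tweety"),
            "Dog",    Set.of("fido"),
            "Cat",    Set.of("felix"));

    // "individual is an instance of cls" = set membership.
    static boolean isInstanceOf(String individual, String cls) {
        return classes.getOrDefault(cls, Set.of()).contains(individual);
    }

    // "sub is a subclass of sup" = the subset relation between the two sets.
    static boolean isSubclassOf(String sub, String sup) {
        return classes.getOrDefault(sup, Set.of())
                      .containsAll(classes.getOrDefault(sub, Set.of()));
    }

    public static void main(String[] args) {
        System.out.println("fido instance of Dog: " + isInstanceOf("fido", "Dog"));
        System.out.println("Dog subclass of Animal: " + isSubclassOf("Dog", "Animal"));
    }
}
```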
Annotations in biomedical ontologies
These ontologies have a substantial amount of annotations (80%-90% of the axioms). However, they do also have lots of logical axioms. You can see this, for example, when you look at http://purl.obolibrary.org/obo/NCIT_C12392: on the right-hand side, if you scroll down to the bottom, you will see the axioms listed.

Clarification requests about Description Logic and OWL

I have two main Questions:
1/ If we speak about OWL 2 semantics in academic manuscripts (e.g. a thesis):
do we use the description provided on this W3C official page, which involves more than one interpretation function,
OR
the one provided in most Description Logic and OWL manuscripts (papers and theses), which uses just one interpretation function?
2/ If we speak about OWL 2 standard reasoning tasks in academic manuscripts (e.g. a thesis):
do we discuss reasoning tasks for object and data properties (e.g. subsumption, satisfiability, ...) in addition to those for classes? Most academic manuscripts only discuss class reasoning tasks in OWL 2.
Thank you for telling me which of these alternatives, in both questions, is more correct and formal.
Strictly speaking, OWL 2 maps to the DL SROIQ(D) extended with DL-safe rules (which provide the semantics for hasKey).
Using one interpretation function is the norm in academic texts.
As AKSW pointed out, standard reasoning tasks are reducible to concept satisfiability (resp. class satisfiability in OWL), which is why academic texts tend to refer to concept satisfiability.
Role satisfiability (object/data properties satisfiability) reduces to concept satisfiability by checking satisfiability of the concept $\geq 1 r.\top$. However, there are some limitations when considering object/data properties. See Preventing, Detecting, and Revising Flaws in Object Property Expressions.
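Spelled out, for a role $r$ and an ontology $\mathcal{O}$, the reduction is:

```latex
r \text{ is satisfiable w.r.t. } \mathcal{O}
\;\Longleftrightarrow\;
r^{\mathcal{I}} \neq \emptyset \text{ for some model } \mathcal{I} \text{ of } \mathcal{O}
\;\Longleftrightarrow\;
(\geq 1\, r.\top) \text{ is satisfiable w.r.t. } \mathcal{O}
```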
The trouble is that “OWL 2 Semantics” is ambiguous: OWL is a well-defined interchange format with several incompatible semantic interpretations. If you like you can refer to that particular document, but it’s important to cite it more specifically as the “OWL 2 Direct Semantics”.
In cases where your work doesn’t involve data types or punning, the SROIQ logic is actually a much simpler and cleaner mathematical formalism...with the caveat that the SROIQ literature is typically written for an academic audience, so this simpler model is usually described in a denser style.

How to use OWLReasoner to update an ontology

I'm new to the OWL API and I was wondering if there was a way to update an ontology with all the new relations picked up by the reasoner (HermiT). I couldn't find a tutorial or much documentation, so I assumed calling
reasoner.classifyClasses();
reasoner.classifyDataProperties();
reasoner.classifyObjectProperties();
reasoner.precomputeInferences();
reasoner.flush();
would classify the new relations. Then, I'm not sure how to translate these new relations to create an updated ontology. I have an idea of how I could manually iterate through new relations and add them if they aren't present in the ontology, but I'm looking for an easier way to do this. Also, I'm not entirely sure if the above code reasons all the new relations for me, so let me know if I should make any corrections.
You can use InferredOntologyGenerator for that purpose. The class can be created with a reasoner as input, and the InferredOntologyGenerator::fillOntology method then adds all the axioms that can be inferred to a new ontology.
Note that axiom generation can be a very slow operation. Try with a small ontology at first, to see whether the result is what you need.
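To get a feel for what "filling an ontology with inferred axioms" amounts to, here is a toy sketch in plain Java (deliberately not the OWL API, with made-up class names): the stated SubClassOf pairs are closed under transitivity, and only the newly entailed pairs are returned, ready to be added to the ontology.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class Materializer {
    // Returns the inferred SubClassOf pairs: the transitive closure of the
    // stated pairs, minus the pairs that were already stated.
    static Set<List<String>> inferredSubclassAxioms(Set<List<String>> stated) {
        Set<List<String>> closure = new HashSet<>(stated);
        boolean changed = true;
        while (changed) {                          // naive fixpoint iteration
            changed = false;
            for (List<String> a : new ArrayList<>(closure)) {
                for (List<String> b : new ArrayList<>(closure)) {
                    if (a.get(1).equals(b.get(0))
                            && closure.add(List.of(a.get(0), b.get(1)))) {
                        changed = true;
                    }
                }
            }
        }
        closure.removeAll(stated);                 // keep only the new axioms
        return closure;
    }

    public static void main(String[] args) {
        Set<List<String>> stated = Set.of(
                List.of("Dog", "Mammal"), List.of("Mammal", "Animal"));
        // Materializing adds the entailed pair (Dog, Animal).
        System.out.println(inferredSubclassAxioms(stated));
    }
}
```

A real reasoner of course infers far more than transitive subsumptions, which is part of why axiom generation can be slow on large ontologies.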

Can I use OWL API to enforce specific subject-predicate-object relationships?

I am working on a project using RDF data and I am thinking about implementing a data cleanup method which will run against an RDF triples dataset and flag triples which do not match a certain pattern, based on a custom ontology.
For example, I would like to enforce that any instance of class http://myontology/A must denote an instance of http://myontology/B using the predicate http://myontology/denotes. Any instance of class A which does not denote an instance of class B should be flagged.
I am wondering if a tool such as the OWLReasoner from OWL-API would have the capability to accomplish something like this, if I designed a custom axiom for the Reasoner. I have reviewed the documentation here: http://owlcs.github.io/owlapi/apidocs_4/org/semanticweb/owlapi/reasoner/OWLReasoner.html
It seems to me that the methods available with the Reasoner might not be up for the purpose which I would like to use them for, but I'm wondering if anyone has experience using OWL-API for this purpose, or knows another tool which could do the trick.
Generally speaking, OWL reasoning is not well suited to finding information that is missing from the input and flagging it: for example, if you create a class that asserts that an instance of A has exactly one denotes relation to an instance of B, and you have an instance of A that does not, then under the Open World Assumption the reasoner will just assume that the missing statement is not available, not that you are in violation.
It would be possible to detect incorrect denote uses - if, instead of relating to an instance of B, the relation was to an instance of a class disjoint with B. But this seems a different use case than the one you're after.
You can implement code with the OWL API to do this check, but it likely wouldn't benefit from the ability to reason, and given that you're working at the RDF level I'd think an API like Apache Jena might actually work better for you (you won't need to worry if your input file is not OWL compliant, for example).
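A closed-world check of this kind is easy to write yourself, whatever RDF API you settle on. Here is a minimal sketch in plain Java over raw triples (with rdf:type abbreviated to "a" and all names shortened for illustration), flagging every instance of A that has no denotes link to a known instance of B:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DenotesCheck {
    static final class Triple {
        final String s, p, o;                      // subject, predicate, object
        Triple(String s, String p, String o) { this.s = s; this.p = p; this.o = o; }
    }

    static List<String> flagViolations(List<Triple> data) {
        Set<String> instancesOfA = new HashSet<>();
        Set<String> instancesOfB = new HashSet<>();
        Map<String, Set<String>> denotes = new HashMap<>();
        for (Triple t : data) {
            if (t.p.equals("a") && t.o.equals("A")) instancesOfA.add(t.s);
            if (t.p.equals("a") && t.o.equals("B")) instancesOfB.add(t.s);
            if (t.p.equals("denotes"))
                denotes.computeIfAbsent(t.s, k -> new HashSet<>()).add(t.o);
        }
        List<String> flagged = new ArrayList<>();
        for (String x : instancesOfA) {
            // Closed world: no denotes link to a known instance of B is a violation.
            boolean ok = denotes.getOrDefault(x, Collections.emptySet()).stream()
                                .anyMatch(instancesOfB::contains);
            if (!ok) flagged.add(x);
        }
        Collections.sort(flagged);
        return flagged;
    }

    public static void main(String[] args) {
        List<Triple> data = List.of(
                new Triple("a1", "a", "A"), new Triple("b1", "a", "B"),
                new Triple("a1", "denotes", "b1"),
                new Triple("a2", "a", "A"));       // a2 has no denotes link, so it is flagged
        System.out.println(flagViolations(data));
    }
}
```

With Apache Jena, the same loop would run over the statements of a Model (via Model.listStatements()) instead of a hand-built list of triples.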

Writing action theories in OWL + SWRL: possible?

In order to solve symbolic planning problems we write action theories.
Popular languages for writing action theories are STRIPS and ADL.
For describing an action we need to provide:
preconditions
effects
For example, in a robot domain, we have Robot and Object classes, and the closeTo and holding properties.
The action pickUp(?robot, ?object) is possible if closeTo(?robot, ?object) holds, and also forall ?o in Object . not holding(?robot, ?o).
How would one represent preconditions with OWL and/or SWRL?
How about action effects?
The KnowRob project suggests that it is possible to use Qualitative Process Theory (QPT) in combination with OWL for implementing actions. A possible precondition would be [1]:
rdf_triple(knowrob:'thermicallyConnectedTo', Dough, HeatSource),
But it was never demonstrated that this Prolog spaghetti code actually works. OWL is not a real programming language; it is more like a markup language, similar to JSON. Formalizing processes in a declarative form is an academic exercise, but it has little relevance for game development or real robot programming.
