Meaning of the parameters in OWLAPI's parse method (building an AST) - owl-api

I was looking for a good parser for OWL ontologies, initially in Python since I have very limited experience with Java. As far as I can tell, OWLAPI is the best choice, and, well, it is Java.
So I am trying to parse an .owl file and build the AST from it. I downloaded owlapi and I'm having problems with it, since it doesn't seem to have much in terms of documentation.
My very basic question is: what do the first two parameters of, say, OWLXMLParser's parse method stand for:
- document source: Is this the .owl file read as a stream (in getDocument below)?
- root ontology: What goes here? Initially I thought this is where the .owl file goes, but that seems not to be the case.
Does the parse method construct the AST, or am I barking up the wrong tree?
I'm pasting some of my attempts below - there are more of them, but I'm trying to be less verbose :)
[The error I'm getting is this - if anyone cares - although the question is more fundamental:
java.lang.NullPointerException: stream cannot be null
at org.semanticweb.owlapi.util.OWLAPIPreconditions.checkNotNull(OWLAPIPreconditions.java:102)
at org.semanticweb.owlapi.io.StreamDocumentSourceBase.<init>(StreamDocumentSourceBase.java:107)
at org.semanticweb.owlapi.io.StreamDocumentSource.<init>(StreamDocumentSource.java:35)
at testontology.testparsers.OntologyParser.getDocument(App.java:72)
at testontology.testparsers.OntologyParser.test(App.java:77)
at testontology.testparsers.App.main(App.java:58)]
Thanks a lot for your help.
public class App {
    public static void main(String[] args) {
        OntologyParser o = new OntologyParser();
        try {
            OWLDocumentFormat p = o.test();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
class OntologyParser {
    private OWLOntology rootOntology;
    private OWLOntologyManager manager;

    private OWLOntologyDocumentSource getDocument() {
        System.out.println("access resource stream");
        // Note: getResourceAsStream() resolves classpath resources, not filesystem
        // paths, so an absolute path like this returns null - which is what triggers
        // the "stream cannot be null" NullPointerException above. A FileDocumentSource
        // (as used in test() below) is the usual way to read from the filesystem.
        return new StreamDocumentSource(getClass().getResourceAsStream(
                "/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
    }

    public OWLDocumentFormat test() throws Exception {
        OWLOntologyDocumentSource documentSource = getDocument();
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLOntology rootOntology = manager.loadOntologyFromOntologyDocument(
                new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl")));
        OWLDocumentFormat doc = parseOnto(documentSource, rootOntology);
        return doc;
    }

    private OWLDocumentFormat parseOnto(
            @Nonnull OWLOntologyDocumentSource initialDocumentSource,
            @Nonnull OWLOntology initialOntology) throws IOException {
        OWLParser initialParser = new OWLXMLParser();
        OWLOntologyLoaderConfiguration config = new OntologyConfigurator().buildLoaderConfiguration();
        //// option 1:
        //final OWLOntologyManager managerr = new OWLOntologyManagerImpl(new OWLDataFactoryImpl(), new ReentrantReadWriteLock(true));
        //final IRI iri = IRI.create("testasdf");
        //final IRI version = IRI.create("0.0.1");
        //OWLOntologyDocumentSource source = new FileDocumentSource(new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        //final OWLOntology onto = new OWLOntologyImpl(managerr, new OWLOntologyID(iri, version));
        //return initialParser.parse(initialDocumentSource, onto, config);
        ////
        // option 2:
        return initialParser.parse(initialDocumentSource, initialOntology, config);
    }
}

The owlapi parsers are designed for use by the OWLOntologyManager implementations, which are managed (unless you're writing a new owlapi implementation) by the OWLManager singleton. There are plenty of examples of how to use that class in the wiki pages.
All parsers included in the owlapi distribution are meant to create OWLAxiom instances in an OWLOntology, not to create an AST of an owl file - the syntactic shape of the files depends on the specific format, on the preferences of the writer, and so on, while the purpose of the api is to provide ontology manipulation functionality to the caller. The details of the output format can be tweaked, but exposing them to the caller is not part of the main design.
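As a minimal sketch of the intended usage (assuming the file path from the question points to a valid ontology; the manager picks the right parser automatically):

import java.io.File;
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.OWLAxiom;
import org.semanticweb.owlapi.model.OWLOntology;
import org.semanticweb.owlapi.model.OWLOntologyManager;

public class LoadOntologyExample {
    public static void main(String[] args) throws Exception {
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        // loadOntologyFromOntologyDocument creates the ontology and runs the parser;
        // there is no need to instantiate OWLXMLParser or call parse() yourself.
        OWLOntology ontology = manager.loadOntologyFromOntologyDocument(
                new File("/home/mmarines/Desktop/WORK/mooly/smart-cities/data/test.owl"));
        // What you get back is a set of axioms, not a syntax tree.
        for (OWLAxiom axiom : ontology.getAxioms()) {
            System.out.println(axiom);
        }
    }
}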

Related

Flink integration test(s) with Testcontainers

I have a simple Apache Flink job that looks very much like this:
public final class Application {
    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var executionConfig = env.getConfig();
        final var params = ParameterTool.fromArgs(args);
        executionConfig.setGlobalJobParameters(params);
        executionConfig.setParallelism(params.getInt("application.parallelism"));
        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                // .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
                .uid("custom.discard-sink");
        env.execute(params.get("application.job-name"));
    }
}
Problem is that I would like to provide an integration test for the entire application — sort of like an end-to-end (set of) test(s) for the entire job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {

    private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");

    @Container
    private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);

    @ClassRule // How come this works in JUnit Jupiter? :/
    public static MiniClusterResource cluster;

    @BeforeAll
    static void init() {
        KAFKA_CONTAINER.start();
        // ...probably need to wait and create the topic(s) as well
        final var config = new MiniClusterResourceConfiguration.Builder()
                .setNumberSlotsPerTaskManager(2)
                .setNumberTaskManagers(1)
                .build();
        cluster = new MiniClusterResource(config);
    }

    @Test
    void main() throws Exception {
        // new Application(); // ...what's next?
    }
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
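A minimal sketch of that refactoring, based on the job above (the method name runJob is illustrative, and it assumes the flatMap/filter stages keep the CustomKafkaMessage type):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.Source;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public final class Application {
    public static void main(final String... args) throws Exception {
        final var params = ParameterTool.fromArgs(args);
        final var source = KafkaSource.<CustomKafkaMessage>builder()
                // ...same Kafka configuration as above...
                .build();
        runJob(source, new CustomDiscardSink(), params);
    }

    // The whole pipeline lives here, so a test can inject a bounded source
    // and an assertion-friendly sink without rebuilding the job.
    static void runJob(final Source<CustomKafkaMessage, ?, ?> source,
                       final SinkFunction<CustomKafkaMessage> sink,
                       final ParameterTool params) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(sink)
                .uid("custom.discard-sink");
        env.execute(params.get("application.job-name"));
    }
}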
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use kafka or test containers, but hopefully they'll still be helpful.
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
In your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.
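A rough sketch of that check (the topic name "output-topic" and the timeout are illustrative, not taken from the original job):

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.awaitility.Awaitility;
import static org.junit.jupiter.api.Assertions.assertFalse;

// Inside the test class, after the application has been triggered:
void assertMessagesPublished() {
    final var props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-consumer");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    final var received = new ArrayList<String>();
    try (var consumer = new KafkaConsumer<String, String>(props)) {
        consumer.subscribe(List.of("output-topic"));
        // pollInSameThread() keeps the KafkaConsumer on one thread,
        // since it is not safe for concurrent access.
        Awaitility.await().pollInSameThread().atMost(Duration.ofSeconds(30)).untilAsserted(() -> {
            consumer.poll(Duration.ofMillis(200)).forEach(r -> received.add(r.value()));
            assertFalse(received.isEmpty());
        });
    }
}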

How do we obtain the Google App Engine safe-URL key in Java for a different appId and namespace?

When I just need to do it for my application without namespace I can use the following code:
final Key myKey = KeyFactory.createKey(kind, id);
final String safeUrlKey = KeyFactory.keyToString(myKey);
Unfortunately, when I need to do it for a different appId or namespace, I can't find any way to do it in Java.
In python for example I can use the following code:
new_key = db.Key.from_path(entity, id, _app=application_id, namespace=namespace)
return str(new_key)
But in Java this doesn't seem to be available.
Any idea on how I can do this?
The App Engine SDK does indeed try to prohibit this, as evidenced by the lack of public classes/methods that can handle app IDs and namespaces. Even in Python this is discouraged by the underscore prefix on the _app keyword argument. This is probably because App Engine apps are meant to be well-contained within their project.
It is possible to use reflection to work around these barriers, but only on the Standard Java 8 runtime (which is currently in beta). The Standard Java 7 runtime prohibits reflecting non-accessible methods. (If you're using App Engine Flex I suspect you'll be ok too, although I haven't tested that.)
If you are already using Java 8 or willing to switch, I was able to create keys for arbitrary app IDs/namespaces with the following:
Key createKey(String appId, String namespace, String kind, long id) {
    try {
        Class<?> appNsClazz = Class.forName("com.google.appengine.api.datastore.AppIdNamespace");
        Constructor<?> constructor = appNsClazz.getConstructor(String.class, String.class);
        constructor.setAccessible(true);
        Constructor<Key> keyFactory = Key.class.getDeclaredConstructor(String.class,
                Key.class, long.class, String.class, appNsClazz);
        keyFactory.setAccessible(true);
        Object appNs = constructor.newInstance(appId, namespace);
        return keyFactory.newInstance(kind, /* parent key */ null, id, /* name */ null, appNs);
    } catch (ClassNotFoundException | NoSuchMethodException |
             InvocationTargetException | InstantiationException |
             IllegalAccessException e) {
        throw new RuntimeException(e);
    }
}
If you will be running this code often it would be good to cache the Constructor instances, and the appNs instance if possible to avoid the performance overhead of reflection.
Please do note that this code will not work on the Standard Java 7 runtime.
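A possible shape for that caching (a sketch reusing the same reflective lookups as the method above, moved into static fields):

private static final Constructor<?> APP_NS_CONSTRUCTOR;
private static final Constructor<Key> KEY_CONSTRUCTOR;

static {
    try {
        // Perform the reflective lookups once, at class-load time.
        Class<?> appNsClazz = Class.forName("com.google.appengine.api.datastore.AppIdNamespace");
        APP_NS_CONSTRUCTOR = appNsClazz.getConstructor(String.class, String.class);
        APP_NS_CONSTRUCTOR.setAccessible(true);
        KEY_CONSTRUCTOR = Key.class.getDeclaredConstructor(String.class,
                Key.class, long.class, String.class, appNsClazz);
        KEY_CONSTRUCTOR.setAccessible(true);
    } catch (ClassNotFoundException | NoSuchMethodException e) {
        throw new ExceptionInInitializerError(e);
    }
}

Key createKey(String appId, String namespace, String kind, long id) {
    try {
        return KEY_CONSTRUCTOR.newInstance(kind, /* parent key */ null, id, /* name */ null,
                APP_NS_CONSTRUCTOR.newInstance(appId, namespace));
    } catch (InvocationTargetException | InstantiationException | IllegalAccessException e) {
        throw new RuntimeException(e);
    }
}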
Finally I was able to make it work with the following code (looking at how KeyFactory was doing it internally):
public static String getSafeUrlFromId(final String kind, final Long id, final String applicationId, final String namespace) {
    final com.google.storage.onestore.v3.OnestoreEntity.Reference myMessage = new com.google.storage.onestore.v3.OnestoreEntity.Reference();
    final Element pathElement = new Element().setType(kind).setId(id);
    final Path path = myMessage.getMutablePath();
    path.addElement(pathElement);
    myMessage.setPath(path);
    if (namespace != null) {
        myMessage.setNameSpace(namespace);
    }
    myMessage.setApp(applicationId);
    final BaseEncoding encoder = BaseEncoding.base64Url();
    final String alphanumericKey = encoder.omitPadding().encode(myMessage.toByteArray());
    return alphanumericKey;
}

Passing serialized Externalizable object from standard JVM to CodenameOne

I'm trying to deserialize an object, which:
- was created and serialized in another standard JVM (server)
- implements the traditional Java Externalizable interface
- was passed over a network
public static void getData() {
    ConnectionRequest req = new ConnectionRequest() {
        @Override
        protected void readResponse(InputStream is) throws IOException {
            DataInputStream dis = new DataInputStream(is);
            Employee recovered = new Employee();
            recovered.internalize(1, dis);
        }
    };
    req.setUrl(BASEURL);
    req.setPost(false);
    NetworkManager.getInstance().addToQueueAndWait(req);
}
From the remote JVM I'm passing the object as a byte array or ByteArrayInputStream, and in CN1 I get an EOFException.
Is it possible to transfer objects this way, or should I use JSON?
I thought I wouldn't need JSON if I have Java on both sides...
Codename One's externalization interface isn't compatible with Java SE. Serialization and externalization rely on reflection and dynamic invocation, which aren't practical on all of Codename One's targets (even Android, where the binary is usually obfuscated).
You can pass an object, however you will need to use the Codename One API to do so. You can effectively take the JavaSE.jar file from the Codename One project and use the API there to write/read the object.
Other than that, your code to read the object is incorrect. You should use Util.readObject/writeObject. I suggest reading the great tutorial Steve Hannah wrote on the subject.
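A rough sketch of the reading side (assuming Employee implements Codename One's com.codename1.io.Externalizable and the writing side serialized it with the Codename One API under the same registered name, as described above):

public static void getData() {
    // Externalizable classes must be registered before they can be read back.
    Util.register("Employee", Employee.class);
    ConnectionRequest req = new ConnectionRequest() {
        @Override
        protected void readResponse(InputStream is) throws IOException {
            // Util.readObject replaces the manual internalize() call.
            Employee recovered = (Employee) Util.readObject(new DataInputStream(is));
            // ...use the recovered object...
        }
    };
    req.setUrl(BASEURL);
    req.setPost(false);
    NetworkManager.getInstance().addToQueueAndWait(req);
}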

Enterprise Library 5: Creating instances of Enterprise Library objects

I am using Enterprise Library 5.0 in my WinForms application.
1. Regarding creating instances of Enterprise Library objects
What is the best way to resolve the reference for logging/exception objects? Our solution contains several applications, with the following projects:
CommonLib (Class Lib)
CustomerApp (winform app)
CustWinService (win service proj)
ClassLib2 (class Lib)
I have implemented logging/exceptions in the CommonLib project by creating an AppLog class as below:
public class AppLog
{
    public static LogWriter defaultWriter = EnterpriseLibraryContainer.Current.GetInstance<LogWriter>();
    public static ExceptionManager exManager = EnterpriseLibraryContainer.Current.GetInstance<ExceptionManager>();

    public AppLog()
    {
    }

    public static void WriteLog(string LogMessage, string LogCategories)
    {
        // Create a LogEntry and populate the individual properties.
        if (defaultWriter.IsLoggingEnabled())
        {
            string[] Logcat = LogCategories.Split(",".ToCharArray());
            LogEntry entry2 = new LogEntry();
            entry2.Categories = Logcat;
            entry2.EventId = 9007;
            entry2.Message = LogMessage;
            entry2.Priority = 9;
            entry2.Title = "Logging Block Examples";
            defaultWriter.Write(entry2);
        }
    }
}
And then I used the AppLog class as below for logging and exception handling in the different projects:
try
{
    AppLog.WriteLog("This is Production Log Entry.", "ExceCategory");
    string strtest = string.Empty;
    strtest = strtest.Substring(1);
}
catch (Exception ex)
{
    bool rethrow = AppLog.exManager.HandleException(ex, "ExcePolicy");
}
So is this the correct way to use the Logging and Exception Handling blocks, or is there any way I can improve it?
2. Making the logging file name dynamic
In the Logging block, the fileName needs to be set in the app.config file. Is there a way I can assign the fileName value dynamically in code? I don't want to hard-code it in the config file, since the paths differ between the production and development environments.
Thanks
TShah
To keep your application loosely coupled and easier to test, I would recommend defining separate logging and exception handling interfaces, then having your AppLog class implement both. Your application can then perform logging and exception handling via those interfaces, with AppLog providing the implementation.
You can have a different file name set per environment using config transforms, which I believe you can use in a WinForms application by using SlowCheetah.

Serializer library for Silverlight

I'm developing a modular app using Prism in SL3; one of the modules is responsible for persisting the application settings in isolated storage (so that when you open the app next time, you continue where you were). It works perfectly, except that I don't like the way the dependencies are wired now.
I want to have a type-agnostic settings manager that has a generic store, to which I add custom data from each module, something like this:
AppSettings["OpenForEditEmployees"] = new List<EmployeeDTO>();
AppSettings["ActiveView"] = ViewsEnum.Report;
I have implemented this part, but serialising that dictionary to XML proved to be harder than I suspected. I was wondering if there is an easy way to serialise a Dictionary<string, object> into XML.
Since you are using a Dictionary, the regular XmlSerializer won't work; you can serialize using the DataContractSerializer instead.
These two static methods will handle all of your XML serialization/deserialization needs in Silverlight (and any .NET).
You will need a reference to System.Runtime.Serialization for the DataContractSerializer.
public static void SerializeXml<T>(T obj, Stream strm)
{
    DataContractSerializer ser = new DataContractSerializer(typeof(T));
    ser.WriteObject(strm, obj);
}

public static T DeserializeXml<T>(Stream xml)
{
    DataContractSerializer ser = new DataContractSerializer(typeof(T));
    return (T)ser.ReadObject(xml);
}
and if you would rather use JSON, you can add a reference to the System.ServiceModel.Web assembly and use this version instead.
public static void SerializeJson<T>(T obj, Stream strm)
{
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T));
    ser.WriteObject(strm, obj);
}

public static T DeserializeJson<T>(Stream json)
{
    DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(T));
    return (T)ser.ReadObject(json);
}
Have you looked at Json.NET?
http://json.codeplex.com/
It's not XML, but it does a great job with serialization, and it works great in Silverlight.
