I have run (in NetBeans 8.2) the following simple Java code in order to experiment with the SWRL language:
String base = "http://www.prova/testont.owl";
IRI ontologyIRI = IRI.create(base);
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
OWLOntology ontology = manager.createOntology(ontologyIRI);
OWLDataFactory factory = manager.getOWLDataFactory();
OWLClass adult = factory.getOWLClass(IRI.create(ontologyIRI + "#Adult"));
OWLClass person = factory.getOWLClass(IRI.create(ontologyIRI + "#Person"));
OWLDataProperty hasAge = factory.getOWLDataProperty(IRI.create(ontologyIRI + "#hasAge"));
OWLNamedIndividual john = factory.getOWLNamedIndividual(IRI.create(ontologyIRI + "#John"));
OWLNamedIndividual andrea = factory.getOWLNamedIndividual(IRI.create(ontologyIRI + "#Andrea"));
OWLClassAssertionAxiom classAssertion = factory.getOWLClassAssertionAxiom(person, john);
manager.addAxiom(ontology, classAssertion);
classAssertion = factory.getOWLClassAssertionAxiom(person, andrea);
manager.addAxiom(ontology, classAssertion);
OWLDatatype integerDatatype = factory.getOWLDatatype(OWL2Datatype.XSD_INTEGER.getIRI());
OWLLiteral literal = factory.getOWLLiteral("41", integerDatatype);
OWLAxiom ax = factory.getOWLDataPropertyAssertionAxiom(hasAge, andrea, literal);
manager.addAxiom(ontology, ax);
literal = factory.getOWLLiteral("15", integerDatatype);
ax = factory.getOWLDataPropertyAssertionAxiom(hasAge, john, literal);
manager.addAxiom(ontology, ax);
SWRLRuleEngine ruleEngine = SWRLAPIFactory.createSWRLRuleEngine(ontology);
ruleEngine.createSWRLRule("r1", "Person(?p)^hasAge(?p,?age)^swrlb:greaterThan(?age,17) -> Adult(?p)");
manager.saveOntology(ontology, IRI.create(((new File("FILE_PATH")).toURI())));
I used maven with the following dependency:
<dependency>
    <groupId>edu.stanford.swrl</groupId>
    <artifactId>swrlapi-drools-engine</artifactId>
    <version>1.1.4</version>
</dependency>
I'm getting the following error:
Exception in thread "main" org.swrlapi.parser.SWRLParseException: Invalid SWRL atom predicate 'Person'
at org.swrlapi.parser.SWRLParser.generateEndOfRuleException(SWRLParser.java:479)
at org.swrlapi.parser.SWRLParser.parseSWRLAtom(SWRLParser.java:210)
at org.swrlapi.parser.SWRLParser.parseSWRLRule(SWRLParser.java:106)
at org.swrlapi.factory.DefaultSWRLAPIOWLOntology.createSWRLRule(DefaultSWRLAPIOWLOntology.java:219)
at org.swrlapi.factory.DefaultSWRLAPIOWLOntology.createSWRLRule(DefaultSWRLAPIOWLOntology.java:213)
at org.swrlapi.factory.DefaultSWRLRuleAndQueryEngine.createSWRLRule(DefaultSWRLRuleAndQueryEngine.java:249)
at ilc.cnr.it.swrl4morphology.SimpleToSWRL.main(SimpleToSWRL.java:450)
However, if I persist the ontology to a file and then reload it, I don't get the error anymore. It seems that default prefixes are added during the first save, which sounds pretty strange to me.
Could you please help me understand what I'm doing wrong?
Thanks in advance,
Andrea
On save and parse, relative IRIs such as "Person" are turned into absolute IRIs, using the ontology IRI as base. That is why the rule only parses after the ontology has been saved and reloaded: the saved document carries a default prefix that the SWRL parser can use to resolve the short names.
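As a possible workaround (a sketch only, not verified against SWRLAPI 1.1.4 internals): declaring a default prefix on the ontology's document format before creating the rule may let the parser resolve the short names without the save/reload round trip. The exact classes depend on your OWL API version; this assumes an OWL API 4-style PrefixDocumentFormat.
// Sketch: register a default prefix matching the "#"-terminated base used for the entities above.
OWLDocumentFormat format = manager.getOntologyFormat(ontology);
if (format != null && format.isPrefixOWLDocumentFormat()) {
    format.asPrefixOWLDocumentFormat().setDefaultPrefix(base + "#");
}
SWRLRuleEngine ruleEngine = SWRLAPIFactory.createSWRLRuleEngine(ontology);
ruleEngine.createSWRLRule("r1", "Person(?p) ^ hasAge(?p, ?age) ^ swrlb:greaterThan(?age, 17) -> Adult(?p)");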
I want to combine the datasets within a single HDF5 file to form one dataset in a separate file, but I am struggling to set the dtype of the new dataset. I am getting the error AttributeError: 'Group' object has no attribute 'dtype' on the line with ds_0_dtype = h5f1[ds].dtype. The code below is based on some example code posted on Stack Overflow.
import sys
import h5py

with h5py.File('xxx_xxx_signals.hdf5','r') as h5f1 , \
     h5py.File('file2.h5','w') as h5f2 :
    for i, ds in enumerate(h5f1.keys()) :
        if i == 0:
            # first node sets the reference dtype and shape
            ds_0 = ds
            ds_0_dtype = h5f1[ds].dtype
            n_rows = h5f1[ds].shape[0]
            n_cols = h5f1[ds].shape[1]
        else:
            # remaining nodes must match the reference dtype and row count
            if h5f1[ds].dtype != ds_0_dtype :
                print(f'Dset 0:{ds_0}: dtype:{ds_0_dtype}')
                print(f'Dset {i}:{ds}: dtype:{h5f1[ds].dtype}')
                sys.exit('Error: incompatible dataset dtypes')
            if h5f1[ds].shape[0] != n_rows :
                print(f'Dset 0:{ds_0}: shape[0]:{n_rows}')
                print(f'Dset {i}:{ds}: shape[0]:{h5f1[ds].shape[0]}')
                sys.exit('Error: incompatible dataset shape')
            n_cols += h5f1[ds].shape[1]
        prev_ds = ds

    # combined dataset, with the columns of all source datasets side by side
    h5f2.create_dataset('ds_xxxx', dtype=ds_0_dtype, shape=(n_rows,n_cols), maxshape=(n_rows,None))
    first = 0
    for ds in h5f1.keys() :
        xfer_arr = h5f1[ds][:]
        last = first + xfer_arr.shape[1]
        h5f2['ds_xxxx'][:, first:last] = xfer_arr[:]
        first = last
Likely you have one or more Groups in addition to Datasets at the root level. h5f1.keys() returns all nodes, which can be Datasets or Groups. You need to add a test to skip over Groups; you do this with an isinstance() check. Something like this:
        else:
            if not isinstance(h5f1[ds], h5py.Dataset) :
                print(f'Node {i}:{ds} is not a dataset')
                sys.exit('Error: unexpected Group; only Datasets expected')
            if h5f1[ds].dtype != ds_0_dtype :
Once you know how to identify Groups, you can also modify the code to avoid copying them to the second file. However, that may not be your desired result. I have an extended SO post on using isinstance(); see this link:
Is there a way to get datasets in all groups at once in h5py?
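For reference, here is a minimal sketch of the skip-the-Groups variant. It is hypothetical and assumes the root-level Datasets all share the same dtype and row count; keep the compatibility checks from the question's code if that is not guaranteed.
import sys
import h5py

with h5py.File('xxx_xxx_signals.hdf5', 'r') as h5f1, \
     h5py.File('file2.h5', 'w') as h5f2:
    # keep only the Dataset nodes at the root level, skipping any Groups
    ds_names = [name for name in h5f1.keys() if isinstance(h5f1[name], h5py.Dataset)]
    if not ds_names:
        sys.exit('Error: no datasets found at the root level')

    ds_0_dtype = h5f1[ds_names[0]].dtype
    n_rows = h5f1[ds_names[0]].shape[0]
    n_cols = sum(h5f1[name].shape[1] for name in ds_names)

    h5f2.create_dataset('ds_xxxx', dtype=ds_0_dtype, shape=(n_rows, n_cols), maxshape=(n_rows, None))

    # copy each dataset's columns into the combined dataset, left to right
    first = 0
    for name in ds_names:
        arr = h5f1[name][:]
        h5f2['ds_xxxx'][:, first:first + arr.shape[1]] = arr
        first += arr.shape[1]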
I am getting an error when executing the code below. I have a Datastore entity which has a property of type Date; an example value stored for a particular row is 2016-01-03 (19:00:00.000) EDT.
The code I am executing filters the entity values on date greater than 2016-01-01. Any idea what is wrong with the code?
Error
ValueError: Unknown protobuf attr type <type 'datetime.date'>
Code
import pandas as pd
import numpy as np
from datetime import datetime
from google.cloud import datastore
from flask import Flask,Blueprint
app = Flask(__name__)
computation_cron= Blueprint('cron.stock_data_transformation', __name__)
@computation_cron.route('/cron/stock_data_transformation')
def cron():
    ds = datastore.Client(project="earningspredictor-173913")
    query = ds.query(kind='StockPrice')
    query.add_filter('date', '>', datetime.strptime("2016-01-01", '%Y-%m-%d').date())
    dataframe_data = []
    temp_dict = {}
    for q in query.fetch():
        temp_dict["stock_code"] = q["stock_code"]
        temp_dict["date"] = q["date"]
        temp_dict["ex_dividend"] = q["ex_dividend"]
        temp_dict["split_ratio"] = q["split_ratio"]
        temp_dict["adj_open"] = q["adj_open"]
        temp_dict["adj_high"] = q["adj_high"]
        temp_dict["adj_low"] = q["adj_low"]
        temp_dict["adj_close"] = q["adj_close"]
        temp_dict["adj_volume"] = q["adj_volume"]
        dataframe_data.append(temp_dict)
    sph = pd.DataFrame(data=dataframe_data, columns=temp_dict.keys())
    # print sph.to_string()
    query = ds.query(kind='EarningsSurprise')
    query.add_filter('act_rpt_date', '>', datetime.strptime("2016-01-01", '%Y-%m-%d').date())
    dataframe_data = []
    temp_dict = {}
    for q in query.fetch():
        temp_dict["stock_code"] = q["stock_code"]
        temp_dict["eps_amount_diff"] = q["eps_amount_diff"]
        temp_dict["eps_actual"] = q["eps_actual"]
        temp_dict["act_rpt_date"] = q["act_rpt_date"]
        temp_dict["act_rpt_code"] = q["act_rpt_code"]
        temp_dict["eps_percent_diff"] = q["eps_percent_diff"]
        dataframe_data.append(temp_dict)
    es = pd.DataFrame(data=dataframe_data, columns=temp_dict.keys())
You seem to be using the generic google-cloud-datastore client library, not the NDB Client Library.
For google-cloud-datastore all date and/or time properties have the same format. From Date and time:
JSON
    field name: timestampValue
    type: string (RFC 3339 formatted, with milliseconds, for instance 2013-05-14T00:01:00.234Z)
Protocol buffer
    field name: timestamp_value
    type: Timestamp
Sort order: Chronological
Notes: When stored in Cloud Datastore, precise only to microseconds; any additional precision is rounded down.
So when setting/comparing such properties, try to use strings formatted as specified (or integers for a protobuf Timestamp?), not objects from the datetime module directly (those work with the NDB library). The same might be true for queries as well.
Note: this is based on documentation only, I didn't use the generic library myself.
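To make that concrete, here is an untested sketch of what the filter line could look like; both variants are assumptions based on the documentation excerpt above, not verified behaviour of the library.
from datetime import datetime
from google.cloud import datastore

ds = datastore.Client(project="earningspredictor-173913")
query = ds.query(kind='StockPrice')

# Assumption: pass a full datetime instead of a datetime.date, since the
# ValueError complains specifically about datetime.date.
query.add_filter('date', '>', datetime(2016, 1, 1))

# Alternative, following the documentation excerpt: an RFC 3339 formatted string.
# query.add_filter('date', '>', '2016-01-01T00:00:00.000Z')

for entity in query.fetch():
    print(entity['stock_code'], entity['date'])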
Django migrations can detect when a field was renamed and ask you about it (instead of the old-fashioned delete and create).
Even if multiple fields are changed it seems to find the corresponding match. For example:
Before:
class DirectoryMirror(models.Model):
    directory_origin = models.ForeignKey(TapeDirectory)
    machine_target = models.ForeignKey(GenericMachine)
    directory_target = models.CharField(max_length=255, blank=False)
After (changing field names):
class DirectoryMirror(models.Model):
    source_directory = models.ForeignKey(TapeDirectory)
    target_machine = models.ForeignKey(GenericMachine)
    target_directory = models.CharField(max_length=255, blank=False)
Generating migration:
$ ./manage.py makemigrations
Did you rename directorymirror.directory_origin to directorymirror.source_directory (a ForeignKey)? [y/N] y
Did you rename directorymirror.directory_target to directorymirror.target_directory (a CharField)? [y/N] y
Did you rename directorymirror.machine_target to directorymirror.target_machine (a ForeignKey)? [y/N] y
How does it manage to detect the renaming and find the correct match?
Here is the algorithm: https://github.com/django/django/blob/bc77eb6d0858652e197c08c299efaeb06c51efee/django/db/migrations/autodetector.py#L757
Copying it here. In short: for every field key that appears only in the new state, it deconstructs the field definition and compares it against the deconstruction of every field that disappeared from the old state of the same model; when the definitions match, it asks the questioner whether this is a rename and, on confirmation, records a RenameField operation.
def generate_renamed_fields(self):
    """
    Works out renamed fields
    """
    self.renamed_fields = {}
    for app_label, model_name, field_name in sorted(self.new_field_keys - self.old_field_keys):
        old_model_name = self.renamed_models.get((app_label, model_name), model_name)
        old_model_state = self.from_state.models[app_label, old_model_name]
        field = self.new_apps.get_model(app_label, model_name)._meta.get_field(field_name)
        # Scan to see if this is actually a rename!
        field_dec = self.deep_deconstruct(field)
        for rem_app_label, rem_model_name, rem_field_name in sorted(self.old_field_keys - self.new_field_keys):
            if rem_app_label == app_label and rem_model_name == model_name:
                old_field_dec = self.deep_deconstruct(old_model_state.get_field_by_name(rem_field_name))
                if field.remote_field and field.remote_field.model and 'to' in old_field_dec[2]:
                    old_rel_to = old_field_dec[2]['to']
                    if old_rel_to in self.renamed_models_rel:
                        old_field_dec[2]['to'] = self.renamed_models_rel[old_rel_to]
                if old_field_dec == field_dec:
                    if self.questioner.ask_rename(model_name, rem_field_name, field_name, field):
                        self.add_operation(
                            app_label,
                            operations.RenameField(
                                model_name=model_name,
                                old_name=rem_field_name,
                                new_name=field_name,
                            )
                        )
                        self.old_field_keys.remove((rem_app_label, rem_model_name, rem_field_name))
                        self.old_field_keys.add((app_label, model_name, field_name))
                        self.renamed_fields[app_label, model_name, field_name] = rem_field_name
                        break
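For intuition, a small illustrative sketch (not the autodetector's exact code): a field's deconstruction does not include the attribute name it is assigned to on the model, so a pure rename leaves the deconstructed definition unchanged, which is exactly what the equality check above exploits.
from django.db import models

# hypothetical pair of fields: the "old" and the "new" definition after a rename
old = models.CharField(max_length=255, blank=False)
new = models.CharField(max_length=255, blank=False)

# deconstruct() returns (name, path, args, kwargs); comparing everything but the
# name is roughly what the deep_deconstruct-based equality amounts to here
print(old.deconstruct()[1:] == new.deconstruct()[1:])  # True -> rename candidate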
I need to convert the WordNet database files (noun.shape, noun.state, verb.cognition, etc.) from their custom extensions to .txt in order to more easily extract the nouns, verbs, adjectives and adverbs in their respective categories.
In other words, in "DATABASE FILES ONLY" you'll find the files I'm looking for; unfortunately they have a .STATE or .SHAPE extension. They are readable in Notepad, but I need a list of all the items in those files without their definitions in parentheses.
If you're using WordNet simply as a dictionary, you can try Open Multilingual WordNet, see http://compling.hss.ntu.edu.sg/omw/
import os, codecs
from nltk.corpus import wordnet as wn

# Read Open Multi WN's .tab file
def readWNfile(wnfile, option="ss"):
    reader = codecs.open(wnfile, "r", "utf8").readlines()
    wn = {}
    for l in reader:
        if l[0] == "#": continue
        if option=="ss":
            k = l.split("\t")[0] #ss as key
            v = l.split("\t")[2][:-1] #word
        else:
            v = l.split("\t")[0] #ss as value
            k = l.split("\t")[2][:-1] #word as key
        try:
            temp = wn[k]
            wn[k] = temp + ";" + v
        except KeyError:
            wn[k] = v
    return wn

if not os.path.exists('msa/wn-data-zsm.tab'):
    os.system('wget http://compling.hss.ntu.edu.sg/omw/wns/zsm.zip')
    os.system('unzip zsm.zip')

msa_wn = readWNfile('msa/wn-data-zsm.tab')
eng_wn_keys = {(str(i.offset).zfill(8) + '-'+i.pos).decode('utf8'):i for i in wn.all_synsets()}
for i in set(eng_wn_keys).intersection(msa_wn.keys()):
    print eng_wn_keys[i], msa_wn[i]
Meanwhile, hold on for a while, because the NLTK developers are going to put the Open Multilingual Wordnet API together soon; see https://github.com/nltk/nltk/blob/develop/nltk/corpus/reader/wordnet.py from line 1048.
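If the goal is only the plain word lists from files such as noun.shape without the parenthesised glosses, a minimal sketch using the NLTK 3 WordNet reader (an assumption: it requires the NLTK WordNet corpus to be installed) could look like this; 'noun.shape' is the lexicographer-file name the reader exposes via lexname():
from nltk.corpus import wordnet as wn

# collect every lemma whose synset belongs to the 'noun.shape' lexicographer file
shape_words = sorted({lemma.name()
                      for synset in wn.all_synsets('n')
                      if synset.lexname() == 'noun.shape'
                      for lemma in synset.lemmas()})
print(shape_words[:20])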
I'm trying to synchronize OpenLDAP and Active Directory. To do so I'm using a program called LSC-Project, which is designed for this sort of task.
I have configured the program as best I can, but I can't find a way to shake off the following error:
javax.naming.NameNotFoundException: [LDAP: error code 32 - 0000208D: NameErr: DSID-031001CD,
problem 2001 (NO_OBJECT), data 0, best match of: 'DC=domname,DC=com'
]; remaining name 'uid=user1,ou=Users'

May 09 15:19:25 - ERROR - Error while synchronizing ID uid=user1,ou=Users:
java.lang.Exception: Technical problem while applying modifications to directory
dn: uid=user1,ou=Users,dc=domname,dc=com
changetype: add
userPassword: 3+kU2th/WMo/v553A24a3SBw2kU=
objectClass: uid
This is the configuration file that the program runs on:
###############################
# Destination LDAP directory  #
###############################
dst.java.naming.provider.url = ldap://192.168.1.3:389/dc=Windows,dc=com
dst.java.naming.security.authentication = simple
dst.java.naming.security.principal = cn=Administrator,cn=Users,dc=Windows,dc=com
dst.java.naming.security.credentials = 11111
dst.java.naming.referral = ignore
dst.java.naming.ldap.derefAliases = never
dst.java.naming.factory.initial = com.sun.jndi.ldap.LdapCtxFactory
dst.java.naming.ldap.version = 3
dst.java.naming.ldap.pageSize = 1000
#########################
# Source LDAP directory
#########################
src.java.naming.provider.url = ldap://192.168.1.2:389/dc=Linux,dc=com
src.java.naming.security.authentication = simple
src.java.naming.security.principal = uid=root,ou=users,dc=Linux,dc=com
src.java.naming.security.credentials = 11111
src.java.naming.referral = ignore
src.java.naming.ldap.derefAliases = never
src.java.naming.factory.initial = com.sun.jndi.ldap.LdapCtxFactory
src.java.naming.ldap.version = 3
#######################
# Tasks configuration
#######################
lsc.tasks = Administrator
lsc.tasks.Administrator.srcService = org.lsc.jndi.SimpleJndiSrcService
lsc.tasks.Administrator.srcService.baseDn = ou=users
lsc.tasks.Administrator.srcService.filterAll = (&(objectClass=person))
lsc.tasks.Administrator.srcService.pivotAttrs = uid
lsc.tasks.Administrator.srcService.filterId = (&(objectClass=person)(uid={uid}))
lsc.tasks.Administrator.srcService.attrs = description uid userPassword
lsc.tasks.Administrator.dstService = org.lsc.jndi.SimpleJndiDstService
lsc.tasks.Administrator.dstService.baseDn = cn=Users
lsc.tasks.Administrator.dstService.filterAll = (&(cn=*)(objectClass=organizationalPerson))
lsc.tasks.Administrator.dstService.pivotAttrs = cn, top, person, user, organizationalPerson
lsc.tasks.Administrator.dstService.filterId = (&(objectClass=user) (sAMAccountName={cn}))
lsc.tasks.Administrator.dstService.attrs = description cn userPassword objectClass
lsc.tasks.Administrator.bean = org.lsc.beans.SimpleBean
lsc.tasks.Administrator.dn = "uid=" + srcBean.getAttributeValueById("uid") + ",ou=Users"
dn.real_root = dc=Domname,dc=com
#############################
# Syncoptions configuration
#############################
lsc.syncoptions.Administrator = org.lsc.beans.syncoptions.PropertiesBasedSyncOptions
lsc.syncoptions.Administrator.default.action = M
lsc.syncoptions.Administrator.objectClass.action = M
lsc.syncoptions.Administrator.objectClass.force_value = srcBean.getAttributeValueById("cn").toUpperCase()
lsc.syncoptions.Administrator.userPassword.default_value = SecurityUtils.hash(SecurityUtils.HASH_SHA1, "defaultPassword")
lsc.syncoptions.Administrator.default.delimiter=;
lsc.syncoptions.Administrator.objectClass.force_value = "top";"user";"person";"organizationalPerson"
lsc.syncoptions.Administrator.userPrincipalName.force_value = srcBean.getAttributeValueById("uid") + "#Domname.com"
lsc.syncoptions.Administrator.userAccountControl.create_value = AD.userAccountControlSet ( "0", [AD.UAC_SET_NORMAL_ACCOUNT])
I suspect that it has something to do with the baseDn of the task configuration, in the part concerning the source.
The OSs are Ubuntu 10.04 and Windows Server 2003.
Someone suggested making a manual sync between them, but I have not found any guides for doing so, and this program is pretty much the only one that claims to do this kind of job for free.
The baseDn should be the distinguished name of the base object of the search, for example, ou=users,dc=domname,dc=com.
See also:
LDAP: Mastering Search Filters
LDAP: Search best practices
LDAP: Programming practices
The main reason for a NameNotFoundException is that the object you're searching for doesn't exist, or that the container in which you are searching is not correct.
In the case of Spring LDAP, we used to get this error when we specified the baseDn both in the context file (the LdapContextSource bean) and in the createUser code that builds the userDn. We need not specify the dc again in buildUserDn():
protected Name buildUserDn(String userName) {
    DistinguishedName dn = new DistinguishedName();
    // only cn is required as the base dn is already specified in context file
    dn.add("cn", userName);
    return dn;
}
In Active Directory the Users catalog is a container class, not an organizationalUnit, so you should use: cn=Users,dc=domname,dc=com
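Putting the two answers together, a hypothetical sketch of the affected lines in the task configuration might look like the following; the DN components are illustrative only and must match your actual directories.
# full DNs as the base objects for the searches (per the first answer)
lsc.tasks.Administrator.srcService.baseDn = ou=users,dc=Linux,dc=com
lsc.tasks.Administrator.dstService.baseDn = cn=Users,dc=Windows,dc=com
# create new entries under the cn=Users container, not an ou=Users OU (per the last answer)
lsc.tasks.Administrator.dn = "uid=" + srcBean.getAttributeValueById("uid") + ",cn=Users"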