Relation IDs mismatch - Mapping OWL to Oracle DB with Ontop

As part of my little app I am trying to map data between my ontology and an Oracle DB with Ontop, but my first mapping is not accepted by the reasoner and it is not clear why.
For my first attempt I used the following target:
:KIS/P_PVPAT_PATIENT/{PPVPAT_PATNR} a :Patient .
and the following source:
select * from P_PVPAT_PATIENT
Here KIS is the schema, P_PVPAT_PATIENT the table, and PPVPAT_PATNR the key. The mapping fails with the following error:
Caused by: it.unibz.inf.ontop.exception.InvalidMappingSourceQueriesException:
Error: Relation IDs mismatch: P_PVPAT_PATIENT v "KIS"."P_PVPAT_PATIENT" P_PVPAT_PATIENT
Problem location: source query of triplesMap
[id: MAP_PATIENT
target atoms: triple(s,p,o) with
s/RDF(http://www.semanticweb.org/grossmann/ontologies/kis-ontology#KIS/P_PVPAT_PATIENT/{}(TmpToVARCHAR2(PPVPAT_PATNR)),IRI), p/<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>, o/<http://www.semanticweb.org/grossmann/ontologies/kis-ontology#Patient>
source query: select * from P_PVPAT_PATIENT]

As the error says, my source query was wrong because I forgot to include the schema in my SQL.
The correct SQL is:
select * from kis.P_PVPAT_PATIENT
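Putting the two together, the working triples map is simply the original target with the schema-qualified source query (restating the lines above):
:KIS/P_PVPAT_PATIENT/{PPVPAT_PATNR} a :Patient .
select * from kis.P_PVPAT_PATIENT
Since the connecting user apparently does not default to the KIS schema, Ontop cannot resolve the unqualified table name against the database metadata, which is what the "Relation IDs mismatch" message reports.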

Related

R : problem with the dplyr::tbl() function due to restricted permission

I work with large databases that need to be stored on a server.
So, to work with them in RStudio, I have to open a connection to my Microsoft SQL Server with the dbConnect() function:
conn <- dbConnect(odbc(),"myconnection",uid="***",pwd="***",schema="dbo",access="readonly")
and in order to use dplyr, I have to create data references with the tbl() function:
data <- tbl(conn, "data")
But one of the remote data frames contains a column that I can't read because I don't have access to it, although I can read everything else.
The SQL query behind the tbl() function is:
SELECT * FROM data
and this is my problem.
Even when I try to select a specific column it doesn't work (see below), so I can't create my references and I can't work with the data.
select(tbl(conn, "data"), "columnX")
which translates to
SELECT columnX FROM data
I think it is the tbl() function and its "SELECT *" call that is blocking me.
Do you know what I can do? Are there similar functions that could resolve my problem?
If you know the columns that you have access to, then one option is to bypass the default SELECT * FROM ... query with your own SQL.
A remote table is defined by two components:
The database connection
The query to the database
When you connect with the default approach, tbl(conn, 'data'), it defaults to the query SELECT * FROM data.
But here is another approach:
custom_query <- 'SELECT columnX FROM data'
remote_table <- tbl(conn, dbplyr::sql(custom_query))
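Any dplyr verbs you then apply to remote_table are layered on top of this custom query as a subquery, so the restricted column is never requested. Roughly, for a hypothetical filter on columnX, dbplyr would send something like the following (the subquery alias is chosen by dbplyr and will differ):
SELECT *
FROM (SELECT columnX FROM data) q01
WHERE (columnX IS NOT NULL)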

MSSQL Issue Hibernate Criteria

I am using Java/Hibernate with two databases (PostgreSQL and MSSQL).
SQL Server 2012 with dialect:
hibernate.dialect = org.hibernate.dialect.SQLServer2012Dialect
I have written a Criteria query like this:
DetachedCriteria detachedCriteria=DetachedCriteria.forClass(entityClazz);
ProjectionList proj = Projections.projectionList();
proj.add(Projections.max(COMPOSEDID_VERSION_ID));
proj.add(Projections.groupProperty(COMPOSEDID_ID));
detachedCriteria.setProjection(proj);
criteria = session.createCriteria(entityClazz)
.add( Subqueries.propertiesIn(new String[] { COMPOSEDID_VERSION_ID, COMPOSEDID_ID }, detachedCriteria));
This query worked fine with the PostgreSQL DB, but when I switch to MSSQL I get the following error:
Caused by: java.sql.SQLException: An expression of non-boolean type specified in a context where a condition is expected, near ','.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70)[201:org.hibernate.core:5.0.0.Final]
Can anyone help me out? What change should I make in the Criteria API to achieve my goal of getting the max-version record for each ID?
Instead of adding a subquery over the projections from the detached criteria, add the projection directly to the criteria, like this:
DetachedCriteria detachedCriteria=DetachedCriteria.forClass(entityClazz);
ProjectionList proj = Projections.projectionList();
proj.add(Projections.max(COMPOSEDID_VERSION_ID));
proj.add(Projections.groupProperty(COMPOSEDID_ID));
criteria = session.createCriteria(entityClazz)
.setProjection( proj );
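For context on the original error: Subqueries.propertiesIn() renders a multi-column (row-value) IN predicate, which PostgreSQL accepts but SQL Server cannot parse, which is where the "non-boolean type ... near ','" message comes from. Roughly, with entity, version_id and id as placeholders for the mapped table and columns:
-- generated by the propertiesIn subquery: valid on PostgreSQL, rejected by SQL Server
SELECT * FROM entity
WHERE (version_id, id) IN (SELECT max(version_id), id FROM entity GROUP BY id);
-- the rewritten criteria instead runs the grouped projection directly
SELECT max(version_id), id FROM entity GROUP BY id;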

Hive Serde errors with Array<Struct<>> org.json.JSONArray cannot be cast to [Ljava.lang.Object;

I have created a table:
add jar /../xlibs/hive-json-serde-0.2.jar;
CREATE EXTERNAL TABLE SerdeTest
(Unique_ID STRING
,MemberID STRING
,Data ARRAY<STRUCT<SerialNo:INT, VariableName:STRING, VariableValue:STRING>>
)
PARTITIONED BY (Pyear INT, Pmonth INT)
ROW FORMAT SERDE "org.apache.hadoop.hive.contrib.serde2.JsonSerde";
ALTER TABLE SerdeTest ADD
PARTITION (Pyear = 2014, Pmonth =03) LOCATION '../Test2';
The data in the file:
{"Unique_ID":"ABC6800650654751","MemberID":"KHH966375835","Data":[{"SerialNo":1,"VariableName":"Var1","VariableValue":"A_49"},{"SerialNo":2,"VariableName":"Var2","VariableValue":"B_89"},{""SerialNo":3,"VariableName":"Var3","VariableValue":"A_99"}]}
The SELECT query that I am using:
select Data[0].SerialNo from SerdeTest where Unique_ID = 'ABC6800650654751';
However, when I run this query I get the following error:
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row [Error getting row data with exception java.lang.ClassCastException: org.json.JSONArray cannot be cast to [Ljava.lang.Object;
at org.apache.hadoop.hive.serde2.objectinspector.StandardListObjectInspector.getList(StandardListObjectInspector.java:98)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:330)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:386)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:237)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:223)
at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:539)
at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:157)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
at org.apache.hadoop.mapred.Child$4.run(Child.java:270)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.mapred.Child.main(Child.java:264)
]
Can anyone please suggest what I am doing wrong?
A few suggestions:
Make sure that all the Hive packages and hive-json-serde-0.2.jar have execute permission for the hadoop user.
Hive creates a derby.log file and a metastore_db directory in the Hive directory; the user invoking the Hive query must be allowed to create files and directories there.
The LOCATION for the data should end with a /, e.g. LOCATION '../Test2/';
In short, the working JAR is json-serde-1.3-jar-with-dependencies.jar, which can be found here. This one works with STRUCT and can even ignore some malformed JSON. When creating the table, include the following:
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ("ignore.malformed.json" = "true")
LOCATION ...
If needed, it is possible to recompile it from here or here. I tried the first repository and it compiles fine for me after adding the necessary libs. The repository has also been updated recently.
Check for more details here.
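Putting the pieces together, a revised table definition using the openx SerDe might look like the sketch below; the STRUCT field types are assumptions based on the sample JSON above, and the jar path is just wherever you place the new jar:
-- jar path is an assumption: point it at wherever json-serde-1.3-jar-with-dependencies.jar lives
add jar /../xlibs/json-serde-1.3-jar-with-dependencies.jar;
CREATE EXTERNAL TABLE SerdeTest
(Unique_ID STRING
,MemberID STRING
,Data ARRAY<STRUCT<SerialNo:INT, VariableName:STRING, VariableValue:STRING>>
)
PARTITIONED BY (Pyear INT, Pmonth INT)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ("ignore.malformed.json" = "true");
ALTER TABLE SerdeTest ADD
PARTITION (Pyear = 2014, Pmonth = 03) LOCATION '../Test2/';
After that, the original select Data[0].SerialNo query should work as written.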

'do_replace()' not working?

While trying ATK4, I've found a problem:
$this->api->db->dsql()->table('person')->set('id', 1)->set('name', 'Test user')->do_replace();
This is not working, so I looked a little deeper into the ATK4 source and found in /opt/ipism/www/atk4/lib/DB/dsql.php the lines
public $sql_templates=array(
'select'=>"select [options] [field] [from] [table] [join] [where] [group] [having] [order] [limit]",
'insert'=>"insert [options_insert] into [table_noalias] ([set_fields]) values ([set_values])",
'replace'=>"replace [options_replace] into [table_noalias] ([set_fields]) values ([set_values])",
'update'=>"update [table_noalias] set [set] [where]",
'delete'=>"delete from [table_noalias] [where]",
'truncate'=>'truncate table [table_noalias]',
'describe'=>'desc [table_noalias]',
);
After changing the 'replace' line to
'replace'=>"replace into [table_noalias] ([set_fields]) values ([set_values])",
it worked for me (removing [options_replace] and appending an 's' to set_value). I'm using the latest version from Git with a MySQL database connection.
But I'm not sure if I'm using do_replace() the wrong way?
ByE...
By the way: is there a way to send fixes without creating an account on GitHub or somewhere?
Edit: Here is the output if the options_replace isn't removed from the template:
replace [options_replace] into `person` (`id`,`name`) values ("1","John Doe") [:a_2, :a]
Application Error: Database Query Failed
Exception_DB, code: 0
Additional information:
pdo_error: SQLSTATE[42000]: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '[options_replace] into `person` (`id`,`name`) values ('1' at line 1
mode: replace
params: :a: 1 :a_2: John Doe
query: replace [options_replace] into `person` (`id`,`name`) values (:a,:a_2)
template: replace [options_replace] into [table_noalias] ([set_fields]) values ([set_values])
/opt/ipism/www/atk4/lib/DB/dsql.php:1519
Stack trace (File | Object Name | Stack Trace):
/opt/ipism/www/atk4/lib/BaseException.php:63 | Exception_DB | Exception_DB->collectBasicData(Null)
/opt/ipism/www/atk4/lib/AbstractObject.php:545 | Exception_DB | Exception_DB->__construct("Database Query Failed", Null)
/opt/ipism/www/atk4/lib/DB/dsql.php:1519 | sample_project_db_db_dsql_mysql | DB_dsql_mysql->exception("Database Query Failed")
/opt/ipism/www/atk4/lib/DB/dsql.php:1586 | sample_project_db_db_dsql_mysql | DB_dsql_mysql->execute()
/opt/ipism/www/atk4/lib/DB/dsql.php:1624 | sample_project_db_db_dsql_mysql | DB_dsql_mysql->replace()
/opt/ipism/www/page/test.php:40 | sample_project_db_db_dsql_mysql | DB_dsql_mysql->do_replace()
/opt/ipism/www/atk4/lib/AbstractObject.php:306 | sample_project_test | page_test->init()
/opt/ipism/www/atk4/lib/ApiFrontend.php:130 | sample_project | Frontend->add("page_test", "test", "Content")
/opt/ipism/www/atk4/lib/ApiWeb.php:428 | sample_project | Frontend->layout_Content()
/opt/ipism/www/atk4/lib/ApiFrontend.php:39 | sample_project | Frontend->addLayout("Content")
/opt/ipism/www/atk4/lib/ApiWeb.php:275 | sample_project | Frontend->initLayout()
/opt/ipism/www/index.php:15 | sample_project | Frontend->main()
Note: To hide this information from your users, add $config['logger']['web_output']=false to your config.php file. Refer to documentation on 'Logger' for alternative logging options
REPLACE is similar to INSERT by its nature, but instead of failing when the primary key is duplicated, it replaces the existing row.
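For illustration, this is the kind of statement the template should render (plain MySQL, using the table and columns from the question):
REPLACE INTO person (id, name) VALUES (1, 'Test user');
-- if a row with id = 1 already exists it is deleted and re-inserted;
-- otherwise this behaves like a plain INSERT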
Please add ->debug() to your line before do_replace and give me the output, which would help me understand why that parameter needs removing.
set_value seems to be a typo; I have changed and committed it into master: https://github.com/atk4/atk4/commit/24b20865b9e3345a8e7504dfb68b7ef96335009e
The best way to submit changes is by creating a pull request. The best way to report issues is currently through "Issues" on GitHub.

How to select all columns from multiple SQLite databases

I am doing queries over multiple databases in SQLite, and I'm having trouble using a .* in my queries. I have successfully used ATTACH DATABASE to reference both databases:
dbOne.execute("ATTACH DATABASE 'dbOne.sql' as db1");
dbOne.execute("ATTACH DATABASE 'dbTwo.sql' as db2");
This query here gives me a syntax error (syntax error near *):
dbOne.execute("SELECT db2.myTable.* FROM db2.myTable");
Can I do db2.myTable.*? Or do I have to select each individual column one at a time?
SELECT db2.myTable.columnA, db2.myTable.columnB, db2.myTable.columnC, etc.
Thanks!
In case you have not solved this already, this will work:
a) dbOne.execute("SELECT * FROM db2.myTable");
b) dbOne.execute("SELECT abc.* FROM db2.myTable abc");
Also, you don't have to specify the database name when the table name is unique across all attached databases.
b) is commonly used when you select from or join multiple tables, e.g.
SELECT abc.*, xyz.* FROM db2.myTable abc, db1.myOtherTable xyz
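For completeness, the same aliasing pattern with an explicit join condition; the id column here is hypothetical, just to show the shape:
SELECT abc.*, xyz.*
FROM db2.myTable abc
JOIN db1.myOtherTable xyz ON xyz.id = abc.id;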
