I am using Hibernate (Java) with two databases (PostgreSQL and MSSQL).
For SQL Server 2012, the dialect is:
hibernate.dialect = org.hibernate.dialect.SQLServer2012Dialect
I have written a Criteria query like this:
// Subquery: select max(version) grouped by id
DetachedCriteria detachedCriteria = DetachedCriteria.forClass(entityClazz);
ProjectionList proj = Projections.projectionList();
proj.add(Projections.max(COMPOSEDID_VERSION_ID));
proj.add(Projections.groupProperty(COMPOSEDID_ID));
detachedCriteria.setProjection(proj);
// Outer query: keep rows whose (versionId, id) tuple appears in the subquery result
criteria = session.createCriteria(entityClazz)
    .add(Subqueries.propertiesIn(new String[] { COMPOSEDID_VERSION_ID, COMPOSEDID_ID }, detachedCriteria));
This query worked fine with PostgreSQL, but when I switch to MSSQL I get the following error:
Caused by: java.sql.SQLException: An expression of non-boolean type specified in a context where a condition is expected, near ','.
at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:372)
at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2988)
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2421)
at net.sourceforge.jtds.jdbc.TdsCore.getMoreResults(TdsCore.java:671)
at net.sourceforge.jtds.jdbc.JtdsStatement.executeSQLQuery(JtdsStatement.java:505)
at net.sourceforge.jtds.jdbc.JtdsPreparedStatement.executeQuery(JtdsPreparedStatement.java:1029)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:70)[201:org.hibernate.core:5.0.0.Final]
Can anyone help me out? What change should I make in the Criteria API to achieve my goal of getting the max-version record for each id?
Instead of adding the detached criteria as a subquery, set the projection directly on the criteria. The multi-column IN predicate that Subqueries.propertiesIn generates, of the form where (versionId, id) in (select max(versionId), id ... group by id), is valid in PostgreSQL, but SQL Server does not support row-value constructors in an IN predicate, which is why it fails near the ','. Do it like this instead:
// No subquery needed: apply the max/group-by projection directly
ProjectionList proj = Projections.projectionList();
proj.add(Projections.max(COMPOSEDID_VERSION_ID));
proj.add(Projections.groupProperty(COMPOSEDID_ID));
criteria = session.createCriteria(entityClazz)
    .setProjection(proj);
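Note that with the projection applied, the query returns (max version, id) tuples rather than full entities. A minimal usage sketch (the row layout follows the projection order above):
List<Object[]> rows = criteria.list();
for (Object[] row : rows) {
    Object maxVersion = row[0]; // Projections.max(COMPOSEDID_VERSION_ID)
    Object id = row[1];         // Projections.groupProperty(COMPOSEDID_ID)
}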
Related
I have two data sources: an S3 bucket and a Postgres database table. Both sources have records in the same format, with a unique identifier of type uuid. Some of the records present in the S3 bucket are not part of the Postgres table, and the intent is to find those missing records. The data is bounded, as it is partitioned by day in the S3 bucket.
Reading the S3 source (I believe this reads the data in batch mode, since I am not providing the monitorContinuously() argument):
final FileSource<GenericRecord> source = FileSource.forRecordStreamFormat(
AvroParquetReaders.forGenericRecord(schema), path).build();
final DataStream<GenericRecord> avroStream = env.fromSource(
source, WatermarkStrategy.noWatermarks(), "s3-source");
DataStream<Row> s3Stream = avroStream.map(x -> Row.of(x.get("uuid").toString()))
.returns(Types.ROW_NAMED(new String[] {"uuid"}, Types.STRING));
Table s3table = tableEnv.fromDataStream(s3Stream);
tableEnv.createTemporaryView("s3table", s3table);
For reading from Postgres, I created a Postgres catalog:
PostgresCatalog postgresCatalog = (PostgresCatalog) JdbcCatalogUtils.createCatalog(
catalogName,
defaultDatabase,
username,
pwd,
baseUrl);
tableEnv.registerCatalog(postgresCatalog.getName(), postgresCatalog);
tableEnv.useCatalog(postgresCatalog.getName());
Table dbtable = tableEnv.sqlQuery("select cast(uuid as varchar) from `localschema.table`");
tableEnv.createTemporaryView("dbtable", dbtable);
My intention was to simply perform a left join and find the records missing from dbtable, something like this:
Table resultTable = tableEnv.sqlQuery("SELECT * FROM s3table LEFT JOIN dbtable ON s3table.uuid = dbtable.uuid where dbtable.uuid is null");
DataStream<Row> resultStream = tableEnv.toDataStream(resultTable);
resultStream.print();
However, it seems that the Postgres uuid column type is not supported just yet, because I get the following exception:
Caused by: java.lang.UnsupportedOperationException: Doesn't support Postgres type 'uuid' yet
at org.apache.flink.connector.jdbc.dialect.psql.PostgresTypeMapper.mapping(PostgresTypeMapper.java:171)
As an alternative, I tried to read the database table as follows:
TypeInformation<?>[] fieldTypes = new TypeInformation<?>[] {
BasicTypeInfo.of(String.class)
};
RowTypeInfo rowTypeInfo = new RowTypeInfo(fieldTypes);
JdbcInputFormat jdbcInputFormat = JdbcInputFormat.buildJdbcInputFormat()
.setDrivername("org.postgresql.Driver")
.setDBUrl("jdbc:postgresql://127.0.0.1:5432/localdatabase")
.setQuery("select cast(uuid as varchar) from localschema.table")
.setUsername("postgres")
.setPassword("postgres")
.setRowTypeInfo(rowTypeInfo)
.finish();
DataStream<Row> dbStream = env.createInput(jdbcInputFormat);
Table dbtable = tableEnv.fromDataStream(dbStream).as("uuid");
tableEnv.createTemporaryView("dbtable", dbtable);
Only this time, I get the following exception on performing the left join (as above):
Exception in thread "main" org.apache.flink.table.api.TableException: Table sink '*anonymous_datastream_sink$3*' doesn't support consuming update and delete changes which is produced by node Join(joinType=[LeftOuterJoin]
It works if I tweak the resultStream to publish a changelog stream:
Table resultTable = tableEnv.sqlQuery("SELECT * FROM s3table LEFT JOIN dbtable ON s3table.uuid = dbtable.uuid where dbtable.uuid is null");
DataStream<Row> resultStream = tableEnv.toChangelogStream(resultTable);
resultStream.print();
Sample output:
+I[9cc38226-bcce-47ce-befc-3576195a0933, null]
+I[a24bf933-1bb7-425f-b1a7-588fb175fa11, null]
+I[da6f57c8-3ad1-4df5-9636-c6b36df2695f, null]
+I[2f3845c1-6444-44b6-b1e8-c694eee63403, null]
-D[9cc38226-bcce-47ce-befc-3576195a0933, null]
-D[a24bf933-1bb7-425f-b1a7-588fb175fa11, null]
However, I do not want the sink to receive the inserts and deletes separately; I want just the final list of missing uuids. I guess this happens because my Postgres source, created with DataStream<Row> dbStream = env.createInput(jdbcInputFormat);, is a streaming source. If I try to execute the whole application in BATCH mode, I get the following exception:
org.apache.flink.table.api.ValidationException: Querying an unbounded table '*anonymous_datastream_source$2*' in batch mode is not allowed. The table source is unbounded.
Is it possible to have a bounded JDBC source? If not, how can I achieve this using the streaming API? (I am using Flink 1.15.2.)
I believe this kind of case would be a common use case that can be implemented with Flink, but clearly I'm missing something. Any leads would be appreciated.
For now, the common approach would be to sink the resultStream to a table, as sketched below. You can then schedule a job that truncates the table, executes the Apache Flink job, and finally reads the results from that table.
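A minimal sketch of such a sink, assuming a results table localschema.missing_uuids already exists in Postgres (the table name and connection settings here are illustrative, not from the original post):
// An upsert sink with a primary key can consume the update/delete
// changes produced by the left join, so only the final rows remain.
tableEnv.executeSql(
    "CREATE TEMPORARY TABLE missing_uuids_sink (" +
    "  uuid STRING, PRIMARY KEY (uuid) NOT ENFORCED" +
    ") WITH (" +
    "  'connector' = 'jdbc'," +
    "  'url' = 'jdbc:postgresql://127.0.0.1:5432/localdatabase'," +
    "  'table-name' = 'localschema.missing_uuids'," +
    "  'username' = 'postgres'," +
    "  'password' = 'postgres'" +
    ")");
tableEnv.executeSql(
    "INSERT INTO missing_uuids_sink " +
    "SELECT s3table.uuid FROM s3table " +
    "LEFT JOIN dbtable ON s3table.uuid = dbtable.uuid " +
    "WHERE dbtable.uuid IS NULL");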
I also noticed that Apache Flink Table Store 0.3.0 was just released, and materialized views are on the roadmap for 0.4.0. This might be a solution too. Very exciting, imho.
As part of my little app, I am trying to map data between my ontology and an Oracle DB with Ontop, but my first mapping is not accepted by the reasoner and it's not clear why.
As a first attempt, I use the following target:
:KIS/P_PVPAT_PATIENT/{PPVPAT_PATNR} a :Patient .
and the following source:
select * from P_PVPAT_PATIENT
Here KIS is the schema, P_PVPAT_PATIENT the table, and PPVPAT_PATNR the key. The reasoner fails with:
Caused by: it.unibz.inf.ontop.exception.InvalidMappingSourceQueriesException:
Error: Relation IDs mismatch: P_PVPAT_PATIENT v "KIS"."P_PVPAT_PATIENT" P_PVPAT_PATIENT
Problem location: source query of triplesMap
[id: MAP_PATIENT
target atoms: triple(s,p,o) with
s/RDF(http://www.semanticweb.org/grossmann/ontologies/kis-ontology#KIS/P_PVPAT_PATIENT/{}(TmpToVARCHAR2(PPVPAT_PATNR)),IRI), p/<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>, o/<http://www.semanticweb.org/grossmann/ontologies/kis-ontology#Patient>
source query: select * from P_PVPAT_PATIENT]
As the error says, my source query was wrong because I forgot to qualify the table with the schema in my SQL.
The correct SQL is:
select * from kis.P_PVPAT_PATIENT
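For reference, the complete working mapping is then:
target: :KIS/P_PVPAT_PATIENT/{PPVPAT_PATNR} a :Patient .
source: select * from kis.P_PVPAT_PATIENT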
I am using Spring Data backed by Hibernate for the CRUD layer and ORM in my project. I was using H2 at first, but when switching to SQL Server 2014 I faced the following issue.
I use the following repository query method:
#Query("Select example from Example example where
example.exampleProperty like CONCAT('%',:param,'%')")
List<Example> findByProductLibe(#Param("param") String param);
This fetches Example objects (from the example table) by a property. It works well in H2, but after moving to SQL Server (by switching the connection configuration AND the dialect to SQL Server), I get a BadSqlGrammarException, because the query generated by Hibernate is as follows:
Hibernate:
select
ex.param1 as param1,
ex.param2 as param2
from
example ex
where
example.exampleProperty like ('%'||?||'%')
The problem is with the '||' concatenation operator; SQL Server reports: Incorrect syntax near '|'.
Here is my database configuration:
database.driver = com.microsoft.sqlserver.jdbc.SQLServerDriver
database.password =
database.username =
hibernate.dialect = org.hibernate.dialect.SQLServerDialect
hibernate.ejb.naming_strategy = org.hibernate.cfg.ImprovedNamingStrategy
hibernate.hbm2ddl.auto = create
hibernate.generate_statistics = true
hibernate.format_sql = true
hibernate.show_sql = true
Thanks for any help or indication.
Replacing the query in the repository with the code below should work. Spring Data JPA recognizes the %...% around the bound parameter and appends the wildcards to the value itself, so no database-specific concatenation operator appears in the generated SQL:
#Query("Select example from Example example where
example.exampleProperty like %:param%")
List<Example> findByProductLibe(#Param("param") String param);
Thank you for your answers. The problem is finally resolved: I changed hibernate.dialect to org.hibernate.dialect.SQLServer2012Dialect, and it generated the following query:
Hibernate:
select
ex.param1 as param1,
ex.param2 as param2
from
example ex
where example.exampleProperty like ('%'+?+'%')
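That is, the only change from the configuration posted above was this line:
hibernate.dialect = org.hibernate.dialect.SQLServer2012Dialect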
I must not have cleaned and installed the project properly before.
Thank you.
I executed the following query in RazorSQL:
SELECT * FROM number_log where phonenumber = '6032969081' and is_active='1' ALLOW FILTERING
It gives me an error like this:
ERROR: No secondary indexes on the restricted columns support the
provided operators: 'SELECT * FROM number_log
where phonenumber = '6032969081' and is_active='1' ALLOW FILTERING'
Can anyone please help me out?
Try adding a secondary index on the restricted column:
CREATE INDEX ON table_name (field_name);
For example, assuming is_active is the column lacking an index:
CREATE INDEX ON number_log (is_active);
Check this out; it can help you with your issue:
http://tonylixu.blogspot.in/2015/04/cassandra-no-secondary-indexes-on.html
I'm trying to run a custom query on my DB with Doctrine2 using the following:
$qb = $em->createQueryBuilder();
$qb->select(array('c', 'count(uc) as numMembers'))
->from('AcmeGroupBundle:Group', 'c')
->leftJoin('c.members', 'uc', 'WITH', 'uc.community = c.id')
->groupBy('c')
->orderBy('numMembers', 'DESC')
->setFirstResult( $offset )
->setMaxResults( $limit );
$entities = $qb->getQuery()->getResult();
This query runs flawlessly on my local MySQL DB, but when I try to run it against my production DB (MSSQL), I get the following error:
SQLSTATE[42000]: [Microsoft][SQL Server Native Client 11.0][SQL Server]Column 'Group.discr' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I do have a discriminator column because I have classes inheriting from Group.
Any suggestions on how should I change the query to make it compatible with MSSQL?
Thanks!