I am using QueryDSL to dynamically query documents.
I have a document like:
Transaction {
TxnId
List ExceptionList [ {excpId, fieldName="ABC"} , {excpId, fieldName="XYZ"}]
.....
}
Now I would like to search/sort the transactions based on Transaction.ExceptionList.fieldName.
How can I do this using QueryDSL with Spring Data MongoDB repositories?
I have set up the Snowflake Kafka connector. I set up a sample table (kafka_connector_test) in Snowflake with two fields, both of type VARCHAR.
The fields are CUSTOMER_ID and PURCHASE_ID.
Here is the configuration I created for the connector:
curl -X POST \
  -H "Content-Type: application/json" \
  --data '{
    "name":"kafka_connector_test",
    "config":{
      "connector.class":"com.snowflake.kafka.connector.SnowflakeSinkConnector",
      "tasks.max":"2",
      "topics":"kafka-connector-test",
      "snowflake.topic2table.map": "kafka-connector-test:kafka_connector_test",
      "buffer.count.records":"10000",
      "buffer.flush.time":"60",
      "buffer.size.bytes":"5000000",
      "snowflake.url.name":"XXXXXXXX.snowflakecomputing.com:443",
      "snowflake.user.name":"XXXXXXXX",
      "snowflake.private.key":"XXXXXXXX",
      "snowflake.database.name":"XXXXXXXX",
      "snowflake.schema.name":"XXXXXXXX",
      "key.converter":"org.apache.kafka.connect.storage.StringConverter",
      "value.converter":"com.snowflake.kafka.connector.records.SnowflakeJsonConverter"
    }
  }'\
I then send data to the topic that I configured in the connector configuration:
{"CUSTOMER_ID" : "test_id", "PURCHASE_ID" : "purchase_id_test"}
Then, when I check the Kafka Connect server, I get the following error:
[SF KAFKA CONNECTOR] Detail: Table doesn't have a compatible schema
Is there something I need to set up in either Kafka Connect or Snowflake that specifies which parts of the JSON go into which columns of the table? I'm not sure how to tell it how to parse the JSON.
I also set up a different topic and didn't create a table in Snowflake. For that topic the connector was able to populate a table, but it creates the table with two columns, RECORD_METADATA and RECORD_CONTENT. I don't want to write a scheduled job to parse this; I want to insert directly into a queryable table.
The Snowflake Kafka connector writes data as JSON by design. The default columns RECORD_METADATA and RECORD_CONTENT are of type VARIANT. If you want to query them, you can create a view on top of the table to achieve your goal, and you don't need a scheduled job.
So the table created by the connector would be something like:
RECORD_METADATA, RECORD_CONTENT
{metadata fields in json}, {"CUSTOMER_ID" : "test_id", "PURCHASE_ID" : "purchase_id_test"}
You can create a view to display your data:
create view v1 as
select RECORD_CONTENT:CUSTOMER_ID::text CUSTOMER_ID,
       RECORD_CONTENT:PURCHASE_ID::text PURCHASE_ID
from kafka_connector_test;  -- the table the connector loads into
Your query will be
select CUSTOMER_ID , PURCHASE_ID from v1
PS: if you want to create your own tables, you need to use VARIANT as the data type instead of VARCHAR.
Also, it looks like this is not supported at this time, in reference to this GitHub issue.
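To illustrate the PS: a manually created target table would need VARIANT columns. A minimal sketch, mirroring the RECORD_METADATA/RECORD_CONTENT layout the connector creates on its own (table name taken from your config):
-- Sketch of a pre-created table the connector could write to;
-- both columns are VARIANT, not VARCHAR.
create or replace table kafka_connector_test (
    RECORD_METADATA variant,
    RECORD_CONTENT  variant
);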
I am trying to delete documents from a Solr index. When I run the XML query it deletes, but if I try the JSON query it gives a success message yet does not delete, and weirdly it increases the count by 1 by adding a new row:
XML:
<delete><query>id:5a048bb5-f661-4517-a55e-e15e663c2cd7</query></delete>
JSON:
{
"delete": {
"query": "id:5a048bb5-f661-4517-a55e-e15e663c2cd7"
}
}
I've got two JSON data rows from a column in a PostgreSQL database, and they look like this:
{
"details":[{"to":"0:00:00","from":"00:00:12"}]
}
{
"details":[
{"to":"13:01:11","from":"13:00:12"},
{"to":"00:00:12","from":"13:02:11"}
]
}
I want to iterate over details and get only the "from" key values using a PostgreSQL query.
I want the result to look like:
from
00:00:12
13:00:12
13:02:11
Use jsonb_array_elements
select j->>'from' as "from" from t
cross join jsonb_array_elements(s->'details') as j;
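A minimal, self-contained sketch of the setup the query assumes (a table t with a single jsonb column s holding the two rows above):
create table t (s jsonb);
insert into t (s) values
  ('{"details":[{"to":"0:00:00","from":"00:00:12"}]}'),
  ('{"details":[{"to":"13:01:11","from":"13:00:12"},{"to":"00:00:12","from":"13:02:11"}]}');
-- Expand each element of the details array to its own row,
-- then extract the "from" value as text.
select j->>'from' as "from"
from t
cross join jsonb_array_elements(s->'details') as j;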
I'm currently using Postgres (version 9.4.4) to save entire JSON documents of projects in one table like this (simplified):
CREATE TABLE projects (
    id numeric(19,0) NOT NULL,
    project jsonb NOT NULL
);
The project json is something like this (OVERLY-simplified):
{
"projectData": {
"guid":"project_guid",
"name":"project_name"
},
"types": [
{
"class":"window",
"provider":"glassland"
"elements":[
{
"name":"example_name",
"location":"2nd floor",
"guid":"1a",
},
{
"name":"example_name",
"location":"3rd floor",
"guid":"2a",
}
]
},
{
"class":"door",
"provider":"woodland"
"elements":[
{
"name":"example_name",
"location":"1st floor",
"guid":"3a",
},
{
"name":"example_name",
"location":"2nd floor",
"guid":"4a",
}
]
}
]
}
I've been reading documentation on the operators ->, ->>, #>, @> and so on. I did some tests and ran successful selects. But I can't manage to index properly, especially the nested arrays (types and elements).
These are some example selects I would like to learn how to optimize (there are plenty like them):
select distinct types->'class' as class, types->'provider' as type
from projects, jsonb_array_elements(project#>'{types}') types;
select types->>'class' as class,
types->>'provider' as provider,
elems->>'name' as name,
elems->>'location' as location,
elems->>'guid' as guid
from projects, jsonb_array_elements(project#>'{types}') types, jsonb_array_elements(types#>'{elements}') elems
where types->>'class' like 'some_text%' and elems->'guid' <> '""';
Also I have this index:
CREATE INDEX idx_gin ON projects USING GIN (project jsonb_ops);
Both of those selects work, but they don't use the @> operator or any other operator that can take advantage of the GIN index. I can't create a B-tree index ( create index idx_btree on projects using btree ((project->>'types')); ) because the size of the value exceeds the limit (for the real json). Also I can't (or I don't know how to) create an index for, let's say, the guids of elements ( create index idx_btree2 on projects using btree ((project->>'types'->>'elements'->>'guid')); ); this produces a syntax error.
I have been trying to translate the queries to something using @>, but things like these:
select count(*)
from projects, jsonb_array_elements(project#>'{types}') types
where types->>'class' = 'window';
select count(*)
from projects
where project @> '{"types":[{"class":"window"}]}';
produce a different output.
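To illustrate the difference on a minimal (hypothetical) one-row sample: the lateral join counts one row per matching element of types, while the containment query counts the matching project once.
-- Hypothetical sample: a single project whose "types" array has two window-class entries.
insert into projects (id, project)
values (1, '{"types":[{"class":"window"},{"class":"window"}]}');
-- Lateral-join count: returns 2 on this sample (one row per matching element).
select count(*)
from projects, jsonb_array_elements(project#>'{types}') types
where types->>'class' = 'window';
-- Containment count: returns 1 (the project matches as a whole).
select count(*)
from projects
where project @> '{"types":[{"class":"window"}]}';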
Is there a way to properly index the nested arrays of that JSON, or to select in a way that takes advantage of the GIN index?
I am using DSE version 5.0.1 and using Solr for search activities. I created a core successfully for a particular table.
After that, when I try to execute a Solr search query, I get the following error:
cqlsh:tradebees_dev> SELECT * FROM yf_product_books where solr_query = ':';
ServerError: <Error from server: code=0000 [Server error]
message="java.io.IOException:
No shards available for ranges: [(2205014674981121837,2205014674981121837]]">
Please suggest some solutions.
Your query should be something like this:
SELECT * FROM yf_product_books where solr_query = '*:*';
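If you want to search on a specific field instead of matching everything, the solr_query column accepts a Solr field query as well (field name and value here are hypothetical):
SELECT * FROM yf_product_books WHERE solr_query = 'product_name:book1';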