Solr delete-by-id query works for XML but not for JSON

I am trying to delete a document from a Solr index by id. When I run the XML query it deletes, but if I try the JSON query it returns a success message yet does not delete, and weirdly it increases the count by 1 by adding a new row:
XML:
<delete><query>id:5a048bb5-f661-4517-a55e-e15e663c2cd7</query></delete>
JSON:
{
  "delete": {
    "query": "id:5a048bb5-f661-4517-a55e-e15e663c2cd7"
  }
}
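For reference, here is a minimal sketch (the core name "mycore" and host are assumptions) of how the JSON delete is usually posted. One likely cause of the symptom above is that the JSON body reaches a document-indexing endpoint such as /update/json/docs, or is sent without Content-Type: application/json, in which case Solr indexes the payload as a new document instead of executing the delete:

# Sketch only: core name "mycore" and host are assumptions.
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/mycore/update"

payload = {"delete": {"query": "id:5a048bb5-f661-4517-a55e-e15e663c2cd7"}}

resp = requests.post(
    SOLR_UPDATE_URL,
    json=payload,               # sends Content-Type: application/json
    params={"commit": "true"},  # commit so the delete becomes visible
    timeout=10,
)
resp.raise_for_status()
print(resp.json())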

Related

How to convert JSON to table format in Snowflake

I have an external table INV_EXT_TBL (one VARIANT column named "VALUE") in Snowflake, and it has 6000 rows (each row is a JSON file). The JSON records contain double quotes since they are in dynamo_json format.
What is the best approach to parse all the JSON files and convert them into table format to run SQL queries? I have given a sample of just the top 3 JSON files.
"{
""Item"": {
""sortKey"": {
""S"": ""DR-1630507718""
},
""vin"": {
""S"": ""1FMCU9GD2JUA29""
}
}
}"
"{
""Item"": {
""sortKey"": {
""S"": ""affc5dd0875c-1630618108496""
},
},
""vin"": {
""S"": ""SALCH625018""
}
}
}"
"{
""Item"": {
""sortKey"": {
""S"": ""affc5dd0875c-1601078453607""
},
""vin"": {
""S"": ""KL4CB018677""
}
}
}"
I created a local table and inserted data into it from the external table by casting the data types. Is this the correct approach, or should I use the parse_json function against the JSON files to store the data in the local table?
insert into DB.SCHEMA.INV_HIST(VIN,SORTKEY)
(SELECT value:Item.vin.S::string AS VIN, value:Item.sortKey.S::string AS SORTKEY FROM INV_EXT_TBL);
I resolved this by creating a materialized view that uses a cast on the variant column of the external table. This helped get rid of the outer double quotes, and the performance improved multifold. I did not proceed with the table-creation approach.
CREATE OR REPLACE MATERIALIZED VIEW DB.SCHEMA.MVW_INV_HIST
AS
SELECT value:Item.vin.S::string AS VIN, value:Item.sortKey.S::string AS SORTKEY
FROM DB.SCHEMA.INV_EXT_TBL;
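For completeness, a small usage sketch, assuming the snowflake-connector-python package and placeholder credentials, that reads the flattened columns back out of the materialized view; the values come back as plain strings without the outer double quotes.

# Sketch only: account, user and password are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    database="DB",
    schema="SCHEMA",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT VIN, SORTKEY FROM MVW_INV_HIST LIMIT 10")
    for vin, sortkey in cur.fetchall():
        print(vin, sortkey)  # plain strings, no surrounding double quotes
finally:
    conn.close()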

Snowflake select query to return data in JSON format

I am making a select call on a table and it always returns one row. I would like to get the data in JSON format:
{
"column_name1": "value1",
"column_name2": "value2"
}
Does a Snowflake query allow anything like this?
object_construct is the way to go for this.
For example,
select object_construct(*) from t1;
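If the result is consumed from Python, a brief sketch (connection parameters are placeholders): the connector returns the OBJECT produced by object_construct(*) as a JSON string, so json.loads turns it into a dict keyed by column name.

# Sketch only: connection parameters are placeholders.
import json
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="my_user",
                                    password="my_password")
row = conn.cursor().execute("select object_construct(*) from t1").fetchone()
doc = json.loads(row[0])  # OBJECT comes back as a JSON string
print(doc)                # e.g. {"COLUMN_NAME1": "value1", "COLUMN_NAME2": "value2"}
conn.close()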

Unable to get docs and facets in one Solr CQL query in DSE Search

Requirement: implement faceted search using DSE Search.
Problem: unable to get docs (data) and facets in one CQL Solr query.
Tools & technology used: DataStax Sandbox 5.1 (CentOS + VirtualBox), trying DSE Search.
Created the following table and used dsetool to enable Solr (DSE Search):
CREATE TABLE test.employee_copy1 (
empid int,
deptid int,
name text,
solr_query text, -- column got created by enabling DSE Search
PRIMARY KEY (empid, deptid)
)
Inserted the following data:
INSERT INTO employee (empid,deptid,name) VALUES (100,200,'John');
INSERT INTO employee (empid,deptid,name) VALUES (101,201,'Helen');
INSERT INTO employee (empid,deptid,name) VALUES (102,201,'John');
I tried a facet query from the Solr Admin UI as below:
http://localhost:8983/solr/test.employee/select?q=*:*&wt=json&indent=true&facet=true&facet.field=name
I got results containing both docs (data) and facets, as expected:
{
  "responseHeader": {
    "status": 0,
    "QTime": 1
  },
  "response": {
    "numFound": 3,
    "start": 0,
    "docs": [
      {
        "_uniqueKey": "[\"100\",\"200\"]",
        "empid": 100,
        "deptid": 200,
        "name": "John"
      },
      {
        "_uniqueKey": "[\"101\",\"201\"]",
        "empid": 101,
        "deptid": 201,
        "name": "Helen"
      },
      {
        "_uniqueKey": "[\"102\",\"201\"]",
        "empid": 102,
        "deptid": 201,
        "name": "John"
      }
    ]
  },
  "facet_counts": {
    "facet_queries": {},
    "facet_fields": {
      "name": [
        "john", 2,
        "helen", 1
      ]
    },
    "facet_dates": {},
    "facet_ranges": {},
    "facet_intervals": {}
  }
}
But when I tried the following CQL query in DataStax DevCenter, expecting to see both data and facets, I see only facets.
select JSON * from test.employee where solr_query = '{"q":"*:*", "facet" : {"field":"name"}}';
I got a result, but it has only facets and no data:
{"facet_fields" : {"name" : {"john" : 2,"helen" : 1 } } }
Question: can anyone explain why the CQL query doesn't return data in spite of specifying "q":"*:*"?
Unlike with the HTTP query interface, it is not possible to get both row results and facets from a single CQL solr_query in DSE 5.1 (or any earlier versions).
It's simply a product decision. As Caleb pointed out, the developer experience of parsing the combined result was seen as less than desirable. Furthermore, with a distributed, peer-to-peer datastore (C*), the workaround of issuing two async queries, one for the facet result and one for the "Top Ten" rows, is the preferred query pattern. They do not have to be done serially.
Ultimately, if the Solr behavior is desired, then the Solr HTTP API is available to use in DSE. The CQL API is aimed more at providing simple FTS on the C* data, not necessarily at supporting everything vanilla Solr does through CQL.
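To make the two-async-query pattern concrete, here is a rough sketch using the DataStax Python driver; the contact point is an assumption, and the keyspace and table names are taken from the question and may need adjusting.

# Sketch only: assumes the cassandra-driver package; contact point, keyspace
# and table name are taken from the question.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("test")

# One query for the rows ("Top Ten") and one for the facet counts; they run
# concurrently rather than serially.
rows_future = session.execute_async(
    'SELECT * FROM employee WHERE solr_query = \'{"q":"*:*"}\' LIMIT 10')
facet_future = session.execute_async(
    'SELECT * FROM employee WHERE solr_query = '
    '\'{"q":"*:*", "facet":{"field":"name"}}\'')

rows = list(rows_future.result())
facets = facet_future.result().one()

print(rows)
print(facets)
cluster.shutdown()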

Convert column into nested field in destination table on load job in BigQuery

I am currently running a job to transfer data from one table to another via a query, but I can't seem to find a way to convert a column into a nested field containing the column as a child field. For example, I have a column customer_id: 3 and I would like to convert it to {"customer": {"id": 3}}. Below is a snippet of my job data.
query = 'select * FROM [' + BQ_DATASET_ID + '.' + Table_name + '] WHERE user="' + user + '"'
job_data = {
    'projectId': PROJECT_ID,
    'jobReference': {
        'projectId': PROJECT_ID,
        'jobId': str(uuid.uuid4())  # the REST field is "jobId"
    },
    'configuration': {
        'query': {
            'query': query,
            'priority': 'INTERACTIVE',
            'allowLargeResults': True,
            'destinationTable': {
                'projectId': PROJECT_ID,
                'datasetId': user,
                'tableId': destinationTable,
            },
            'writeDisposition': 'WRITE_APPEND'
        },
    }
}
Unfortunately, if the "customer" RECORD does not exist in the input schema, it is not currently possible to generate that nested RECORD field with child fields through a query. We have features in the works that will allow schema manipulation like this via SQL, but I don't think it's possible to accomplish this today.
I think your best option today would be an export, transformation to the desired format, and re-import of the data into the desired destination table.
A simple solution is to run
select customer_id as customer.id ....
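A hedged sketch of how that might be wired into the job configuration above; the dotted alias comes from the answer (its "...." stands for whatever other columns are needed), and "flattenResults": False is an assumption that, to my knowledge, is needed for the nested "customer" RECORD to survive into the destination table with legacy SQL.

# Sketch only: builds on the job_data dict above; the column list is abbreviated.
query = ('SELECT customer_id AS customer.id '   # dotted alias -> nested field
         'FROM [' + BQ_DATASET_ID + '.' + Table_name + '] '
         'WHERE user="' + user + '"')

job_data['configuration']['query']['query'] = query
job_data['configuration']['query']['flattenResults'] = False  # assumption: keep the RECORD nested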

Spring data mongodb query on subdocument list using querydsl

I am using QueryDSL to dynamically query the documents.
I have a document like:
Transaction {
    TxnId
    List ExceptionList [ {excpId, fieldName="ABC"}, {excpId, fieldName="XYZ"} ]
    .....
}
Now I would like to search/sort the transactions based on Transaction.ExceptionList.fieldName
How can I do it using QueryDSL with Spring Data (Mongo repositories)?
