I am using Sequelize with a PostgreSQL database.
One field in my table is of type JSONB, and I would like to fetch the rows that do not contain a particular value in that field.
For example:
I have a field 'field1' of JSONB type, with values such as:
field1: [8, 10, 23]
If field1 contains the value 8, that row should not be fetched; I need to fetch only the rows whose field1 does not contain 8.
{
[Op.not]:{
field1:
{
[Op.contains]: [8]
}
}
}
I tried the above query, but it is not returning the expected rows.
Try this:
{
field1:
{
[Op.not]:{
[Op.contains]: [8]
}
}
}
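The semantics the corrected operator nesting should produce can be sketched in plain Node, without a database (the rows and ids here are hypothetical, only the field name comes from the question):

```javascript
// Hypothetical rows, each with a JSONB-like array field "field1".
const rows = [
  { id: 1, field1: [8, 10, 23] },
  { id: 2, field1: [10, 23] },
  { id: 3, field1: [8] },
];

// "NOT contains 8": keep only rows whose field1 does not include 8,
// mirroring { field1: { [Op.not]: { [Op.contains]: [8] } } }.
const withoutEight = rows.filter(row => !row.field1.includes(8));

console.log(withoutEight.map(row => row.id)); // prints [ 2 ]
```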
Is it possible to create a query like the one below in Spring Data?
db.persons.find({ 'oi': '5f2417e3c655cb13e85186df', 'ch': { $elemMatch: { 'type': { $in: ['MAN'] }}}}, {'ch.$': 1})
The last part of this query is the problematic one: how do I retrieve only the fields that pass the $elemMatch predicate? Spring Data's @Query annotation has a fields property, but if I specify it as {'children': 1} I retrieve all children instead of only those that match the query.
{ 'children.$': 1 } doesn't work, of course.
You can use the @Query annotation: pass the query in value and the projection in fields, like this:
@Query(value = "{ 'oi': ?0, 'ch': { $elemMatch: { 'type': { $in: ?1 }}}}", fields = "{'ch.$' : 1}")
Person findPersonCustom(String id, List<String> types);
Here ?0 and ?1 are replaced by method arguments 0 and 1 respectively. You can also put those values inside @Query directly if they are static.
The resulting Person object will contain only the matched elements from the ch array.
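What the fields projection does can be sketched in plain Node (the document and the helper below are hypothetical, shaped after the question's query):

```javascript
// A hypothetical person document with a "ch" (children) array.
const person = {
  oi: '5f2417e3c655cb13e85186df',
  ch: [
    { type: 'WOMAN', name: 'a' },
    { type: 'MAN', name: 'b' },
    { type: 'MAN', name: 'c' },
  ],
};

// { 'ch.$': 1 } keeps only the FIRST array element matching the
// $elemMatch predicate, like MongoDB's positional projection does.
function projectFirstMatch(doc, types) {
  const first = doc.ch.find(child => types.includes(child.type));
  return { oi: doc.oi, ch: first ? [first] : [] };
}

const projected = projectFirstMatch(person, ['MAN']);
console.log(projected.ch); // only the first matching child
```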
Below is the search definition of my document. It has a field "expire", which holds a timestamp. I want to use a YQL query to find documents where isActive = "1" and test.expire - now() > 0. Can I achieve this with a query?
search test {
document test {
field Id type string {
indexing: index|summary
}
field isActive type string {
indexing: index|summary
}
field expire type long {
indexing: index | summary
}
field detail type string {
indexing: summary
}
}
}
If yes, what would my query be? How can I apply such a condition in my query?
The YQL for this is:
?query=select * from test where (isActive contains "1" and expire > nowTimestamp);&type=yql
You cannot use now() so you need to insert the timestamp yourself.
You could also construct the query in a Searcher component (bypassing the need to construct a YQL string).
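Since now() has to be substituted client-side, the resulting filter amounts to the following, sketched in plain Node with hypothetical documents shaped like the search definition:

```javascript
// Hypothetical documents matching the "test" search definition.
const docs = [
  { Id: 'a', isActive: '1', expire: Date.now() / 1000 + 3600 }, // active, not expired
  { Id: 'b', isActive: '1', expire: Date.now() / 1000 - 3600 }, // active, expired
  { Id: 'c', isActive: '0', expire: Date.now() / 1000 + 3600 }, // inactive
];

// Insert the current timestamp yourself, as the YQL answer requires:
// isActive contains "1" AND expire > nowTimestamp.
const nowTimestamp = Math.floor(Date.now() / 1000);
const hits = docs.filter(d => d.isActive === '1' && d.expire > nowTimestamp);

console.log(hits.map(d => d.Id)); // prints [ 'a' ]
```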
I have some JSON data which looks like this:
{
"key1":"value1",
"key2":[
1,
2,
3
],
"key3":{
"key31":"value31",
"key32":"value32"
},
"key4":[
{
"key41":"value411",
"key42":"value412",
"key43":"value413"
},
{
"key41":"value421",
"key42":"value422",
"key43":"value423"
}
],
"key5":{
"key51":[
{
"key511":"value511",
"key512":"value512",
"key513":"value513"
},
{
"key511":"value521",
"key512":"value522",
"key513":"value523"
}
]
},
"key6":{
"key61":{
"key611":[
{
"key_611":"value_611",
"key_612":"value_612",
"key_613":"value_613"
},
{
"key_611":"value_621",
"key_612":"value_622",
"key_613":"value_623"
},
{
"key_611":"value_621",
"key_612":"value_622",
"key_613":"value_623"
}
]
}
}
}
It contains a mix of simple, complex, and array-type values.
If I try to get the datatype of key1 with schema("key1").dataType, I get StringType, and likewise the correct types for key2, key3, and key4.
For key5 I also get StructType.
But when I try to get the datatype of key51, which is nested under key5, using schema("key5.key51").dataType, I get the following error:
java.lang.IllegalArgumentException: Field "key5.key51" does not exist.
at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
at org.apache.spark.sql.types.StructType$$anonfun$apply$1.apply(StructType.scala:264)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:59)
at org.apache.spark.sql.types.StructType.apply(StructType.scala:263)
... 48 elided
My main intention is to explode a given field if it is of ArrayType, and not to explode it for any other type.
The explode function recognizes the given key (key5.key51) properly and explodes the array; the problem is only with determining the datatype.
One possible solution for me is to do a select of key5.key51 as a separate column key51 and then explode that column.
But is there any better and more elegant way of doing this while still being able to determine the datatype of the given column?
The simplest solution is to select the field of interest, and then retrieve the schema:
df.select("key5.key51").schema.head.dataType
Using the full schema directly would require traversing it, which can be hard to do right in the presence of embedded dots in field names, nested StructTypes, and complex types (Maps and Arrays).
Here is some (recursive) code to find the names of all ArrayType fields:
import org.apache.spark.sql.types._

// Returns one path (a Seq of field names) per ArrayType field found under f,
// so sibling array fields under the same struct stay separate paths.
def findArrayTypes(parents: Seq[String], f: StructField): Seq[Seq[String]] = {
  f.dataType match {
    case _: ArrayType => Seq(parents)
    case struct: StructType =>
      struct.fields.toSeq.flatMap(child => findArrayTypes(parents :+ child.name, child))
    case _ => Seq.empty
  }
}

val arrayTypeColumns = df.schema.fields.toSeq
  .flatMap(f => findArrayTypes(Seq(f.name), f))
  .map(_.mkString("."))
For your dataframe, this gives:
arrayTypeColumns.foreach(println)
key2
key4
key5.key51
key6.key61.key611
This does not work yet for arrays inside maps or nested arrays.
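The same recursive idea can be sketched against the raw JSON itself in plain Node; this walk reports the dotted path of every array-valued field and has the same stated limitation (it does not descend into the arrays themselves):

```javascript
// A trimmed version of the question's JSON.
const data = {
  key1: 'value1',
  key2: [1, 2, 3],
  key3: { key31: 'value31' },
  key4: [{ key41: 'value411' }],
  key5: { key51: [{ key511: 'value511' }] },
  key6: { key61: { key611: [{ key_611: 'value_611' }] } },
};

// Recursively collect dotted paths of array-valued fields,
// without descending into the arrays themselves.
function findArrayPaths(obj, parents = []) {
  return Object.entries(obj).flatMap(([name, value]) => {
    const path = [...parents, name];
    if (Array.isArray(value)) return [path.join('.')];
    if (value !== null && typeof value === 'object') {
      return findArrayPaths(value, path);
    }
    return [];
  });
}

console.log(findArrayPaths(data));
// prints [ 'key2', 'key4', 'key5.key51', 'key6.key61.key611' ]
```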
How do I query for an object vs an array of objects in a document?
I need to select all the documents that are like Dataset 2.
Dataset 1:
{
'firstname' : 'John',
'lastname': 'Smith',
'assistance': [
{'type': 'Food', 'amount': 20}
]
}
Dataset 2:
{
'firstname' : 'John',
'lastname': 'Smith',
'assistance': {
'type': 'Food',
'amount': 20
}
}
db.foo.find( { "assistance" : { $type : 3 } } ); // 3 is bson type for object
will return both the documents and
db.foo.find( { "assistance" : { $type : 4 } } ); // 4 is bson type for array
will return none of the two documents.
This is because in MongoDB, when querying an array field, the array's elements are checked rather than the array as a whole.
You can eliminate all documents where assistance is an array with:
db.foo.find({"assistance.0" : {$exists : false}})
You can use $type, but it won't let you exclude Dataset 1: it doesn't check whether the field itself is an array, it checks whether the array's elements are of the given type.
If you are looking for Dataset 2, though, you can use $type with the object type:
db.whatever.find( { "assistance" : { $type : 3 } } );
or
db.whatever.find( { "assistance" : { $type : "object" } } );
MongoDB stores data in BSON format.
BSON stands for Binary JSON.
BSON supports various datatypes for values in documents.
According to documentation of MongoDB
$type selects the documents where the value of the field is an
instance of the specified BSON type.
According to the above:
Dataset 1: the datatype of the assistance field is an array.
Dataset 2: the datatype of the assistance field is an object.
To retrieve only the documents whose assistance key holds an object, try executing the following query in the MongoDB shell:
db.collection.find({assistance:{$type:3}})
For a more detailed description of BSON types, refer to the documentation:
https://docs.mongodb.com/manual/reference/bson-types/
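The distinction the answers rely on can be sketched in plain Node: a BSON array and a BSON object map to a JS array and a plain object, and the `assistance.0` trick works because arrays have an element at index 0 while plain objects (without a "0" key) do not. The documents are the question's own datasets:

```javascript
const dataset1 = {
  firstname: 'John', lastname: 'Smith',
  assistance: [{ type: 'Food', amount: 20 }],
};
const dataset2 = {
  firstname: 'John', lastname: 'Smith',
  assistance: { type: 'Food', amount: 20 },
};

// $type "array" vs "object", roughly: Array.isArray distinguishes them.
const isArrayField = doc => Array.isArray(doc.assistance);

// The { "assistance.0": { $exists: false } } trick: only the array
// version has an element at index 0.
const hasIndexZero = doc => doc.assistance[0] !== undefined;

console.log(isArrayField(dataset1), isArrayField(dataset2)); // prints true false
console.log(hasIndexZero(dataset1), hasIndexZero(dataset2)); // prints true false
```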
According to the Dapper documentation, you can get a dynamic list back from Dapper using the code below:
var rows = connection.Query("select 1 A, 2 B union all select 3, 4");
((int)rows[0].A)
.IsEqualTo(1);
((int)rows[0].B)
.IsEqualTo(2);
((int)rows[1].A)
.IsEqualTo(3);
((int)rows[1].B)
.IsEqualTo(4);
What, however, is the use of dynamic if you have to know the field names and datatypes of the fields in advance?
If I have :
var result = Db.Query("Select * from Data.Tables");
I want to be able to do the following :
Get a list of the field names and data types returned.
Iterate over it using the field names and get back data in the following ways:
result.Fields
["Id", "Description"]
result[0].values
[1, "This is the description"]
This would allow me to get
result[0].["Id"].Value
which would give the result 1 and be of type e.g. Int32, and
result[0].["Id"].Type --- the datatype of the value returned
result[0].["Description"]
which would give the result "This is the description" and be of type string.
I see there is a result[0].table, which holds a DapperRow object with an array of the field names, and there is also a result.values, an object[2] holding the values, but it cannot be accessed directly. If I add a watch on the drilled-down column name, I can get the Id. The automatically created watch is:
(new System.Collections.Generic.Mscorlib_CollectionDebugView<Dapper.SqlMapper.DapperRow>(result as System.Collections.Generic.List<Dapper.SqlMapper.DapperRow>)).Items[0].table.FieldNames[0] "Id" string
So I should be able to get result[0].Items[0].table.FieldNames[0] and get "Id" back.
You can cast each row to an IDictionary<string, object>, which should provide access to the names and the values. We don't explicitly track the types currently - we simply don't have a need to. If that isn't enough, consider using the dapper method that returns an IDataReader - this will provide access to the raw data, while still allowing convenient call / parameterization syntax.
For example:
var rows = ...
foreach(IDictionary<string, object> row in rows) {
Console.WriteLine("row:");
foreach(var pair in row) {
Console.WriteLine(" {0} = {1}", pair.Key, pair.Value);
}
}
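The shape of that iteration can be sketched in plain Node, with hypothetical rows standing in for the IDictionary&lt;string, object&gt; view of each DapperRow:

```javascript
// Hypothetical rows, as Dapper's dynamic Query might return them.
const rows = [
  { Id: 1, Description: 'This is the description' },
  { Id: 2, Description: 'Another one' },
];

// Field names and runtime value types are recoverable per row, much as
// the IDictionary<string, object> cast exposes keys and values in C#.
const fields = Object.keys(rows[0]);
const types = fields.map(name => typeof rows[0][name]);

console.log(fields); // prints [ 'Id', 'Description' ]
console.log(types);  // prints [ 'number', 'string' ]
```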