Regex to check if JSON contains an array

I need to be able to quickly discern whether or not a JSON data structure contains an array. For example, the regex should return true for
{
  "array": [
    1,
    2,
    3
  ],
  "boolean": true,
  "null": null,
  "number": 123,
  "object": {
    "a": "b",
    "c": "d",
    "e": "f"
  },
  "string": "Hello World"
}
and false for
{
  "boolean": true,
  "null": null,
  "number": 123,
  "object": {
    "a": "b",
    "c": "d",
    "e": "f"
  },
  "string": "[Hello World]"
}
Any ideas?
I would much prefer to do this check with a regex rather than by traversing the JSON, unless anyone can show me a better way.

You could format the JSON into some "canonical" form using a tool like jshon, then grep the result for array brackets.
I do not recommend this approach.
$ cat foo.json
{"root": { "foo": 1, "bar": [2,3,4], "baz":"BAZ", "qux":[5,6, [7,8, [9, 10]]]}}
$ jshon < foo.json
{
  "root": {
    "foo": 1,
    "bar": [
      2,
      3,
      4
    ],
    "baz": "BAZ",
    "qux": [
      5,
      6,
      [
        7,
        8,
        [
          9,
          10
        ]
      ]
    ]
  }
}
$ jshon < foo.json | grep '^\s*\]'
    ],
        ]
      ]
    ]
$ echo $?
0

Use this regex:
/:?\s+\[/g
And then you can simply do:
var json = "json here";
var containsArray = json.match(/:?\s+\[/g) !== null; // returns true or false
Demo: http://jsfiddle.net/pMMgb/1
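Worth noting: this pattern keys off whitespace before the opening bracket, so it misses minified JSON entirely, which is one reason the regex route is fragile. A quick illustration (the answer is JavaScript; this mirrors the same pattern in Python's re module):

```python
import re

# The answer's pattern mirrored in Python: it only matches when whitespace
# precedes the opening bracket.
pattern = re.compile(r':?\s+\[')

pretty = '{\n  "array": [1, 2, 3]\n}'   # ": [" -> whitespace before "[", match
compact = '{"array":[1,2,3]}'           # ":[" -> no whitespace, no match

print(bool(pattern.search(pretty)))   # True
print(bool(pattern.search(compact)))  # False
```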

Try
var a = '{}' // the JSON string to check (an object here would be coerced to "[object Object]")
var regex = /\[(.*)\]/
regex.test(a)

After reviewing the answers, I've reconsidered, and I will go with parsing and traversing to find the answer to my question.
Regexing turned out to be unreliable, and the other options just add more complexity.
Thanks to everyone for the answers!
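For reference, the traversal the asker settled on is only a few lines once the JSON is parsed. A minimal sketch in Python (the thread's answers use JavaScript, but the logic is identical):

```python
import json

def contains_array(value):
    """Return True if the parsed JSON value contains a list anywhere."""
    if isinstance(value, list):
        return True
    if isinstance(value, dict):
        return any(contains_array(v) for v in value.values())
    return False  # scalars; "[Hello World]" is just a string here

doc = json.loads('{"object": {"a": [1, 2]}, "string": "[Hello World]"}')
print(contains_array(doc))  # True, because of the nested [1, 2]
```

Because the check runs on parsed values rather than text, brackets inside string values can never produce a false positive.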

Related

How do I provide an incrementing counter in place of an existing JSON value using jq

I have a JSON file similar to this:
{
  "version": "2.0",
  "stage" : {
    "objects" : [
      {
        "foo" : 1100,
        "bar" : false,
        "id" : "56a983f1-8111-4abc-a1eb-263d41cfb098"
      },
      {
        "foo" : 1100,
        "bar" : false,
        "id" : "6369df4b-90c4-4695-8a9c-6bb2b8da5976"
      }],
    "bish" : "#FFFFFF"
  },
  "more": "abcd"
}
I would like the output to be exactly the same, with the exception of an incrementing integer in place of the "id" : "guid" - something like:
{
  "version": "2.0",
  "stage" : {
    "objects" : [
      {
        "foo" : 1100,
        "bar" : false,
        "id" : 1
      },
      {
        "foo" : 1100,
        "bar" : false,
        "id" : 2
      }],
    "bish" : "#FFFFFF"
  },
  "more": "abcd"
}
I'm new to jq. I can set the ids to a fixed integer with .stage.objects[].id |= 1.
{
  "version": "2.0",
  "stage": {
    "objects": [
      {
        "foo": 1100,
        "bar": false,
        "id": 1
      },
      {
        "foo": 1100,
        "bar": false,
        "id": 1
      }
    ],
    "bish": "#FFFFFF"
  },
  "more": "abcd"
}
I can't figure out the syntax to make the assigned number increment.
I tried various combinations of map, reduce, to_entries, foreach and other strategies mentioned in answers to similar questions, but the data in those examples was always much simpler than mine.
You can exploit the fact that to_entries on arrays uses the index as "key", then modify your value:
.stage.objects |= (to_entries | map(.value.id = .key + 1 | .value))
or
.stage.objects |= (to_entries | map(.value += {id: (.key + 1)} | .value))
Output:
{
  "version": "2.0",
  "stage": {
    "objects": [
      {
        "foo": 1100,
        "bar": false,
        "id": 1
      },
      {
        "foo": 1100,
        "bar": false,
        "id": 2
      }
    ],
    "bish": "#FFFFFF"
  },
  "more": "abcd"
}
Here's a variant using reduce to iterate over the keys:
.stage.objects |= reduce keys[] as $i (.; .[$i].id = $i + 1)
{
  "version": "2.0",
  "stage": {
    "objects": [
      {
        "foo": 1100,
        "bar": false,
        "id": 1
      },
      {
        "foo": 1100,
        "bar": false,
        "id": 2
      }
    ],
    "bish": "#FFFFFF"
  },
  "more": "abcd"
}
Demo
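Both jq filters implement the same idea: assign each object its array index plus one. A sketch of that idea in Python, using the structure from the question:

```python
# Both filters above boil down to: give each object its array index + 1 as "id".
doc = {
    "stage": {
        "objects": [
            {"foo": 1100, "bar": False,
             "id": "56a983f1-8111-4abc-a1eb-263d41cfb098"},
            {"foo": 1100, "bar": False,
             "id": "6369df4b-90c4-4695-8a9c-6bb2b8da5976"},
        ]
    }
}

for i, obj in enumerate(doc["stage"]["objects"]):
    obj["id"] = i + 1  # same role as .key + 1 / $i + 1 in the jq versions

print([o["id"] for o in doc["stage"]["objects"]])  # [1, 2]
```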
Update:
Is there a way to make the search and replace go deep? If the items in the objects array had children arrays with ids, could they be replaced as well?
Of course. You could enhance the LHS of the update to also cover all .children arrays recursively using recurse(.[].children | arrays):
(.stage.objects | recurse(.[].children | arrays)) |=
reduce keys[] as $i (.; .[$i].id = $i + 1)
Demo
Note that in this case each .children array is treated independently, thus numbering starts from 1 in each of them. If you want a continuous numbering instead, it has to be done outside and brought down into the iteration. Here's a solution gathering the target paths using path, numbering them using to_entries, and setting them iteratively using setpath:
reduce (
  [path(.stage.objects[] | recurse(.children | arrays[]).id)] | to_entries[]
) as $i (.; setpath($i.value; $i.key + 1))
Demo
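To make the continuous-numbering variant concrete, here is the same idea sketched in Python: walk each object and its nested children arrays, handing one global counter to every id (the data here is hypothetical):

```python
from itertools import count

def renumber(objects, counter):
    """Assign one global incrementing id to every object, depth-first."""
    for obj in objects:
        if "id" in obj:
            obj["id"] = next(counter)
        renumber(obj.get("children", []), counter)  # recurse into nested arrays

objs = [
    {"id": "a", "children": [{"id": "b"}]},
    {"id": "c"},
]
renumber(objs, count(1))
print(objs)  # ids become 1, 2, 3 in document order
```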

Rails how to extract data from nested JSON data

I am trying to parse out the values I need from this nested JSON data. From quotas I need responders, and from qualified I need service_id and codes.
I first tried to get just the quotas but kept getting this error: in `[]': no implicit conversion of String into Integer
hash = JSON::parse(response.body)
hash.each do |data|
  p data["quotas"]
end
JSON data:
{
  "id": 14706,
  "relationships": [
    {
      "id": 538
    }
  ],
  "quotas": [
    {
      "id": 48894,
      "name": "Test",
      "responders": 6,
      "qualified": [
        {
          "service_id": 12,
          "codes": [
            1,
            2,
            3,
            6
          ]
        },
        {
          "service_id": 23,
          "pre_codes": [
            1,
            2
          ]
        }
      ]
    }
  ]
}
Your original loop fails because hash.each do |data| iterates the top-level hash yielding [key, value] pairs, so data["quotas"] ends up indexing an Array with a String, which raises no implicit conversion of String into Integer. Iterate hash['quotas'] directly instead. I needed to convert your example into valid JSON; then I could loop the quotas and output the values.
hash = JSON::parse(data.to_json)
hash['quotas'].each do |data|
  p data["responders"]
  data["qualified"].each do |responder|
    p responder['service_id']
    p responder['codes']
  end
end
Hash in data variable (needed for the sample code to work):
require "json"
data = {
  "id": 14706,
  "relationships": [
    {
      "id": 538
    }
  ],
  "quotas": [
    {
      "id": 48894,
      "name": "Test",
      "responders": 6,
      "qualified": [
        {
          "service_id": 12,
          "codes": [
            1,
            2,
            3,
            6
          ]
        },
        {
          "service_id": 23,
          "pre_codes": [
            1,
            2
          ]
        }
      ]
    }
  ]
}
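Once the JSON is parsed, the same extraction is plain nested indexing in any language; for comparison, a Python sketch over a trimmed version of the structure above:

```python
import json

# Trimmed version of the document above; .get() guards against entries
# that have "pre_codes" instead of "codes".
doc = json.loads("""
{
  "quotas": [{
    "responders": 6,
    "qualified": [
      {"service_id": 12, "codes": [1, 2, 3, 6]},
      {"service_id": 23, "pre_codes": [1, 2]}
    ]
  }]
}
""")

for quota in doc["quotas"]:
    print(quota["responders"])                      # 6
    for entry in quota["qualified"]:
        print(entry["service_id"], entry.get("codes"))
```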

How to perform a "NOT IN" from an array in another array in aggregate (MongoDB) [duplicate]

I have an array A in memory created at runtime and another array B saved in a mongo database. How can I efficiently get all the elements from A that are not in B?
You can assume that the array stored in mongodb is several orders of magnitude bigger than the array created at runtime, for that reason I think that obtaining the full array from mongo and computing the result would not be efficient, but I have not found any query operation in mongo that allows me to compute the result I want.
Note that the $nin operator does the opposite of what I want, i.e., it retrieves the elements from B that are not in A.
Example:
Array A, created in my application at runtime, is [2, 3, 4].
Array B, stored in mongodb, is [1, 3, 5, 6, 7, 10].
The result I expect is [2, 4].
The only things that "modify" the document in response are .aggregate() and .mapReduce(), where the former is the better option.
In that case you are asking for $setDifference which compares the "sets" and returns the "difference" between the two.
So representing a document with your array:
db.collection.insert({ "b": [1, 3, 5, 6, 7, 10] })
Run the aggregation:
db.collection.aggregate([{ "$project": { "c": { "$setDifference": [ [2,3,4], "$b" ] } } }])
Which returns:
{ "_id" : ObjectId("596005eace45be96e2cb221b"), "c" : [ 2, 4 ] }
If you do not want "sets" and instead want to supply an array like [2,3,4,4] then you can compare with $filter and $in instead, if you have MongoDB 3.4 at least:
db.collection.aggregate([
  { "$project": {
    "c": {
      "$filter": {
        "input": [2,3,4,4],
        "as": "a",
        "cond": {
          "$not": { "$in": [ "$$a", "$b" ] }
        }
      }
    }
  }}
])
Or with $filter and $anyElementTrue in earlier versions:
db.collection.aggregate([
  { "$project": {
    "c": {
      "$filter": {
        "input": [2,3,4,4],
        "as": "a",
        "cond": {
          "$not": {
            "$anyElementTrue": {
              "$map": {
                "input": "$b",
                "as": "b",
                "in": {
                  "$eq": [ "$$a", "$$b" ]
                }
              }
            }
          }
        }
      }
    }
  }}
])
Where both would return:
{ "_id" : ObjectId("596005eace45be96e2cb221b"), "c" : [ 2, 4, 4 ] }
Which is of course "not a set" since the 4 was provided as input "twice" and is therefore returned "twice" as well.
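The semantics of the two $filter pipelines, i.e. keep the input elements absent from b while preserving order and duplicates, can be stated in a few lines of Python. This runs client-side, so it is only an illustration of the behavior, not a substitute for the server-side aggregation:

```python
# "Keep input elements not present in b", preserving order and duplicates.
def not_in(input_list, b):
    excluded = set(b)  # set membership keeps this O(len(input) + len(b))
    return [x for x in input_list if x not in excluded]

print(not_in([2, 3, 4], [1, 3, 5, 6, 7, 10]))     # [2, 4]
print(not_in([2, 3, 4, 4], [1, 3, 5, 6, 7, 10]))  # [2, 4, 4]
```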

How do I sort a possibly-absent array with jq?

Given the following JSON:
{
  "alice": { "items": ["foo", "bar"] },
  "bob": { "items": ["bar", "foo"] },
  "charlie": { "items": ["foo", "bar"] }
}
I can sort the items array as follows:
$ jq < users.json 'map(.items |= sort)'
[
  {
    "items": [
      "bar",
      "foo"
    ]
  },
  {
    "items": [
      "bar",
      "foo"
    ]
  },
  {
    "items": [
      "bar",
      "foo"
    ]
  }
]
However, this crashes if any user doesn't have items:
{
  "alice": { "items": ["foo", "bar"] },
  "bob": { "items": ["bar", "foo"] },
  "charlie": {}
}
Trying to sort it gives an error.
$ jq < users.json 'map(.items |= sort)'
jq: error (at <stdin>:5): null (null) cannot be sorted, as it is not an array
How can I obtain the following?
$ jq < users.json SOMETHING
[
  {
    "items": [
      "bar",
      "foo"
    ]
  },
  {
    "items": [
      "bar",
      "foo"
    ]
  },
  {
    "items": []
  }
]
I've tried using // [] but I'm not sure how to do this in relation to sorting.
You can preserve the original structure if you map the values of the object using map_values. Then from there, if you want to preserve the absence of the items array, just check for it beforehand.
map_values(if has("items") then .items |= sort else . end)
Otherwise if you don't care about adding the empty items:
map_values(.items |= (. // [] | sort))
Aha, this seems to work:
$ jq < users.json 'map(.items |= (. // [])) | map(.items |= sort)'
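The default-to-empty idea translates directly to other languages as well; a Python sketch of the same // [] fallback:

```python
# Users with and without an "items" key; u.get("items", []) plays the role
# of jq's (. // []).
users = {
    "alice": {"items": ["foo", "bar"]},
    "charlie": {},
}
result = [{"items": sorted(u.get("items", []))} for u in users.values()]
print(result)  # [{'items': ['bar', 'foo']}, {'items': []}]
```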

variant of jq from_entries that collate values for each key occurrence

Can I use jq to run a filter that behaves similarly to from_entries, with the one difference being, if multiple entries for the same key are encountered, it will collate the values into an array, rather than just use the last value?
If so, what filter would achieve this? For example, if my input is:
[
{
"key": "a",
"value": 1
},
{
"key": "b",
"value": 2
},
{
"key": "a",
"value": 3
},
{
"key": "b",
"value": 4
}
]
then the desired output would be:
{ "a": [1,3], "b": [2,4] }
Note that using from_entries alone as the filter keeps only the last value seen for each key (that is, { "a": 3, "b": 4 }).
With your example and the following lines in merge.jq:
def merge_entries:
  reduce .[] as $pair ({}; .[$pair["key"]] += [$pair["value"]]);

merge_entries
the invocation: jq -c -f merge.jq
yields:
{"a":[1,3],"b":[2,4]}
You could also use the invocation:
jq 'reduce .[] as $p ({}; .[$p.key] += [$p.value])'
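The same collation is a common idiom outside jq too; for comparison, a Python sketch with collections.defaultdict:

```python
from collections import defaultdict

entries = [
    {"key": "a", "value": 1},
    {"key": "b", "value": 2},
    {"key": "a", "value": 3},
    {"key": "b", "value": 4},
]

# Append each value to a list keyed by its "key", mirroring the jq reduce.
out = defaultdict(list)
for e in entries:
    out[e["key"]].append(e["value"])

print(dict(out))  # {'a': [1, 3], 'b': [2, 4]}
```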
