JQ - access nested square brackets with fields with no names - arrays

I'm trying to access a field in the list array via jq. The fields don't have names I can use to access and extract them. Please assist?
Trying to extract John and Smith.
$ cat test.txt
{
"content": {
"list": [
[
[
"name",
"John",
123
],
[
"surname",
"Smith",
345
],
1
]
]
}
}
$ jq -r '.content | {name: ."list"}' test.txt
{
"name": [
[
[
"name",
"John",
123
],
[
"surname",
"Smith",
345
],
1
]
]
}

You could do something as naive as:
$ jq -r '.content.list[][][1]?' test.txt
John
Smith
This extracts the second element from each of the third-level nested arrays; the trailing ? ignores the numeric literal, which cannot be indexed.
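To see why the trailing ? matters, here is a runnable sketch (recreating the input as test.txt, mirroring the question): without the ?, indexing the bare number 1 with [1] raises "Cannot index number with number".

```shell
# Recreate the sample input from the question.
cat > test.txt <<'EOF'
{"content":{"list":[[["name","John",123],["surname","Smith",345],1]]}}
EOF

# .content.list[][] streams the three inner items; [1]? picks the second
# element of each array and silently skips the bare number 1.
jq -r '.content.list[][][1]?' test.txt
```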
Alternatively, you could transform the data beforehand to make it easier to work with afterwards:
$ jq '.content.list | map(map({ (.[0]): .[1] }?) | add)' test.txt
[
{
"name": "John",
"surname": "Smith"
}
]
Extracting the name(s) is then as simple as appending | .[].name:
$ jq '.content.list | map(map({ (.[0]): .[1] }?) | add) | .[].name' test.txt
"John"

Related

Creating Array of Objects from Bash Array using jq

I am trying to create an array of JSON objects from a bash array using jq.
Here is where I am stuck:
IDS=("baf3eca8-c4bd-4590-bf1f-9b1515d521ba" "ef2fa922-2038-445c-9d32-8c1f23511fe4")
echo "${IDS[@]}" | jq -R '[{id: ., names: ["bob", "sally"]}]'
Results in:
[
{
"id": "baf3eca8-c4bd-4590-bf1f-9b1515d521ba ef2fa922-2038-445c-9d32-8c1f23511fe4",
"names": [
"bob",
"sally"
]
}
]
My desired result:
[
{
"id": "baf3eca8-c4bd-4590-bf1f-9b1515d521ba",
"names": [
"bob",
"sally"
]
},
{
"id": "ef2fa922-2038-445c-9d32-8c1f23511fe4",
"names": [
"bob",
"sally"
]
}
]
Any help would be much appreciated.
Split your bash array into NUL-delimited items using printf '%s\0', slurp the raw stream with -Rs (--raw-input plus --slurp), then split on the delimiter "\u0000" inside your jq filter. Note the [:-1], which drops the empty string left over after the trailing NUL:
printf '%s\0' "${IDS[@]}" | jq -Rs '
split("\u0000")[:-1] | map({id: ., names: ["bob", "sally"]})
'
Alternatively, feed the IDs to jq one per line:
for id in "${IDS[@]}" ; do
echo "$id"
done | jq -nR '[ {id: inputs, names: ["bob", "sally"]} ]'
or as a one-liner:
printf "%s\n" "${IDS[@]}" | jq -nR '[{id: inputs, names: ["bob", "sally"]}]'
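If your jq is 1.6 or newer, another option worth mentioning is --args, which collects the remaining command-line words into $ARGS.positional, so the bash array can be passed without any delimiter handling at all:

```shell
IDS=("baf3eca8-c4bd-4590-bf1f-9b1515d521ba" "ef2fa922-2038-445c-9d32-8c1f23511fe4")

# --args turns every word after it into a positional string argument,
# available inside the filter as $ARGS.positional.
jq -n '[$ARGS.positional[] | {id: ., names: ["bob", "sally"]}]' --args "${IDS[@]}"
```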

jq - find the name of the array inside JSON object and then get the content of the array

I have the following JSON array
[
{
"city": "Seattle",
"array10": [
"1",
"2"
]
},
{
"city": "Seattle",
"array11": [
"3"
]
},
{
"city": "Chicago",
"array20": [
"1",
"2"
]
},
{
"city": "Denver",
"array30": [
"3"
]
},
{
"city": "Reno",
"array50": [
"1"
]
}
]
My task is the following: for each "city" value (which are known), get the names of the arrays, and for each array, print/display its contents. The names of cities and arrays are unique; the contents of the arrays are not.
The result should look like the following:
Now working on Seattle
Seattle has the following arrays:
array10
array11
Content of the array10
1
2
Content of the array11
3
Now working on Chicago
Chicago has the following arrays:
array20
Content of the array20
1
2
Now working on Denver
Denver has the following arrays:
array30
Content of the array30
3
Now working on Reno
Reno has the following arrays:
array50
Content of the array50
1
Now, for each city name (which are provided/known) I can find the names of the arrays using the following filter (I can put the city names in vars, obviously):
jq -r '.[] | select(.city | test("Seattle")) | del(.city) | keys | @tsv'
Then I assign these names to a bash variable and iterate over them in a new loop to get the content of each array.
While I can get what I want with the above, my question is: is there a more efficient way to do it with jq?
And the second, related question: if my JSON had the structure below instead, would it make my task easier from a speed/efficiency/simplicity standpoint?
[
{
"name": "Seattle",
"content": {
"array10": [
"1",
"2"
],
"array11": [
"3"
]
}
},
{
"name": "Chicago",
"content": {
"array20": [
"1",
"2"
]
}
},
{
"name": "Denver",
"content": {
"array30": [
"3"
]
}
},
{
"name": "Reno",
"content": {
"array50": [
"1"
]
}
}
]
Using the -r command-line option, the following program produces the output shown below:
group_by(.city)[]
| .[0].city as $city
| map(keys_unsorted[] | select(test("^array"))) as $arrays
| "Now working on \($city)",
"\($city) has the following arrays:",
$arrays[],
(.[] | to_entries[] | select(.key | test("^array"))
| "Content of the \(.key)", .value[])
Output
Now working on Chicago
Chicago has the following arrays:
array20
Content of the array20
1
2
Now working on Denver
Denver has the following arrays:
array30
Content of the array30
3
Now working on Reno
Reno has the following arrays:
array50
Content of the array50
1
Now working on Seattle
Seattle has the following arrays:
array10
array11
Content of the array10
1
2
Content of the array11
3
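As for the second question: the restructured layout does simplify things, because each element already pairs its name with its arrays under content, so no group_by is needed and the order of the input is preserved. A sketch against a trimmed-down version of that structure (file name restructured.json assumed for illustration):

```shell
# Trimmed version of the restructured input from the question.
cat > restructured.json <<'EOF'
[
  {"name": "Seattle", "content": {"array10": ["1","2"], "array11": ["3"]}},
  {"name": "Chicago", "content": {"array20": ["1","2"]}}
]
EOF

# Each element already carries its name and its arrays, so one pass suffices.
jq -r '.[]
  | "Now working on \(.name)",
    "\(.name) has the following arrays:",
    (.content | keys_unsorted[]),
    (.content | to_entries[] | "Content of the \(.key)", .value[])' restructured.json
```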

jq - subtracting one array from another using a single command

I have three operations with jq to get the right result. How can I do it within one command?
Here is a fragment from the source JSON file
[
{
"Header": {
"Tenant": "tenant-1",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0b0b-0c0c",
"name": "NumberOfSearchResults"
},
{
"id": "aaaa0001-0a0a",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-2",
"Rcode": 200
},
"Body": {
"values": []
}
},
{
"Header": {
"Tenant": "tenant-3",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "cccca0003-0b0b",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-4",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0f0g-0e0a",
"name": "NumberOfSearchResults"
}
]
}
}
]
I apply two filters and create two intermediate JSON files. First I create the list of all tenants:
jq -r '[.[].Header.Tenant]' source.json >all-tenants.json
And then I create an array of all tenants that do not have a particular name present in their Body.values[] array:
jq -r '[.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]' source.json >filter1.json
Results - all-tenants.json
[
"tenant-1",
"tenant-2",
"tenant-3",
"tenant-4"
]
filter1.json
[
"tenant-2",
"tenant-4"
]
And then I subtract filter1.json from all-tenants.json to get the difference:
jq -r -n --argfile filter filter1.json --argfile alltenants all-tenants.json '$alltenants - $filter|.[]'
Result:
tenant-1
tenant-3
Tenant names (the values for the "Tenant" key) are unique, and each occurs only once in the source.json file.
Just to clarify: I understand that I could use select condition(s) that would give me the same result as subtracting two arrays.
What I want to understand is: how can I assign these two arrays to variables and use them directly in a single command, without the intermediate files?
Thanks
Use your filters to fill in the values of a new object and use the keys to refer to the arrays.
jq -r '{
"all-tenants": [.[].Header.Tenant],
"filter1": [.[]|select (all(.Body.values[]; .name !="LoadTest"))]|[.[].Header.Tenant]
} | .["all-tenants"] - .filter1 | .[]'
Note: .["all-tenants"] is required because of the special character "-" in that key. See the entry under Object Identifier-Index in the manual.
how can I assign these two arrays to variables and use them directly in a single command, without the intermediate files?
Simply store the intermediate arrays as jq "$-variables":
[.[].Header.Tenant] as $x
| ([.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]) as $y
| $x - $y
If you want to itemize the contents of $x - $y, then simply add a final .[] to the pipeline.
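Putting that together as a single runnable command (recreating the sample as source.json):

```shell
# Compact version of the sample input from the question.
cat > source.json <<'EOF'
[
  {"Header":{"Tenant":"tenant-1","Rcode":200},"Body":{"values":[{"id":"0b0b-0c0c","name":"NumberOfSearchResults"},{"id":"aaaa0001-0a0a","name":"LoadTest"}]}},
  {"Header":{"Tenant":"tenant-2","Rcode":200},"Body":{"values":[]}},
  {"Header":{"Tenant":"tenant-3","Rcode":200},"Body":{"values":[{"id":"cccca0003-0b0b","name":"LoadTest"}]}},
  {"Header":{"Tenant":"tenant-4","Rcode":200},"Body":{"values":[{"id":"0f0g-0e0a","name":"NumberOfSearchResults"}]}}
]
EOF

# Both intermediate arrays live in jq variables; no temporary files needed.
jq -r '[.[].Header.Tenant] as $x
  | ([.[] | select(all(.Body.values[]; .name != "LoadTest"))]
     | [.[].Header.Tenant]) as $y
  | ($x - $y)[]' source.json
```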

Generate CSV pairing a field and multiple array elements

Example input:
{
"firstName": "Jam",
"Product": [
{
"productId": "5e09ad38986b7c30f339c5c0"
},
{
"productId": "5e09407b986b7c30f339c18e"
},
{
"productId": "5e094c2a986b7c30f339c1d2"
}
]
}
Expected output:
Jam,5e09ad38986b7c30f339c5c0
Jam,5e09407b986b7c30f339c18e
Jam,5e094c2a986b7c30f339c1d2
My current command produces the output, but comma-separated values end up on the wrong rows:
jq -rc '.firstName,.Product[0].productId'
To generate a report in CSV format, you need to put the column values into an array and pass it to the @csv filter.
$ jq -r '[.firstName] + (.Product[] | [.productId]) | @csv' file
"Jam","5e09ad38986b7c30f339c5c0"
"Jam","5e09407b986b7c30f339c18e"
"Jam","5e094c2a986b7c30f339c1d2"
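If you want the output without the quotes that @csv adds (matching the expected output in the question exactly), plain string interpolation works too; a sketch, with the sample saved as file.json (name assumed):

```shell
cat > file.json <<'EOF'
{"firstName":"Jam","Product":[{"productId":"5e09ad38986b7c30f339c5c0"},{"productId":"5e09407b986b7c30f339c18e"},{"productId":"5e094c2a986b7c30f339c1d2"}]}
EOF

# Bind firstName first, then emit one unquoted line per product id.
jq -r '.firstName as $n | .Product[] | "\($n),\(.productId)"' file.json
```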

Converting tsv with arrays to JSON with jq

I found jq very helpful in converting TSV to a JSON file; however, I want to figure out how to do it with jq when I have an array in my TSV:
name age pets
Tim 15 cats,dogs
Joe 11 rabbits,birds
...
ideal JSON:
[
{
name: "Tim",
age: "15",
pet:["cats","dogs"]
},
{
name: "Joe",
age: "11",
pet: ["rabbits","birds"]
}, ...
]
This is the command i tried:
cat file.tsv | jq --slurp --raw-input --raw-output 'split("\n") | .[1:-1] | map(split("\t")) |
map({"name": .[0],
"age": .[1],
"pet": .[2]})'
and the output of the above command is:
[
{
name: "Tim",
age: "15",
pet:"cats,dogs"
},
{
name: "Joe",
age: "11",
pet: "rabbits,birds"
}, ...
]
Like this:
jq -rRs 'split("\n")[1:-1] |
map(split("\t") | {
"name": .[0],
"age": .[1],
"pet": (.[2] | split(","))
}
)' input.tsv
In case the name includes any commas, I'd go with the following, which also avoids having to "slurp" the input:
inputs
| split("\t")
| {name: .[0], age: .[1], pet: .[2]}
| .pet |= split(",")
To skip the header, simply invoke jq with the -R option (without -n): the first line is read into `.`, which the program never uses, while `inputs` supplies the remaining lines, e.g. like this:
jq -R -f program.jq input.tsv
If you want the result as an array, simply enclose the entire filter above in square brackets.
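Putting the slurp-free variant together with the array wrapping, a runnable sketch (recreating the sample as input.tsv):

```shell
# Recreate the sample TSV, header included.
printf 'name\tage\tpets\nTim\t15\tcats,dogs\nJoe\t11\trabbits,birds\n' > input.tsv

# -R reads raw lines; the header lands in `.` (unused), and `inputs`
# yields the data rows, collected into one array by the brackets.
jq -R '[inputs | split("\t")
  | {name: .[0], age: .[1], pet: (.[2] | split(","))}]' input.tsv
```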
