I have some JSON data that is essentially CSV-style data, represented as JSON. I am struggling to figure out the correct jq expression to convert the following JSON into a form that can generate the appropriate CSV with @csv.
There is a fixed number of 'columns', i.e. the "AAA" values, but the number of values in each 'column' is dynamic yet identical across columns. That is, the arrays "AAA", "BBB", "CCC", etc. all have the same length, but that length is dynamic and can change between data sets.
Input (note invalid numbers present, to illustrate example):
{
"AAA": [
111.1,
111.2,
111.3,
111..,
111.n
],
"BBB": [
222.1,
222.2,
222.3,
222..,
222.n
],
"CCC": [
333.1,
333.2,
333.3,
333..,
333.n
],
"DDD": [
444.1,
444.2,
444.3,
444..,
444.n
],
"EEE": [
555.1,
555.2,
555.3,
555..,
555.n
]
}
Desired output (note invalid numbers present, to illustrate example):
[
[
"AAA",
"BBB",
"CCC",
"DDD",
"EEE"
],
[
111.1,
222.1,
333.1,
444.1,
555.1
],
[
111.2,
222.2,
333.2,
444.2,
555.2
],
[
111.3,
222.3,
333.3,
444.3,
555.3
],
[
111..,
222..,
333..,
444..,
555..
],
[
111.n,
222.n,
333.n,
444.n,
555.n
]
]
Here is the desired CSV, for illustration purposes (converting with @csv is pretty straightforward):
AAA,BBB,CCC,DDD,EEE
111.1,222.1,333.1,444.1,555.1
111.2,222.2,333.2,444.2,555.2
111.3,222.3,333.3,444.3,555.3
111..,222..,333..,444..,555..
111.n,222.n,333.n,444.n,555.n
If the required expression is far easier without the first array in the result containing the "AAA" 'header' values, then I can easily live without them.
Thank you.
You can use jq's transpose builtin to transpose the arrays formed from the keys and values.
jq '[ to_entries[] | [.key, .value[]] ] | transpose'
The bulk of the magic is performed by the transpose builtin; before that, you just need to collect the keys and values into an array of arrays. The CSV output can then be generated with the @csv filter.
jq --raw-output '[ to_entries[] | [.key, .value[]] ] | transpose[] | @csv'
You could also use map() and avoid the redundant [ .. ]:
jq 'to_entries | map([.key, .value[]]) | transpose'
jq --raw-output 'to_entries | map([.key, .value[]]) | transpose[] | @csv'
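To see the pipeline end to end, here is a sketch against a small two-column input (hypothetical data, assuming jq is on PATH):

```shell
# Two short columns stand in for the larger data set.
input='{"AAA":[1,2],"BBB":[3,4]}'

# Transposed JSON: the header row first, then one row per index.
printf '%s' "$input" | jq -c 'to_entries | map([.key, .value[]]) | transpose'
# -> [["AAA","BBB"],[1,3],[2,4]]

# The same pipeline rendered as CSV; @csv quotes strings but not numbers.
printf '%s' "$input" | jq --raw-output \
  'to_entries | map([.key, .value[]]) | transpose[] | @csv'
# -> "AAA","BBB"
#    1,3
#    2,4
```

Note that @csv double-quotes the string headers; that is standard CSV quoting and most consumers accept it.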
{
"Key": "value"
"Results": [
{
"KeyIwant":"value"
...
}
]
}
I want to get a list of objects containing only the keys (and their values) that I specify.
So far I've found something on the internet, but it produces a stream of separate objects rather than a list (there are no commas between them):
jq '.Results | .[] | with_entries(select([.key] | inside(["key","key2", "key3"])))' input.json
For efficiency, you could use IN:
[.Results[]|with_entries(select(.key|IN("KeyIwant","etc"))) ]
If you want the whitelist to be presented as a JSON array, say $w, then write IN($w[])
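As a sketch (the key names are placeholders, and IN requires jq 1.6 or later), passing the whitelist as a JSON array with --argjson:

```shell
# Keep only whitelisted keys in each result object; "noise" is dropped.
printf '%s' '{"Results":[{"KeyIwant":"value","noise":1}]}' |
  jq -c --argjson w '["KeyIwant"]' \
    '[ .Results[] | with_entries(select(.key | IN($w[]))) ]'
# -> [{"KeyIwant":"value"}]
```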
I have a series of JSON files that I've been working with via jq. Each file consists of an array of dictionaries, e.g.
file1.json: [{ "id": 1 }]
file2.json: [{ "id": 2 }]
The only command I have found which successfully merges all input files into one output array is:
jq --slurp '.[0] + .[1]' file1.json file2.json
This command outputs [{ "id": 1 }, { "id": 2 }], as expected.
I'm writing a shell script which is expected to merge a variable set of files into a single JSON array as an output. My script will execute something like:
find . -type f -iname '*.json' | xargs jq 'FILTER'
This should invoke jq like jq 'FILTER' file1.json file2.json ....
Is there a feature that I'm missing that will take all input files and first merge them into one contiguous list of objects without having to rewrite the filter to something like .[0] + .[1] + .[2] ...?
Given:
1.json
[{ "id": 1 }]
2.json
[{ "id": 2 }]
3.json
[{ "id": 3 }]
Then this command:
jq --slurp 'map(.[])' 1.json 2.json 3.json
Returns:
[
{
"id": 1
},
{
"id": 2
},
{
"id": 3
}
]
Or simply:
jq --slurp 'flatten' 1.json 2.json 3.json
It's generally best to avoid the -s option, especially if your version of jq supports inputs, as do all versions >= 1.5.
In any case, if your version of jq supports inputs, you could write:
jq -n '[inputs[]]' 1.json 2.json 3.json # etc
or whichever variant meets your needs.
Otherwise, you could simply write:
jq -s add 1.json 2.json 3.json # etc
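To check that the two spellings agree, a quick sketch with throwaway files (the names are arbitrary):

```shell
a=$(mktemp)
b=$(mktemp)
printf '[{"id":1}]' > "$a"
printf '[{"id":2}]' > "$b"

# inputs streams each file's top-level value; [inputs[]] collects the elements.
jq -c -n '[inputs[]]' "$a" "$b"   # -> [{"id":1},{"id":2}]

# -s slurps the arrays into one array of arrays; add concatenates them.
jq -c -s 'add' "$a" "$b"          # -> [{"id":1},{"id":2}]

rm -f "$a" "$b"
```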
Note on flatten
flatten itself is ruthless:
$ jq flatten <<< '[[[1], [[[[2]]]]]]'
[1,2]
flatten(1) is less so.
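For comparison, the depth argument limits how many levels of nesting are removed:

```shell
# flatten(1) strips exactly one level; deeper nesting is preserved.
printf '%s' '[[[1], [[[[2]]]]]]' | jq -c 'flatten(1)'
# -> [[1],[[[[2]]]]]
```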
Can someone help me extract values from JSON like the one below?
[
[
{
"name": "x",
"age": "y",
"class": "z"
}
]
]
I would like to extract age from the above JSON using jq.
The pedestrian way:
.[] | .[] | .age
The briefer way:
.[][].age
Another possibility to consider (it has different semantics) would be:
.. | .age?
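The difference in semantics, sketched on the sample document: the path expressions expect exactly two array levels, while the recursive descent finds .age at any depth (the ? suppresses errors on non-objects).

```shell
doc='[[{"name":"x","age":"y","class":"z"}]]'

# Fixed-depth path: two array levels, then the field.
printf '%s' "$doc" | jq '.[][].age'        # -> "y"

# Recursive descent: matches wherever .age occurs, collected into an array.
printf '%s' "$doc" | jq -c '[.. | .age?]'  # -> ["y"]
```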
I'm trying to rename the values in an array; however, only parts of them, keeping the rest intact. I've managed to rename whole strings with jq, but not the "parts" task.
JSON input:
{
"values": [
"foo:bar1",
"foo:bar2",
"foo:bar3"
]
}
desired output:
{
"values": [
"bar1",
"bar2",
"bar3"
]
}
Thank you in advance!
Assuming your jq has regex support (e.g. jq 1.5):
.values |= map(sub("foo:"; ""))
Or maybe "^foo:"; ...
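Putting it together on the sample input (the anchored variant only strips the prefix at the start of the string):

```shell
# Strip the leading "foo:" from every value in the array.
printf '%s' '{"values":["foo:bar1","foo:bar2","foo:bar3"]}' |
  jq -c '.values |= map(sub("^foo:"; ""))'
# -> {"values":["bar1","bar2","bar3"]}
```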