I have an input row like this: 1374240, 1374241. I need to produce a JSON file:
{
"version": "1.0",
"tests": [
{
"id": 1374240,
"selector": ""
},
{
"id": 1374241,
"selector": ""
}
]
}
I built an array from it:
idRow='1374240, 1374241'
IFS=',' read -r -a array <<<"$idRow"
trimmedArray=()
for id in "${array[#]}"; do
trimmedId="$(echo -e "${id}" | xargs)"
testRow="{\"id\":${trimmedId},\"selector\":\"\"}"
trimmedArray+=("$testRow")
done
echo "${trimmedArray[*]}"
Output:
{"id":1374240,"selector":""} {"id":1374241,"selector":""}
How can I insert these into the final JSON structure and write it to a file?
I have tried different variants with jq, but I can't get the final structure. Please help.
Read in the numbers as raw text using -R, split at the comma, use tonumber to convert them to numbers, and create the structure on the fly:
echo "1374240, 1374241" | jq -R '
{version:"1.0",tests:(
split(",") | map(
{id: tonumber, selector: ""}
)
)}
'
If you can omit the comma in the first place, it's even easier, as the numbers themselves are valid JSON and can be read in directly:
echo "1374240 1374241" | jq -s '
{version:"1.0",tests: map(
{id: tonumber, selector: ""}
)}
'
Output:
{
"version": "1.0",
"tests": [
{
"id": 1374240,
"selector": ""
},
{
"id": 1374241,
"selector": ""
}
]
}
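To go from the idRow variable in the question to an actual file, feed the string to jq via a here-string and redirect the output. A minimal sketch (the output file name tests.json is just an example):
idRow='1374240, 1374241'
jq -R '{version:"1.0",tests:(split(",")|map({id:tonumber,selector:""}))}' <<< "$idRow" > tests.json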
I want to merge two JSON arrays with the help of jq. Each object in the arrays contains a name field, which allows me to group by it and merge the two arrays into one.
LABELS
[
{
"name": "power_branch",
"description": "master"
},
{
"name": "test_branch",
"description": "main"
}
]
RUNNERS
[
{
"name": "power_branch",
"runner": "power",
"runner_tag": "macos"
},
{
"name": "power_branch",
"runner": "power",
"runner_tag": "ubuntu"
},
{
"name": "test_branch",
"runner": "tester",
"runner_tag": ""
},
{
"name": "development",
"runner": "dev",
"runner_tag": "ubuntu"
}
]
Desired Output
[
{
"name": "power_branch",
"description": "master",
"runner": "power",
"runner_tag": "macos"
},
{
"name": "power_branch",
"description": "master",
"runner": "power",
"runner_tag": "ubuntu"
},
{
"name": "test_branch",
"description": "main",
"runner": "tester",
"runner_tag": ""
}
]
I tried it with the following script, but the power_branch entry was overridden; instead I want another entry with a different runner_tag.
#!/usr/bin/bash
LABELS='[{"name": "power_branch","description": "master"},{"name": "test_branch","description": "main"}]'
RUNNERS='''
[
{ "name": "power_branch", "runner": "power", "runner_tag": "macos" },
{ "name": "power_branch", "runner": "power", "runner_tag": "ubuntu" },
{ "name": "test_branch", "runner": "tester", "runner_tag": "" },
{ "name": "development", "runner": "dev", "runner_tag": "ubuntu" }
]
'''
FINAL=$(jq -s '[ .[0] + .[1] | group_by(.name)[] | select(length > 1) | add]' <(echo $LABELS) <(echo $RUNNERS))
echo $FINAL
OUTPUT
[
{
"name": "power_branch",
"description": "master",
"runner": "power",
"runner_tag": "ubuntu"
},
{
"name": "test_branch",
"description": "main",
"runner": "tester",
"runner_tag": ""
}
]
If you have two files, labels.json and runners.json, you could read in the latter (runners) as a variable using --argjson and, using map, append to each element of the input array (labels) the corresponding fields determined by select.
jq --argjson runners "$(cat runners.json)" '
map(.name as $name | . + ($runners[] | select(.name == $name)))
' labels.json
However, this reads the whole runners array into your shell's command line space (--argjson takes two strings: a name and a value), which can easily overflow if the runners array gets big enough.
Therefore, instead of using command substitution "$(…)", you could read in the runners file directly, using either --slurpfile at the cost of another iteration level ([][]), or (despite the manual advising against it) --argfile with just a single iteration level as before:
jq --slurpfile runners runners.json '
map(.name as $name | . + ($runners[][] | select(.name == $name)))
' labels.json
jq --argfile runners runners.json '
map(.name as $name | . + ($runners[] | select(.name == $name)))
' labels.json
To circumvent all these issues, @peak suggested using input for each file together with the -n option. Note that this requires the two files to be provided in exactly this order, as they are read in sequentially.
jq -n 'input as $runners | input |
map(.name as $name | . + ($runners[] | select(.name == $name)))
' runners.json labels.json
As the second input (labels) is simply passed on as the filter's main input (in contrast to runners, which is stored in a variable for later use), this can be simplified further by dropping the -n option. The main input is then read implicitly, so labels.json now has to come first (the order of the files still matters):
jq 'input as $runners |
map(.name as $name | . + ($runners[] | select(.name == $name)))
' labels.json runners.json
Finally, here's yet another approach using the SQL-style operators INDEX and JOIN, which were introduced in jq 1.6. This also gets by with a single input call, and again the order of the files matters, as we need the runners array as the filter's primary input:
jq '
JOIN(INDEX(input[]; .name); .name) | map(select(.[1]) | add)
' runners.json labels.json
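For illustration (this is not part of the original answer), here is the lookup table INDEX builds from labels.json on its own; JOIN then pairs each runner with the entry whose key matches its .name:
jq 'INDEX(.[]; .name)' labels.json
{
"power_branch": {
"name": "power_branch",
"description": "master"
},
"test_branch": {
"name": "test_branch",
"description": "main"
}
}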
I am trying to create an array of objects from a bash array using jq.
Here is where I am stuck:
IDS=("baf3eca8-c4bd-4590-bf1f-9b1515d521ba" "ef2fa922-2038-445c-9d32-8c1f23511fe4")
echo "${IDS[#]}" | jq -R '[{id: ., names: ["bob", "sally"]}]'
Results in:
[
{
"id": "baf3eca8-c4bd-4590-bf1f-9b1515d521ba ef2fa922-2038-445c-9d32-8c1f23511fe4",
"names": [
"bob",
"sally"
]
}
]
My desired result:
[
{
"id": "baf3eca8-c4bd-4590-bf1f-9b1515d521ba",
"names": [
"bob",
"sally"
]
},
{
"id": "ef2fa922-2038-445c-9d32-8c1f23511fe4",
"names": [
"bob",
"sally"
]
}
]
Any help would be much appreciated.
Split your bash array into NUL-delimited items using printf '%s\0', then read the raw stream using -Rs (--raw-input together with --slurp) and, within your jq filter, split it back into an array using split with the delimiter "\u0000". The trailing NUL leaves an empty element at the end, which the [:-1] slice drops:
printf '%s\0' "${IDS[@]}" | jq -Rs '
split("\u0000")[:-1] | map({id: ., names: ["bob", "sally"]})
'
for id in "${IDS[#]}" ; do
echo "$id"
done | jq -nR '[ {id: inputs, names: ["bob", "sally"]} ]'
or as a one-liner:
printf "%s\n" "${IDS[#]}" | jq -nR '[{id: inputs, names: ["bob", "sally"]}]'
I need three jq operations to get the right result. How can I do it within one command?
Here is a fragment from the source JSON file
[
{
"Header": {
"Tenant": "tenant-1",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0b0b-0c0c",
"name": "NumberOfSearchResults"
},
{
"id": "aaaa0001-0a0a",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-2",
"Rcode": 200
},
"Body": {
"values": []
}
},
{
"Header": {
"Tenant": "tenant-3",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "cccca0003-0b0b",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-4",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0f0g-0e0a",
"name": "NumberOfSearchResults"
}
]
}
}
]
I apply two filters and create two intermediate JSON files. First, I create the list of all tenants:
jq -r '[.[].Header.Tenant]' source.json >all-tenants.json
Then I use select to create an array of all tenants that do not have a particular name present in the Body.values[] array:
jq -r '[.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]' source.json >filter1.json
Results - all-tenants.json
["tenant-1",
"tenant-2",
"tenant-3",
"tenant-4"
]
filter1.json
["tenant-2",
"tenant-4"
]
And then I subtract filter1.json from all-tenants.json to get the difference:
jq -r -n --argfile filter filter1.json --argfile alltenants all-tenants.json '$alltenants - $filter|.[]'
Result:
tenant-1
tenant-3
Tenant names (the values of the "Tenant" key) are unique, and each of them occurs only once in the source.json file.
Just to clarify: I understand that I could use select condition(s) that would give me the same result as subtracting the two arrays.
What I want to understand is how I can assign these two arrays to variables and use them directly in a single command, without the intermediate files.
Thanks
Use your filters to fill in the values of a new object and use the keys to refer to the arrays.
jq -r '{
"all-tenants": [.[].Header.Tenant],
"filter1": [.[]|select (all(.Body.values[]; .name !="LoadTest"))]|[.[].Header.Tenant]
} | .["all-tenants"] - .filter1 | .[]'
Note: .["all-tenants"] is required by the special character "-" in that key. See the entry under Object Identifier-Index in the manual.
how can I assign these two arrays to variables and use them directly in a single command, without the intermediate files?
Simply store the intermediate arrays as jq "$-variables":
[.[].Header.Tenant] as $x
| ([.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]) as $y
| $x - $y
If you want to itemize the contents of $x - $y, then simply add a final .[] to the pipeline.
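Put together, this becomes a single command. A sketch, assuming the data sits in source.json and raw output is wanted as in the question (it prints tenant-1 and tenant-3 as before):
jq -r '[.[].Header.Tenant] as $x
| ([.[] | select(all(.Body.values[]; .name != "LoadTest"))] | [.[].Header.Tenant]) as $y
| ($x - $y)[]' source.json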
Example input:
{
"firstName": "Jam",
"Product": [
{
"productId": "5e09ad38986b7c30f339c5c0"
},
{
"productId": "5e09407b986b7c30f339c18e"
},
{
"productId": "5e094c2a986b7c30f339c1d2"
}
]
}
Expected output:
Jam,5e09ad38986b7c30f339c5c0
Jam,5e09407b986b7c30f339c18e
Jam,5e094c2a986b7c30f339c1d2
My current command produces the output, but on the same row, comma separated:
jq -rc '.firstName,.Product[0] .productId'
To generate a report in CSV format, you need to put the column values into an array and pass it to the @csv filter.
$ jq -r '[.firstName] + (.Product[] | [.productId]) | @csv' file
"Jam","5e09ad38986b7c30f339c5c0"
"Jam","5e09407b986b7c30f339c18e"
"Jam","5e094c2a986b7c30f339c1d2"
So my objective is to merge JSON files to obtain this format:
{
"title": "NamesBook",
"list": [
{
"name": "Ajay"
},
{
"name": "Al"
}
]
}
And I have files that look like this format:
blahblah.json
{
"title": "NamesBook",
"list": [
{
"name": "Ajay"
}
]
}
blueblue.json
{
"title": "NamesBook",
"list": [
{
"name": "Al"
}
]
}
I can store the list arrays of all my names in a variable with the following:
x=$(jq -s '.[].list' *.json)
And then I was planning on appending the variable to an empty array in a file I created, out.json, which looks like this:
{
"type": "NamesBook",
"list": []
}
However, when my script runs the line
jq '.list[] += "$x"' out.json
it brings up a jq error:
Cannot iterate over null.
Even when I add a random element, the same error shows up. Any tips on how I should proceed? Are there other tools in jq that help with merging arrays?
Let me also provide just what the title asks for, because I'm sure a lot of people who land on this question are looking for something simpler.
Any of the following (this includes the variants from math2001's and pmf's answers):
echo -e '["a","b"]\n["c","d"]' | jq -s 'add'
echo -e '["a","b"]\n["c","d"]' | jq -s 'flatten(1)'
echo -e '["a","b"]\n["c","d"]' | jq -s 'map(.[])'
echo -e '["a","b"]\n["c","d"]' | jq -s '[.[][]]'
echo -e '["a","b"]\n["c","d"]' | jq '.[]' | jq -s
results in:
[
"a",
"b",
"c",
"d"
]
Note: any of the above also works for arrays of objects, as the example below shows.
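For instance, with arrays of objects (an added illustration following the same pattern):
echo -e '[{"a":1},{"b":2}]\n[{"c":3}]' | jq -sc 'add'
results in:
[{"a":1},{"b":2},{"c":3}]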
You can merge your files with add (jq 1.3+):
jq -s '.[0].list=([.[].list]|add)|.[0]' *.json
or flatten (jq 1.5+):
jq -s '.[0].list=([.[].list]|flatten)|.[0]' *.json
[.[].list] - creates an array of all "list" arrays
[
[
{
"name": "Ajay"
}
],
[
{
"name": "Al"
}
]
]
[.[].list]|flatten - flatten it (or [.[].list]|add - add all the arrays together)
[
{
"name": "Ajay"
},
{
"name": "Al"
}
]
.[0].list=([.[].list]|flatten)|.[0] - replace the first "list" with the merged one, output it.
{
"title": "NamesBook",
"list": [
{
"name": "Ajay"
},
{
"name": "Al"
}
]
}
Assuming every file will have the same title and you're simply combining the list contents, you could do this:
$ jq 'reduce inputs as $i (.; .list += $i.list)' blahblah.json blueblue.json
This just takes the first input and appends the lists of all the other inputs to its list.
The OP did not specify what should happen if there are objects for which .title is not "NamesBook". If the intent is to select objects with .title equal to "NamesBook", one could write:
map(select(.title == "NamesBook"))
| {title: .[0].title, list: map( .list ) | add}
This assumes that jq is invoked with the -s option.
Incidentally, add is the way to go here: simple and fast.
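Put together as a single command, a sketch assuming all input files are passed as *.json and slurped with -s as noted above:
jq -s 'map(select(.title == "NamesBook")) | {title: .[0].title, list: map(.list) | add}' *.json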