I have a dictionary which looks like:
cat dictionary.json
[
{
"key": "key01",
"value": "value01"
},
{
"key": "key02",
"value": "value02"
},
{
"key": "key03",
"value": "value03",
"extraProperty": {
"foo": "bar"
}
},
{
"key": "key04",
"value": "value04"
}
]
Then, I have an array which is:
echo $array
key01 key02 key03
Expected output:
value01 value02 value03
I'm having trouble making jq use an array that is not in JSON format.
I tried various solutions that I found, but none of them worked.
This post, "jq - How to select objects based on a 'whitelist' of property values", seems to solve a similar problem, but it doesn't work with my input:
echo $array | jq --argfile whitelist dictionary.json 'select(any(.key== $whitelist[]; .value))'
parse error: Invalid numeric literal at line 1, column 6
I also tried to use
jq -n --arg array $array --argfile whitelist dico.json 'select(any(.key== $whitelist[]; .valuee))'
jq: error: key02/0 is not defined at <top-level>, line 1:
key02
jq: 1 compile error
Thanks!
Here is one way, using from_entries to build a lookup object from the array of key/value entries:
jq -r --arg array "$array" \
'from_entries | .[($array | split(" "))[]]' \
dictionary.json
Output
value01
value02
value03
See man jq for further information.
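As a quick sanity check, `from_entries` collapses the array of {key, value} objects into a single lookup object, silently ignoring extra fields such as extraProperty (a sketch, assuming the dictionary.json shown in the question):

```shell
# Recreate the question's dictionary.json, then inspect the lookup object.
cat > dictionary.json <<'EOF'
[
  {"key": "key01", "value": "value01"},
  {"key": "key02", "value": "value02"},
  {"key": "key03", "value": "value03", "extraProperty": {"foo": "bar"}},
  {"key": "key04", "value": "value04"}
]
EOF

jq -c 'from_entries' dictionary.json
# {"key01":"value01","key02":"value02","key03":"value03","key04":"value04"}
```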
Using INDEX/2, which constructs a dictionary:
echo 'key01 key02 key03' |
jq -Rr --argfile dict dictionary.json '
INDEX($dict[]; .key) as $d
| split(" ") | map( $d[.]|.value )
| join(" ")'
yields:
value01 value02 value03
If your jq does not have INDEX, then now would be an excellent time to upgrade to jq 1.6; alternatively, you can simply snarf its def by googling: jq "def INDEX"
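For reference, the relevant def from jq 1.6's builtin.jq is reproduced below; pasting it at the top of the program makes the solution work on jq 1.5 as well. This sketch uses --slurpfile rather than the deprecated --argfile, so the dictionary arrives wrapped in an array and is unwrapped with $dict[0]:

```shell
echo 'key01 key02 key03' |
jq -Rr --slurpfile dict dictionary.json '
  # Definition of INDEX/2 as shipped in jq 1.6:
  def INDEX(stream; idx_expr):
    reduce stream as $row ({}; .[$row | idx_expr | tostring] |= $row);
  INDEX($dict[0][]; .key) as $d
  | split(" ") | map($d[.].value) | join(" ")'
# value01 value02 value03
```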
Assuming I have the following JSON object (which is just an example):
{
"foo": 1,
"bar": 2,
"baz": 3
}
And the following JSON array (another example):
["foo", "baz"]
How could I use jq to output the following object?
{
"foo": 1,
"baz": 3
}
I hate asking this question because I feel certain it has been answered before, but google has failed me and my jq-fu is just not where it needs to be to figure this out.
Using a reduce to iteratively build up the result object would be one way:
echo '["foo", "baz"]' | jq --argjson index '{"foo":1,"bar":2,"baz":3}' '
reduce .[] as $x ({}; .[$x] = $index[$x])
'
Using JOIN, creating key-value pairs, and employing from_entries for assembly would be another way:
echo '["baz", "foo"]' | jq --argjson index '{"foo":1,"bar":2,"baz":3}' '
JOIN($index; .) | map({key:.[0], value:.[1]}) | from_entries
'
Output:
{
"foo": 1,
"baz": 3
}
Provided that . is the object and $arr is the array, the following does the trick
delpaths(keys - $arr | map([.]))
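Spelled out as a full invocation (with the object on stdin and the array passed via --argjson):

```shell
echo '{"foo":1,"bar":2,"baz":3}' |
jq -c --argjson arr '["foo","baz"]' '
  # keys - $arr = the keys to drop; map([.]) turns each key into a path
  delpaths(keys - $arr | map([.]))'
# {"foo":1,"baz":3}
```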
To achieve the desired result, one could write:
jq '{foo, baz}'
This can (with some considerable trepidation) be made into a solution for the given problem by text wrangling, e.g. along the lines of:
jq "$(echo '["foo", "baz"]' | jq -Rr '"{" + .[1:-1] + "}" ')"
or
jq "$(echo '["foo", "baz"]' | sed -e 's/\[/{/' -e 's/\]/}/')"
Here's a reduce-free solution, assuming $keys is the array of keys of interest; it is possibly more efficient than the one involving array subtraction:
. as $in | INDEX($keys[]; .) | map_values($in[.])
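For example, one way to spell out an INDEX-based selection in full (a sketch; the two-step shape makes each intermediate value easy to inspect):

```shell
echo '{"foo":1,"bar":2,"baz":3}' |
jq -c --argjson keys '["foo","baz"]' '
  . as $in
  | INDEX($keys[]; .)     # {"foo":"foo","baz":"baz"}
  | map_values($in[.])    # replace each placeholder with its value from $in
'
# {"foo":1,"baz":3}
```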
I have a dictionary that looks like this:
{
"uid": "d6fc3e2b-0001a",
"name": "ABC Mgmt",
"type": "host"
}
{
"uid": "d6fc3e2b-0002a",
"name": "Server XYZ",
"type": "group"
}
{
"uid": "d6fc3e2b-0003a",
"name": "NTP Primary",
"type": "host"
}
{
"uid": "d6fc3e2b-0004a",
"name": "H-10.10.10.10",
"type": "host"
}
Then I have a txt file:
"d6fc3e2b-0001a"
"d6fc3e2b-0001a","d6fc3e2b-0002a","d6fc3e2b-0003a"
"d6fc3e2b-0004a"
Expected Output:
"ABC Mgmt"
"ABC Mgmt","Server XYZ","NTP Primary"
"H-10.10.10.10"
I'm having trouble making jq use an array that is not in JSON format. I tried various solutions that I found, but none of them worked. I am rather new to scripting and need some help.
input=file.txt
while IFS= read -r line
do
{
value=$(jq -r --arg line "$line" \
'from_entries | .[($line | split(","))[]]' \
dictionary.json)
echo $name
}
done < "$input"
In the following solution, the dictionary file is read using the --slurpfile command-line option, and the lines of "text" are read using inputs in conjunction with the -n command-line option. The -r command-line option is used in conjunction with the @csv filter to produce the desired output.
Invocation
jq -n -R -r --slurpfile dict stream.json -f program.jq stream.txt
program.jq
(INDEX($dict[]; .uid) | map_values(.name)) as $d
| inputs
| split(",")
| map(fromjson)
| map($d[.])
| @csv
Caveat
The above assumes that the quoted values in stream.txt do not themselves contain commas.
If the quoted values in stream.txt do contain commas, then it would be much easier if the values given on each line in stream.txt were given as JSON entities, e.g. as an array of strings, or as a sequence of JSON strings with no separator character.
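An end-to-end run, inlining program.jq and recreating the two sample files from the question, might look like this (a sketch using the question's data):

```shell
cat > stream.json <<'EOF'
{"uid": "d6fc3e2b-0001a", "name": "ABC Mgmt", "type": "host"}
{"uid": "d6fc3e2b-0002a", "name": "Server XYZ", "type": "group"}
{"uid": "d6fc3e2b-0003a", "name": "NTP Primary", "type": "host"}
{"uid": "d6fc3e2b-0004a", "name": "H-10.10.10.10", "type": "host"}
EOF
cat > stream.txt <<'EOF'
"d6fc3e2b-0001a"
"d6fc3e2b-0001a","d6fc3e2b-0002a","d6fc3e2b-0003a"
"d6fc3e2b-0004a"
EOF

jq -nRr --slurpfile dict stream.json '
  (INDEX($dict[]; .uid) | map_values(.name)) as $d
  | inputs | split(",") | map(fromjson) | map($d[.]) | @csv
' stream.txt
# "ABC Mgmt"
# "ABC Mgmt","Server XYZ","NTP Primary"
# "H-10.10.10.10"
```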
Solution to problem described in a comment
Invocation
< original.json jq -r --slurpfile dict stream.json -f program.jq
program.jq
(INDEX($dict[]; .uid) | map_values(.name)) as $d
| .source
| map($d[.])
| @csv
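The shape of original.json is not shown in this thread; assuming it carries the uids in a .source array, a run might look like the following (both the filename and the .source layout are assumptions):

```shell
cat > original.json <<'EOF'
{"source": ["d6fc3e2b-0001a", "d6fc3e2b-0004a"]}
EOF

# stream.json is the dictionary of {uid, name, type} objects shown earlier.
< original.json jq -r --slurpfile dict stream.json '
  (INDEX($dict[]; .uid) | map_values(.name)) as $d
  | .source | map($d[.]) | @csv'
# "ABC Mgmt","H-10.10.10.10"
```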
Is it possible to filter an entire array of items in JQ in only one pass? Compare the following code, which runs jq over and over:
{
"foofoo": {
"barbar": [
{
"foo": "aaaa",
"bar": 0000
},
{
"foo": "bbbb",
"bar": 1111
},
{
"foo": "cccc",
"bar": 2222
}
]
}
}
bash array:
array=("1111" "2222")
my code is working but not very efficient and uses a lot of resources considering the array size in reality:
for k in "${array[@]}"; do
jq --argjson k "$k" '.foofoo.barbar |= map(select(.bar != $k))' json.json | sponge json.json
done
It keeps looping through the array, removing the unneeded entries and storing same file again by using sponge.
any ideas how to achieve a similar behavior with a lighter code?
Desired output:
{
"foofoo": {
"barbar": [
{
"foo": "aaaa",
"bar": 0
}
]
}
}
To improve the performance significantly, use the following jq approach (without any shell loops):
arr=("1111" "2222")
jq '($p | split(" ") | map(tonumber)) as $exclude
| .foofoo.barbar
|= map(select(.bar as $b
| any($exclude[]; . == $b) | not))' \
--arg p "${arr[*]}" file.json | sponge file.json
The output:
{
"foofoo": {
"barbar": [
{
"foo": "aaaa",
"bar": 0
}
]
}
}
I'm positive there are better ways to do this: I really just throw stuff at jq until something sticks to the wall ...
# 1. in the shell, construct a JSON object string from the array => {"1111":1,"2222":1}
printf -v jsonobj '{%s}' "$(printf '"%q":1\n' "${array[@]}" | paste -sd,)"
# 2. use that to test for non-membership in the jq select function
jq --argjson o "$jsonobj" '.foofoo.barbar |= map(select((.bar|tostring|in($o)) == false))' json.json
outputs
{
"foofoo": {
"barbar": [
{
"foo": "0000",
"bar": "aaaa"
}
]
}
}
I assume this is what you want.
Constructing a dictionary object opens the door to an efficient solution. If your jq has INDEX/2, you could use the following invocation:
jq --arg p "${arr[*]}" '
INDEX($p | split(" ")[]; .) as $dict
| .foofoo.barbar
|= map(select($dict[.bar|tostring] | not))'
If your jq does not have INDEX/2, then now would be an excellent time to upgrade; otherwise, you could snarf its def by googling:
jq "def INDEX"
Struggling with formatting of data in jq. I have 2 issues.
Need to take the last array .rental_methods and concatenate them into 1 line, colon separated.
@csv doesn't seem to work with my query; I get the error: string ("5343") cannot be csv-formatted, only array
The jq command is this (without the | @csv):
jq --arg LOC "$LOC" '.last_updated as $lu | .data[]|.[]| $lu, .station_id, .name, .region_id, .address, .rental_methods[]'
JSON:
{
"last_updated": 1539122087,
"ttl": 60,
"data": {
"stations": [{
"station_id": "5343",
"name": "Lot",
"region_id": "461",
"address": "Austin",
"rental_methods": [
"KEY",
"APPLEPAY",
"ANDROIDPAY",
"TRANSITCARD",
"ACCOUNTNUMBER",
"PHONE"
]
}
]
}
}
I'd like the output to end up as:
1539122087,5343,Lot,461,Austin,KEY:APPLEPAY:ANDROIDPAY:TRANSITCARD:ACCOUNTNUMBER:PHONE:,
Using @csv:
jq -r '.last_updated as $lu
| .data[][]
| [$lu, .station_id, .name, .region_id, .address, (.rental_methods | join(":")) ]
| @csv'
What you were probably missing with @csv before was an array constructor around the list of things you wanted in the CSV record.
You could repair your jq filter as follows:
.last_updated as $lu
| .data[][]
| [$lu, .station_id, .name, .region_id, .address,
(.rental_methods | join(":"))]
| @csv
With your JSON, this would produce:
1539122087,"5343","Lot","461","Austin","KEY:APPLEPAY:ANDROIDPAY:TRANSITCARD:ACCOUNTNUMBER:PHONE"
... which is not quite what you've said you want. Changing the last line to:
map(tostring) | join(",")
results in:
1539122087,5343,Lot,461,Austin,KEY:APPLEPAY:ANDROIDPAY:TRANSITCARD:ACCOUNTNUMBER:PHONE
This is exactly what you've indicated you want except for the terminating punctuation, which you can easily add (e.g. by appending + "," to the program above) if so desired.
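Put together as one command (stations.json is a hypothetical filename for the sample input):

```shell
jq -r '.last_updated as $lu
  | .data[][]
  | [$lu, .station_id, .name, .region_id, .address,
     (.rental_methods | join(":"))]
  | map(tostring) | join(",")' stations.json
# 1539122087,5343,Lot,461,Austin,KEY:APPLEPAY:ANDROIDPAY:TRANSITCARD:ACCOUNTNUMBER:PHONE
```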
I have a JSON object containing an array that looks like this:
{
"StackSummaries": [
{
"CreationTime": "2016-06-01T22:22:49.890Z",
"StackName": "foo-control-eu-west-1",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-06-01T22:47:58.433Z"
},
{
"CreationTime": "2016-04-13T11:22:04.250Z",
"StackName": "foo-bar-testing",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-04-26T16:17:07.570Z"
},
{
"CreationTime": "2016-04-10T01:09:49.428Z",
"StackName": "foo-ldap-eu-west-1",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-04-17T13:44:04.758Z"
}
]
}
I am looking to create text output that looks like this:
foo-control-eu-west-1
foo-bar-testing
foo-ldap-eu-west-1
Is jq able to do this? Specifically, what would the jq command line be that would select each StackName in the array and output each key one per line?
jq --raw-output '.StackSummaries[].StackName'
$ jq -r '[.StackSummaries[] | .StackName] | unique[]' input.json
foo-bar-testing
foo-control-eu-west-1
foo-ldap-eu-west-1
The -r option strips the quotation marks from the output. You might not want the call to 'unique'.
For reference, if you wanted all the key names:
$ jq '[.StackSummaries[] | keys[]] | unique' input.json
[
"CreationTime",
"LastUpdatedTime",
"StackName",
"StackStatus"
]
Here is another solution
jq -M -r '..|.StackName?|values' input.json
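Here `..` performs a recursive descent over every subvalue, `.StackName?` probes each one without raising an error on arrays and scalars, and `values` drops the nulls produced by objects lacking the key, so the idiom works at any nesting depth. A minimal illustration on made-up data:

```shell
echo '{"a": {"StackName": "x"}, "b": [{"StackName": "y"}, {"c": 1}]}' |
jq -Mr '.. | .StackName? | values'
# x
# y
```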