I want output in the JSON array format below, using a command on the Linux/bash platform. Can anybody help?
Data in a text file:
test:test
test1:test1
test4:test4
Expected output:
{array :
[
{test:test},
{test1:test1},
{test4:test4}
]
}
Using the jq command-line JSON parser:
jq -Rs '{array:split("\n")|map(split(":")|{(.[0]):.[1]}?)}' <file
{
"array": [
{
"test": "test"
},
{
"test1": "test1"
},
{
"test4": "test4"
}
]
}
The -R and -s options (combined as -Rs) make jq read the whole file as a single string.
The filter then splits that string into pieces to build the expected structure.
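The same idea can be written with an explicit select() instead of the trailing `?`, which makes the handling of the file's final newline visible. A minimal sketch, with the sample data piped in rather than read from a file:

```shell
# The trailing newline makes split("\n") produce a final empty element;
# select(length > 0) drops it before the key/value objects are built.
printf 'a:1\nb:2\n' |
  jq -cRs '{array: split("\n") | map(select(length > 0) | split(":") | {(.[0]): .[1]})}'
# -> {"array":[{"a":"1"},{"b":"2"}]}
```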
Assuming you want actual JSON output:
$ jq -nR '{array: (reduce inputs as $line ([]; . + [$line | split(":") | {(.[0]):.[1]}]))}' input.txt
{
"array": [
{
"test": "test"
},
{
"test1": "test1"
},
{
"test4": "test4"
}
]
}
Creating JSON works best with a dedicated JSON tool that can also make sense of raw text.
xidel is such a tool.
XPath:
xidel -s input.txt -e '{"array":x:lines($raw) ! {substring-before(.,":"):substring-after(.,":")}}'
(x:lines($raw) is a shorthand for tokenize($raw,'\r\n?|\n'), which creates a sequence of all lines.)
XQuery:
xidel -s input.txt --xquery '{"array":for $x in x:lines($raw) let $a:=tokenize($x,":") return {$a[1]:$a[2]}}'
I'm simply trying to replace an object's value in a JSON file with an array, using jq in a bash script.
The JSON file (truncated) looks like this:
{
"objects": {
"type": "foo",
"host": "1.1.1.1",
"port": "1234"
}
}
I want to replace the host object's value with an array of different values, so it looks like this:
{
"objects": {
"type": "foo",
"host": ["1.1.1.1","2.2.2.2"],
"port": "1234"
}
}
I tested around with this script. The input comes from a simple comma-separated string, which I convert into a proper JSON array (that part seems to work).
But I'm not able to replace the value with the array.
#!/bin/bash
objectshost="1.1.1.1,2.2.2.2"
objectshost_array=$(jq -c -n --arg arg $objectshost '$arg|split(",")')
jq --arg value "$objectshost_array" '.objects.host = $value' ./test.json > ./test.json.tmp
The best I ended up with, is this:
{
"objects": {
"type": "foo",
"host": "[\"1.1.1.1\",\"2.2.2.2\"]",
"port": "1234"
}
}
The result makes some sense, as the script simply replaces the value with the array's string representation. But it's not what I expected to get. ;)
I found some similar questions, but all of them were dealing with replacing values in existing arrays or key/value pairs; my problem seems to be the conversion from a single value to an array.
Can somebody please push me in the right direction? Or should I forget about jq and treat the JSON file as a simple text file?
Thanks in advance,
André
It can be done with a conditional assignment that appends the positional arguments:
jq '
.objects.host = (
.objects.host |
if type == "array"
then .
else [ . ]
end + $ARGS.positional
)
' input.json --args 1.2.3.4 2.2.2.2 4.4.4.4
Or the same as a stand-alone jq script, which is more readable and maintainable:
myJQScript:
#!/usr/bin/env -S jq --from-file --args
.objects.host = (
.objects.host |
if type == "array"
then .
else [ . ]
end + $ARGS.positional
)
Make it executable:
chmod +x myJQScript
Run it with arguments to add array entries to host:
$ ./myJQScript 1.2.3.4 2.2.2.2 4.4.4.4 < input.json
{
"objects": {
"type": "foo",
"host": [
"1.1.1.1",
"1.2.3.4",
"2.2.2.2",
"4.4.4.4"
],
"port": "1234"
}
}
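As an aside, the `$ARGS.positional` mechanism used here is easy to check in isolation (it requires jq 1.6+, where `--args` and `$ARGS` were introduced):

```shell
# --args turns every remaining command-line word into an entry of the
# $ARGS.positional array, each one bound as a JSON string
jq -cn '$ARGS.positional' --args 1.2.3.4 2.2.2.2
# -> ["1.2.3.4","2.2.2.2"]
```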
You can do it with a single jq command:
#!/bin/sh
objectshost="1.1.1.1,2.2.2.2"
jq --arg value "$objectshost" '.objects.host = ($value / ",")' ./test.json > ./test.json.tmp
This has the added benefit of not requiring Bash arrays, but can be used with any shell.
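The `$value / ","` expression relies on jq overloading `/` for strings: dividing a string by a separator string is equivalent to calling split on it. A quick sanity check:

```shell
# string / string is shorthand for split
jq -cn '"1.1.1.1,2.2.2.2" / ","'
# -> ["1.1.1.1","2.2.2.2"]
```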
If you already have a JSON array, you must use --argjson instead of --arg: --arg always creates a variable of type string, whereas --argjson parses the value as a JSON entity.
#!/bin/bash
objectshost="1.1.1.1,2.2.2.2"
objectshost_array=$(printf '%s\n' "$objectshost" | jq -cR 'split(",")')
jq --argjson value "$objectshost_array" '.objects.host = $value' ./test.json > ./test.json.tmp
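The difference between the two options is easy to see side by side:

```shell
# --arg binds the value verbatim as a string; --argjson parses it as JSON
jq -cn --arg a '[1,2]' --argjson b '[1,2]' '{a: $a, b: $b}'
# -> {"a":"[1,2]","b":[1,2]}
```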
I have a series of JSON files that I've been working with via jq. Each file consists of an array of dictionaries, e.g.
file1.json: [{ "id": 1 }]
file2.json: [{ "id": 2 }]
The only command I have found which successfully merges all input files into one output array is:
jq --slurp '.[0] + .[1]' file1.json file2.json
This command outputs [{ "id": 1 }, { "id": 2 }], as expected.
I'm writing a shell script which is expected to merge a variable set of files into a single JSON array as an output. My script will execute something like:
find . -type f -iname '*.json' | xargs jq 'FILTER'
This should invoke jq like jq 'FILTER' file1.json file2.json ....
Is there a feature that I'm missing that will take all input files and first merge them into one contiguous list of objects without having to rewrite the filter to something like .[0] + .[1] + .[2] ...?
Given:
1.json
[{ "id": 1 }]
2.json
[{ "id": 2 }]
3.json
[{ "id": 3 }]
Then this command:
jq --slurp 'map(.[])' 1.json 2.json 3.json
Returns:
[
{
"id": 1
},
{
"id": 2
},
{
"id": 3
}
]
Or simply:
jq --slurp 'flatten' 1.json 2.json 3.json
It's generally best to avoid the -s option, especially if your version of jq supports inputs, as do all versions >= 1.5.
In any case, if your version of jq supports inputs, you could write:
jq -n '[inputs[]]' 1.json 2.json 3.json # etc
or whichever variant meets your needs.
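One subtlety worth knowing: the -n flag matters because without it jq runs the program against the first input, so `inputs` only yields the documents after it. A minimal illustration with a stream of numbers:

```shell
# Without -n, the main program consumes the first input before inputs runs
printf '1\n2\n3\n' | jq -c '[inputs]'
# -> [2,3]

# With -n, nothing is pre-read and inputs sees every document
printf '1\n2\n3\n' | jq -cn '[inputs]'
# -> [1,2,3]
```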
Otherwise, you could simply write:
jq -s add 1.json 2.json 3.json # etc
Note on flatten
flatten itself is ruthless:
$ jq flatten <<< '[[[1], [[[[2]]]]]]'
[1,2]
flatten(1) is less so.
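For comparison, the depth-limited variant applied to the same input:

```shell
# flatten(n) removes at most n levels of nesting, leaving deeper arrays intact
jq -cn '[[[1], [[[[2]]]]]] | flatten(1)'
# -> [[1],[[[[2]]]]]
```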
I'd like to take 50 or so JSON files and simply concatenate them with jq.
The files look like this:
file-1.json
{
"name": "john"
}
file-2.json
{
"name": "Xiaoming"
}
I want one file, file-all.json, that looks like:
[
{
"name": "john"
},
{
"name": "Xiaoming"
}
]
An array of all the other files.
How do I do that? :(
If your files are named following a sequence like in your example, then you can use this:
jq -s '.' file-{1..50}.json > file-all.json
If you want all of the objects from those files combined in a single array:
jq -n '[inputs]' file-{1..50}.json > file-all.json
I have a dictionary that looks like this:
{
"uid": "d6fc3e2b-0001a",
"name": "ABC Mgmt",
"type": "host"
}
{
"uid": "d6fc3e2b-0002a",
"name": "Server XYZ",
"type": "group"
}
{
"uid": "d6fc3e2b-0003a",
"name": "NTP Primary",
"type": "host"
}
{
"uid": "d6fc3e2b-0004a",
"name": "H-10.10.10.10",
"type": "host"
}
Then I have a txt file:
"d6fc3e2b-0001a"
"d6fc3e2b-0001a","d6fc3e2b-0002a","d6fc3e2b-0003a"
"d6fc3e2b-0004a"
Expected Output:
"ABC Mgmt"
"ABC Mgmt","Server XYZ","NTP Primary"
"H-10.10.10.10"
I'm having trouble making jq use an array that is not in JSON format. I tried various solutions that I found, but none of them worked. I am rather new to scripting and need some help.
input=file.txt
while IFS= read -r line
do
{
value=$(jq -r --arg line "$line" \
'from_entries | .[($line | split(","))[]]' \
dictionary.json)
echo "$value"
}
done < "$input"
In the following solution, the dictionary file is read using the --slurpfile command-line option, and the lines of "text" are read using inputs in conjunction with the -n command-line option. The -r command-line option is used in conjunction with the #csv filter to produce the desired output.
Invocation
jq -n -R -r --slurpfile dict stream.json -f program.jq stream.txt
program.jq
(INDEX($dict[]; .uid) | map_values(.name)) as $d
| inputs
| split(",")
| map(fromjson)
| map($d[.])
| #csv
Caveat
The above assumes that the quoted values in stream.txt do not themselves contain commas.
If the quoted values in stream.txt do contain commas, then it would be much easier if the values given on each line in stream.txt were given as JSON entities, e.g. as an array of strings, or as a sequence of JSON strings with no separator character.
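To see what the first line of the program builds, here is the INDEX/map_values step in isolation, with made-up uids (a sketch, assuming jq 1.6+ where INDEX/2 was introduced):

```shell
# INDEX(stream; expr) reduces a stream of objects into one object keyed by
# expr; map_values(.name) then shrinks each entry to a uid -> name lookup table
jq -cn '[{uid:"u1",name:"ABC Mgmt"},{uid:"u2",name:"Server XYZ"}] | INDEX(.[]; .uid) | map_values(.name)'
# -> {"u1":"ABC Mgmt","u2":"Server XYZ"}
```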
Solution to problem described in a comment
Invocation
< original.json jq -r --slurpfile dict stream.json -f program.jq
program.jq
(INDEX($dict[]; .uid) | map_values(.name)) as $d
| .source
| map($d[.])
| #csv
I have a json array that looks like this:
{
"StackSummaries": [
{
"CreationTime": "2016-06-01T22:22:49.890Z",
"StackName": "foo-control-eu-west-1",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-06-01T22:47:58.433Z"
},
{
"CreationTime": "2016-04-13T11:22:04.250Z",
"StackName": "foo-bar-testing",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-04-26T16:17:07.570Z"
},
{
"CreationTime": "2016-04-10T01:09:49.428Z",
"StackName": "foo-ldap-eu-west-1",
"StackStatus": "UPDATE_COMPLETE",
"LastUpdatedTime": "2016-04-17T13:44:04.758Z"
}
]
}
I am looking to create text output that looks like this:
foo-control-eu-west-1
foo-bar-testing
foo-ldap-eu-west-1
Is jq able to do this? Specifically, what would the jq command line be that would select each StackName in the array and output each key one per line?
jq --raw-output '.StackSummaries[].StackName'
$ jq -r '[.StackSummaries[] | .StackName] | unique[]' input.json
foo-bar-testing
foo-control-eu-west-1
foo-ldap-eu-west-1
The -r option strips the quotation marks from the output. You might not want the call to 'unique'.
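Note that unique also sorts, which is why the stack names above come out in alphabetical rather than input order. For example:

```shell
# unique de-duplicates and sorts in one step
jq -cn '["foo-control","foo-bar","foo-control"] | unique'
# -> ["foo-bar","foo-control"]
```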
For reference, if you wanted all the key names:
$ jq '[.StackSummaries[] | keys[]] | unique' input.json
[
"CreationTime",
"LastUpdatedTime",
"StackName",
"StackStatus"
]
Here is another solution:
jq -M -r '..|.StackName?|values' input.json
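This works because `..` visits every value in the document recursively, `.StackName?` extracts the key where it exists (the `?` suppresses errors on arrays and scalars), and `values` drops the nulls produced by objects without that key. The same pattern on a small hypothetical document:

```shell
# .. recurses through every value; .name? extracts the key where present;
# values filters out the resulting nulls
printf '%s' '{"x":{"name":"n1"},"y":[{"name":"n2"}]}' |
  jq -c '[.. | .name? | values]'
# -> ["n1","n2"]
```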