Sample JSON file:
[
{
"Header": {
"Tenant": "Test-d1",
"Stage": "dev",
"ProductType": "b2b",
"Rcode": 401
},
"Body": {
"error": {
"code": 401,
"message": "Unsupported authorization scheme"
}
}
},
{
"Header": {
"Tenant": "2734d7ac0f0e",
"Stage": "unknown",
"ProductType": "unknown",
"Rcode": 404
},
"Body": {
"error": {
"code": 404,
"message": "Not found"
}
}
}
]
Desired output:
Test-d1, dev, b2b, Unsupported authorization scheme, 401
2734d7ac0f0e, unknown, unknown, Not found, 404
So skip the keys; I'm interested only in certain values, which should form a single line, separated by commas, semicolons, or some other separator.
The simplest procedure I could imagine with jq was to put the values into an array and use @csv:
jq -r '.[] | [ .Header.Tenant, .Header.Stage, .Header.ProductType, .Body.error.message, .Body.error.code ] | @csv'
The above almost does what I want, but it encloses every value in double quotes. I could deal with the quotes using some other tool, but I'm sure it must be possible within jq itself.
What are alternative approaches using jq?
Thanks
@csv is guaranteed to produce CSV, but if you want strings presented unconditionally without the surrounding quotation marks, you could consider using join(", ") instead of @csv. Since you indicated you're open to some other value separator, you might also wish to consider @tsv.
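For example, a minimal sketch against the sample input (the tostring on the numeric code is there because older jq releases only join strings; jq 1.6+ converts numbers automatically):
jq -r '.[] | [.Header.Tenant, .Header.Stage, .Header.ProductType, .Body.error.message, (.Body.error.code | tostring)] | join(", ")'
This should print exactly the desired output:
Test-d1, dev, b2b, Unsupported authorization scheme, 401
2734d7ac0f0e, unknown, unknown, Not found, 404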
The redundancy in the jq program can also be reduced, so you might end up with:
.[]
| (.Header | [.Tenant, .Stage, .ProductType]) +
  (.Body.error | [.message, .code])
| @tsv
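Against the sample input, that program should emit tab-separated lines along the lines of:
Test-d1	dev	b2b	Unsupported authorization scheme	401
2734d7ac0f0e	unknown	unknown	Not found	404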
Given the example JSON below:
{
"account_number": [
"123456"
],
"account_name": [
"name"
],
"account_id": [
654321
],
"username": [
"demo"
]
}
I'd like to get:
{
"account_number": "123456",
"account_name": "name",
"account_id": 654321,
"username": "demo"
}
Currently, I'm brute-forcing it with | sed 's/\[//g' | sed 's/\]//g' | jq '.', but of course that's ugly and causes issues if any of the values contain [ or ].
I've been unsuccessful with jq's flatten and with loop/mapping techniques like jq -s '{Item: .[]} | .Item | add' to flatten the single-item arrays. Ideally, each single-element array [...] would be flattened to its bare value. Either way, I want something better than replacing all occurrences of square brackets.
Short and sweet:
map_values(first)
Use with_entries, changing each value to its first element:
jq 'with_entries(.value |= .[0])' file.json
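Either filter should turn the sample file into the desired output:
jq 'map_values(first)' file.json
{
  "account_number": "123456",
  "account_name": "name",
  "account_id": 654321,
  "username": "demo"
}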
I am trying to grab the nth item from a nested JSON array. The scenario is that I need to get the IP addresses of newly created cloud instances from my cloud provider so I can perform automation tasks with Ansible. Here is a sample of the JSON output from the API that my cloud provider gives (details obscured for privacy and security reasons):
[
{
"alerts": {
"cpu": 180,
"io": 10000,
"network_in": 10,
"network_out": 10,
"transfer_quota": 80
},
"backups": {
"enabled": false,
"last_successful": null,
"schedule": {
"day": null,
"window": null
}
},
"created": "2022-04",
"group": "",
"hypervisor": "kvm",
"id": 36084613,
"image": "ubuntu20.04",
"ipv4": [
"12.34.56.78", #<--- Need to grab this public address
"192.168.x.x" #<--- and this private address
],
"ipv6": "0000::0000/128",
"label": "node-1",
"region": "us",
"specs": {
"disk": 81920,
"memory": 4096,
"transfer": 4000,
"vcpus": 2
},
"status": "running",
"tags": [],
"type": "standard",
"updated": "2022-04",
"watchdog_enabled": true
}
]
I need to get the public IP address so I can add the node to an inventory file. So far, I have managed to get the following:
$ cat json.json | jq -r '.[0].ipv4'
[
"12.34.56.78",
"192.168.x.x"
]
I can get what I want by re-piping into jq, but I feel there has to be a more elegant way to do so.
$ cat json.json | jq -r '.[0].ipv4' | jq -r '.[0]'
12.34.56.78
$ cat json.json | jq -r '.[0].ipv4' | jq -r '.[1]'
192.168.x.x
New to posting on Stack Overflow, so I apologize in advance if someone has already answered this in another thread. I looked around and couldn't find what I was looking for. Thanks! 😀
It seems you want:
jq -r '.[0].ipv4[]'
or perhaps:
jq -r '.[].ipv4[]'
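With the sample data, the first command prints both addresses of instance 0, one per line:
$ jq -r '.[0].ipv4[]' json.json
12.34.56.78
192.168.x.x
And if only the public address is needed, a single invocation with an index also works: jq -r '.[0].ipv4[0]' json.json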
I am trying to do what I think should be a fairly simple filter but I keep running into errors. I have this JSON:
{
"versions": [
{
"archived": true,
"description": "Cod version 3.3/Sprint 8",
"id": "11500",
"name": "v 3.3",
"projectId": 11500,
"releaseDate": "2016-03-15",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/11500",
"startDate": "2016-02-17",
"userReleaseDate": "14/Mar/16",
"userStartDate": "16/Feb/16"
},
{
"archived": true,
"description": "Hot fix",
"id": "12000",
"name": "v3.3.1",
"projectId": 11500,
"releaseDate": "2016-03-15",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/12000",
"startDate": "2016-03-15",
"userReleaseDate": "14/Mar/16",
"userStartDate": "14/Mar/16"
},
{
"archived": false,
"id": "29704",
"name": "Sync-diff v1.0.0",
"projectId": 11500,
"releaseDate": "2022-02-16",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/29704",
"startDate": "2022-02-06",
"userReleaseDate": "15/Feb/22",
"userStartDate": "05/Feb/22"
}
]
}
I just want to return any userReleaseDate that ends with '22'.
I can get the boolean result by:
jq '.versions[].userReleaseDate | endswith("22")'
prints out false, false, true
But I am not sure how to retrieve the objects. I tried variations of this:
[.versions[] as $keys | $keys select(endswith("22"))]
and each threw an error. Any help would be appreciated.
This was so close:
jq '.versions[].userReleaseDate | endswith("22")'
Rather than outputting whether they end with 22 or not, you want to select the values which end with 22. Fixed:
jq '.versions[].userReleaseDate | select( endswith("22") )'
Now, your question asks for the dates that end with 22, but the title suggests you want the objects. For that, you'd want something a little different. We want to select from the versions, not from the dates.
jq '.versions[] | select( .userReleaseDate | endswith("22") )' # As a stream
jq '[ .versions[] | select( .userReleaseDate | endswith("22") ) ]' # As an array
jq '.versions | map( select( .userReleaseDate | endswith("22") ) )' # As an array
There are a number of issues with [ .versions[] as $keys | $keys select(endswith("22")) ].
The keys of an array aren't usually called keys but indexes, so $indexes would be a better name.
Except .versions[] gets the values of the array elements, not the keys/indexes. $values would be a better name.
Except the variable only takes on a single value at a time. $value would be a better name.
$version would be an even better name.
There's a | missing between $keys and select(endswith("22")).
There's no mention of userReleaseDate anywhere.
The result is placed in an array (because of the [ ]). There's no need or desire for this.
You could use
.versions[] as $version | $version.userReleaseDate | select(endswith("22"))
or
.versions[].userReleaseDate as $date | $date | select(endswith("22"))
But these are just overly-complicated versions of
jq '.versions[].userReleaseDate | select( endswith("22") )'
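All of these print the single matching date from the sample input:
"15/Feb/22"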
Use select directly on the list of objects, extract and check the release date inside its argument:
jq '.versions[] | select(.userReleaseDate | endswith("22"))'
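Against the sample input, this yields the one matching version object:
{
  "archived": false,
  "id": "29704",
  "name": "Sync-diff v1.0.0",
  "projectId": 11500,
  "releaseDate": "2022-02-16",
  "released": true,
  "self": "https://xxxxxxx.atlassian.net/rest/api/2/version/29704",
  "startDate": "2022-02-06",
  "userReleaseDate": "15/Feb/22",
  "userStartDate": "05/Feb/22"
}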
{
"logs": [
{
"timestamp": "20181216T14:36:12",
"description": "IP connectivity via interface ipmp1 has become degraded.",
"type": "alert",
"uuid": "1234567",
"severity": "Minor"
},
{
"timestamp": "20181216T14:38:16",
"description": "Network connectivity via port ibp4 has been established.",
"type": "alert",
"uuid": "12345678",
"severity": "Minor"
}
]
}
I have this JSON object, and I want to iterate through each object and update the timestamp to a more readable date. Right now, I have:
$currentLogs.logs |
Where{$_.type -eq 'alert'} |
ForEach{$_.timestamp = {[datetime]::parseexact($_.timestamp, 'yyyyMMdd\THH:mm:ss', $null)}}
But when I read the object $currentLogs, it still hasn't updated.
You will need to first parse your date/time and then apply the formatting you want. If you apply no formatting, the timestamp property will be a [datetime] object, and the conversion back to JSON will apply its own odd formatting to it. It is best to make the new value a string so that it won't be manipulated by the JSON serialization:
$currentLogs.logs | Where type -eq 'alert' | ForEach-Object {
$_.timestamp = [datetime]::parseexact($_.timestamp, 'yyyyMMddTHH:mm:ss', $null).ToString('yyyy-MM-dd HH:mm:ss')
}
In your attempt, you used the following code:
{[datetime]::parseexact($_.timestamp, 'yyyyMMdd\THH:mm:ss', $null)}
The surrounding {} denotes a script block. If that script block is never called or invoked, it just outputs its own contents verbatim. You can run the above code in your console and see that result.
You also did not format your datetime object after the parse attempt. The console implicitly applies ToString() when displaying a datetime property, but that implicit formatting does not carry over to the JSON conversion.
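A quick hypothetical console demo of both points (the datetime display depends on your culture settings):
PS> $block = { [datetime]::ParseExact('20181216T14:36:12', 'yyyyMMddTHH:mm:ss', $null) }
PS> $block      # not invoked: prints its own source text verbatim
 [datetime]::ParseExact('20181216T14:36:12', 'yyyyMMddTHH:mm:ss', $null)
PS> & $block    # invoked: the resulting datetime gets the console's implicit formatting
Sunday, December 16, 2018 2:36:12 PM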
Thanks for showing the desired format.
To update those elements where the 'type' equals 'alert', you can do this:
$json = @'
{
"logs": [
{
"timestamp": "20181216T14:36:12",
"description": "IP connectivity via interface ipmp1 has become degraded.",
"type": "alert",
"uuid": "1234567",
"severity": "Minor"
},
{
"timestamp": "20181216T14:38:16",
"description": "Network connectivity via port ibp4 has been established.",
"type": "alert",
"uuid": "12345678",
"severity": "Minor"
}
]
}
'@ | ConvertFrom-Json
# find the objects with 'type' equals 'alert'
$json.logs | Where-Object { $_.type -eq 'alert' } | ForEach-Object {
# parse the date in its current format
$date = [datetime]::ParseExact($_.timestamp, 'yyyyMMddTHH:mm:ss', $null)
# and write back with the new format
$_.timestamp = '{0:yyyy-MM-dd HH:mm:ss}' -f $date
}
# convert back to json
$json | ConvertTo-Json
If you'd like to save to a file, append | Set-Content -Path 'X:\Path\To\Updated.json' to the last line above.
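For reference, the two timestamps in the re-serialized JSON should then read "2018-12-16 14:36:12" and "2018-12-16 14:38:16".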
I have a series of JSON files that I've been working with via jq. Each file consists of an array of dictionaries, e.g.
file1.json: [{ "id": 1 }]
file2.json: [{ "id": 2 }]
The only command I have found which successfully merges all input files into one output array is:
jq --slurp '.[0] + .[1]' file1.json file2.json
This command outputs [{ "id": 1 }, { "id": 2 }], as expected.
I'm writing a shell script which is expected to merge a variable set of files into a single JSON array as an output. My script will execute something like:
find . -type f -iname '*.json' | xargs jq 'FILTER'
This should invoke jq like jq 'FILTER' file1.json file2.json ....
Is there a feature that I'm missing that will take all input files and first merge them into one contiguous list of objects without having to rewrite the filter to something like .[0] + .[1] + .[2] ...?
Given:
1.json
[{ "id": 1 }]
2.json
[{ "id": 2 }]
3.json
[{ "id": 3 }]
Then this command:
jq --slurp 'map(.[])' 1.json 2.json 3.json
Returns:
[
{
"id": 1
},
{
"id": 2
},
{
"id": 3
}
]
Or simply:
jq --slurp 'flatten' 1.json 2.json 3.json
It's generally best to avoid the -s option, especially if your version of jq supports inputs, as do all versions >= 1.5.
In any case, if your version of jq supports inputs, you could write:
jq -n '[inputs[]]' 1.json 2.json 3.json # etc
or whichever variant meets your needs.
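The -n matters here: with it, inputs sees every file; without it, the first input is consumed as . before inputs runs and is lost. A quick illustration (compact output via -c):
$ jq -nc '[inputs[]]' 1.json 2.json 3.json
[{"id":1},{"id":2},{"id":3}]
$ jq -c '[inputs[]]' 1.json 2.json 3.json
[{"id":2},{"id":3}]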
Otherwise, you could simply write:
jq -s add 1.json 2.json 3.json # etc
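Here add concatenates the slurped arrays pairwise, behaving like the .[0] + .[1] + ... chain without hard-coding indices:
$ jq -sc add 1.json 2.json 3.json
[{"id":1},{"id":2},{"id":3}]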
Note on flatten
flatten itself is ruthless:
$ jq flatten <<< '[[[1], [[[[2]]]]]]'
[1,2]
flatten(1) is less so.
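For example, on the same input, flatten(1) removes only one level of nesting:
$ jq 'flatten(1)' <<< '[[[1], [[[[2]]]]]]'
[[1],[[[[2]]]]]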