{
"logs": [
{
"timestamp": "20181216T14:36:12",
"description": "IP connectivity via interface ipmp1 has become degraded.",
"type": "alert",
"uuid": "1234567",
"severity": "Minor"
},
{
"timestamp": "20181216T14:38:16",
"description": "Network connectivity via port ibp4 has been established.",
"type": "alert",
"uuid": "12345678",
"severity": "Minor"
}
]
}
I have this JSON object, and I want to iterate through each object and update the timestamp to a more readable date. Right now, I have
$currentLogs.logs |
Where{$_.type -eq 'alert'} |
ForEach{$_.timestamp = {[datetime]::parseexact($_.timestamp, 'yyyyMMdd\THH:mm:ss', $null)}}
But when I read the object $currentLogs, it still hasn't updated.
You will need to parse your date/time first and then apply the formatting you want. If you apply no formatting, the timestamp property will hold a [datetime] object, and the conversion back to JSON will serialize it in an unexpected format. It is best to store the new value as a string so that it won't be reformatted by the JSON serialization:
$currentLogs.logs | Where type -eq 'alert' | ForEach-Object {
$_.timestamp = [datetime]::parseexact($_.timestamp, 'yyyyMMddTHH:mm:ss', $null).ToString('yyyy-MM-dd HH:mm:ss')
}
In your attempt, you used the following code:
{[datetime]::parseexact($_.timestamp, 'yyyyMMdd\THH:mm:ss', $null)}
The use of surrounding {} denotes a script block. If that script block is not called or invoked, it will just output its contents verbatim. You can run the above code in your console and see that result.
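A quick way to see this in a console (a minimal sketch using the same parse call):
# Braces create a script block; without invocation you just get the code back as text
$block = { [datetime]::ParseExact('20181216T14:36:12', 'yyyyMMddTHH:mm:ss', $null) }
$block      # prints the script block's contents verbatim
& $block    # invoking it with & returns the parsed [datetime]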
You also did not format your datetime object after the parse attempt. When you view the object in the console, PowerShell applies ToString() implicitly to display the value, but that display formatting does not carry over to the JSON conversion, which serializes the underlying DateTime object instead.
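A small sketch of the difference (the exact raw serialization depends on your PowerShell version: Windows PowerShell 5.1 emits a /Date(...)/ literal, newer versions an ISO 8601 string):
# A raw [datetime] property is serialized by ConvertTo-Json, not rendered with the console's display format
$obj = [pscustomobject]@{ timestamp = [datetime]::ParseExact('20181216T14:36:12', 'yyyyMMddTHH:mm:ss', $null) }
$obj | ConvertTo-Json

# Storing a pre-formatted string keeps the value exactly as written
$obj.timestamp = $obj.timestamp.ToString('yyyy-MM-dd HH:mm:ss')
$obj | ConvertTo-Json    # "timestamp": "2018-12-16 14:36:12"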
Thanks for showing the desired format.
To update those elements where the 'type' equals 'alert', you can do this:
$json = @'
{
"logs": [
{
"timestamp": "20181216T14:36:12",
"description": "IP connectivity via interface ipmp1 has become degraded.",
"type": "alert",
"uuid": "1234567",
"severity": "Minor"
},
{
"timestamp": "20181216T14:38:16",
"description": "Network connectivity via port ibp4 has been established.",
"type": "alert",
"uuid": "12345678",
"severity": "Minor"
}
]
}
'@ | ConvertFrom-Json
# find the objects where 'type' equals 'alert'
$json.logs | Where-Object { $_.type -eq 'alert' } | ForEach-Object {
# parse the date in its current format
$date = [datetime]::ParseExact($_.timestamp, 'yyyyMMddTHH:mm:ss', $null)
# and write back with the new format
$_.timestamp = '{0:yyyy-MM-dd HH:mm:ss}' -f $date
}
# convert back to json
$json | ConvertTo-Json
If you would like to save the result to a file, append | Set-Content -Path 'X:\Path\To\Updated.json' to that last line.
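In full, the final step then looks like this (path as the placeholder above):
# convert back to json and save to disk
$json | ConvertTo-Json | Set-Content -Path 'X:\Path\To\Updated.json'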
Given the example JSON below:
{
"account_number": [
"123456"
],
"account_name": [
"name"
],
"account_id": [
654321
],
"username": [
"demo"
]
}
I'd like to get:
{
"account_number": "123456",
"account_name": "name",
"account_id": 654321,
"username": "demo"
}
Currently, I'm brute forcing it with | sed 's/\[//g' | sed 's/\]//g' | jq '.' ... but of course, that's ugly and causes issues if any of the values contain [ or ].
I've been unsuccessful with jq's flatten and other loops and mapping techniques like | jq -s '{Item:.[]} | .Item |add' to try and flatten the single-item arrays. Ideally, it would work where it would flatten arrays [...] to flat elements/objects {...}. Either way something better than replacing all occurrences of square brackets.
Short and sweet:
map_values(first)
Use with_entries, changing each value to the first element of itself:
jq 'with_entries(.value |= .[0])' file.json
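Either filter reproduces the desired output from the sample input, for example (assuming the sample is saved as file.json):
$ jq 'with_entries(.value |= .[0])' file.json
{
  "account_number": "123456",
  "account_name": "name",
  "account_id": 654321,
  "username": "demo"
}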
I am trying to grab the n-th item from a nested JSON array. The scenario is that I need to get the IP addresses of newly created cloud instances from my cloud provider so I can perform automation tasks with Ansible. Here is a sample of the JSON output from my cloud provider's API (details obscured for privacy and security reasons):
[
{
"alerts": {
"cpu": 180,
"io": 10000,
"network_in": 10,
"network_out": 10,
"transfer_quota": 80
},
"backups": {
"enabled": false,
"last_successful": null,
"schedule": {
"day": null,
"window": null
}
},
"created": "2022-04",
"group": "",
"hypervisor": "kvm",
"id": 36084613,
"image": "ubuntu20.04",
"ipv4": [
"12.34.56.78", #<--- Need to grab this public address
"192.168.x.x" #<--- and this private address
],
"ipv6": "0000::0000/128",
"label": "node-1",
"region": "us",
"specs": {
"disk": 81920,
"memory": 4096,
"transfer": 4000,
"vcpus": 2
},
"status": "running",
"tags": [],
"type": "standard",
"updated": "2022-04",
"watchdog_enabled": true
}
]
I need to get the public IP address so I can add the node to an inventory file. So far, I have managed to get the following:
$ cat json.json | jq -r '.[0].ipv4'
[
"12.34.56.78",
"192.168.x.x"
]
I can get what I want by piping back into jq, but I feel there has to be a more elegant way to do it.
$ cat json.json | jq -r '.[0].ipv4' | jq -r '.[0]'
12.34.56.78
$ cat json.json | jq -r '.[0].ipv4' | jq -r '.[1]'
192.168.x.x
I'm new to posting on Stack Overflow, so I apologize in advance if someone has already answered this in another thread. I looked around and couldn't find what I was looking for. Thanks! 😀
It seems you want:
jq -r '.[0].ipv4[]'
or perhaps:
jq -r '.[].ipv4[]'
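With the sample above saved as json.json, either expression prints the addresses one per line, so there's no need for a second jq invocation (the second form iterates every element of the outer array rather than just the first):
$ jq -r '.[0].ipv4[]' json.json
12.34.56.78
192.168.x.x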
I am trying to do what I think should be a fairly simple filter but I keep running into errors. I have this JSON:
{
"versions": [
{
"archived": true,
"description": "Cod version 3.3/Sprint 8",
"id": "11500",
"name": "v 3.3",
"projectId": 11500,
"releaseDate": "2016-03-15",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/11500",
"startDate": "2016-02-17",
"userReleaseDate": "14/Mar/16",
"userStartDate": "16/Feb/16"
},
{
"archived": true,
"description": "Hot fix",
"id": "12000",
"name": "v3.3.1",
"projectId": 11500,
"releaseDate": "2016-03-15",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/12000",
"startDate": "2016-03-15",
"userReleaseDate": "14/Mar/16",
"userStartDate": "14/Mar/16"
},
{
"archived": false,
"id": "29704",
"name": "Sync-diff v1.0.0",
"projectId": 11500,
"releaseDate": "2022-02-16",
"released": true,
"self": "https://xxxxxxx.atlassian.net/rest/api/2/version/29704",
"startDate": "2022-02-06",
"userReleaseDate": "15/Feb/22",
"userStartDate": "05/Feb/22"
}
]
}
I just want to return any userReleaseDate that ends with '22'
I can get the boolean result by:
jq '.versions[].userReleaseDate | endswith("22")'
prints out false, false, true
But I am not sure how to retrieve the objects. I tried variations of this:
[.versions[] as $keys | $keys select(endswith("22"))]
and each threw an error. Any help would be appreciated.
This was so close:
jq '.versions[].userReleaseDate | endswith("22")'
Rather than outputting whether they end with 22 or not, you want to select the values which end with 22. Fixed:
jq '.versions[].userReleaseDate | select( endswith("22") )'
Now, your question asks for the dates that end with 22, but the title suggests you want the objects. For that, you'd want something a little different. We want to select from the versions, not from the dates.
jq '.versions[] | select( .userReleaseDate | endswith("22") )' # As a stream
jq '[ .versions[] | select( .userReleaseDate | endswith("22") ) ]' # As an array
jq '.versions | map( select( .userReleaseDate | endswith("22") ) )' # As an array
There are a number of issues with [ .versions[] as $keys | $keys select(endswith("22")) ].
The keys of array elements aren't usually called keys but indexes, so $indexes would be a better name.
Except .versions[] gets the values of the array elements, not the keys/indexes. $values would be a better name.
Except the variable only takes on a single value at a time. $value would be a better name.
$version would be an even better name.
There's a | missing between $keys and select(endswith("22")).
There's no mention of userReleaseDate anywhere.
The result is placed in an array (because of the [ ]). There's no need or desire for this.
You could use
.versions[] as $version | $version.userReleaseDate | select(endswith("22"))
or
.versions[].userReleaseDate as $date | $date | select(endswith("22"))
But these are just overly-complicated versions of
jq '.versions[].userReleaseDate | select( endswith("22") )'
Use select directly on the list of objects, extract and check the release date inside its argument:
jq '.versions[] | select(.userReleaseDate | endswith("22"))'
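With the sample data (file name assumed to be file.json), only the third version's userReleaseDate ends in "22", so the object-selecting form returns just that object:
$ jq '.versions[] | select(.userReleaseDate | endswith("22"))' file.json
{
  "archived": false,
  "id": "29704",
  "name": "Sync-diff v1.0.0",
  "projectId": 11500,
  "releaseDate": "2022-02-16",
  "released": true,
  "self": "https://xxxxxxx.atlassian.net/rest/api/2/version/29704",
  "startDate": "2022-02-06",
  "userReleaseDate": "15/Feb/22",
  "userStartDate": "05/Feb/22"
}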
Sample JSON file:
[
{
"Header": {
"Tenant": "Test-d1",
"Stage": "dev",
"ProductType": "b2b",
"Rcode": 401
},
"Body": {
"error": {
"code": 401,
"message": "Unsupported authorization scheme"
}
}
},
{
"Header": {
"Tenant": "2734d7ac0f0e",
"Stage": "unknown",
"ProductType": "unknown",
"Rcode": 404
},
"Body": {
"error": {
"code": 404,
"message": "Not found"
}
}
}
]
Desired output:
Test-d1, dev, b2b, Unsupported authorization scheme, 401
2734d7ac0f0e, unknown, unknown, Not found, 404
So skip the keys; I'm only interested in certain values, and I want them on a single line, separated by commas, semicolons, or some other separator.
The simplest procedure I could imagine with jq was to put the values into an array and use @csv:
jq -r '.[] | [ .Header.Tenant, .Header.Stage, .Header.ProductType, .Body.error.message, .Body.error.code ] | @csv'
The above almost does what I wanted, but it encloses every string value in double quotes. I can deal with the double quotes using some other tools, but I'm sure it should be possible within jq itself.
What are alternative approaches using jq?
Thanks
@csv is guaranteed to produce CSV, but if you want strings to be presented unconditionally without the surrounding quotation marks, you could consider using join(", ") instead of @csv. Since you indicated you're open to some other value-separator, you might also wish to consider @tsv.
The redundancy in the jq program can also be reduced, so you might end up with:
.[]
| (.Header | [.Tenant, .Stage, .ProductType]) +
(.Body.error | [.message, .code ])
| @tsv
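For completeness, a minimal sketch of the join(", ") variant run against the sample file (file name assumed; the numeric code is piped through tostring so join accepts it on any jq version), which gives exactly the comma-separated lines asked for:
$ jq -r '.[]
  | (.Header | [.Tenant, .Stage, .ProductType]) +
    (.Body.error | [.message, (.code | tostring)])
  | join(", ")' file.json
Test-d1, dev, b2b, Unsupported authorization scheme, 401
2734d7ac0f0e, unknown, unknown, Not found, 404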
In the example JSON at the bottom of this question, how can I count the number of key/value pairs in the array "Tags" using JMESPath?
According to the JMESPath documentation, I can do this using the count() function -
For example, the following expression creates an array containing the total number of elements in the foo object followed by the value of foo["bar"].
However, it seems that the documentation is incorrect. Using the JMESPath website, the query Reservations[].Instances[].[count(@), Tags] yields the result [ [ null ] ]. I then tested via the AWS command line and an error was returned -
Unknown function: count()
Is there actually a way of doing this using JMESPath?
Example JSON -
{
"Reservations": [
{
"Instances": [
{
"InstanceId": "i-asdf1234",
"InstanceName": "My Instance",
"Tags": [
{
"Value": "Value1",
"Key": "Key1"
},
{
"Value": "Value2",
"Key": "Key2"
},
{
"Value": "Value3",
"Key": "Key3"
},
{
"Value": "Value4",
"Key": "Key4"
}
]
}
]
}
]
}
The answer here is that the JMESPath documentation is shocking, and for some reason I was seeing out-of-date documentation (check the bottom-right corner of the screen to see which version you are viewing).
I can do what I need to do using the length() function -
Reservations[].Instances[].Tags[] | length(@)
I managed to incorporate this usage of length, length(Tags[*]), within a larger statement that I think is useful and wanted to share:
aws ec2 describe-instances --region us-west-2 --query 'Reservations[*].Instances[*].{id: InstanceId, ami_id: ImageId, type: InstanceType, tag_count: length(Tags[*])}' --profile prod --output table;
--------------------------------------------------------------------
| DescribeInstances |
+--------------+-----------------------+------------+--------------+
| ami_id | id | tag_count | type |
+--------------+-----------------------+------------+--------------+
| ami-abc123 | i-redacted1 | 1 | m3.medium |
| ami-abc456 | i-redacted2 | 7 | m3.xlarge |
| ami-abc789 | i-redacted3 | 12 | t2.2xlarge |
+--------------+-----------------------+------------+--------------+