Map array to other array values in jq

How to map array to other array values in jq?
I have two JSON arrays.
[
{
"date": "2021/9/12",
"rate": 7,
"path": "f"
},
{
"date": "2021/9/13",
"rate": 8,
"path": "f"
},
{
"date": "2021/9/14",
"rate": 8,
"path": "f"
}
]
[
"562949953740755",
"562949953740743",
"562949953740744"
]
I want to have a result like this below.
[
{
"date": "2021/9/12",
"rate": 7,
"path": "f",
"inode": "562949953740755"
},
{
"date": "2021/9/13",
"rate": 8,
"path": "f",
"inode": "562949953740743"
},
{
"date": "2021/9/14",
"rate": 8,
"path": "f",
"inode": "562949953740744"
}
]
I tried:
jq -s '.[1] as $file | .[0] | (.[].path) |= (range($file|length) as $i | $file[$i])' <(cat a.json) <(cat b.json)
but I have no clue how to achieve this.

Don't reinvent the transpose wheel.
jq -s 'transpose | map(.[0] + {inode: .[1]})' a.json b.json
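A minimal check of the transpose approach, with the two arrays fed inline on stdin instead of from a.json/b.json (the values here are shortened placeholders):

```shell
# jq -s slurps both JSON documents into one two-element array; transpose
# pairs them up element-wise, and each pair is merged into one object.
printf '%s\n' '[{"path":"f"},{"path":"g"}]' '["111","222"]' |
jq -cs 'transpose | map(.[0] + {inode: .[1]})'
# -> [{"path":"f","inode":"111"},{"path":"g","inode":"222"}]
```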

Use the array file (inode) as reference and slurp its content ahead of processing the original file.
jq --slurpfile inode b.json '
reduce range(0, ($inode[0]|length)) as $d (.; .[$d] += {inode: $inode[0][$d]})' a.json
Note that this works only as long as both JSON arrays have the same number of elements.
Another approach that avoids slurping the input files (probably faster than the earlier one):
jq -n 'input as $inode | input |
reduce range(0, length) as $d (.; .[$d] += {inode: $inode[$d]})' b.json a.json
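The same input-based filter can be sanity-checked without files by feeding both documents on stdin; jq consumes them in order, inode array first (shortened placeholder values):

```shell
# First input: the inode array; second input: the objects to decorate.
printf '%s\n' '["111","222"]' '[{"path":"f"},{"path":"g"}]' |
jq -cn 'input as $inode | input
        | reduce range(0, length) as $d (.; .[$d] += {inode: $inode[$d]})'
# -> [{"path":"f","inode":"111"},{"path":"g","inode":"222"}]
```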


jq - subtracting one array from another using a single command

I have three operations with jq to get the right result. How can I do it within one command?
Here is a fragment from the source JSON file
[
{
"Header": {
"Tenant": "tenant-1",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0b0b-0c0c",
"name": "NumberOfSearchResults"
},
{
"id": "aaaa0001-0a0a",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-2",
"Rcode": 200
},
"Body": {
"values": []
}
},
{
"Header": {
"Tenant": "tenant-3",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "cccca0003-0b0b",
"name": "LoadTest"
}
]
}
},
{
"Header": {
"Tenant": "tenant-4",
"Rcode": 200
},
"Body": {
"values": [
{
"id": "0f0g-0e0a",
"name": "NumberOfSearchResults"
}
]
}
}
]
I apply two filters and create two intermediate JSON files. First I create the list of all tenants
jq -r '[.[].Header.Tenant]' source.json >all-tenants.json
And then I create an array of all tenants that do not have a particular value present in the Body.values[] array:
jq -r '[.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]' source.json >filter1.json
Results - all-tenants.json
["tenant-1",
"tenant-2",
"tenant-3",
"tenant-4"
]
filter1.json
["tenant-2",
"tenant-4"
]
And then I subtract filter1.json from all-tenants.json to get the difference:
jq -r -n --argfile filter filter1.json --argfile alltenants all-tenants.json '$alltenants - $filter|.[]'
Result:
tenant-1
tenant-3
Tenant names (the values of the "Tenant" key) are unique; each occurs only once in the source.json file.
Just to clarify: I understand that I could use select condition(s) to get the same result as subtracting the two arrays.
What I want to understand is how to assign these two arrays to variables and use them directly in a single command, without the intermediate files.
Thanks
Use your filters to fill in the values of a new object and use the keys to refer to the arrays.
jq -r '{
"all-tenants": [.[].Header.Tenant],
"filter1": [.[]|select (all(.Body.values[]; .name !="LoadTest"))]|[.[].Header.Tenant]
} | .["all-tenants"] - .filter1 | .[]'
Note: the .["all-tenants"] syntax is required because of the special character "-" in that key. See the entry under Object Identifier-Index in the manual.
how can I assign and use these two arrays into vars directly in a single command not involving the intermediate files?
Simply store the intermediate arrays as jq "$-variables":
[.[].Header.Tenant] as $x
| ([.[] | select (all(.Body.values[]; .name !="LoadTest") ) ] | [.[].Header.Tenant]) as $y
| $x - $y
If you want to itemize the contents of $x - $y, then simply add a final .[] to the pipeline.
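As a quick check, the variable-based filter can be run against a trimmed two-tenant sample (field names as in the question; the per-tenant extraction is inlined into $y):

```shell
# tenant-1 has a LoadTest entry, tenant-2 does not, so $x - $y leaves tenant-1.
echo '[{"Header":{"Tenant":"tenant-1"},"Body":{"values":[{"name":"LoadTest"}]}},
       {"Header":{"Tenant":"tenant-2"},"Body":{"values":[]}}]' |
jq -r '[.[].Header.Tenant] as $x
       | ([.[] | select(all(.Body.values[]; .name != "LoadTest")) | .Header.Tenant]) as $y
       | ($x - $y)[]'
# -> tenant-1
```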

How get index of array with jq

I want to search for this string tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854 in this structure:
{
"txid": "67bc5194442dc350312a7c0a5fc7ef912c31bf00b23349b4c3afdf177c91fb2f",
"hash": "8392ded0647e4166eda342cee409c7d0e1e3ffab24de41866d2e6a7bd0a245b3",
"version": 2,
"size": 245,
"vsize": 164,
"weight": 653,
"locktime": 1764124,
"vin": [
{
"txid": "69eed058cbd18b3bf133c8341582adcd76a4d837590d3ae8fa0ffee1d597a8c3",
"vout": 0,
"scriptSig": {
"asm": "0014759fc698313da549948940508df6db93a319096e",
"hex": "160014759fc698313da549948940508df6db93a319096e"
},
"txinwitness": [
"3044022014a8eb758063c52bc970d42013e653f5d3fb3c190b55f7cfa72680280cc5138602202a873b5cad4299b2f52d8cccb4dcfa66fa6ec256d533788f54440d4cdad7dd6501",
"02ec8ba22da03ed1870fe4b9f9071067a6a1fda6f582c5c858644e44bd401bfc0a"
],
"sequence": 4294967294
}
],
"vout": [
{
"value": 0.37841708,
"n": 0,
"scriptPubKey": {
"asm": "0 686bc8ce41505642c96f3eb99919fff63f4c0f11",
"hex": "0014686bc8ce41505642c96f3eb99919fff63f4c0f11",
"reqSigs": 1,
"type": "witness_v0_keyhash",
"addresses": [
"tb1qdp4u3njp2pty9jt086uejx0l7cl5crc3x3phwd"
]
}
},
{
"value": 0.00022000,
"n": 1,
"scriptPubKey": {
"asm": "0 0b173480108e035f92b1f52dbf4e90474f7b36dc",
"hex": "00140b173480108e035f92b1f52dbf4e90474f7b36dc",
"reqSigs": 1,
"type": "witness_v0_keyhash",
"addresses": [
"tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854"
]
}
}
],
"hex": "02000000000101c3a897d5e1fe0ffae83a0d5937d8a476cdad821534c833f13b8bd1cb58d0ee690000000017160014759fc698313da549948940508df6db93a319096efeffffff022c6b410200000000160014686bc8ce41505642c96f3eb99919fff63f4c0f11f0550000000000001600140b173480108e035f92b1f52dbf4e90474f7b36dc02473044022014a8eb758063c52bc970d42013e653f5d3fb3c190b55f7cfa72680280cc5138602202a873b5cad4299b2f52d8cccb4dcfa66fa6ec256d533788f54440d4cdad7dd65012102ec8ba22da03ed1870fe4b9f9071067a6a1fda6f582c5c858644e44bd401bfc0a1ceb1a00",
"blockhash": "000000009acb8b4f06a97beb23b3d9aeb3df71052dabec94465933b564c27f50",
"confirmations": 2,
"time": 1591687001,
"blocktime": 1591687001
}
I'd like to get the index within vout, in this case 1. Is that possible with jq?
It's not clear what exactly you want.
I guess you want the n of the element of vout that contains the given address in its addresses list. That can be achieved with
jq '.vout[]
| select(.scriptPubKey.addresses[] == "tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854")
| .n
' file.json
You can also use
select((.scriptPubKey.addresses[]
| contains("tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854")))
to search for the address.
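A trimmed check of the select-based lookup (structure as in the question, addresses replaced with short placeholders):

```shell
# select passes through only the vout element whose addresses array
# contains the target string; .n then yields its index field.
echo '{"vout":[{"n":0,"scriptPubKey":{"addresses":["addr1"]}},{"n":1,"scriptPubKey":{"addresses":["addr2"]}}]}' |
jq '.vout[] | select(.scriptPubKey.addresses[] == "addr2") | .n'
# -> 1
```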
The following assumes that you want the index in .vout of the first object which has the given string as a leaf value, and that you have in mind using 0 as the index origin.
A simple and reasonably efficient jq program that finds all such indices is as follows:
.vout
| range(0;length) as $i
| if any(.[$i]|..;
. == "tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854")
then $i
else empty
end
With the given input, this in fact yields 1, which is in accordance with the problem description, so we seem to be on the right track.
To get the first index, you could wrap the above in first(...), but in that case the result would be the empty stream if there is no occurrence. So perhaps you would prefer to wrap the above in first(...) // null
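The index-scanning program can be exercised on a trimmed structure (placeholder addresses); it emits every matching index in .vout, here just 1:

```shell
# range(0; length) generates candidate indices; any(.[$i] | ..; ...)
# tests whether the target string occurs anywhere in that element's subtree.
echo '{"vout":[{"scriptPubKey":{"addresses":["addr1"]}},{"scriptPubKey":{"addresses":["addr2"]}}]}' |
jq '.vout
    | range(0; length) as $i
    | if any(.[$i] | ..; . == "addr2") then $i else empty end'
# -> 1
```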
You could try something like this:
vout='{{ your json }}'
value="tb1qpvtnfqqs3cp4ly4375km7n5sga8hkdkujkm854"
result=$(echo "$vout" | jq -r --arg value "$value" '.vout[] | select(.scriptPubKey.addresses | index($value)) | .n')

How can I round the number in the last column to 2 decimal places using jq?

How do I round the number in the last column to 2 decimal places?
I have json:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 9,
"successful": 9,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 2.575364,
"hits": [
{
"_index": "my-2017-08",
"_type": "log",
"_id": "AV5V8l0oDDWj-VP3YnCw",
"_score": 2.575364,
"_source": {
"acb": {
"version": 1,
"id": "7",
"owner": "pc",
"item": {
"name": "Account Average Latency",
"short_name": "Generate",
"description": "Generate of last month"
},
"service": "gsm"
},
"@timestamp": "2017-07-31T22:00:00.000Z",
"value": 210.08691986891395
}
},
{
"_index": "my-2017-08",
"_type": "log",
"_id": "AV5V8lbE28ShqBNuBl60",
"_score": 2.575364,
"_source": {
"acb": {
"version": 1,
"id": "5",
"owner": "pc",
"item": {
"name": "Profile Average Latency",
"short_name": "Profile",
"description": "Profile average latency of last month"
},
"service": "gsm"
},
"@timestamp": "2017-07-31T22:00:00.000Z",
"value": 370.20963260148716
}
}
]
}
}
I use JQ to get csv data:
["Name","Description","Result"],(.hits.hits[]._source | [.acb.item.name,.acb.item.description,.value])|@csv
I see result:
"Name","Description","Result"
"Account Average Latency","Generate of last month",210.08691986891395
"Profile Average Latency","Profile average latency of last month",370.20963260148716
I have 210.08691986891395 and 370.20963260148716 but I want 210.09 and 370.21
Depending on your build of jq, you may have access to some cstdlib math functions (e.g., sin or cos). Since you're on *nix, you very likely do. In my particular build, I don't seem to have access to round but perhaps you do.
def roundit: .*100.0|round/100.0;
["Name","Description","Result"],
(.hits.hits[]._source | [.acb.item.name, .acb.item.description, (.value|roundit)])
| @csv
Fortunately, it could be implemented in terms of floor which I do have access to.
def roundit: .*100.0 + 0.5|floor/100.0;
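Applied to one of the sample values, the floor-based fallback behaves as expected:

```shell
# Scale up, shift by 0.5, floor, scale back down: classic round-half-up.
echo '210.08691986891395' |
jq 'def roundit: .*100.0 + 0.5 | floor / 100.0; roundit'
# -> 210.09
```

Note that the +0.5/floor trick rounds half-up and misbehaves for negative inputs, where the builtin round (when available) is the safer choice.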
Here is your current filter with minor reformatting:
["Name", "Description", "Result"]
, ( .hits.hits[]._source
| [.acb.item.name, .acb.item.description, .value]
)
| @csv
Here is a filter which rounds the value column. Note that we do this after the @csv so that we have full control over the string.
def round: # e.g.
(split(".") + ["0"])[:2] # ["210","08691986891395"]
| "\(.[1])000"[:3] as $x | [.[0], $x[:2], $x[2:3]] # ["210","08","6"]
| map(tonumber) # [210,8,6]
| if .[2] > 4 then .[2] = 0 | .[1] += 1 else . end # [210,9,0]
| if .[1] > 99 then .[1] = 0 | .[0] += 1 else . end # [210,9,0]
| ["\(.[0])", "00\(.[1])"[-2:]] # ["210","09"]
| join(".") # 210.09
;
( ["Name", "Description", "Result"] | @csv )
, ( .hits.hits[]._source
| [.acb.item.name, .acb.item.description, .value]
| @csv
| split(",") | .[-1] |= round | join(",")
)
If this filter is in filter.jq and the sample data is in data.json then the command
$ jq -Mr -f filter.jq data.json
produces
"Name","Description","Result"
"Account Average Latency","Generate of last month",210.09
"Profile Average Latency","Profile average latency of last month",370.21
I would pass it to awk via pipeline:
jq -r '["Name","Description","Result"],(.hits.hits[]._source |
[.acb.item.name,.acb.item.description,.value])|@csv' yourfile |
awk 'BEGIN{ FS=OFS="," }NR>1{ $3=sprintf("%.2f",$3) }1'
The output:
"Name","Description","Result"
"Account Average Latency","Generate of last month",210.09
"Profile Average Latency","Profile average latency of last month",370.21

How to use jq to produce a cartesian product of two arrays present in the input JSON

I'd like to be able to use jq to output the 'product' of 2 arrays in the input JSON... for example, given the following input JSON:
{
"quantities": [
{
"product": "A",
"quantity": 30
},
{
"product": "B",
"quantity": 10
}
],
"portions": [
{
"customer": "C1",
"percentage": 0.6
},
{
"customer": "C2",
"percentage": 0.4
}
]
}
I'd like to produce the following output (or similar...):
[
{
"customer": "C1",
"quantities": [
{
"product": "A",
"quantity": 18
},
{
"product": "B",
"quantity": 6
}
]
},
{
"customer": "C2",
"quantities": [
{
"product": "A",
"quantity": 12
},
{
"product": "B",
"quantity": 4
}
]
}
]
So in other words, for each portion, take its percentage and apply it to each product quantity. Given 2 quantities and 2 portions this should yield 4 results; given 3 quantities and 2 portions, 6 results, and so on.
I've made some attempts using foreach filters, but to no avail...
I think this will do what you want.
[
.quantities as $q
| .portions[]
| .percentage as $p
| {
customer,
quantities: [
$q[] | .quantity = .quantity * $p
]
}
]
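Run against a trimmed version of the sample input (one product, both portions), the filter above produces:

```shell
# For each portion, scale every quantity by that portion's percentage.
echo '{"quantities":[{"product":"A","quantity":30}],
      "portions":[{"customer":"C1","percentage":0.6},{"customer":"C2","percentage":0.4}]}' |
jq -c '[ .quantities as $q
       | .portions[]
       | .percentage as $p
       | { customer, quantities: [ $q[] | .quantity = .quantity * $p ] } ]'
```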
Since you indicated you want the Cartesian product, and that you only gave the sample output as being indicative of what you're looking for, it may be worth mentioning that one can obtain the Cartesian product very simply:
.portions[] + .quantities[]
This produces objects such as:
{
"product": "B",
"quantity": 10,
"customer": "C2",
"percentage": 0.4
}
You could then use reduce or (less efficiently, group_by) to obtain the data in whatever form it is you really want.
For example, assuming .customer is always a string, we could transform
the input into the requested format as follows:
def add_by(f;g): reduce .[] as $x ({}; .[$x|f] += [$x|g]);
[.quantities[] + .portions[]]
| map( {customer, quantities: {product, quantity: (.quantity * .percentage)}} )
| add_by(.customer; .quantities)
| to_entries
| map( {customer: .key, quantities: .value })

variant of jq from_entries that collate values for each key occurrence

Can I use jq to run a filter that behaves similarly to from_entries, with the one difference being, if multiple entries for the same key are encountered, it will collate the values into an array, rather than just use the last value?
If so, what filter would achieve this? For example, if my input is:
[
{
"key": "a",
"value": 1
},
{
"key": "b",
"value": 2
},
{
"key": "a",
"value": 3
},
{
"key": "b",
"value": 4
}
]
then the desired output would be:
{ "a": [1,3], "b": [2,4] }
Note that using from_entries alone as the filter keeps only the last value seen for each key (that is, { "a": 3, "b": 4 }).
With your example and the following lines in merge.jq:
def merge_entries:
reduce .[] as $pair ({}; .[$pair["key"]] += [$pair["value"]] );
merge_entries
the invocation: jq -c -f merge.jq
yields:
{"a":[1,3],"b":[2,4]}
You could also use the invocation:
jq 'reduce .[] as $p ({}; .[$p.key] += [$p.value])'
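Running the inline reduce over the sample input confirms that duplicate keys are collated into arrays:

```shell
# += [...] appends each value to the array under its key, creating it on first use.
echo '[{"key":"a","value":1},{"key":"b","value":2},{"key":"a","value":3},{"key":"b","value":4}]' |
jq -c 'reduce .[] as $p ({}; .[$p.key] += [$p.value])'
# -> {"a":[1,3],"b":[2,4]}
```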
