POST data in JSON array format with curl command in Linux

I am able to POST data with curl in JSON format from a loop in a bash script.
Current
$(curl -o /dev/null -s -X POST "${url}" -H "accept: application/json" -H "Authorization: ${base}" -H "Content-Type: application/json" -d "{\"data1\":\"${data1}\",\"data2\":\"${data2}\"}")
[
  {
    "data1": "data1",
    "data2": "data2"
  },
  {
    "data3": "data3",
    "data4": "data4"
  }
]
But my requirement is to produce the JSON array format below with curl. Please help me append to the JSON array with the curl command on every iteration of the loop; a sketch follows the desired output.
{"Array": [
  {
    "data1": "data1",
    "data2": "data2"
  },
  {
    "data3": "data3",
    "data4": "data4"
  }
]
}
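One way to get there (a sketch, not from the original thread; it assumes bash with jq available, and that url, base, data1, and data2 are set as in the snippet above): accumulate the objects in a shell variable holding a JSON array, then wrap and POST it once after the loop.

#!/bin/bash
# Sketch: build up a JSON array across loop iterations with jq,
# then wrap it as {"Array": [...]} and POST the whole document once.
payload='[]'
for i in 1 2; do
  data1="data$i"              # stand-ins for the values computed in your loop
  data2="data$((i + 1))"
  payload=$(jq --arg d1 "$data1" --arg d2 "$data2" \
    '. + [{data1: $d1, data2: $d2}]' <<<"$payload")
done
# Wrap the accumulated array and send it in a single request.
jq '{Array: .}' <<<"$payload" |
  curl -o /dev/null -s -X POST "${url}" \
    -H "accept: application/json" \
    -H "Authorization: ${base}" \
    -H "Content-Type: application/json" \
    -d @-

Posting once at the end avoids one HTTP request per iteration and sidesteps the quoting problems of hand-built JSON strings.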

Related

Put JSON keys in a shell script array

I need to put the JSON keys of a file into an array in a shell script. How can I do that?
{
  "employee4" : {
    "aliases" : { }
  },
  "employee3" : {
    "aliases" : { }
  },
  "employee" : {
    "aliases" : { }
  },
  "employee2" : {
    "aliases" : { }
  }
}
I need to have an array like keys["employee", "employee2", "employee3", "employee4"].
If there are more keys, the array needs to pick them up as well.
The jq keys function returns a list of keys. So with your example data in data.json, we see:
$ jq 'keys' data.json
[
"employee",
"employee2",
"employee3",
"employee4"
]
To get rid of the JSON list, we run:
$ jq -r 'keys[]' data.json
employee
employee2
employee3
employee4
And to get that into a bash array:
myarray=( $(jq -r 'keys[]' data.json) )
As @glennjackman mentions in a comment, the above construct will have problems if your keys contain whitespace or shell special characters. For example, given this data:
{
  "employee*" : {
    "aliases" : { }
  }
}
If your directory contains files named employee1 and employee2, then you'll get, effectively:
myarray=( employee1 employee2 )
...which is not what you want. You can fix this by using the mapfile builtin (also known as readarray, which makes its purpose more obvious):
mapfile -t myarray < <(jq -r 'keys[]' data.json)
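For example, to consume the resulting array safely afterwards (a quick illustration using the same data.json):

for key in "${myarray[@]}"; do
  printf 'key: %s\n' "$key"   # each key arrives intact, whitespace and all
done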
The JSON-parser xidel can do what you want.
To return the JSON keys / attributes:
$ xidel -s input.json -e '$json()' # or -e 'map:keys($json)'
employee4
employee3
employee
employee2
To create the Bash array:
$ mapfile -t myArr < <(xidel -s input.json -e '$json()')
Alternatively, xidel can do this too with --output-format=bash:
$ eval "$(xidel -s input.json -e 'myArr:=$json()' --output-format=bash)"
In both cases this will result in:
$ printf '%s\n' "${myArr[*]}" ${#myArr[*]} ${myArr[0]} ${myArr[1]}
employee4 employee3 employee employee2
4
employee4
employee3

Check if tag of an image exists in a docker registry using shell script

I have a list of images and their respective tags in a given images.json file, like below:
{
  "image1": "1.1",
  "image2": "1.2",
  "image3": "1.3"
}
I was able to use a sed command to read values from the above images.json and store each column in an array, such as
image=${arraydata[0]}
tag=${arraydata[1]}
For each image, I would like to compare if its tag exists in the below output I get from a curl command:
curl -X GET https://docker.jfrog.io/artifactory/api/docker/docker-local/v2/$image/tags/list?
Output of above:
{
  "name" : "image1",
  "tags" : [ "1.1", "1.2" ]
}{
  "name" : "image2",
  "tags" : [ "1.1", "1.2", "1.3" ]
}{
  "name" : "image3",
  "tags" : [ "1.1", "1.2", "1.4" ]
}
I would like to have an if/else statement to check whether the tag for an image exists in the results above:
If it exists, then do nothing
Else, do docker push jfrog/$image:$tag
Perhaps (untested). This assumes your shell is bash and uses jq for your JSON-parsing needs:
declare -A versions

# Build an associative array mapping image name -> wanted version.
while IFS=$'\t' read -r img ver; do
  versions[$img]=$ver
done < <(
  jq -r '. as $obj | keys[] | [., $obj[.]] | @tsv' images.json
)
# versions=( [image1]="1.1" [image2]="1.2" [image3]="1.3" )

for img in "${!versions[@]}"; do
  ver=${versions[$img]}
  url="https://docker.jfrog.io/artifactory/api/docker/docker-local/v2/$img/tags/list?"
  artifacts=$(curl -s -X GET "$url")
  # Exact-match test; jq's contains() would also match substrings
  # (e.g. "1.1" inside "1.10"), so compare elements directly instead.
  result=$(jq --arg ver "$ver" 'any(.tags[]; . == $ver)' <<<"$artifacts")
  if [[ $result == true ]]; then
    echo "image $img has version $ver"
  else
    echo "image $img does not have version $ver"
    # per the question, this is where you would run:
    # docker push "jfrog/$img:$ver"
  fi
done
status=$(wget --spider --server-response http://any_url:5000/v2/repository/image-name-you-query/manifests/latest 2>&1 | grep "HTTP/" | awk '{print $2}')
This snippet will print 200 if the image exists and something else if there is any problem. In particular, if it does not exist, you will get 404.
You can then leverage this in your shell script like this:
if [ "$status" -eq 200 ]; then
  # do something
else
  # do something else
fi
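Tying this back to the question's requirement (a sketch; $image and $tag come from the asker's earlier parsing step, and $status from the wget check above):

if [ "$status" -eq 200 ]; then
  : # tag already exists in the registry; nothing to do
else
  docker push "jfrog/$image:$tag"
fi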

Modifying JSON inside bash loop using jq - reading/writing from files or store all data at vars?

I have the following JSON template file - tmpl.json
{
  "locations": [],
  "name": "",
  "script": {
    "events": [
      {
        "description": "",
        "type": "navigate",
        "url": "",
        "wait": {
          "waitFor": "page_complete"
        }
      }
    ],
    "version": "1.0"
  },
  "type": "BROWSER"
}
I need to use the above file as a template and fill in .locations[], .name, .script.events[].description and .script.events[].url dynamically inside a loop, and then, in the same loop, send the resulting JSON with a curl PUT call.
The content to be added to locations[] is a static array in a separate loc.json file:
["LOCATION-577B","LOCATION-D7FF","LOCATION-8BE4","LOCATION-0CE9"]
While the values for other keys are calculated dynamically inside the loop.
Here is how I manipulate the data to create a temporary JSON file on each iteration of the loop. $1 is the parameter calculated in the loop and passed to the function, which creates the temporary JSON that is then used with curl.
jq --slurpfile loc "loc.json" \
--arg URL "$1" \
'.locations|=$loc[] |
.name=$URL |
.script.events[].url=$URL |
.script.events[].description="Loading of URL \"" +$URL + "\""' \
"tmpl.json" >"$1-temp.json"
While the above works, I don't consider it a very clean or efficient way to deal with the problem: I need to iterate the loop over 1000 times, which means creating 1000 temporary files locally and cleaning them up afterward.
What would be a better way to deal with the problem? Read both the static locations array and the template file into variables via heredocs and use them inside the loop?
Or assign the resulting JSON output to a variable and then use it in the curl PUT call?
However, in the latter case, whitespace and other special characters need careful handling... The template file I've shown is just a fragment of the whole JSON file, which contains many more key/values, but I need to modify only the keys outlined above.
Update/clarification: the $1 parameter used in the jq call is a single URL without the http/https prefix. The list of URLs is computed by another function/jq call and assigned to a bash var $URL_list. This var is then used in the for loop to call the function which creates the updated JSON for each URL.
The REST calls also use the $curl_combined_parameter and $curl_combined_update_put vars, which are full requests with various parameters, but they have no bearing on the problem I am trying to solve.
So the cut-down version of the whole script is the following:
#!/bin/bash

# Initiate the REST call which generates the URL list
function get_url_list() {
  # The function initiates a REST call via curl with $curl_combined_parameter,
  # pipes the result to `jq`, and assigns the resulting list to the var URL_list.
  services_json=$(curl -s \
    --location \
    --request GET \
    "$curl_combined_parameter" \
    --header "Authorization: Bearer ${token}")
  # Now we filter the resulting `json` and get the list of sites
  URL_list=$(echo "$services_json" |
    jq -r ' map(.Body[].webServerName |
      select( (. != null and endswith(":443") ) and ( test("commerce|backoffice") | not ) ) ) |
      unique[] | .[0:-4] ')
}

function update_json() {
  jq --slurpfile loc "loc.json" \
    --arg URL "$1" \
    '.locations|=$loc[] |
     .name=$URL |
     .script.events[].url=$URL |
     .script.events[].description="Loading of URL \"" +$URL + "\""' \
    "tmpl.json" >"$1-temp.json"
}

push_changes() {
  # take the resulting `JSON` generated by `update_json` and push it via a curl PUT call
  curl --location \
    --request PUT \
    "$curl_combined_update_put" \
    -H "accept: application/json; charset=utf-8" \
    -H "Content-Type: application/json; charset=utf-8" \
    -H "Authorization: Bearer ${token}" \
    -d "@$1-temp.json" >>"$1-updated.json"
}

for i in $URL_list; do
  update_json "$i"
  push_changes "$i"
done
Suggestions are welcome. I just want to reduce creating unnecessary temp files and encapsulate all data inside the script.
Thanks.
First, we're going to want to make your URL_list be an array instead of a string:
readarray -t URL_list < <(
  jq -r 'map(.Body[].webServerName |
           select( (. != null and endswith(":443") ) and
                   ( test("commerce|backoffice") | not ) ) ) |
         unique[] | .[0:-4]' <<<"$services_json"
)
Next, we're going to make only one copy of jq take all your URLs as line-oriented input, and emit one JSON document per URL on stdout, with a tab between the URL and the document itself:
build_updated_json_documents() {
  jq --slurpfile loc "loc.json" \
     --argjson tmpl "$(<tmpl.json)" \
     -c -Rr '
       ($tmpl | .locations|=$loc[]) as $tmpl_with_loc |
       . as $URL |
       ($tmpl_with_loc |
        .name=$URL |
        .script.events[].url=$URL |
        .script.events[].description="Loading of URL \"\($URL)\"") |
       "\($URL)\t\(. | tojson)"
     ' < <(printf '%s\n' "${URL_list[@]}")
}
...and pipe the resulting stream into a function that reads it line by line and runs the curl requests, piping directly from the copy of curl that would otherwise generate a temp file into the one that produces your final result to be stored:
handle_each_document() {
  while IFS=$'\t' read -r url doc; do
    # first, ask our remote server to update this document for us
    # (would be nice if the server would do this in bulk, no?)
    # ...and then forward that request to the other server.
    curl --location \
         --request PUT \
         "$curl_combined_update_put" \
         -H "accept: application/json; charset=utf-8" \
         -H "Content-Type: application/json; charset=utf-8" \
         -H "Authorization: Bearer ${token}" \
         -d @- <<<"$doc" \
      | curl --location \
             --request PUT \
             "$curl_combined_update_put" \
             -H "accept: application/json; charset=utf-8" \
             -H "Content-Type: application/json; charset=utf-8" \
             -H "Authorization: Bearer ${token}" \
             -d @- >"${url}-updated.json"
  done
}
build_updated_json_documents | handle_each_document
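To sanity-check the stream format before the curl calls run, a quick spot check (hypothetical, not part of the original answer) could be:

# Field 1 of each record is the URL, field 2 the JSON document.
build_updated_json_documents | head -n 1 | cut -f2 | jq .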

convert data of text file into JSON array in bash

I want output in the below JSON array format with a command on a linux/bash platform. Can anybody help?
Data in the text file:
test:test
test1:test1
test4:test4
Expecting output:
{array :
[
{test:test},
{test1:test1},
{test4:test4}
]
}
Using the jq command-line JSON parser:
<file jq -Rs '{array:split("\n")|map(split(":")|{(.[0]):.[1]}?)}'
{
  "array": [
    {
      "test": "test"
    },
    {
      "test1": "test1"
    },
    {
      "test4": "test4"
    }
  ]
}
The options -R and -s make jq read the whole file as one raw string.
The script splits this string on newlines, then splits each line on ":" to build one {key: value} object per line; the trailing ? silently drops anything that doesn't fit the pattern, such as the empty string left by the final newline.
Assuming you want actual JSON output:
$ jq -nR '{array: (reduce inputs as $line ([]; . + [$line | split(":") | {(.[0]):.[1]}]))}' input.txt
{
  "array": [
    {
      "test": "test"
    },
    {
      "test1": "test1"
    },
    {
      "test4": "test4"
    }
  ]
}
Creating JSON works best with a dedicated JSON tool that can also make sense of raw text.
xidel is such a tool.
XPath:
xidel -s input.txt -e '{"array":x:lines($raw) ! {substring-before(.,":"):substring-after(.,":")}}'
(x:lines($raw) is a shorthand for tokenize($raw,'\r\n?|\n'), which creates a sequence of all lines.)
XQuery:
xidel -s input.txt --xquery '{"array":for $x in x:lines($raw) let $a:=tokenize($x,":") return {$a[1]:$a[2]}}'
See also this Xidel online tester.

Parse json response and get all specific ID and store them into array then delete from shell script

I have read some questions and answers on this forum, but still haven't found what I am looking for.
Here is the response from the curl cmd.
I would like to get all the ids and store them into an array, then delete those IDs using curl cmds as well:
[{
  "_links": {
    "list": {
      "href": "http://10.10.10.185:8080/vndfs/lcd/v3/vdfs"
    },
    "modifyInfo": {
      "href": "http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/TEST1-5cda5079a2cb47c28466bc1983f8b2e6"
    }
  },
  "description": "Purple is my color",
  "id": "TEST1-5cda5079a2cb47c28466bc1983f8b2e6"
}, {
  "_links": {
    "list": {
      "href": "http://10.10.10.185:8080/vndfs/lcd/v3/vdfs"
    },
    "modifyInfo": {
      "href": "http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/TEST2-5cda5079a2cb47c28466bc1983f8b2e6"
    }
  },
  "description": "Blue is my color",
  "id": "TEST2-5cda5079a2cb47c28466bc1983f8b2e6"
}]
getid.sh:
#!/bin/bash
VDF=`curl -s -X GET http://10.10.10.185:8080/vdfs/lcd/v3/vdfs`
VDFSID=`echo $VDF | python -c 'import json,sys; response=json.loads(sys.stdin.read()); print response[0]["id"]'`
echo $VDFSID
output from echo:
TEST1-5cda5079a2cb47c28466bc1983f8b2e6
but I want to get all the ids and store them into an array, so that I can then delete each TEST1-XXX and TEST2-XXX id:
for i in response['id']:
curl -XDELETE http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/$i
any advises for this? Thanks in advance.
You can use the jq JSON parser with xargs to inject the results into curl:
curl -s -X GET http://10.10.10.185:8080/vdfs/lcd/v3/vdfs | \
jq -r ' .[] | .id ' | \
xargs -I {} curl -X DELETE http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/{}
This will extract the id values from your JSON array and make successive calls to curl, one per id, against the URL http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/<ID>
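Since the question specifically asks to store the ids in an array first, here is a minimal sketch of that variant (same endpoint; assumes bash with mapfile and jq available):

# Collect all ids into a bash array, then delete them one by one.
mapfile -t ids < <(curl -s -X GET http://10.10.10.185:8080/vdfs/lcd/v3/vdfs | jq -r '.[].id')
for id in "${ids[@]}"; do
  curl -X DELETE "http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/$id"
done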
You can also use jsontool for parsing JSON (install it with npm):
curl -s -X GET http://10.10.10.185:8080/vdfs/lcd/v3/vdfs | \
json -a id | \
xargs -I {} curl -X DELETE http://10.10.10.185:8080/vdfs/lcd/v3/vdfs/{}
