How to use iteration data with newman

I have a collection in Postman which loads "payload objects" from a JSON file, and I want to run it in newman from the command line.
POST request
Body: the body of the POST request contains {{jsonBody}}
Pre-request Script: pm.globals.set("jsonBody", JSON.stringify(pm.iterationData.toObject()));
and a file.json file with objects like this:
[
  {
    "data": {
      "property1": 24,
      "property2": "24__DDL_VXS",
      ...
    }
  },
  {
    "data": {
      "property1": 28,
      "property2": "28__HDL_VDS",
      ...
    }
  },
  ...
]
Works like a charm in Postman.
Here is what I'm trying to run in cmd.
newman run \
-d file.json \
--global-var access_token=$TOK4EN \
--folder '/vlanspost' \
postman/postman_collection_v2.json
Based on the results I am getting, it looks like newman is not resolving the flag:
-d, --iteration-data <path> Specify a data file to use for iterations (either JSON or CSV)
and simply passes the literal string {{jsonBody}} from the Body section as the payload.
Has anyone run into the same issue?
Thanks

I did it this way and it worked.
Put the collection and the data file into the same directory. For example:
C:\USERS\DUNGUYEN\DESKTOP\SO
├── file.json
└── SO.postman_collection.json
From this folder, run the newman command.
newman run .\SO.postman_collection.json -d .\file.json --folder 'vlanspost'
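Applied to the collection and flags from the original question, the equivalent command would look something like this (a sketch assuming both files sit in the current working directory; note the folder is passed as 'vlanspost', without the leading slash used in the question):
newman run postman_collection_v2.json \
  -d file.json \
  --global-var access_token=$TOK4EN \
  --folder 'vlanspost'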

Related

Is there a way in Thruk to extract the 'Expanded command' somewhere, via JSON, REST, or curl?

It shows up on the page, and I can see it in view source in Chrome, but I do not appear to be able to get this info via curl, as the page stops loading, probably due to the JSON/JS involved in formatting the page.
Is there a way to obtain this information via REST, JSON, or curl?
There is a REST endpoint for this:
https://thruk.org/documentation/rest.html#_get-hosts-name-commandline
for hosts and
https://thruk.org/documentation/rest.html#_get-services-host-service-commandline
for services.
Available from the command line:
thruk r /hosts/localhost/commandline
[
  {
    "check_command" : "check-host-alive",
    "command_line" : "/omd/sites/devel/lib/monitoring-plugins/check_icmp -H 127.0.0.1 -w 3000.0,80% -c 5000.0,100% -p 5",
    "error" : "",
    "host_name" : "localhost",
    "peer_key" : "78bcd"
  }
]
The same information is available via curl from
https://thrukhost/thruk/r/hosts/localhost/commandline
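For a quick check from a shell, a curl call could look like the sketch below; the host and API key are placeholders, and the X-Thruk-Auth-Key header assumes you have created an API key in Thruk (ordinary Basic auth works as well):
curl -H "X-Thruk-Auth-Key: <your_api_key>" \
  https://thrukhost/thruk/r/hosts/localhost/commandline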

Apache Solr JSON Querying Local Params

The Solr documentation focuses on how to use GET parameters to define queries, but gives very little information on how to accomplish the same tasks using the better-structured JSON POST support. I have been unable to find any documentation that goes deeper than a very surface-level explanation.
In particular, I'm trying to use local params in my queries and would like to know how to accomplish the following using a JSON POST instead of GET params:
http://localhost:8983/solr/city/query?sort={!sfield=location pt=35.5514,-97.4075}geodist() asc&q={!geofilt sfield=location pt=35.5514,-97.4075 d=5}
According to JSON Request API / Parameters Mapping your query would map to:
{
  "sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc",
  "query": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}"
}
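For reference, that mapped body can be sent with curl roughly like this (host and collection taken from the URL in the question):
curl -H "Content-Type: application/json" -X POST \
  --data '{"sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc", "query": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}"}' \
  http://localhost:8983/solr/city/query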
Just to complete @MatsLindh's answer, you can use the usual parameter names as long as you wrap them in params (no mapping needed), for example:
file.json
{
  "params": {
    "q": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}",
    "sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc",
    "wt": "json",
    "indent": "true"
  }
}
Example request using curl:
curl -H "Content-Type: application/json" -X POST --data @file.json http://localhost:8983/solr/city/query

DB2 Warehouse on Cloud Load_jobs filename or path is not valid failure

I am trying to load data into DB2 WoC (formerly dashDB) from IBM Cloud Object Storage (Softlayer) using the /load_jobs API call.
I always get the error response: SQL3025N,A parameter specifying a filename or path is not valid.,0,n/a
I have tried different formats for the path key, like the following:
us-south/woctestdata/data_example.csv
/woctestdata/data_example.csv
woctestdata/data_example.csv
woctestdata::data_example.csv
I also tried the following suggestions from the comments:
us-south::woctestdata\data_example.csv
us-south::woctestdata::data_example.csv
So I have no more ideas. How should the path be entered correctly?
Here is an example of my request:
curl -X POST \
  https://dashdb-mpp.services.dal.bluemix.net/dashdb-api/v2/load_jobs \
  -H 'Authorization: Bearer <api_key>' \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d '{
    "load_source": "SOFTLAYER",
    "load_action": "INSERT",
    "schema": "MKT_ATBTN",
    "table": "TRANSMISSIN_TABLE1",
    "max_row_count": 0,
    "max_warning_count": 0,
    "cloud_source": {
      "endpoint": "https://tor01.objectstorage.softlayer.net/auth/v1.0",
      "path": "woctestdata/data_example.csv",
      "auth_id": "<auth_id>",
      "auth_secret": "<auth_secret>"
    },
    "server_source": {
      "file_path": "string"
    },
    "stream_source": {
      "file_name": "string"
    },
    "file_options": {
      "code_page": "1208",
      "column_delimiter": ";",
      "string_delimiter": "",
      "date_format": "YYYY-MM-DD",
      "time_format": "HH:MM:SS",
      "timestamp_format": "YYYY-MM-DD HH:MM:SS",
      "cde_analyze_frequency": 0
    }
  }'
I also tried to use the db2 load command to load data from IBM Cloud Object Storage, but with no luck either:
db2 load from Softlayer::https://tor01.objectstorage.softlayer.net/auth/v1.0::IBM:<ibm_email_address>::<password>::woctestdata::data_example.csv of del insert into MKT_ATBTN.TRANSMISSIN_TABLE1;
Result:
Agent Type      Node   SQL Code    Result
_______________________________________________________________________
PRE_PARTITION   000    -00003025   Error.
_______________________________________________________________________
RESULTS: 0 of 0 LOADs completed successfully.
_______________________________________________________________________
Summary of LOAD Agents:
Number of rows read = 0
Number of rows skipped = 0
Number of rows loaded = 0
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 0
SQL3025N A parameter specifying a filename or path is not valid.
To download or access the file you need to get an X-Auth-Token, or set up a static URL for its container through the web page.
X-Auth-Token
I recommend reviewing Managing the Object Storage and softlayer-object-storage-auth-endpoint.
When you run the command
curl -i -H "X-Auth-User: SLOS300001-10:rcuellar" -H "X-Auth-Key: 231222489e90646678364kjsdfhytwterd0259" https://tor01.objectstorage.softlayer.net/auth/v1.0
The response is something like this:
X-Auth-Token: AUTH_tkb26239d441d6401d9482b004d45f7259 – the token we need
X-Storage-Url: https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6
Now you should be able to access the file with a URL similar to the one below:
https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6/woctestdata/data_example.csv
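Putting the two steps together, fetching the object with the token could look like this (using the sample token and storage URL from above):
curl -H "X-Auth-Token: AUTH_tkb26239d441d6401d9482b004d45f7259" \
  https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6/woctestdata/data_example.csv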
Static URL through Web Page
In the portal page go to:
Storage >> Object Storage >> Select Object Storage >> Select Cluster
(e.g. Toronto) >> Select your container
And check the Enable Static Site checkbox.
You can also use endpoint + object path; the path format is bucketname::filename.
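For example, with the bucket and file names from the question, that path would be:
woctestdata::data_example.csv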

how to load a JSON file to solr using `pysolr`?

The following Python code adds a document, but without the JSON contents:
import pysolr

solr_instance = pysolr.Solr('http://192.168.45.153:8983/solr/test', timeout=60)
json_filename = '/path/to/file/test.json'
argws = {
    'commit': 'true',
    'extractOnly': False,
    'Content-Type': 'application/json',
}
with open(json_filename, 'rb') as f:
    solr_instance.extract(f, **argws)
solr_instance.commit()
Using curl from the command line works as expected:
$ curl 'http://192.168.45.153:8983/solr/test/update?commit=true' \
  --data-binary @/path/to/file/test.json \
  -H 'Content-Type: application/json'
The file has the following content:
$ cat /cygdrive/w/mist/test.json
[{"x": "a","y": "b"}]
I'm using pysolr 3.6.0 and Solr 6.5.0.
The extract() method refers to a request made against the ExtractingRequestHandler, which is meant to be used for extracting content from rich documents (such as PDFs, etc.).
You can use the regular .add method to submit the decoded JSON to Solr (note that json.load expects a file object, not a filename):
import json

with open(json_filename) as f:
    solr_instance.add(json.load(f))
.. should work.

SolrCloud in production - querying q=* gives numFound=0

So I have a three-node cluster deployed using ZooKeeper, and I successfully created a test collection (3 shards). Then I ran
curl -X POST -H 'Content-Type: application/json' 'ec2FirstNodeIP:8983/solr/test/update' --data-binary ' [ { "f1" : "1", "f2" : "2", "f3" : "3" } ]'
I got
{"responseHeader":{"status":0,"QTime":38} ...
However, when I run curl "sameIP:8983/solr/test/select?wt=json&indent=true&q=*:*"
I am getting
numFound:0
But after updating the document through the admin UI, the query does return the document.
[image of the admin UI]
What am I missing?
To make documents searchable you have to commit them. Use commit=true:
ec2FirstNodeIP:8983/solr/test/update?commit=true
This should work.
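Applied to the curl call from the question, that would be:
curl -X POST -H 'Content-Type: application/json' \
  'ec2FirstNodeIP:8983/solr/test/update?commit=true' \
  --data-binary '[ { "f1" : "1", "f2" : "2", "f3" : "3" } ]'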
