Is there a way in Thruk to extract the 'Expanded command' somewhere, via JSON, REST, or curl? - nagios

It shows up on the page, and I can see it in the view source in Chrome, but I do not appear to be able to get this info via curl, as the page stops loading, probably due to the JSON/JS involved in formatting the page.
Is there a way to obtain this information via either REST, JSON, or curl?

There is a REST endpoint for this:
https://thruk.org/documentation/rest.html#_get-hosts-name-commandline
for hosts and
https://thruk.org/documentation/rest.html#_get-services-host-service-commandline
for services.
Available from the command line:
thruk r /hosts/localhost/commandline
[
  {
    "check_command" : "check-host-alive",
    "command_line" : "/omd/sites/devel/lib/monitoring-plugins/check_icmp -H 127.0.0.1 -w 3000.0,80% -c 5000.0,100% -p 5",
    "error" : "",
    "host_name" : "localhost",
    "peer_key" : "78bcd"
  }
]
The same information is available via curl from
https://thrukhost/thruk/r/hosts/localhost/commandline
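If you prefer to script it, here is a minimal Python sketch of the same call, assuming the Thruk instance is protected with HTTP basic authentication (the hostname and credentials below are placeholders):
import requests

# Placeholder host and credentials; adjust for your Thruk installation.
THRUK_URL = "https://thrukhost/thruk/r/hosts/localhost/commandline"

resp = requests.get(THRUK_URL, auth=("thrukadmin", "secret"), verify=False)
resp.raise_for_status()

for entry in resp.json():
    # Each entry carries the expanded command line for one host check.
    print(entry["host_name"], entry["command_line"])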

Related

HTTP GET method for Splunk saved search using access token

What would be the correct HTTP GET request syntax to fetch saved search data from Splunk if we're accessing it through an access token?
My curl command is working but http.get is not.
curl command:
#os.system('curl -H "Authorization: Bearer <token>" <baseurl>:8089/services/search/jobs/export --data search="savedsearch abc_backup_status" -d output_mode=csv')
requests call:
import requests

BASE_URL = '<baseurl>:8089/services/search/jobs/export'
data = {"search": "savedsearch abc_backup_status"}
headers = {'Authorization': "Bearer <token>"}
auth_response = requests.get(BASE_URL, headers=headers, data=data, verify=False)
This is giving 400 errors.
The curl options -d or --data imply a POST method by default.
From: https://man7.org/linux/man-pages/man1/curl.1.html
-d, --data <data>
(HTTP MQTT) Sends the specified data in a POST request to
the HTTP server, in the same way that a browser does when
a user has filled in an HTML form and presses the submit
button. This will cause curl to pass the data to the
server using the content-type application/x-www-form-urlencoded. Compare to -F, --form.
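Given that, a minimal sketch of the equivalent requests call would switch to post (the base URL and token are the placeholders from the question):
import requests

BASE_URL = '<baseurl>:8089/services/search/jobs/export'
headers = {'Authorization': 'Bearer <token>'}
data = {
    'search': 'savedsearch abc_backup_status',
    'output_mode': 'csv',
}

# POST mirrors what curl --data sends; verify=False matches curl -k behaviour.
response = requests.post(BASE_URL, headers=headers, data=data, verify=False)
print(response.status_code)
print(response.text)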
It is interesting that the Splunk docs claim that search/jobs/export takes a GET, but you're creating a job to immediately export, which feels like a POST type of operation.
Also, I notice that your search starts with the savedsearch command. If that's a regularly scheduled saved search, you may want to GET saved/searches/{name}/history to get the last execution SID, and then hit the results or events endpoint of that already executed job instead of running a new search.... but that's a use case question.

How to use iteration data with newman

I have a collection in Postman which loads "payload objects" from a JSON file, and I want to run it in newman from the command line.
POST request
Body: in the body of the POST request I have {{jsonBody}}
Pre-request Script: pm.globals.set("jsonBody", JSON.stringify(pm.iterationData.toObject()));
and a file.json file with this kind of "objects":
[
  {
    "data": {
      "propert1": 24,
      "property2": "24__DDL_VXS",
      ...
    }
  },
  {
    "data": {
      "propert1": 28,
      "property2": "28__HDL_VDS",
      ...
    }
  }
  ...
]
Works like a charm in Postman.
Here is what I'm trying to run in cmd.
newman run \
-d file.json \
--global-var access_token=$TOK4EN \
--folder '/vlanspost' \
postman/postman_collection_v2.json
Based on the results I am getting, it looks like newman is not resolving the flag:
-d, --iteration-data <path> Specify a data file to use for iterations (either JSON or CSV)
and simply passes the literal string {{jsonBody}} from the Body section as the payload.
Has anyone had the same issue?
Thanks
I did it this way and it worked.
Put the collection and data file into the same directory. For example:
C:\USERS\DUNGUYEN\DESKTOP\SO
---- file.json
\___ SO.postman_collection.json
From this folder, run the newman command:
newman run .\SO.postman_collection.json -d .\file.json --folder 'vlanspost'
This is the result:
[screenshot of the successful newman run]

Flink REST API POST error while trying to start a new job using the uploaded jar

I am trying to hit the /jars/:jarid/run endpoint to start a Flink job as follows, after reading this SO post:
curl -k -v -X POST -H "Content-Type: application/json" --data '
{
"programArgsList": [
"--runner",
"FlinkRunner",
"--inputTopicName",
"inputTopicNameValue",
"--Argument",
"Value",
"--streaming",
"true"]
}
' http://<JobManager-hostname>:<port>/jars/MyApplication.jar/run
I get the following error when I try the above command -
{"errors":["Internal server error.","<Exception on server side:\norg.apache.flink.client.program.ProgramInvocationException: The main method
caused an error: Argument 'FlinkRunner' does not begin with '--'\n\tat
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)\n\tat
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)\n\tat
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)\n\tat
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
The message Argument 'FlinkRunner' does not begin with '--' leads me to think that the argument values are not being provided correctly in my example. I understand that the Flink documentation provides the JSON schema definition rather than a sample request in the REST API docs. What is the correct way to provide argument values? My example follows what the accepted solution in this post suggested.
The following POST request worked for me, so I am documenting it here:
curl -k -v -X POST -H "Content-Type: application/json" --data '
{
"programArgsList": [
"--runner=FlinkRunner",
"--inputTopicName=inputTopicNameValue",
"--Argument=Value",
"--streaming=true"]
}
' http://<JobManager-hostname>:<port>/jars/MyApplication.jar/run
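For completeness, a minimal Python sketch of the same working request with the requests library (the JobManager host, port, and jar name are the placeholders from above):
import requests

# Placeholder JobManager host/port and jar id, as in the curl example.
FLINK_RUN_URL = "http://<JobManager-hostname>:<port>/jars/MyApplication.jar/run"

payload = {
    "programArgsList": [
        "--runner=FlinkRunner",
        "--inputTopicName=inputTopicNameValue",
        "--Argument=Value",
        "--streaming=true",
    ]
}

# Each entry is a single "--key=value" token so Flink parses it as one argument.
response = requests.post(FLINK_RUN_URL, json=payload)
print(response.status_code, response.json())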

DB2 Warehouse on Cloud Load_jobs filename or path is not valid failure

I am trying to load data into DB2 WoC (formerly dashDB) from IBM Cloud Object Storage (Softlayer) by using the /load_jobs API call.
I always get the error response: SQL3025N,A parameter specifying a filename or path is not valid.,0,n/a
I have tried different formats for the path key, like the following:
us-south/woctestdata/data_example.csv
/woctestdata/data_example.csv
woctestdata/data_example.csv
woctestdata::data_example.csv
I also tried the following suggestions from the comments:
us-south::woctestdata\data_example.csv
us-south::woctestdata::data_example.csv
So I have no more ideas. How should the path be entered correctly?
Here is an example of my request:
curl -X POST \
https://dashdb-mpp.services.dal.bluemix.net/dashdb-api/v2/load_jobs \
-H 'Authorization: Bearer <api_key>' \
-H 'Cache-Control: no-cache' \
-H 'Content-Type: application/json' \
-d '{
  "load_source": "SOFTLAYER",
  "load_action": "INSERT",
  "schema": "MKT_ATBTN",
  "table": "TRANSMISSIN_TABLE1",
  "max_row_count": 0,
  "max_warning_count": 0,
  "cloud_source": {
    "endpoint": "https://tor01.objectstorage.softlayer.net/auth/v1.0",
    "path": "woctestdata/data_example.csv",
    "auth_id": "<auth_id>",
    "auth_secret": "<auth_secret>"
  },
  "server_source": {
    "file_path": "string"
  },
  "stream_source": {
    "file_name": "string"
  },
  "file_options": {
    "code_page": "1208",
    "column_delimiter": ";",
    "string_delimiter": "",
    "date_format": "YYYY-MM-DD",
    "time_format": "HH:MM:SS",
    "timestamp_format": "YYYY-MM-DD HH:MM:SS",
    "cde_analyze_frequency": 0
  }
}'
I also tried to use the db2 load command to load data from IBM Cloud Object Storage, but also with no luck:
db2 load from Softlayer::https://tor01.objectstorage.softlayer.net/auth/v1.0::IBM:<ibm_email_address>::<password>::woctestdata::data_example.csv of del insert into MKT_ATBTN.TRANSMISSIN_TABLE1;
Result:
Agent Type Node SQL Code Result
_______________________________________________________________________
PRE_PARTITION 000 -00003025 Error.
_______________________________________________________________________
RESULTS: 0 of 0 LOADs completed successfully.
_______________________________________________________________________
Summary of LOAD Agents:
Number of rows read = 0
Number of rows skipped = 0
Number of rows loaded = 0
Number of rows rejected = 0
Number of rows deleted = 0
Number of rows committed = 0
SQL3025N A parameter specifying a filename or path is not valid.
To download or access the file you need to get an X-Auth-Token, or configure its container with a static URL through the web page.
X-Auth-Token
I recommend reviewing Managing the Object Storage and softlayer-object-storage-auth-endpoint.
When you run the command
curl -i -H "X-Auth-User: SLOS300001-10:rcuellar" -H "X-Auth-Key: 231222489e90646678364kjsdfhytwterd0259" https://tor01.objectstorage.softlayer.net/auth/v1.0
The response is something like this:
X-Auth-Token: AUTH_tkb26239d441d6401d9482b004d45f7259 (the token we need)
X-Storage-Url: https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6
And now you should be able to access the file with a URL similar to the one below:
https://tor01.objectstorage.softlayer.net/v1/AUTH_df0de35c-d00a-40aa-b697-2b7f1b9331a6/woctestdata/data_example.csv
Static URL through Web Page
In the portal page, go to:
Storage >> Object Storage >> Select Object Storage >> Select Cluster
(e.g. Toronto) >> Select your container
and check the Enable Static Site checkbox.
You can also use the endpoint plus the object path; the path format is bucketname::filename.
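Following that hint, here is a minimal Python sketch of the /load_jobs call with the path in bucketname::filename form (it reuses the placeholders from the question and is only a sketch, not verified against the service):
import requests

LOAD_JOBS_URL = "https://dashdb-mpp.services.dal.bluemix.net/dashdb-api/v2/load_jobs"

payload = {
    "load_source": "SOFTLAYER",
    "load_action": "INSERT",
    "schema": "MKT_ATBTN",
    "table": "TRANSMISSIN_TABLE1",
    "cloud_source": {
        "endpoint": "https://tor01.objectstorage.softlayer.net/auth/v1.0",
        # Path in bucketname::filename form, as suggested above.
        "path": "woctestdata::data_example.csv",
        "auth_id": "<auth_id>",
        "auth_secret": "<auth_secret>",
    },
    "file_options": {
        "code_page": "1208",
        "column_delimiter": ";",
        "date_format": "YYYY-MM-DD",
        "time_format": "HH:MM:SS",
        "timestamp_format": "YYYY-MM-DD HH:MM:SS",
    },
}

headers = {"Authorization": "Bearer <api_key>", "Content-Type": "application/json"}
response = requests.post(LOAD_JOBS_URL, json=payload, headers=headers)
print(response.status_code, response.text)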

SolrCloud in production - querying q=* gives numFound=0

So I have a three-node cluster deployed using ZooKeeper, and I successfully created a test collection (3 shards). Then I ran
curl -X POST -H 'Content-Type: application/json' 'ec2FirstNodeIP:8983/solr/test/update' --data-binary ' [ { "f1" : "1", "f2" : "2", "f3" : "3" } ]'
I got
{"responseHeader":{"status":0,"QTime":38} ...
However, when I run curl "sameIP:8983/solr/test/select?wt=json&indent=true&q=*:*"
I am getting
numFound:0
But after using the admin UI to update the document, the query now returns the document.
[screenshot of the admin UI]
What am I missing?
To make the document searchable you need to commit it; use commit=true:
ec2FirstNodeIP:8983/solr/test/update?commit=true should work.
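A minimal Python sketch of the indexing call with the commit parameter (the host and fields are taken from the question; the requests library is assumed):
import requests

SOLR_UPDATE_URL = "http://ec2FirstNodeIP:8983/solr/test/update"

docs = [{"f1": "1", "f2": "2", "f3": "3"}]

# commit=true makes the documents visible to searches right after indexing.
response = requests.post(SOLR_UPDATE_URL, params={"commit": "true"}, json=docs)
print(response.json())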
