Is there a way to send a GET request with data using Restangular? I'm using Parse as a backend for my AngularJS app, and logging in requires a GET with data:
curl -X GET \
-H "X-Parse-Application-Id: ..." \
-H "X-Parse-REST-API-Key: ..." \
-G \
--data-urlencode 'username=cooldude6' \
--data-urlencode 'password=p_n7!-e8' \
https://api.parse.com/1/login
From: https://parse.com/docs/rest#users-login
I've tried a customOperation, but it doesn't send the data:
var login = Restangular.all('login');
login.customOperation('get', '', {}, {}, {username: 'user'}); // ignore encoding for now
This just sends a GET request to /login but without any data.
I misunderstood the Parse.com documentation. I thought the credentials had to be sent as the body of the request because of the -G in the curl command. But -G actually does the opposite: it tells curl to append the --data-urlencode values to the URL, so the credentials need to be sent as normal query parameters.
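For reference, here is the equivalent request as a Python requests sketch (header values are placeholders, as in the curl above); the credentials travel in the query string, not the body. In Restangular the same thing should be possible by passing the object as the params argument (e.g. via customGET(path, params), if I read the Restangular docs right) rather than as the element/body argument:

import requests

# Equivalent of the curl above: -G turns the --data-urlencode pairs
# into query parameters of a GET request; nothing goes in the body.
headers = {
    "X-Parse-Application-Id": "...",  # placeholders, as in the curl
    "X-Parse-REST-API-Key": "...",
}
params = {"username": "cooldude6", "password": "p_n7!-e8"}

resp = requests.get("https://api.parse.com/1/login", headers=headers, params=params)
print(resp.status_code, resp.json())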
What would be the correct HTTP GET request syntax to fetch saved-search data from Splunk when accessing it through an access token?
My curl command works, but my requests.get call does not.
curl command:
curl -H "Authorization: Bearer <token>" \
  "<baseurl>:8089/services/search/jobs/export" \
  --data search="savedsearch abc_backup_status" \
  -d output_mode=csv
requests call:
import requests

BASE_URL = '<baseurl>:8089/services/search/jobs/export'
data = {"search": "savedsearch abc_backup_status"}
headers = {'Authorization': "Bearer <token>"}
auth_response = requests.get(BASE_URL, headers=headers, data=data, verify=False)
This gives a 400 error.
The curl options -d or --data imply a POST method by default.
From: https://man7.org/linux/man-pages/man1/curl.1.html
-d, --data <data>
    (HTTP MQTT) Sends the specified data in a POST request to the HTTP server, in the same way that a browser does when a user has filled in an HTML form and presses the submit button. This will cause curl to pass the data to the server using the content-type application/x-www-form-urlencoded. Compare to -F, --form.
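So the working curl is actually issuing a POST, and the Python equivalent should do the same. A minimal sketch, mirroring the code above:

import requests

BASE_URL = '<baseurl>:8089/services/search/jobs/export'
data = {"search": "savedsearch abc_backup_status", "output_mode": "csv"}
headers = {'Authorization': "Bearer <token>"}

# -d/--data makes curl send a POST, so use requests.post, not requests.get
auth_response = requests.post(BASE_URL, headers=headers, data=data, verify=False)
print(auth_response.status_code)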
It is interesting that Splunk Docs claim that search/jobs/export takes a GET, but you're creating a job to immediately export, which feels like a POST type of operation.
Also, I notice that your search starts with the savedsearch command. If that is a regularly scheduled saved search, you may want to GET saved/searches/{name}/history to get the SID of the last execution, and then call the results or events endpoint of that already-executed job instead of running a new search (a sketch follows), but that's a use-case question.
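A rough sketch of that flow, assuming the Bearer token works for these endpoints too and that the entry names returned by the history endpoint are the dispatch SIDs:

import requests

BASE = '<baseurl>:8089'
headers = {'Authorization': 'Bearer <token>'}

# List past dispatches of the scheduled saved search (assumption: entry
# names are the SIDs of the already-executed jobs).
hist = requests.get(BASE + '/services/saved/searches/abc_backup_status/history',
                    headers=headers, params={'output_mode': 'json'}, verify=False)
sid = hist.json()['entry'][0]['name']

# Fetch the results of that job instead of dispatching a new search.
results = requests.get(BASE + '/services/search/jobs/' + sid + '/results',
                       headers=headers, params={'output_mode': 'csv'}, verify=False)
print(results.text)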
I am trying to save historical context data in Mongo, but without success. Only the first payload sent to Draco is saved to MongoDB as historical data; Mongo does not react to attribute updates.
Versions used for the test: Orion-LD version 0.8.0, Mongo version 4.4, Draco version 1.3.6.
I also tested with Mongo 3.4 and the behavior is the same.
Can you please help me fix this problem?
Below are the steps I performed:
Create a Draco subscription:
curl --location --request POST 'http://localhost:1026/v2/subscriptions' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
"description": "Notify Draco of all context changes",
"subject": {
"entities": [
{
"idPattern": ".*"
}
]
},
"notification": {
"http": {
"url": "http://10.0.0.5:5050/v2/notify"
}
},
"throttling": 0
}'
Create an entity:
curl --location --request POST 'http://localhost:1026/v2/entities' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw ' {
"id":"urn:ngsi-ld:Product:0102", "type":"Product",
"name":{"type":"Text", "value":"Lemonade"},
"size":{"type":"Text", "value": "S"},
"price":{"type":"Integer", "value": 99}
}'
Overwrite the value of an attribute:
curl --location --request PUT 'http://localhost:1026/v2/entities/urn:ngsi-ld:Product:0102/attrs' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
"price":{"type":"Integer", "value": 110}
}'
Draco flow configuration (screenshots): a LISTEN_HTTP processor feeding an NGSITOMONGO processor, with its Template and MongoDB connection settings.
We do not use that precise stack, but we have many production deployments keeping historical context data in MongoDB by using FIWARE Orion (v2 API) with FIWARE Cygnus (the NGSIMongo sink for historical raw data and the NGSISTH sink for aggregated data in MongoDB).
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_mongo_sink.md
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_sth_sink.md
Maybe this helps.
In the new version of Draco (2.1.0) this bug is fixed. You can check the code in the official repository; the release link is https://github.com/ging/fiware-draco/releases/tag/2.1.0
Additionally, you can use the Docker image available for this release:
docker pull ging/fiware-draco:2.1.0
You can also use the Mongo-Tutorial template available inside Draco, where the processors needed to persist to MongoDB are already configured.
One thing to consider is that the new version of Draco is aligned with NiFi 1.15.3, where you first need to log in to access the Web UI using the default credentials (user: admin, password: pass1234567890). You can check the official documentation for more information: https://fiware-draco.readthedocs.io/en/latest/
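Independently of the Draco version, it may help to verify on the Orion side that notifications are actually fired for each attribute update. Reading the subscription back shows its delivery counters (a sketch with Python requests; host and tenant headers as in the curls above):

import requests

headers = {'Fiware-Service': 'test', 'Fiware-ServicePath': '/openiot'}

# NGSIv2 subscriptions expose notification.timesSent and lastNotification,
# which tell you whether Orion delivered a notification for each update.
subs = requests.get('http://localhost:1026/v2/subscriptions', headers=headers)
for s in subs.json():
    print(s['id'], s['status'],
          s['notification'].get('timesSent'),
          s['notification'].get('lastNotification'))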
I have a POST-data presigned URL for Amazon S3. I want to use it in a Karate feature file to upload a file (say, a PDF).
Here is a sample curl request that I need to perform using a Karate POST request:
curl --location --request POST '<s3bucketURL>' \
--form 'key=some_key_fileName' \
--form 'x-amz-meta-payload={JsonObject}' \
--form 'Content-Type=application/pdf' \
--form 'bucket=<BucketName>' \
--form 'X-Amz-Algorithm=AWS4-HMAC-SHA256' \
--form 'X-Amz-Credential=<AWS_Credential>' \
--form 'X-Amz-Date=<Date>' \
--form 'Policy=<Policy_Hash>' \
--form 'X-Amz-Signature=<Signature_Hash>' \
--form 'file=@/Users/sahildua/validfile.pdf'
I got a response (containing the presigned URL and fields) from the server:
{
    "url": "<s3bucketURL>",
    "fields": {
        "key": "some_key_fileName",
        "x-amz-meta-payload": "{JsonObject}",
        "Content-Type": "application/pdf",
        "bucket": "<BucketName>",
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": "<AWS_Credential>",
        "X-Amz-Date": "<Date>",
        "Policy": "<Policy_Hash>",
        "X-Amz-Signature": "<Signature_Hash>"
    }
}
Then I used this code in the feature file:
Given url response.url
* def fieldData = response.fields
* print fieldData
* form fields fieldData
And multipart file file = { read: '../testData/validPdfFile.pdf'}
When method post
Then match responseStatus == 204
But I get an XML validation error from Amazon S3 about incorrect field values:
<Error>
<Code>InvalidArgument</Code>
<Message>Bucket POST must contain a field named 'key'. If it is specified, please check the order of the fields.</Message>
<ArgumentName>key</ArgumentName>
<ArgumentValue></ArgumentValue>
<RequestId><id></RequestId>
<HostId><someid></HostId>
</Error>
I expect 204 No Content and the file to be uploaded to the S3 bucket.
Try this change:
And multipart file file = { read: '../testData/validPdfFile.pdf'}
Read this for a little more explanation: https://github.com/intuit/karate/tree/develop#multipart-file
Other than that you seem to be doing everything right. So it is up to your debugging skills now. Or give us a way to replicate: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
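One more detail worth checking, based on the error text itself: for S3 browser-based (presigned) POST uploads, AWS requires the file content to be the last field in the multipart form, which is what the "please check the order of the fields" hint points at. As a cross-check outside Karate, a small Python sketch of the same upload (assuming the response shape shown above):

import requests

presigned = response_json  # hypothetical variable: the {"url": ..., "fields": {...}} object above

with open('validPdfFile.pdf', 'rb') as f:
    # requests encodes the data fields before the files parts, so the
    # file ends up last in the form, as S3 requires.
    r = requests.post(
        presigned['url'],
        data=presigned['fields'],
        files={'file': ('validPdfFile.pdf', f, 'application/pdf')},
    )

print(r.status_code)  # expect 204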
When I send a PATCH request to the apps.services.versions.patch method, I get an error response:
{
  "error": {
    "code": 400,
    "message": "At least one field must be specified for this operation.",
    "status": "INVALID_ARGUMENT"
  }
}
I used the Try this API tool for testing.
My curl:
curl --request PATCH \
'https://appengine.googleapis.com/v1/apps/{APP_ID}/services/{SERVICE_ID}/versions/{VERSION}' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{"envVariables":{"TEST_PARAM":"test_value"},"servingStatus":"STOPPED"}' \
--compressed
This API doesn't have any required arguments: https://cloud.google.com/appengine/docs/admin-api/reference/rest/v1/apps.services.versions/patch
As @JohnHanley mentioned, you need to add the updateMask query parameter, which specifies the fields to be updated.
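A sketch of the same request with the mask added (Python requests; the token and path placeholders are as in the curl above):

import requests

url = ('https://appengine.googleapis.com/v1/apps/{APP_ID}'
       '/services/{SERVICE_ID}/versions/{VERSION}')
headers = {
    'Authorization': 'Bearer [YOUR_ACCESS_TOKEN]',
    'Content-Type': 'application/json',
}
# updateMask must list exactly the fields being changed in the body
params = {'updateMask': 'envVariables,servingStatus'}
body = {'envVariables': {'TEST_PARAM': 'test_value'}, 'servingStatus': 'STOPPED'}

r = requests.patch(url, headers=headers, params=params, json=body)
print(r.status_code, r.text)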
I have deployed a random cut forest model endpoint on AWS SageMaker. I am trying to test the inference endpoint with Postman, and I can successfully authenticate to the endpoint with my access and secret keys.
Can someone confirm whether the way I am sending the CSV payload is correct? Something seems wrong, since I get the same score from the endpoint no matter what the third column value is.
'1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
Here is the curl for the Postman-generated code:
curl --request POST \
--url https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/randomcutforest-2018-06-05-01-08-02-956/invocations \
--header 'authorization: AWS4-HMAC-SHA256 Credential=/20180713/us-east-1/sagemaker/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=d51371b2549e132c21a3402824b57258a74e6fa9f078d91a44bf54b0d110ea57' \
--header 'cache-control: no-cache' \
--header 'content-type: text/csv' \
--header 'host: runtime.sagemaker.us-east-1.amazonaws.com' \
--header 'postman-token: cb7cdfa5-025b-e4f4-c033-a4fb685133c4' \
--header 'x-amz-date: 20180713T190238Z' \
--data '1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
{
"scores": [
{
"score": 7.6438561895// This value never changes
}
]
}
The fact that your second column is not numeric is suspicious. RandomCutForest is only supposed to work with numbers.
I'd recommend you use the AWS forum: https://forums.aws.amazon.com/forum.jspa?forumID=285
Would you be able to share the feature_dim you used to train the forest?
Thanks.
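For what it's worth, here is a sketch of invoking the endpoint with a numeric-only payload via boto3 (dropping the non-numeric second column, which assumes a model trained with feature_dim=2; endpoint name as in the question):

import boto3

runtime = boto3.client('sagemaker-runtime', region_name='us-east-1')

# RandomCutForest expects purely numeric features: one CSV row per data
# point, with as many columns as the feature_dim used at training time.
payload = '1530000000000,433190.06640625'

resp = runtime.invoke_endpoint(
    EndpointName='randomcutforest-2018-06-05-01-08-02-956',
    ContentType='text/csv',
    Body=payload,
)
print(resp['Body'].read().decode())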