We want to filter notifications sent to Draco based on an attribute named "devicetime". Only when this attribute is not blank do we want to send data to Draco. How do we achieve this in the Draco subscription? I tried many combinations in the expression, but nothing worked. Can you please help here:
curl -iX POST \
'http://52.172.34.29:1026/v2/subscriptions?options=skipInitialNotification' \
-H 'Content-Type: application/json' \
-H 'fiware-service: openiot' \
-H 'fiware-servicepath: /' \
-d '{
  "description": "Subscription",
  "subject": {
    "entities": [
      {
        "idPattern": ".*"
      }
    ],
    "condition": {
      "attrs": [],
      "expression": {"q": "devicetime==.*"}
    }
  },
  "notification": {
    "http": {
      "url": "http://52.172.34.29:3003/v2/notify"
    },
    "attrs": [],
    "onlyChangedAttrs": true,
    "throttling": 5
  }
}'
As per the NGSIv2 specification: unary negatory statements use the unary operator !, while affirmative unary statements use no operator at all. Unary statements are used to check for the existence of the target property. E.g. temperature matches entities that have an attribute called 'temperature' (no matter its value), while !temperature matches entities that do not have an attribute called 'temperature'.
So you should fix it with:
"expression":{"q":"devicetime"}
I am trying to save historical context data in Mongo, but without success. Only the first payload sent to Draco is saved to MongoDB for historical data, but Mongo does not react to attribute updates.
Versions used for the test: Orion-LD version 0.8.0, Mongo version 4.4, Draco version 1.3.6.
I also tested it with version 3.4 of Mongo and the behavior is the same.
Can you please help me fix this problem?
Below are the steps I performed:
Create a Draco subscription:
curl --location --request POST 'http://localhost:1026/v2/subscriptions' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
  "description": "Notify Draco of all context changes",
  "subject": {
    "entities": [
      {
        "idPattern": ".*"
      }
    ]
  },
  "notification": {
    "http": {
      "url": "http://10.0.0.5:5050/v2/notify"
    }
  },
  "throttling": 0
}'
Create an entity:
curl --location --request POST 'http://localhost:1026/v2/entities' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw ' {
"id":"urn:ngsi-ld:Product:0102", "type":"Product",
"name":{"type":"Text", "value":"Lemonade"},
"size":{"type":"Text", "value": "S"},
"price":{"type":"Integer", "value": 99}
}'
Overwrite the value of an attribute:
curl --location --request PUT 'http://localhost:1026/v2/entities/urn:ngsi-ld:Product:0102/attrs' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
"price":{"type":"Integer", "value": 110}
}'
Draco (NiFi) flow configuration screenshots: the LISTEN_HTTP processor, the NGSITOMONGO processor, the template used, and the MongoDB connection settings.
We do not use that exact stack, but we have many production deployments keeping historical context data in MongoDB by using FIWARE Orion (v2 API) with FIWARE Cygnus (the NGSIMongo sink for historical raw data, and the NGSISTH sink for aggregated data).
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_mongo_sink.md
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_sth_sink.md
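For reference, with that stack persisting raw historical data boils down to pointing an NGSIMongoSink at your MongoDB instance in the Cygnus agent configuration. A rough sketch (the agent, sink, channel, and host names are placeholders; the exact property names should be double-checked against the docs linked above):

cygnus-ngsi.sinks = mongo-sink
cygnus-ngsi.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.NGSIMongoSink
cygnus-ngsi.sinks.mongo-sink.channel = mongo-channel
cygnus-ngsi.sinks.mongo-sink.mongo_hosts = mongo-db:27017
cygnus-ngsi.sinks.mongo-sink.data_model = dm-by-entity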
Maybe this helps.
This bug is fixed in the new version of Draco, 2.1.0. You can check the code in the official repository; the release link is https://github.com/ging/fiware-draco/releases/tag/2.1.0
Additionally, you can use the docker image available for this release by pulling it using docker pull ging/fiware-draco:2.1.0.
You can also use the Mongo-Tutorial template available inside of Draco where you have preconfigured the processors needed to persist in MongoDB.
One thing to consider is that the new version of Draco is aligned with NiFi 1.15.3, where you first need to log in to access the Web UI using the default credentials (user: admin, password: pass1234567890). You can check the official documentation for more information: https://fiware-draco.readthedocs.io/en/latest/
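If it helps, a minimal way to try that image locally (the port mappings below are assumptions based on the usual FIWARE tutorial setup, 9090 for the NiFi web UI and 5050 for the notification listener; check the documentation above for the values that apply to 2.1.0):

docker pull ging/fiware-draco:2.1.0
docker run -d --name draco -p 9090:9090 -p 5050:5050 ging/fiware-draco:2.1.0

Then log in to the web UI with the default credentials mentioned above.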
I have a collection in Postman which loads "payload objects" from a JSON file, and I want to run it in newman from the command line.
POST request
Body: in the body of the POST request I have {{jsonBody}}
Pre-request Script: pm.globals.set("jsonBody", JSON.stringify(pm.iterationData.toObject()));
and a file.json file with this kind of "objects":
[
  {
    "data": {
      "propert1": 24,
      "property2": "24__DDL_VXS",
      ...
    }
  },
  {
    "data": {
      "propert1": 28,
      "property2": "28__HDL_VDS",
      ...
    }
  },
  ...
]
Works like a charm in Postman.
Here is what I'm trying to run in cmd.
newman run \
-d file.json \
--global-var access_token=$TOK4EN \
--folder '/vlanspost' \
postman/postman_collection_v2.json
Based on the results I am getting, it looks like newman is not resolving the flag:
-d, --iteration-data <path> Specify a data file to use for iterations (either JSON or CSV)
and simply passes the literal string from the Body section, {{jsonBody}}, as the payload.
Has anyone had the same issue?
Thanks
I did it this way and it worked.
Put the collection and the data file in the same directory. For example:
C:\USERS\DUNGUYEN\DESKTOP\SO
---- file.json
\___ SO.postman_collection.json
From this folder, run the newman command.
newman run .\SO.postman_collection.json -d .\file.json --folder 'vlanspost'
This is the result:
I had originally created 3 copy fields in my Solr schema:
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field": {"source":"company_name","dest":"_text_"}}' http://my-instance/solr/listing/schema
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field": {"source":"address","dest":"_text_"}}' http://my-instance/solr/listing/schema
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-copy-field": {"source":"city","dest":"_text_"}}' http://my-instance/solr/listing/schema
However, I have recently removed these from the schema and am now composing queries in a slightly different format. For more advanced queries we need edismax.
However, even with edismax turned on I'm receiving an error from the Solr query parser, as per below. Did I break something by deleting the copy fields?
/solr/listing/select?debugQuery=on&defType=edismax&q=*%3A*&stopwords=true
{
  "responseHeader": {
    "zkConnected": true,
    "status": 400,
    "QTime": 1,
    "params": {
      "q": "*:*",
      "defType": "edismax",
      "debugQuery": "on",
      "stopwords": "true"
    }
  },
  "error": {
    "metadata": [
      "error-class",
      "org.apache.solr.common.SolrException",
      "root-error-class",
      "org.apache.solr.common.SolrException"
    ],
    "msg": "org.apache.solr.search.SyntaxError: Query Field '_text_' is not a valid field name",
    "code": 400
  }
}
As per the comments, the '_text_' field remains in 3 places in the config:
"/update/extract":{
"startup":"lazy",
"name":"/update/extract",
"class":"solr.extraction.ExtractingRequestHandler",
"defaults":{
"lowernames":"true",
"fmap.content":"_text_"}}
"spellchecker":{
"name":"default",
"field":"_text_",
"initParams":[{
"path":"/update/**,/query,/select,/tvrh,/elevate,/spell,/browse",
"defaults":{"df":"_text_"}}]
As per the comment on my question (I'm still on the Solr learning path):
Although they have been deprecated for quite some time, Solr still has support for Schema based configuration of a <defaultSearchField/> (which is superseded by the df parameter) and <solrQueryParser defaultOperator="OR"/> (which is superseded by the q.op parameter). If you have these options specified in your Schema, you are strongly encouraged to replace them with request parameters (or request parameter defaults) as support for them may be removed from future Solr releases.
For our purposes, and as we are using the edismax query parser, we needed to specify the query fields that we wanted to use.
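For example, something along these lines (a sketch only; the field names are taken from the copy fields above and the search term is just an illustration, so adjust both to your schema):

/solr/listing/select?defType=edismax&q=acme&qf=company_name+address+city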
This is a 2+ year old post, so I'm not sure this will help.
Since you are using "defType": "edismax", try "q.alt": "*:*" instead of "q": "*:*". This should fix the issue.
I have set up Apache Solr 7.1 and am using the Postman tool to query it. But when I try to delete indexed data using Postman I get the following error.
Request:
GET http://localhost:8983/solr/solr-sample3/update?stream.body={
"delete": {
"query": "*:*"
},
"commit": { }
}
Response body:
{
  "error": {
    "metadata": [
      "error-class",
      "org.apache.solr.common.SolrException",
      "root-error-class",
      "org.apache.solr.common.SolrException"
    ],
    "msg": "Stream Body is disabled. See http://lucene.apache.org/solr/guide/requestdispatcher-in-solrconfig.html for help",
    "code": 400
  }
}
It was working in the previous Solr version, 6.6. I went through the Lucene documentation but I am not able to figure it out.
You don't need to enable the stream body. Just use a curl POST request, specifying the data type as text/xml:
curl http://localhost:8983/solr/solr-sample3/update?commit=true -H "Content-Type: text/xml" --data-binary '<delete><query>*:*</query></delete>'
Or, if you're using the Post Tool included in Solr:
bin/post -c core_name -type text/xml -out yes -d $'<delete><query>*:*</query></delete>'
I went through the documentation; it says I need to enable the stream body, as it has been disabled in Solr 7.1.
To enable it, use:
curl http://localhost:8983/solr/solr-sample3/config -H 'Content-type:application/json' -d'{
"set-property" : {"requestDispatcher.requestParsers.enableRemoteStreaming":true},
"set-property" : {"requestDispatcher.requestParsers.enableStreamBody":true}
}'
Here is what worked for me, using cURL and avoiding having to enable the stream body:
curl http://localhost:8983/solr/solr-sample3/update?commit=true -X POST -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"
I am sending the following JSON:
{
  "components": [
    {
      "guid": "com.mycompany.MyPlugin",
      "duration": 60,
      "metrics": {
        "Component/Memory/Heap Used[bytes]": 146990608,
        "Component/Processor/GC[percent]": 0.5555555555555556,
        "Component/Memory/Heap Max[bytes]": 39387136,
        "Component/Processor/CPU[percent]": 66.66666666666667,
        "Component/Memory/Heap Committed[bytes]": 279714288
      },
      "name": "MyPlugin"
    }
  ],
  "agent": {
    "host": "host",
    "pid": 0,
    "version": "1.0.0"
  }
}
The Component/Memory/* metrics are properly recognized by New Relic and I am able to create dashboards.
But the Component/Processor/* metrics don't seem to be acknowledged at all. I can't see them in the dropdown list of metric names when I create a new dashboard, and even typing the name manually doesn't work since New Relic says no such metric exists.
It is recommended that GUID values be all lower case. At present GUIDs with differing case are treated as unique.
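For example, the payload above would send the GUID lower-cased (a sketch showing only the relevant field; the rest of the JSON stays the same):

"guid": "com.mycompany.myplugin"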
EDIT:
Try curling with your data as a test:
curl -vi https://platform-api.newrelic.com/platform/v1/metrics \
  -H "X-License-Key: <LICENSE_KEY>" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -X POST -d '<JSON_DATA>'