Error while adding a new field in Solr with a default value

I am trying to add a new field in Solr with a default value but I am getting the error below. I am sure I am using the wrong key for the default, but I have not been able to find the correct key on Google.
Command
curl -H 'Content-type: application/json' -d '{"name": "ALL_MODE_INDC","stored": true,"indexed": true,"field_type": "string" ,"default":"N"}' http://<solrServer>:<solrIP>/api/collections/ALGORITHM_AU_TEST/fields
Error message
{"errors":[{"message":"Unknown or dissallowed key found:default","code":"error.forbidden.key","key":"default"}],"http_status_name":"Unprocessable Entity","http_status_code":422}

Just found the solution (change the key name to "default_value"):
curl -H 'Content-type: application/json' -d '{"name": "ALL_MODE_INDC","stored": true,"indexed": true,"field_type": "string" ,"default_value":"N"}' http://<solrServer>:<solrIP>/api/collections/ALGORITHM_AU_TEST/fields
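As a quick check (a sketch, assuming the classic /solr path is reachable on the same host and port), you can read the field definition back through the Schema API and confirm the default was stored:
curl http://<solrServer>:<solrIP>/solr/ALGORITHM_AU_TEST/schema/fields/ALL_MODE_INDC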

Update the value of a specific field in solr

I am trying to update the value of an existing field in a Solr document to a new value using curl. Below is the query and the response, but I do not see the new value for code reflected in Solr. Please help me resolve the issue.
curl -X POST -H 'Content-Type: application/json' 'http://localhost:8390/solr/collection/update' -d '[{"id" : "12345", "code" : {"set" : "500"}}]'
{
"responseHeader":{
"rf":1,
"status":0,
"QTime":11}}
Looks like you have missed commit=true in your request, hence the issue.
Try the command below and hopefully it will work for you.
curl -X POST -H 'Content-Type: application/json' 'http://localhost:8390/solr/collection/update?commit=true' -d '[{"id" : "12345", "code" : {"set" : "500"}}]'
{
"responseHeader":{
"rf":1,
"status":0,
"QTime":11}}

Fiware Draco - Subscription works only for the first payload. Only the first payload is saved in historical mongodb

I am trying to save historical context data in MongoDB, but without success. Only the first payload sent to Draco is saved to MongoDB; Mongo does not react to subsequent attribute updates.
Versions used for the test: Orion-LD version 0.8.0, Mongo version 4.4, Draco version 1.3.6.
I also tested it with Mongo version 3.4 and the behavior is the same.
Can you please help me fix this problem?
Below are the steps I performed:
Create a Draco subscription:
curl --location --request POST 'http://localhost:1026/v2/subscriptions' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
"description": "Notify Draco of all context changes",
"subject": {
"entities": [
{
"idPattern": ".*"
}
]
},
"notification": {
"http": {
"url": "http://10.0.0.5:5050/v2/notify"
}
},
"throttling": 0
}'
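For reference, the subscription can be read back with the same service headers to check its status and notification counters (timesSent, lastNotification); a sketch:
curl --location --request GET 'http://localhost:1026/v2/subscriptions' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot'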
Create an entity:
curl --location --request POST 'http://localhost:1026/v2/entities' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw ' {
"id":"urn:ngsi-ld:Product:0102", "type":"Product",
"name":{"type":"Text", "value":"Lemonade"},
"size":{"type":"Text", "value": "S"},
"price":{"type":"Integer", "value": 99}
}'
Overwrite the value of an attribute:
curl --location --request PUT 'http://localhost:1026/v2/entities/urn:ngsi-ld:Product:0102/attrs' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot' \
--header 'Content-Type: application/json' \
--data-raw '{
"price":{"type":"Integer", "value": 110}
}'
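To confirm that Orion itself applied the change (a sketch using the same entity id), the entity can be read back in keyValues mode and price should now be 110:
curl --location --request GET 'http://localhost:1026/v2/entities/urn:ngsi-ld:Product:0102?options=keyValues' \
--header 'Fiware-Service: test' \
--header 'Fiware-ServicePath: /openiot'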
Draco NiFi flow (processor configuration screenshots omitted): a LISTEN_HTTP processor feeding an NGSITOMONGO processor, with its Template and MongoDB connection properties set.
We do not use that precise stack, but we have many production deployments keeping historical context data in MongoDB by using FIWARE Orion (v2 API) with FIWARE Cygnus (NGSIMongo Sink for historical raw data and NGSISTH Sink for aggregated data in MongoDB).
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_mongo_sink.md
https://github.com/telefonicaid/fiware-cygnus/blob/master/doc/cygnus-ngsi/flume_extensions_catalogue/ngsi_sth_sink.md
Maybe this helps.
This bug is fixed in the new version of Draco, 2.1.0. You can check the code in the official repository; the release link is https://github.com/ging/fiware-draco/releases/tag/2.1.0
Additionally, you can use the docker image available for this release by pulling it using docker pull ging/fiware-draco:2.1.0.
You can also use the Mongo-Tutorial template available inside Draco, where the processors needed to persist in MongoDB are preconfigured.
One thing to consider is that the new version of Draco is aligned with NiFi 1.15.3, where you first need to log in to access the Web UI using the default credentials (user: admin, password: pass1234567890). You can check the official documentation for more information: https://fiware-draco.readthedocs.io/en/latest/
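For reference, a minimal way to run that release with Docker (a sketch; the port numbers are assumptions: 5050 for the HTTP notification listener used in the subscription above, 8443 for the NiFi HTTPS Web UI):
docker pull ging/fiware-draco:2.1.0
docker run -d --name draco -p 8443:8443 -p 5050:5050 ging/fiware-draco:2.1.0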

Flink REST API POST error while trying to start a new job using the uploaded jar

I am trying to hit the /jars/:jarid/run endpoint to start a Flink job as follows, after reading this SO post:
curl -k -v -X POST -H "Content-Type: application/json" --data '
{
"programArgsList": [
"--runner",
"FlinkRunner",
"--inputTopicName",
"inputTopicNameValue",
"--Argument",
"Value",
"--streaming",
"true"]
}
' http://<JobManager-hostname>:<port>/jars/MyApplication.jar/run
I get the following error when I try the above command -
{"errors":["Internal server error.","<Exception on server side:\norg.apache.flink.client.program.ProgramInvocationException: The main method
caused an error: Argument 'FlinkRunner' does not begin with '--'\n\tat
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)\n\tat
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)\n\tat
org.apache.flink.client.program.OptimizerPlanEnvironment.getOptimizedPlan(OptimizerPlanEnvironment.java:83)\n\tat
org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:80)
The error "Argument 'FlinkRunner' does not begin with '--'" leads me to think that argument values are not being provided correctly in my example. I understand that the Flink REST API docs provide only the JSON schema definition and not a sample request. What is the correct way to provide argument values? My example follows what the accepted solution suggested in this post.
The following POST request worked for me, so I am documenting it here:
curl -k -v -X POST -H "Content-Type: application/json" --data '
{
"programArgsList": [
"--runner=FlinkRunner",
"--inputTopicName=inputTopicNameValue",
"--Argument=Value",
"--streaming=true"]
}
' http://<JobManager-hostname>:<port>/jars/MyApplication.jar/run
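Once the run call returns a jobid, you can confirm that the job is actually running through the same REST API (a sketch using the standard Flink endpoints):
# list all jobs with their current status
curl http://<JobManager-hostname>:<port>/jobs/overview
# or inspect a single job using the id returned by the /run call
curl http://<JobManager-hostname>:<port>/jobs/<jobid>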

SageMaker: calling an inference endpoint with curl and a CSV payload

I have deployed a Random Cut Forest model endpoint on AWS SageMaker. I am trying to test the inference endpoint with Postman, and I am able to authenticate against the endpoint with my access and secret keys.
Can someone confirm whether the way I am sending the CSV payload is correct? Something seems wrong, since whatever the third column value is, I get the same score from the endpoint.
'1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
Pasting the curl command generated by Postman:
curl --request POST \
--url https://runtime.sagemaker.us-east-1.amazonaws.com/endpoints/randomcutforest-2018-06-05-01-08-02-956/invocations \
--header 'authorization: AWS4-HMAC-SHA256 Credential=/20180713/us-east-1/sagemaker/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-date, Signature=d51371b2549e132c21a3402824b57258a74e6fa9f078d91a44bf54b0d110ea57' \
--header 'cache-control: no-cache' \
--header 'content-type: text/csv' \
--header 'host: runtime.sagemaker.us-east-1.amazonaws.com' \
--header 'postman-token: cb7cdfa5-025b-e4f4-c033-a4fb685133c4' \
--header 'x-amz-date: 20180713T190238Z' \
--data '1530000000000,E39E4F5CFFA2CA4A84099D2415583C1C,433190.06640625'
{
"scores": [
{
"score": 7.6438561895// This value never changes
}
]
}
The fact that your second column is not numeric is suspicious. RandomCutForest is only supposed to work with numbers.
I'd recommend you use the AWS forum: https://forums.aws.amazon.com/forum.jspa?forumID=285
Would you be able to share the feature_dim you used to train the forest?
Thanks.
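If you want to rule out request-signing issues while experimenting with different feature rows, the AWS CLI can invoke the same endpoint and handle the SigV4 signing for you (a sketch; the numeric-only row is only an illustration and assumes a model trained on numeric features):
aws sagemaker-runtime invoke-endpoint \
--endpoint-name randomcutforest-2018-06-05-01-08-02-956 \
--content-type text/csv \
--body '1530000000000,433190.06640625' \
output.json
cat output.json
With AWS CLI v2 you may also need --cli-binary-format raw-in-base64-out so the body is sent as plain text.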

org.apache.solr.common.SolrException Stream Body is disabled

I have set up Apache Solr 7.1 and am using the Postman tool to query it. But when I try to delete indexed data using Postman, I get the following error.
Request:
GET http://localhost:8983/solr/solr-sample3/update?stream.body={
"delete": {
"query": "*:*"
},
"commit": { }
}
Body:
{
"error": {
"metadata": [
"error-class",
"org.apache.solr.common.SolrException",
"root-error-class",
"org.apache.solr.common.SolrException"
],
"msg": "Stream Body is disabled. See http://lucene.apache.org/solr/guide/requestdispatcher-in-solrconfig.html for help",
"code": 400
}
}
It was working in the previous Solr version (6.6). I went through the Lucene documentation but I am not able to figure it out.
You don't need to enable the stream body. Just use a curl POST request specifying the content type as text/xml:
curl http://localhost:8983/solr/solr-sample3/update?commit=true -H "Content-Type: text/xml" --data-binary '<delete><query>*:*</query></delete>'
Or if you're using the Post Tool included with Solr:
bin/post -c core_name -type text/xml -out yes -d $'<delete><query>*:*</query></delete>'
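Equivalently, if you prefer JSON over XML (same collection name assumed), the delete-by-query can be posted as:
curl http://localhost:8983/solr/solr-sample3/update?commit=true -H 'Content-Type: application/json' -d '{"delete": {"query": "*:*"}}'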
I went through the documentation; it says I need to enable the stream body, as it has been disabled in Solr 7.1.
To enable it, use:
curl http://localhost:8983/solr/solr-sample3/config -H 'Content-type:application/json' -d'{
"set-property" : {"requestDispatcher.requestParsers.enableRemoteStreaming":true},
"set-property" : {"requestDispatcher.requestParsers.enableStreamBody":true}
}'
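You can then confirm that both properties were applied by reading the config overlay, which lists the properties set through the Config API (a sketch):
curl http://localhost:8983/solr/solr-sample3/config/overlay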
Here is what worked for me, using cURL and without enabling the stream body:
curl http://localhost:8983/solr/solr-sample3/update?commit=true -X POST -H "Content-Type: text/xml" --data-binary "<delete><query>*:*</query></delete>"
