FIWARE QuantumLeap Sanity Check failed: QuantumLeap can't get changed data from Orion - subscription

I ran the QuantumLeap sanity check on both a virtual machine (VirtualBox with Ubuntu) and a production server (CentOS). With the same docker-compose.yml, the sanity check succeeds on the virtual machine but fails on the production server. It's really strange; could anyone help me? Thank you very much. @Jason Fox
The sanity check steps follow https://quantumleap.readthedocs.io/en/latest/admin/check/; I have pasted them below.
The results differ only at step 7. On the virtual machine, I can get the updated data from QuantumLeap:
{
    "attrName": "precipitation",
    "entityId": "air_quality_observer_be_001",
    "index": [
        "2020-05-03T11:18:14.000",
        "2020-05-03T11:18:55.000"
    ],
    "values": [
        0.0,
        100.0
    ]
}
But on the production server, the result is:
{
    "description": "No records were found for such query.",
    "error": "Not Found"
}
Notes:
The commands in the sanity check steps were copied straight into the terminal, so there are no typos.
The results differ only at step 7. I deleted the old images on both the virtual machine and the production server so that the latest images would be pulled.
A FIWARE-based system (Orion, MongoDB, IoT Agents, QuantumLeap, CrateDB, Grafana) had been running on the production server and everything was fine. But one week ago the server's hard disk filled up completely and all containers went down. I cleaned out some huge log files to free up space, then brought the FIWARE system back up and found that no data was showing in Grafana. After checking, I found the reason: QuantumLeap was not storing data into CrateDB, but I was not sure whether the root cause was Orion or QuantumLeap. So I decided to run the sanity check as the QuantumLeap documentation suggests. That's the story.
BTW, the QuantumLeap subscription can be retrieved from Orion, but I can't get the changed data from QuantumLeap. Why isn't the newest changed data synchronized to QuantumLeap?
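One way to narrow this down is the delivery bookkeeping Orion keeps on the subscription itself; a small sketch using standard NGSIv2 subscription fields, nothing QuantumLeap-specific:
# Fetch the subscription and inspect its notification block: timesSent,
# lastNotification and lastFailure. A populated lastFailure means Orion
# tried to notify QuantumLeap and the delivery failed.
curl -s http://0.0.0.0:1026/v2/subscriptions -H 'Accept: application/json'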
Sanity check steps:
1. Check Orion version
curl -X GET http://0.0.0.0:1026/version -H 'Accept: application/json'
2. Check QuantumLeap version
curl -X GET http://0.0.0.0:8668/version -H 'Accept: application/json'
3. Create an Orion subscription for "QuantumLeap"
curl -X POST \
'http://0.0.0.0:8668/v2/subscribe?orionUrl=http://orion:1026/v2&quantumleapUrl=http://quantumleap:8668/v2&entityType=AirQualityObserved' \
-H 'Accept: application/json'
4. Check you can get such a subscription from Orion
curl -X GET http://0.0.0.0:1026/v2/subscriptions \
-H 'Accept: application/json'
5. Insert an entity of AirQualityObserved into Orion
curl -X POST \
'http://0.0.0.0:1026/v2/entities?options=keyValues' \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
  "id": "air_quality_observer_be_001",
  "type": "AirQualityObserved",
  "address": {
    "streetAddress": "IJzerlaan",
    "postOfficeBoxNumber": "18",
    "addressLocality": "Antwerpen",
    "addressCountry": "BE"
  },
  "dateObserved": "2017-11-03T12:37:23.734827",
  "source": "http://testing.data.from.smartsdk",
  "precipitation": 0,
  "relativeHumidity": 0.54,
  "temperature": 12.2,
  "windDirection": 186,
  "windSpeed": 0.64,
  "airQualityLevel": "moderate",
  "airQualityIndex": 65,
  "reliability": 0.7,
  "CO": 500,
  "NO": 45,
  "NO2": 69,
  "NOx": 139,
  "SO2": 11,
  "CO_Level": "moderate",
  "refPointOfInterest": "null"
}'
6. Update the precipitation value of the same entity in Orion.
curl -X PATCH \
http://0.0.0.0:1026/v2/entities/air_quality_observer_be_001/attrs \
-H 'Accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
  "precipitation": {
    "value": 100,
    "type": "Number"
  }
}'
7. Query the changed record of precipitation from QuantumLeap for the same entity.
curl -X GET \
'http://0.0.0.0:8668/v2/entities/air_quality_observer_be_001/attrs/precipitation?type=AirQualityObserved' \
-H 'Accept: application/json'
Parts of docker-compose.yml:
orion:
  image: fiware/orion
  hostname: orion
  container_name: fiware-orion
  depends_on:
    - mongo-db
  expose:
    - "1026"
  ports:
    - "1026:1026"
  networks:
    - default
  command: -dbhost mongo-db -logLevel ERROR -corsOrigin __ALL

mongo-db:
  image: mongo:3.6
  hostname: mongo-db
  container_name: db-mongo
  expose:
    - "27017"
  ports:
    - "27017:27017"
  networks:
    - default
  command: --bind_ip_all --smallfiles
  volumes:
    - mongo-db:/data

cratedb:
  image: crate:3.1.2
  hostname: cratedb
  container_name: db-crate
  expose:
    - "4200"
    - "4300"
    - "5432"
  ports:
    - "4200:4200"
    - "4300:4300"
    - "5432:5432"
  networks:
    - default
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - crate-db:/data

quantumleap:
  image: smartsdk/quantumleap
  hostname: quantumleap
  container_name: fiware-quantumleap
  expose:
    - "8668"
  ports:
    - "8668:8668"
  depends_on:
    - cratedb
  environment:
    - CRATE_HOST=cratedb  # host name of CrateDB
Not solved yet, but I have a little clue.
I got some logs from QuantumLeap. QuantumLeap contains a Crate client, and the reason may be that this Crate client is not working properly.
I've pasted the log here:
crate.client.exceptions.ProgrammingError: SQLActionException[ClusterBlockException: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
172.18.1.1 - - [04/May/2020 17:00:33] "POST /v2/notify HTTP/1.1" 500 -
INFO:werkzeug:172.18.1.1 - - [04/May/2020 17:00:33] "POST /v2/notify HTTP/1.1" 500 -
INFO:translators.factory:Backend selected for tenant 'iothouse' is: crate
ERROR:app:Exception on /v2/notify [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/usr/local/lib/python3.6/site-packages/connexion/decorators/decorator.py", line 48, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/site-packages/connexion/decorators/uri_parsing.py", line 143, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 172, in wrapper
    response = function(request)
  File "/usr/local/lib/python3.6/site-packages/connexion/decorators/validation.py", line 347, in wrapper
    return function(request)
  File "/usr/local/lib/python3.6/site-packages/connexion/decorators/parameter.py", line 126, in wrapper
    return function(**kwargs)
  File "/src/ngsi-timeseries-api/src/reporter/reporter.py", line 189, in notify
    trans.insert(payload, fiware_s, fiware_sp)
  File "/src/ngsi-timeseries-api/src/translators/crate.py", line 189, in insert
    fiware_servicepath)
  File "/src/ngsi-timeseries-api/src/translators/crate.py", line 297, in _insert_entities_of_type
    self.cursor.executemany(stmt, entries)
  File "/usr/local/lib/python3.6/site-packages/crate/client/cursor.py", line 67, in executemany
    self.execute(sql, bulk_parameters=seq_of_parameters)
  File "/usr/local/lib/python3.6/site-packages/crate/client/cursor.py", line 54, in execute
    bulk_parameters)
  File "/usr/local/lib/python3.6/site-packages/crate/client/http.py", line 328, in sql
    content = self._json_request('POST', self.path, data=data)
  File "/usr/local/lib/python3.6/site-packages/crate/client/http.py", line 448, in _json_request
    _raise_for_status(response)
  File "/usr/local/lib/python3.6/site-packages/crate/client/http.py", line 187, in _raise_for_status
    error_trace=error_trace)

Solved. The root cause was in CrateDB. When the hard disk was exhausted, CrateDB went down and every table was set to read-only. The read-only blocks are not automatically removed from the tables, even after disk space is freed and usage drops back below the threshold.
Everything was OK after I set the read-only flag to false with the following commands in CrateDB:
SHOW CREATE TABLE <tableName>;
ALTER TABLE <tableName> SET ("blocks.read_only_allow_delete" = FALSE);
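If many tables are affected, a small loop against CrateDB's HTTP _sql endpoint can clear the flag in one go. A minimal sketch, assuming CrateDB answers on localhost:4200; doc.mytable is a placeholder for your real table names:
# Clear the disk-full read-only block on each listed table via the HTTP _sql endpoint
for t in doc.mytable; do
  curl -sS -H 'Content-Type: application/json' \
    -d "{\"stmt\": \"ALTER TABLE ${t} SET (\\\"blocks.read_only_allow_delete\\\" = FALSE)\"}" \
    http://localhost:4200/_sql
done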

Related

Not able to register the schema for kafka snowflake connector

The distributed services started successfully:
[2021-10-17 18:04:29,693] INFO Started o.e.j.s.ServletContextHandler@1422ac7f{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler:916)
[2021-10-17 18:04:29,693] INFO REST resources initialized; server is started and ready to handle requests (org.apache.kafka.connect.runtime.rest.RestServer:319)
[2021-10-17 18:04:29,693] INFO Kafka Connect started (org.apache.kafka.connect.runtime.Connect:57)
But I am not able to register the schema:
curl -X PUT -H "Content-Type:application/json" --data '{"name": "file-stream-demo-distributed","config":{"connector.class":"com.snowflake.kafka.connector.SnowflakeSinkConnector","topic":"demo-2-distributed","file":"/home/ramakrishnakonda/kafka_2.13-2.8.0/config/connect-distributed.properties"}}' http://localhost:8083/connectors
{"error_code":405,"message":"HTTP 405 Method Not Allowed"}
Please help
Use POST rather than PUT, as in this example:
$ curl -X POST -H "Content-Type: application/json" --data '{"name": "local-file-sink", "config": {"connector.class":"FileStreamSinkConnector", "tasks.max":"1", "file":"test.sink.txt", "topics":"connect-test" }}' http://localhost:8083/connectors
curl -X POST "http://localhost:8083/connectors" -H "Content- type:application/json" --data
"{
"name":"file-stream-demo-distributed",
"config":{
"connector.class":"com.snowflake.kafka.connector.SnowflakeSinkConnector",
"tasks.max":"1",
"topics":"demo-2-distributed",
"buffer.count.records":"10000",
"buffer.flush.time":"60",
"buffer.size.bytes":"5000000",
"snowflake.url.name":"XXXXXXXX.XXXXXXXXX.snowflakecomputing.com:443",
"snowflake.user.name":"kafka_connector_user_1",
"snowflake.private.key":"XXXXXXXXXXXXXXXXXXX+XXXXXXXXXXXXXXXXXXXXXXXXXXXXX,
"snowflake.database.name":"KAFKA_DB",
"snowflake.schema.name":"KAFKA_SCHEMA",
"key.converter":"org.apache.kafka.connect.storage.StringConverter",
"key.converter.schemas.enable":"true", "value.converter":"com.snowflake.kafka.connector.records.SnowflakeJsonConverter",
"value.converter:schemas.enable":"true",
"value.converter.schema.registry.url":"http://localhost:8081"
}
}"

Add content of a text file to array in Bash

I am trying to execute 300 curl requests at the same time using an array, but I do not know how to bring the content of my file into the array. The code I wrote is below.
array=();
for i in {1..300}; do
  array+=( file.txt );
done;
curl "${array[@]}";
The file.txt includes the following content:
--next 'https://d16.server.com/easy/api/OmsOrder' -H 'Connection: keep-alive' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache' -H 'Accept: application/json, text/plain, */*' -H 'Sec-Fetch-Dest: empty' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36' -H 'Content-Type: application/json' -H 'Origin: https://d.server.com' -H 'Sec-Fetch-Site: same-site' -H 'Sec-Fetch-Mode: cors' -H 'Referer: https://d.server.com/' -H 'Accept-Language: en-US,en;q=0.9,fa;q=0.8' --data-binary '{"isin":"IRO3TPEZ0001","financeId":1,"quantity":50000,"price":5400}' --compressed
array=();
for i in {1..300}; do
  array+=( $(cat file.txt | head -$i | tail -1) );
done;
curl "${array[@]}";
You have a file of shell-formatted words that you are trying to repeat over and over in a command.
Since the words are shell formatted, you'll need to interpret them using e.g. eval:
contents=$(< file.txt)
eval "words=( $contents )"
arguments=()
for i in {1..300}
do
  arguments+=( "${words[@]}" )
done
curl "${arguments[@]}"
A more robust design would be to not use shell quoting and instead format one argument per line:
--next
https://d16.server.com/easy/api/OmsOrder
-H
Connection: keep-alive
-H
Pragma: no-cache
You can then use the above code and replace the eval line with:
mapfile -t words < file.txt
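Put together, a minimal sketch of that variant:
# file.txt holds one curl argument per line, so no shell quoting is needed
mapfile -t words < file.txt
arguments=()
for i in {1..300}
do
  arguments+=( "${words[@]}" )   # repeat the whole argument set
done
curl "${arguments[@]}"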
The answer to this question should have been "put each request into a file, one option per line, and use -K/--config to include the file into the command line." That certainly should allow for 300 requests in a single curl command without exceeding the limit on the size of a shell command. (By "request" here, I mean "a URL with associated options". If you only want to use 300 URLs without modifying any other option, you can easily do that by just listing the URLs, on the command line if they aren't too long or otherwise in a file.)
Unfortunately, it doesn't work. I believe that it is supposed to work, and the fact that it doesn't is a bug. If you specify multiple -K options and each of them refers to a file which includes one request and the --next option, then curl will execute only the first and last file. If you instead put the --next options on the command-line in between the -K options, all the request options will be merged, and in addition curl will complain about a missing URL.
However, you can use the -K option by concatenating all 300 requests and passing them through stdin, using -K - to read from stdin. To test that, I created the file containing a single request:
$ cat post-req
--next
-H "Connection: keep-alive"
-H "Pragma: no-cache"
-H "Cache-Control: no-cache"
-H "Accept: application/json, text/plain, */*"
-H "Sec-Fetch-Dest: empty"
-H "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
-H "Content-Type: application/json"
-H "Origin: https://d.server.com"
-H "Sec-Fetch-Site: same-site"
-H "Sec-Fetch-Mode: cors"
-H "Referer: https://d.server.com/"
-H "Accept-Language: en-US,en;q=0.9,fa;q=0.8"
--data-binary "{\"isin\":\"IRO3TPEZ0001\",\"financeId\":1,\"quantity\":50000,\"price\":5400}"
--compressed
--url "http://localhost/foo"
and then set up a little webserver that just returns the requested path, and invoked curl with:
for i in $(seq 300); do cat post-req; done | curl -K -
Indeed, all three hundred requests are passed through.
For what it's worth, I reported the bug as https://github.com/curl/curl/issues/5120, and many thanks to Daniel Stenberg for being incredibly responsive by committing a fix in less than two days. So probably the issue will be resolved in the next curl release.

Cannot execute learning task. : Unable to create retraining task - previous training data not present

I am trying to update a classifier. In the zip folder I have more than 10 images, but I am still not able to update it.
Tried via the Swagger URL: https://watson-api-explorer.ng.bluemix.net/apis/visual-recognition-v3#!/Custom/updateClassifier
URL: https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/sports_cars_1042527461?version=2018-03-19&api_key=xxxxxxxxxxxxxx
CURL: curl -X POST --header 'Content-Type: multipart/form-data' --header 'Accept: application/json' {"type":"formData"} 'https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/sports_cars_1042527461?version=2018-03-19&api_key=xxxxx'
RESPONSE:
{
  "error": {
    "code": 400,
    "error_id": "input_error",
    "description": "Cannot execute learning task. : Unable to create retraining task - previous training data not present."
  }
}
I tried it with Node.js code too but got the same error.
Is there anything I missed, or did I do something wrong?
One thing I noticed is that you have a mix of URL and authentication for older and newer classifiers.
For classifiers created before May 23, you use the gateway-a. URL prefix and &api_key=... authentication.
For classifiers created afterward, you use the gateway. URL prefix and IAM authentication (-u "apikey:{apikey}").
so
curl -X POST \
  -F "sportscars_positive_examples=@sc.zip" \
  -F "negative_examples=@suvs.zip" \
  "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers/sports_cars_1042527461?version=2018-03-19&api_key=xxxxxxxxxxxxxx"
or
curl -X POST -u "apikey:yyyyyyyyyyyyyyyyyyy" \
  -F "sportscars_positive_examples=@sc.zip" \
  -F "negative_examples=@suvs.zip" \
  "https://gateway.watsonplatform.net/visual-recognition/api/v3/classifiers/sports_cars_1042527461?version=2018-03-19"
For details, look at the API reference.

IBM dashDb stop_on_error blocker

I am trying to call the POST /sql_jobs request (dashDB API v2), but I can't figure out what should go in stop_on_error. It requires a string, and the call always ends with the error "Parameter stop_on_error is required".
Could you help me solve this issue?
The only valid values for stop_on_error are yes and no. If the value doesn't match one of them, it returns the error message you're seeing.
This is a working sample:
curl \
-X POST "https://host.bluemix.net/dashdb-api/v2/sql_jobs" \
-H "accept: application/json" \
-H "Authorization: Bearer eyJhbGciOiJSUzA ..." \
-H "content-type: application/json" \
-d '{ "commands": "SELECT 1", "limit": 0, "stop_on_error": "yes", "separator": ";" }'

cURL Cloudant attachment example please

Please forgive me for the potentially basic question, but I am a z/OS person trying to learn cURL and Cloudant. I have gotten the following example to work to add a record to a database (using DOS from Windows):
curl -X POST -b /tmp/cloudant.cookie -H "Content-Type: application/json" -d "{\"_id\":\"2\",\"empName\":\"John Doe\",\"phone\":\"646-598-4133\",\"age\":\"28\"}" --url https://xxxxxxxxxx-bluemix.cloudant.com/rcdb
Now I would like to add an _attachment (image1.jpg) to that record.
Could anyone please tell me what the syntax on Windows would be? I have tried a few combinations but nothing works so far.
To add an attachment follow the instructions in the Cloudant documentation at https://docs.cloudant.com/attachments.html
Example:
Assuming you have already created a document with ID "2" and revision number "1-954695fb9642f02975d76b959d0b5e98" in database rcdb, run the following command:
curl -X PUT -H "Content-Type: image/jpeg" --data-binary "@image1.jpg" --url https://xxxxxxxxxx-bluemix.cloudant.com/$DATABASE/$DOCUMENT_ID/$ATTACHMENT?rev=$REV
replacing $DATABASE with rcdb, $DOCUMENT_ID with 2, $REV with 1-954695fb9642f02975d76b959d0b5e98 and $ATTACHMENT with the desired attachment property name, e.g. mypic.
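If you don't have the current revision to hand, you can fetch it first. A small sketch reusing the cookie authentication from the question; on CouchDB-style servers the current rev comes back in the ETag response header:
curl -I -b /tmp/cloudant.cookie --url https://xxxxxxxxxx-bluemix.cloudant.com/rcdb/2
The ETag value (minus the surrounding quotes) is what goes into ?rev= on the attachment PUT above.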
