AppHarbor JustOneDB - "error":"Table creation needs a table name"

I plan on using JustOneDB on AppHarbor.
I tried the REST request below with curl and got
{"error":"Table creation needs a table name"}
I'm new to curl and JSON. Does anyone have experience with JustOneDB and creating a table? What am I doing wrong?
TIA
FxM :)
I tried
curl -k -XPOST 'https://zn0lvkpdhdxb70l2ub4:iy59bj7rh0z6uurNA1lb3fiwuh#77.92.68.105:31415/justonedb/database/n10lvkpNA2uja/session/1946301333393883/table' -d '{
"name" : "tbl1",
"column": "bob",
"type" : "string"
}'

It is a problem with the JSON syntax you are using. The JSON string needs to be in the form
.../table {"name":"tbl1", "column":[ {"name":"bob","type":"string"} ]}
If you are unfamiliar with the syntax, it may help to copy and edit the examples in the JustOneDB REST reference guide, available here: http://www.justonedb.com/appharbor
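Putting the answer together, a corrected request could look like the sketch below. The key change is that "column" becomes an array of {name, type} objects; the endpoint is the one from the question, with placeholders for the credentials:

```shell
# Corrected payload: "column" must be an array of column definitions,
# not separate top-level "column"/"type" keys.
payload='{"name":"tbl1","column":[{"name":"bob","type":"string"}]}'

# Sanity-check the JSON locally before sending:
echo "$payload" | python3 -m json.tool

# Then POST it to the table endpoint from the question:
# curl -k -XPOST 'https://<user>:<password>@77.92.68.105:31415/justonedb/database/n10lvkpNA2uja/session/1946301333393883/table' -d "$payload"
```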

Related

SOLR backup status request not found

I have been trying to figure out how to know when a Solr backup is done and what its status is. We have a lot of collections that we are trying to back up. The status request returns an error:
status={state=notfound,msg=Did not find [requestId123] in any tasks queue}
When I looked at the Solr source code, I realized that the reported status (COMPLETED, FAILED, RUNNING, SUBMITTED) is based on the request's entry in the overseer queue. When the request is not found in the overseer queue, or when the queue is cleared, we get this error.
My question: is there any other way to get the Solr backup status reliably?
Thanks
Taking backup
I am not sure how you are running the backup process (or where you see that error). My assumption is that you are checking logs (because it looks like a message that appears in logs).
Additionally, you did not mention which Solr version you are using. I will elaborate below for 8.9 (but any version that supports both the v1 and v2 APIs should work similarly).
If you want to run the backup asynchronously, you can use the following:
curl -X POST http://localhost:8983/api/collections -H 'Content-Type: application/json' -d '
{
  "backup-collection": {
    "name": "openaccess-v26-backup",
    "collection": "openaccess-v26",
    "location": "/var/solr/mounted-efs-backup",
    "async": "1000"
  }
}
'
This will start an asynchronous backup process with request id 1000.
Checking action status
You can use the following to check the status of the process:
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000'
This will return a response like this:
{
  "responseHeader":{
    "status":0,
    "QTime":3},
  "status":{
    "state":"running",
    "msg":"found [1000] in running tasks"}}
Additionally, this is the way to check any async action, not only backups. For example, you can check the status of a RESTORE action the same way if you are restoring a backup into a Solr collection.
Listing backups
It can also be useful to list the backups from time to time and check whether your backup appears in the list (if the approach above is not working for you).
Note that I am not 100% sure whether a backup can be listed before it completes, but based on my (purely empirical) testing it cannot.
So if I start a backup and immediately call the API that lists all backups, I get an empty list:
curl -X POST http://localhost:8983/v2/collections/backups -H 'Content-Type: application/json' -d '
{
  "list-backups" : {
    "name": "openaccess-v26-backup",
    "location": "/var/solr/mounted-efs-backup"
  }
}'
{
  "responseHeader":{
    "status":0,
    "QTime":165},
  "backups":[]
}
However, if you execute this after a while (when the backup is completed), the response will be in the following format:
{
  "responseHeader":{
    "status":0,
    "QTime":14},
  "collection":"openaccess-v26",
  "backups":[{
    "indexFileCount":0,
    "indexSizeMB":0.0,
    "shardBackupIds":{
      "shard2":"md_shard2_0.json",
      "shard3":"md_shard3_0.json",
      "shard1":"md_shard1_0.json"},
    "collection.configName":"openaccess-v26",
    "backupId":0,
    "collectionAlias":"openaccess-v26",
    "startTime":"2022-07-05T08:34:53.703175Z",
    "indexVersion":"8.9.0"}]}
This approach works fine with the Solr 8.9 version I am using, via the v2 API.
I was able to restore and use backups without any issues once they were listed.

solr 8.11 Field Types docs contradiction. Any guidance?

I'm setting up my first Solr server via Docker using solr:8.11.1-slim. I am going to use the Schema API to set up the schema for my core, whose name is 'products'.
While reading the docs, there seems to be contradictory information between the two pages on field types:
https://solr.apache.org/guide/8_11/field-types-included-with-solr.html
vs.
https://solr.apache.org/guide/8_11/schema-api.html
I followed the first guide to find out what field types I can specify, and am trying to send requests based on the second doc, such as this:
{ 'add-field': { "name":"latlong", "type":"LatLongPointSpatialField", "multiValued":False, "stored":True, 'indexed': True } },
but Solr gives me back errors such as:
org.apache.solr.api.ApiBag$ExceptionWithErrObject: error processing commands, errors: [{add-field={name=latlong, type=LatLongPointSpatialField, multiValued=false, stored=true, indexed=true}, errorMessages=[Field 'latlong': Field type 'LatLongPointSpatialField' not found
So what gives? Am I misreading the docs, are they wrong, or is something wrong with the solr:8.11.1 image in Docker? Why does it not accept the field types I'm providing?
Thanks for your help ahead of time.

Solr - How to fix "Error adding field ... msg=For input string" when post data to core

I am new to Solr.
I created a Solr (8.1.0) core using SolrCloud for testing, and tried to post data as a JSON file.
When an object has a float value like "spalte412": "35.5", or special characters, it throws an error in the console:
SimplePostTool: WARNING: Response: {
  "responseHeader":{
    "rf":2,
    "status":400,
    "QTime":223},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","java.lang.NumberFormatException"],
    "msg":"ERROR: [doc=52] Error adding field 'spalte421'='156.6' msg=For input string: \"156.6\"",
    "code":400}}
I tried to edit the core schema by adding the field in the Admin UI, without success.
Thanks for your help!
If you're not pre-defining your fields, the type of each field is guessed from the first submitted document that contains that field. In this case, the guessed field type differs from the format you're sending in later documents.
Schemaless mode is neat for prototyping, but when moving to production you should always add the fields up front with the correct types, so you don't suddenly get surprises (as above) when documents are submitted in a different order (or are different documents) than during development.
You can define fields in schema.xml or through the Schema API.
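For instance, the failing column from the error message could be declared up front via the Schema API. This is only a sketch: pfloat is a stock numeric type in Solr 8, and the collection name (films) is taken from the asker's later workaround; adjust both to your setup.

```shell
# Declare the numeric field before indexing, so Solr never has to guess its
# type from the first document ("spalte421" is the field from the error).
body='{"add-field": {"name":"spalte421", "type":"pfloat", "multiValued":false, "stored":true}}'

# Sanity-check the JSON locally:
echo "$body" | python3 -m json.tool

# Then send it to the Schema API:
# curl -X POST -H 'Content-type:application/json' --data-binary "$body" http://localhost:8983/solr/films/schema
```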
You should post your schema.xml and a short description of what you did before.
"root-error-class","java.lang.NumberFormatException"
It sounds like Solr was unable to parse the number format while you were trying to index a document with a string value (For input string: \"156.6\").
It sounds like you have a mismatch between the delivered and the expected format.
Thanks guys.
Indeed, I solved it by deleting the fields in the Admin UI and re-defining them with:
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' http://localhost:8983/solr/films/schema

Apache Solr JSON Querying Local Params

The Solr documentation focuses on how to use GET parameters to define queries, but gives very little information on how to accomplish the same tasks using the better-structured JSON POST support. I have been unable to find any documentation that goes deeper than a very surface-level explanation.
In particular, I'm trying to use local params in my queries and would like to know how to accomplish the following using a JSON POST instead of GET params:
http://localhost:8983/solr/city/query?sort={!sfield=location pt=35.5514,-97.4075}geodist() asc&q={!geofilt sfield=location pt=35.5514,-97.4075 d=5}
According to JSON Request API / Parameters Mapping, your query would map to:
{
  "sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc",
  "query": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}"
}
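A quick way to try this mapping is to POST the body to the /query endpoint of the city collection from the question. The sketch below validates the JSON locally first (note "query" instead of "q"):

```shell
# The mapped request body; local params stay inside the string values.
body='{
  "sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc",
  "query": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}"
}'

# Sanity-check the JSON locally:
echo "$body" | python3 -m json.tool

# Then POST it (collection name from the question):
# curl -H 'Content-Type: application/json' -X POST --data-binary "$body" http://localhost:8983/solr/city/query
```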
To complement MatsLindh's answer, you can use the usual parameter names as long as you wrap them in params (no mapping needed), for example:
file.json
{
  "params": {
    "q": "{!geofilt sfield=location pt=35.5514,-97.4075 d=5}",
    "sort": "{!sfield=location pt=35.5514,-97.4075}geodist() asc",
    "wt": "json",
    "indent": "true"
  }
}
Request example using curl:
curl -H "Content-Type: application/json" -X POST --data @file.json http://localhost:8983/solr/city/query

SolrCloud in production - querying q=* gives numFound=0

So I have a three-node cluster deployed using ZooKeeper, and I successfully created a test collection (3 shards). Then I ran
curl -X POST -H 'Content-Type: application/json' 'ec2FirstNodeIP:8983/solr/test/update' --data-binary ' [ { "f1" : "1", "f2" : "2", "f3" : "3" } ]'
and got
{"responseHeader":{"status":0,"QTime":38} ...
However, when I run curl "sameIP:8983/solr/test/select?wt=json&indent=true&q=*:*"
I get
numFound: 0
But after updating the document through the Admin UI, the query returns the document.
What am I missing?
To make documents searchable, you need to commit. Use commit=true:
ec2FirstNodeIP:8983/solr/test/update?commit=true
This should work.
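As a sketch, the original update can be re-sent with an explicit commit, or a commit can be issued on its own afterwards (host and collection names as in the question):

```shell
# The same documents as in the question; commit=true makes them searchable
# as soon as the update request returns.
body='[ { "f1" : "1", "f2" : "2", "f3" : "3" } ]'

# Sanity-check the JSON locally:
echo "$body" | python3 -m json.tool

# Update and commit in one request:
# curl -X POST -H 'Content-Type: application/json' 'http://ec2FirstNodeIP:8983/solr/test/update?commit=true' --data-binary "$body"

# Or commit documents that were already sent:
# curl 'http://ec2FirstNodeIP:8983/solr/test/update?commit=true'
```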
