How do I disable casefolding for a string field in vespa.ai?
search post {
    document post {
        field token type string {
            indexing: index
            match: word
            rank: filter
            rank-type: empty
            stemming: none
            normalizing: none
            indexing-rewrite: none
        }
    }
}
Feed a document:
curl -X POST -H "Content-Type: application/json" \
  --data-binary '{"fields":{"token":"TeSt"}}' \
  http://localhost:8080/document/v1/post/post/docid/TeSt
The query matches even though the case is different (due to casefolding):
curl -s -H "Content-Type: application/json" \
  --data '{"yql" : "select * from post where token contains \"test\";"}' \
  http://localhost:8080/search/ | jq .
Vespa does not support case-sensitive search, even with match: word.
This comes up every few years, but so far nobody has really needed it. It would be easy to add; feel free to create an issue for it on github.com/vespa-engine/vespa if you really need it.
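For illustration, the reverse also holds: since casefolding lowercases both the indexed term and the query term, an all-uppercase query term matches the mixed-case document as well (a hypothetical continuation of the example above):
curl -s -H "Content-Type: application/json" \
  --data '{"yql" : "select * from post where token contains \"TEST\";"}' \
  http://localhost:8080/search/ | jq .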
If I have the transaction hash of an RSK transaction, how can I get its internal transactions - i.e. when the smart contracts invoke functions on other contracts or do RBTC transfers?
I'm able to get the main transaction using web3.js; however, once I've obtained it, I'm unable to parse it to extract the internal transactions that occur.
Another thing that I've tried was to use web3.js to query the block that the transaction occurred in, however I was unable to parse this either to obtain the internal transactions.
To reiterate my original comment:
The RSK virtual machine (like the EVM) does not define "internal transaction", and hence there's no RPC to query them. You will need to "debug" the transaction execution in order to reconstruct these internals - which is quite difficult to do. Block explorers typically do this for you.
Fortunately the RSK Block Explorer exposes an API, and thus is programmatically queryable. So while you won't be able to use web3.js for this, as you've asked for in your question, you will be able to get the internal transactions nonetheless.
Let's use an example, with the following transaction 0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8, which happens to have a lot of internal transactions.
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8"
The above command retrieves the internal transactions of this particular transaction. If you wish to do this for a different transaction, simply change the value of the hash query parameter in the request URL.
This gives you a fairly large JSON response, which I will not copy in full here. You can then parse this using your JS code (since you're already using web3.js).
On the command line, you can explore the data a bit more using the response filters available in the jq command line utility:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[].action.callType'
The above pipes the output of the curl command into jq, which then applies a filter that:
- looks at the data property and returns all items in the array
- within each item, drills down into the action object and returns its callType value
This results in the following output:
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"call"
So this transaction contains 18 internal transactions, with a mix of delegatecall, staticcall, and call... a fairly complex transaction indeed!
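If you just want the count rather than eyeballing the list, a small jq variation works (a sketch, assuming the same response shape as above):
curl -s \
  "https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
  | jq '.data | length'
# prints 18 for this transaction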
Now let's vary the jq command to use a different filter, such that we get the full details of only the final internal transaction, which happens to be the only call internal transaction:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[17].action'
Note that the only difference from the previous command is that the filter is now .data[17].action.
This results in the following output:
{
"callType": "call",
"from": "0x3f7ec3a190661db67c4907c839d8f1b0c18f2fc4",
"to": "0xa288319ecb63301e21963e21ef3ca8fb720d2672",
"gas": "0x20529",
"input": "0xcbf83a040000000000000000000000000000000000000000000000000000000000000003425443555344000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086f36650548d5c400000000000000000000000000003f7ec3a190661db67c4907c839d8f1b0c18f2fc4000000000000000000000000000000000000000000000000000000000036430c000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b0000000000000000000000000000000000000000000000000000000000000005d6328b4db96469d968348a852e6978d18b7dc9bda776727991b83f171abe4a4040ebab67dee8e9711683af91e05c3970bcb6a29502f9b35b14b7a9225d43f6e3e0cf4ae577be626ae350d8e103df88f55205167eaad7267fdbf247e4b35ec674457ac87e13451d2fa9985c854b2f84982e3b611c3b48f5045f2cdc3c6acff44d1735d2771581dc2cc7477fc846767ad088182fc317424d468477cf3a54724543000000000000000000000000000000000000000000000000000000000000000516a3d4cf7e73d17e2230c87f6ef48f38d82885c64d47fef646987f8d6fbb86405515760c786315cac84d7df048e2ba054868f2b9e2afeec0b63ebf2dcac59c8848f254382abf73cf6ce2d5134b5bc065c0706fb7a2f7886a15e79a8953ed11006c5a7d14b4fbf1bb6ff8d687a82a548dcdbd823ebec4b10e331bee332df1a7ae0e45fdac4f6648e093b90a6b56f33e31f36d4079526f871f51cafa710cdde4c3",
"value": "0x0"
}
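Incidentally, rather than hard-coding the index 17, you could select by callType, which keeps working if the number or order of internal transactions differs (again a sketch, same response-shape assumptions):
curl -s \
  "https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
  | jq '.data[] | select(.action.callType == "call") | .action'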
Currently I'm writing a React app and struggling with simply reading from an SQLite database.
Edit because of unclear question:
- The goal is to read from the database without any backend, because it needs to be able to read from the database even when it is offline.
- I'm aiming for a ONE-TIME file conversion, then just PouchDB queries offline. But I don't want to do it manually, because there are around 6k+ records.
- Alternatively, SQL queries from the browser without any APIs; but I need to support Internet Explorer, so WebSQL is not an option. I've tried the sqlite3 library, but I can't make it work with Create React App.
The solution I tried was to use PouchDB to read the file, but I'm coming to the conclusion that it is NOT possible to PRELOAD an SQLite file with PouchDB without using Cordova (which I'm not comfortable with; I don't want any servers running), or even with some kind of adapter.
So is this the right way of doing things?
Is there any way that I would not lose my .db data and have to convert all of it manually?
Should I forget about supporting these features on IE?
Thanks :)
Try this:
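## Create and populate a sample SQLite database file named "example"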
sqlite3 example "DROP TABLE IF EXISTS some_table;";
sqlite3 example "CREATE TABLE IF NOT EXISTS some_table (id INTEGER PRIMARY KEY AUTOINCREMENT, anattr VARCHAR, anotherattr VARCHAR);";
sqlite3 example "INSERT INTO some_table VALUES (NULL, '1stAttr', 'AttrA');";
sqlite3 example "INSERT INTO some_table VALUES (NULL, '2ndAttr', 'AttrB');";
## Create three JSON fragment files
sqlite3 example ".output result_prefix.json" "SELECT '{ \"docs\": ['";
sqlite3 example ".output rslt.json" "SELECT '{ \"_id\": \"someTable_' || SUBSTR(\"000000000\" || id, LENGTH(\"000000000\" || id) - 8, 9) || '\", \"anattr\": \"' || anattr || '\", \"anotherattr\": \"' || anotherattr || '\" },' FROM some_table;";
sqlite3 example ".output result_suffix.json" "SELECT '] }'";
## strip trailing comma of last record
sed -i '$ s/.$//' rslt.json;
## concatenate to a single file
cat result_prefix.json rslt.json result_suffix.json > result.json;
cat result.json;
You should be able to simply paste the above lines into a (unix) command line and see the following output:
{ "docs": [
{ "_id": "someTable_000000001", "anattr": "1stAttr", "anotherattr": "AttrA" },
{ "_id": "someTable_000000002", "anattr": "2ndAttr", "anotherattr": "AttrB" }
] }
If you have jq installed, you can instead do ...
cat result.json | jq .
... obtaining:
{
"docs": [
{
"_id": "someTable_000000001",
"anattr": "1stAttr",
"anotherattr": "AttrA"
},
{
"_id": "someTable_000000002",
"anattr": "2ndAttr",
"anotherattr": "AttrB"
}
]
}
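As an aside: if your sqlite3 is 3.33.0 or newer, its built-in JSON output mode could replace the prefix/suffix splicing above (a sketch under that version assumption; you would still need to derive the _id values and wrap the rows in a docs object for PouchDB):
sqlite3 example ".mode json" "SELECT id, anattr, anotherattr FROM some_table;"
## emits the rows as a single JSON array of objects, e.g.
## [{"id":1,"anattr":"1stAttr","anotherattr":"AttrA"},...]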
You'll find an example of how to quickly initialize PouchDB from JSON files in part 2 of the blog post Prebuilt databases with PouchDB.
So, if you have a CouchDB server available, you can do the following:
export COUCH_DB=example;
export COUCH_URL= *** specify yours here ***;
export FILE=result.json;
## Drop database
curl -X DELETE ${COUCH_URL}/${COUCH_DB};
## Create database
curl -X PUT ${COUCH_URL}/${COUCH_DB};
## Load database from JSON file
curl -H "Content-type: application/json" -X POST "${COUCH_URL}/${COUCH_DB}/_bulk_docs" -d #${FILE};
## Extract database with meta data to PouchDB initialization file
pouchdb-dump ${COUCH_URL}/${COUCH_DB} > example.json
## Inspect PouchDB initialization file
cat example.json | jq .
Obviously you'll need some adaptations, but the above should give you no problems.
Since CouchDB and PouchDB are document-oriented databases, all records (aka docs) are just JSON, i.e. JS objects. In my RN app, when I met a similar task, I just put all the docs I wanted to be "prepopulated" in PouchDB into an array of JS objects, imported it as a module in my app, and then wrote them to PouchDB as the necessary docs during app init. That's all the prepopulation there is. How you export your SQL DB records to JSON is up to you; it depends on the source DB structure and the data logic you want to have in PDB.
When I send a query XML document like this
<query><text><![CDATA[
let $facts := fn:collection("factbook/factbook.xml")/mondial
let $c := ("Antarktika", "Atlantis")
for $name at $id in $c
return
insert node (<continent id="f0_aaa{$id}" name="{$name}" />) into $facts
]]></text></query>
to the REST API using
curl -i --data '...' 'http://localhost:8984/rest'
BaseX will report the following error:
[XPST0003] Incomplete FLWOR expression: expecting 'return'.
If I execute the same query on the web admin query page, the query is accepted and the nodes are inserted.
Why is the REST call rejected? Is there any further restriction that does not apply to the admin interface?
If I remove the let clauses and inline the corresponding variables, the query is accepted by the REST API:
<query><text><![CDATA[
for $name at $id in ("Antarktika", "Atlantis")
return
insert node (<continent id="f0_aaa{$id}" name="{$name}" />) into fn:collection("factbook/factbook.xml")/mondial
]]></text></query>
The REST user has write permission. I'm using BaseX 9.0.2.
It turned out that the problem was not the query, but the --data option of curl in combination with @ to send file content. This option strips line breaks (CR and LF) before sending, so "/mondial" and the following "let" were joined into a single token, which is why the parser complained about an incomplete FLWOR expression. With --data-binary '@...' the query works as expected.
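In other words (a hypothetical sketch, assuming the query document above is saved as query.xml):
# --data strips CR and LF from the file, gluing "/mondial" and the
# following "let" together, which triggers the FLWOR parse error
curl -i --data '@query.xml' 'http://localhost:8984/rest'
# --data-binary sends the file byte for byte, preserving line breaks
curl -i --data-binary '@query.xml' 'http://localhost:8984/rest'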
We are trying to return all user information from an LDAP query made to a Microsoft Active Directory 2012 server.
First, we get all attributes from the schema (including msds-memberOfTransitive and msds-memberTransitive), then we make a query requesting all attributes.
We have narrowed the problem down to executing an LDAP search with the following parameters:
- Scope: One level (if there are elements inside the container) or Subtree
- The msds-memberOfTransitive or msds-memberTransitive attributes are requested
Sample query reproducing the error:
ldapsearch -D "CN=Administrator,CN=Users,DC=my,DC=dom" -W -b "CN=Users,DC=my,DC=dom" -h 10.0.1.100 -p 389 msds-memberTransitive
Sample query avoiding the error:
ldapsearch -D "CN=Administrator,CN=Users,DC=my,DC=dom" -W -b "CN=Administrator,CN=Users,DC=my,DC=dom" -h 10.0.1.100 -p 389 msds-memberTransitive -s one
I assume this is some mechanism to avoid excessive calculation of the "transitive" attributes, but I have not found anything about it.
How could I make this search work (apart from removing these attributes from the search)?
It looks like msds-memberOfTransitive and msds-memberTransitive have searchFlags: 2048 (0x800) set, which limits the search to base scope.
If we look at msds-memberOfTransitive, we see the setting searchFlags: fBASEONLY. Looking at Search Flags, we see:
(fBASEONLY, 0x00000800): Specifies that the attribute is not to be returned by search operations that are not scoped to a single object. Read operations that would otherwise return an attribute that has this search flag set instead fail with operationsError / ERROR_DS_NON_BASE_SEARCH.
(The same is true for msds-memberTransitive.)
So these attributes will only be returned when the scope of the search is BASE.
The only way around this condition is to loop over the results of the broader search and, for each entry, do a second search using that entry's DN as the baseDN with a scope of BASE, as sketched below.
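A rough shell sketch of that loop (hypothetical: it binds with -x/-w instead of -W to avoid one password prompt per entry, uses the attribute selector 1.1 to fetch DNs only, and assumes the returned dn: lines are neither base64-encoded nor LDIF-wrapped):
ldapsearch -x -D "CN=Administrator,CN=Users,DC=my,DC=dom" -w "$ADMIN_PW" \
    -h 10.0.1.100 -p 389 -b "CN=Users,DC=my,DC=dom" -LLL "(objectClass=*)" 1.1 |
  sed -n 's/^dn: //p' |
  while read -r dn; do
    # base-scoped read, so the fBASEONLY attributes may be returned
    ldapsearch -x -D "CN=Administrator,CN=Users,DC=my,DC=dom" -w "$ADMIN_PW" \
        -h 10.0.1.100 -p 389 -b "$dn" -s base msds-memberTransitive
  done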
I have been using a command similar to the following to query for group membership:
ldapsearch -H ldap://999.999.999.99 \
-LLL -D \
"CN=BindCN,OU=Group,OU=Functional,OU=Users,DC=domain,DC=com" \
-x -w password \
-b "OU=GroupName,OU=Shares,DC=domain,DC=com" \
"cn=groupCN" \
-s sub member
This will list all members of the group by DN:
member;range=0-1499: CN=Last\, First (F),OU=Employees,OU=Users
,DC=domain,DC=com
member;range=0-1499: CN=Last\, First (F),OU=Employees,OU=Users
,DC=domain,DC=com
member;range=0-1499: CN=Last\, First (F),OU=Employees,OU=Users
,DC=domain,DC=com
...
Which is alright, but say that I have a list of sAMAccountNames that is 5000 lines long, and I want to see if any of them are in the above group (which has 5000 members). Is there any way for me to query group members by sAMAccountName?
Yes, the DN in LDAP is always unique. It is not possible for the RDN (CN) to be the same within the same container.
The DN is the Fully Distinguished Name (i.e. CN=somecn,OU=Employees,OU=Users,DC=domain,DC=com).
You will need to query for the sAMAccountName(s), which will return the DNs, and then resolve the group membership from the DNs; see the sketch below.
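If the sAMAccountNames are in a file (one per line), one approach is to test each name against the group with a memberOf filter instead of paging through the member attribute. A sketch reusing the bind parameters from the question; names.txt and the exact group DN are assumptions:
GROUP_DN="CN=groupCN,OU=GroupName,OU=Shares,DC=domain,DC=com"
while read -r sam; do
  # prints the name only when the account is a direct member of the group;
  # for nested membership AD also supports the matching-rule-in-chain filter
  # (memberOf:1.2.840.113556.1.4.1941:=$GROUP_DN)
  ldapsearch -H ldap://999.999.999.99 -LLL -x \
      -D "CN=BindCN,OU=Group,OU=Functional,OU=Users,DC=domain,DC=com" \
      -w password \
      -b "DC=domain,DC=com" \
      "(&(sAMAccountName=$sam)(memberOf=$GROUP_DN))" sAMAccountName
done < names.txt
This issues one search per name, so 5000 names means 5000 queries; slow but bounded, and each lookup is cheap since sAMAccountName is indexed in AD.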
-jim