If I have the transaction hash of an RSK transaction, how can I get its internal transactions - i.e. when the smart contracts invoke functions on other contracts or do RBTC transfers?
I'm able to get the main transaction using web3.js; however, once I've obtained it,
I'm unable to parse it to extract the internal transactions that occur.
I also tried using web3.js to query the block that the transaction occurred in, but I was unable to parse that to obtain the internal transactions either.
To reiterate my original comment:
The RSK virtual machine (like the EVM) does not define "internal transaction", and hence there's no RPC to query them. You will need to "debug" the transaction execution in order to reconstruct these internals - which is quite difficult to do. Block explorers typically do this for you.
Fortunately the RSK Block Explorer
exposes an API, and thus is programmatically queryable.
So while you won't be able to use web3.js for this,
as you've asked for in your question,
you will be able to get internal transactions nonetheless.
Let's use an example, with the following transaction 0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8, which happens to have a lot of internal transactions.
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8"
The above command retrieves the internal transactions of this particular transaction.
If you wish to do this for a different transaction,
simply change the value of the hash query parameter in the request URL.
This gives you a fairly large JSON response,
which I will not copy in full here.
You can then parse this using your JS code (since you're already using web3.js).
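For example, here is a minimal Node.js sketch of that parsing step. It assumes Node 18+ (for the built-in fetch) and uses the same explorer endpoint as above:
const txHash =
  '0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8';
const url =
  'https://backend.explorer.rsk.co/api' +
  '?module=internalTransactions&action=getInternalTransactionsByTxHash' +
  `&hash=${txHash}`;

async function getInternalTransactionCallTypes() {
  const response = await fetch(url);
  const { data } = await response.json();
  // Each array item describes one internal transaction; here we extract
  // the call types (the same values the jq filter below returns).
  return data.map((itx) => itx.action.callType);
}

getInternalTransactionCallTypes().then(console.log);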
On the command line, you can explore the data a bit more using
the response filters available in the jq command line utility:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[].action.callType'
The above pipes the output of the curl command into jq, which then
applies a filter that:
- looks at the data property, and returns all items in the array
- within each item, drills down into the action object and returns its callType value
This results in the following output:
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"staticcall"
"delegatecall"
"call"
So this transaction contains 18 internal transactions,
with a mix of delegatecall, staticcall, and call...
a fairly complex transaction indeed!
Now let's vary the jq command to use a different filter,
such that we get the full details of only the final internal transaction,
which happens to be the only call internal transaction:
curl \
-X GET \
-H "accept: application/json" \
"https://backend.explorer.rsk.co/api?module=internalTransactions&action=getInternalTransactionsByTxHash&hash=0x01fbd670ea2455d38e83316129765376a693852eca296b3469f18d2a8dde35d8" \
| jq -c '.data[17].action'
Note that the only difference from the previous command is that now the filter
is .data[17].action.
This results in the following output:
{
"callType": "call",
"from": "0x3f7ec3a190661db67c4907c839d8f1b0c18f2fc4",
"to": "0xa288319ecb63301e21963e21ef3ca8fb720d2672",
"gas": "0x20529",
"input": "0xcbf83a040000000000000000000000000000000000000000000000000000000000000003425443555344000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000086f36650548d5c400000000000000000000000000003f7ec3a190661db67c4907c839d8f1b0c18f2fc4000000000000000000000000000000000000000000000000000000000036430c000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000000000000000000000000000000000000005000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001c000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b000000000000000000000000000000000000000000000000000000000000001b0000000000000000000000000000000000000000000000000000000000000005d6328b4db96469d968348a852e6978d18b7dc9bda776727991b83f171abe4a4040ebab67dee8e9711683af91e05c3970bcb6a29502f9b35b14b7a9225d43f6e3e0cf4ae577be626ae350d8e103df88f55205167eaad7267fdbf247e4b35ec674457ac87e13451d2fa9985c854b2f84982e3b611c3b48f5045f2cdc3c6acff44d1735d2771581dc2cc7477fc846767ad088182fc317424d468477cf3a54724543000000000000000000000000000000000000000000000000000000000000000516a3d4cf7e73d17e2230c87f6ef48f38d82885c64d47fef646987f8d6fbb86405515760c786315cac84d7df048e2ba054868f2b9e2afeec0b63ebf2dcac59c8848f254382abf73cf6ce2d5134b5bc065c0706fb7a2f7886a15e79a8953ed11006c5a7d14b4fbf1bb6ff8d687a82a548dcdbd823ebec4b10e331bee332df1a7ae0e45fdac4f6648e093b90a6b56f33e31f36d4079526f871f51cafa710cdde4c3",
"value": "0x0"
}
Related
I want to send a transaction on the RSK network, and I get this message in the logs: Not enough gas for transaction execution.
I got the gas limit parameter from my testing environment, using web3.eth.estimateGas.
RSK nodes have a JSON-RPC for eth_estimateGas,
which is the most reliable way to perform gas estimations.
You can do this from the terminal using curl:
curl \
-X POST \
-H "Content-Type:application/json" \
--data '{"jsonrpc":"2.0","method":"eth_estimateGas","params":[{"from": "0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c", "to": "0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c", "gas": "0x76c0", "gasPrice": "0x3938700", "value": "0x9184e72a", "data": "" }],"id":1}' \
http://localhost:4444
{"jsonrpc":"2.0","id":1,"result":"0x5208"}
Alternatively, using web3.js:
web3.eth.estimateGas({"to": "0x391ec8a27d29a42c7601651d2f38b1e1895e27a1", "data": "0xe26e496319a16c8ccae126f4aac7e3010123927a4739288cd1ace12feafae9a2"})
23176
While this is the same JSON-RPC method found in geth (Ethereum) and other Ethereum-compatible nodes,
note that the gas calculations in RSK and Ethereum are different,
and thus their implementations differ.
For example, the price of certain VM opcodes are different.
Another notable difference related to gas estimation
is that Ethereum implements EIP-150,
whereas RSK does not.
This means that the 1/64 reduction in gas estimation does not apply to RSK.
(The detailed implications of this on gas estimation are perhaps beyond the scope of this question.)
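As a rough illustration of what that 1/64 rule does (the numbers below are purely illustrative, not an API call):
// Under EIP-150, a caller can forward at most "all but one 64th" of its
// remaining gas to an inner call; RSK does not apply this reduction.
const remainingGas = 100000;
const forwardedOnEthereum = remainingGas - Math.floor(remainingGas / 64); // 98438
const forwardedOnRsk = remainingGas; // 100000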
Because of these differences, you should expect incorrect values when running against ganache-cli (previously testrpc),
which is used by default in common developer tools such as Truffle.
To get the correct gas,
using the RSK-specific calculations,
the best way is to use RSK Regtest
when invoking eth_estimateGas
for local development and testing.
In other scenarios you may also use
RSK Testnet and Mainnet.
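For example, here's a sketch of pointing web3.js at a local Regtest node for the estimate. It assumes the node's JSON-RPC is on port 4444 (as in the curl example above); the addresses are just the placeholders reused from that example:
const Web3 = require('web3');
const web3 = new Web3('http://localhost:4444'); // local RSK Regtest node

web3.eth
  .estimateGas({
    from: '0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c',
    to: '0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c',
    value: '0x9184e72a',
  })
  .then(console.log); // e.g. 21000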
The following scenarios are not directly related to your question, but are also good to know:
When invoking smart contract functions
that have the pure or view modifiers,
no gas (and therefore gas estimation) is necessary.
When performing certain transactions that have a defined invariant gas price,
you may simply use that as a hard-coded constant.
For example, for a transfer of the native currency (RBTC in this case),
the invariant gas price is 21000.
This assumes that no data (sometimes referred to as "message")
was sent with the transaction.
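To illustrate both scenarios, here's a web3.js sketch (the contract, method, and address names are placeholders, and the snippet is assumed to run inside an async function):
// (1) A pure/view function executes locally via eth_call,
// so no gas (and no gas estimation) is needed:
const balance = await myContract.methods.balanceOf(someAddress).call();

// (2) A plain RBTC transfer with no data has an invariant gas price,
// so the gas limit can be hard-coded:
await web3.eth.sendTransaction({
  from: senderAddress,
  to: recipientAddress,
  value: web3.utils.toWei('0.01', 'ether'),
  gas: 21000,
});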
How to disable casefolding using field-type string in vespa.ai?
search post {
    document post {
        field token type string {
            indexing: index
            match: word
            rank: filter
            rank-type: empty
            stemming: none
            normalizing: none
            indexing-rewrite: none
        }
    }
}
Fill the database:
curl -X POST -H "Content-Type:application/json" \
  --data-binary '{"fields":{"token":"TeSt"}}' \
  http://localhost:8080/document/v1/post/post/docid/TeSt
The query matches even though the case is different (due to casefolding):
curl -s -H "Content-Type: application/json" \
  --data '{"yql" : "select * from post where token contains \"test\";"}' \
  http://localhost:8080/search/ | jq .
Vespa does not support case-sensitive search, even with match: word.
This is asked every few years, but nobody has actually needed it yet. It is easy to add, so feel free to create an issue for it on github.com/vespa-engine/vespa if you really need it.
Currently I'm writing a React app and struggling with simply reading from an SQLite database.
Edit, because of an unclear question:
- The goal is to read from the database without any backend, because it needs to read from the database even when it is offline.
- I'm aiming for a ONE TIME file conversion, then just PouchDB queries offline. But I don't want to do it manually, because there are around 6k+ registries.
- Or SQL queries from the browser without any APIs; but I need to support Internet Explorer, so WebSQL is not an option. I've tried the sqlite3 library, but I can't make it work with Create React App.
The solution I tried was to use PouchDB for reading the file, but I'm coming to the conclusion that it is NOT possible to PRELOAD an SQLite file with PouchDB without using Cordova (which I'm not comfortable with; I don't want any servers running), or even with some kind of adapter.
So is this the right way of doing things?
Is there any way that I won't lose my .db data and have to convert all of it manually?
Should I forget about supporting these features on IE?
Thanks :)
Try this:
sqlite3 example "DROP TABLE IF EXISTS some_table;";
sqlite3 example "CREATE TABLE IF NOT EXISTS some_table (id INTEGER PRIMARY KEY AUTOINCREMENT, anattr VARCHAR, anotherattr VARCHAR);";
sqlite3 example "INSERT INTO some_table VALUES (NULL, '1stAttr', 'AttrA');";
sqlite3 example "INSERT INTO some_table VALUES (NULL, '2ndAttr', 'AttrB');";
## Create three JSON fragment files
sqlite3 example ".output result_prefix.json" "SELECT '{ \"docs\": ['";
sqlite3 example ".output rslt.json" "SELECT '{ \"_id\": \"someTable_' || SUBSTR(\"000000000\" || id, LENGTH(\"000000000\" || id) - 8, 9) || '\", \"anattr\": \"' || anattr || '\", \"anotherattr\": \"' || anotherattr || '\" },' FROM some_table;";
sqlite3 example ".output result_suffix.json" "SELECT '] }'";
## strip trailing comma of last record
sed -i '$ s/.$//' rslt.json;
## concatenate to a single file
cat result_prefix.json rslt.json result_suffix.json > result.json;
cat result.json;
You should be able to simply paste the above lines into a (unix) command line and see the output:
{ "docs": [
{ "_id": "someTable_000000001", "anattr": "1stAttr", "anotherattr": "AttrA" },
{ "_id": "someTable_000000002", "anattr": "2ndAttr", "anotherattr": "AttrB" }
] }
If you have jq installed, you can instead do ...
cat result.json | jq .
... obtaining:
{
"docs": [
{
"_id": "someTable_000000001",
"anattr": "1stAttr",
"anotherattr": "AttrA"
},
{
"_id": "someTable_000000002",
"anattr": "2ndAttr",
"anotherattr": "AttrB"
}
]
}
You'll find an example of how to quickly initialize PouchDB from JSON files in part 2 of the blog post Prebuilt databases with PouchDB.
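For instance, here's a minimal sketch of loading the generated result.json directly into a local PouchDB with bulkDocs (the file and database names are just illustrative):
// Load the docs produced by the SQLite pipeline above into PouchDB.
const PouchDB = require('pouchdb');
const { docs } = require('./result.json');

const db = new PouchDB('example');
db.bulkDocs(docs)
  .then(() => db.info())
  .then((info) => console.log('loaded', info.doc_count, 'docs'));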
Alternatively, if you have a CouchDB server available, you can do the following:
export COUCH_DB=example;
export COUCH_URL= *** specify yours here ***;
export FILE=result.json;
## Drop database
curl -X DELETE ${COUCH_URL}/${COUCH_DB};
## Create database
curl -X PUT ${COUCH_URL}/${COUCH_DB};
## Load database from JSON file
curl -H "Content-type: application/json" -X POST "${COUCH_URL}/${COUCH_DB}/_bulk_docs" -d #${FILE};
## Extract database with meta data to PouchDB initialization file
pouchdb-dump ${COUCH_URL}/${COUCH_DB} > example.json
## Inspect PouchDB initialization file
cat example.json | jq .
Obviously you'll need some adaptations, but the above should give you no problems.
Since CouchDB/PouchDB are document-oriented databases, all records (aka docs) there are just JSON, i.e. JS objects. In my React Native app, when I faced a similar task, I simply put all the docs I wanted "prepopulated" in PouchDB into an array of JS objects, imported it as a module in my app, and then wrote them to PouchDB as the necessary docs during app initialization. That's all the prepopulation there is. How you export your SQL DB records to JSON is up to you; it surely depends on the source DB structure and the data logic you want in PouchDB.
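A sketch of that approach (all file and database names here are illustrative; docs.js is assumed to export a plain array of the docs to prepopulate):
// docs.js would contain something like:
//   module.exports = [{ _id: 'someTable_000000001', anattr: '1stAttr' }, ...];
const PouchDB = require('pouchdb');
const docs = require('./docs');

async function initDb() {
  const db = new PouchDB('example');
  const info = await db.info();
  if (info.doc_count === 0) {
    // First launch: write the bundled docs into PouchDB.
    await db.bulkDocs(docs);
  }
  return db;
}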
When trying to create a warehouse from the Cloudant dashboard, sometimes the process fails with an error dialog. Other times, the warehouse extraction stays in a "triggered" state even after hours.
How can I debug this? For example is there an API I can call to see what is going on?
Take a look at the document inside the _warehouser database, and look for the warehouser_error_message element. For example:
"warehouser_error_message": "Exception occurred while creating table.
[SQL0670N The statement failed because the row size of the
resulting table would have exceeded the row size limit. Row size
limit: \"\". Table space name: \"\". Resulting row size: \"\".
com.ibm.db2.jcc.am.SqlException: DB2 SQL Error: SQLCODE=-670,
SQLSTATE=54010, SQLERRMC=32677;;34593, DRIVER=4.18.60]"
The warehouser error message usually gives you enough information to debug the problem.
You can view the _warehouser document in the Cloudant dashboard or use the API, e.g.
export cl_username='<your_cloudant_account>'
curl -s -u $cl_username \
  "https://$cl_username.cloudant.com/_warehouser/_all_docs?include_docs=true" \
  | jq '[.rows[].doc.warehouser_error_message]'
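If you'd rather do the same from JavaScript, here's a sketch using the nano CouchDB/Cloudant client (the URL and credentials are placeholders):
const nano = require('nano')(
  'https://<your_cloudant_account>:<password>@<your_cloudant_account>.cloudant.com'
);
const warehouser = nano.db.use('_warehouser');

// List all docs and print each one's error message (if any).
warehouser.list({ include_docs: true }).then((body) => {
  for (const row of body.rows) {
    console.log(row.id, row.doc.warehouser_error_message);
  }
});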
We are trying to return all user information from an LDAP query made to a Microsoft Active Directory 2012 server.
First, we get all attributes from the schema (including msds-memberOfTransitive and msds-memberTransitive), then we make a query requesting all attributes.
We have narrowed down this problem to executing a LDAP search with the following parameters:
- Scope: Next level (if there are elements inside the container) or Subtree
- msds-memberOfTransitive or msds-memberTransitive attributes are requested
Sample query reproducing the error:
ldapsearch -D "CN=Administrator,CN=Users,DC=my,DC=dom" -W -b "CN=Users,DC=my,DC=dom" -h 10.0.1.100 -p 389 msds-memberTransitive
Sample query avoiding the error:
ldapsearch -D "CN=Administrator,CN=Users,DC=my,DC=dom" -W -b "CN=Administrator,CN=Users,DC=my,DC=dom" -h 10.0.1.100 -p 389 msds-memberTransitive -s one
I assume this is some mechanism to avoid excessive calculation of "transitive" attributes, but I have not found anything.
How could I make this search work (apart from removing these attributes from the search)?
It looks like msds-memberOfTransitive and msds-memberTransitive have the search flag searchFlags: 2048 set, which limits the search to base scope.
If we look at msds-memberOfTransitive, we see the setting searchFlags: fBASEONLY. Looking at Search Flags, we see:
(fBASEONLY, 0x00000800): Specifies that the attribute is not to be returned by search operations that are not scoped to a single object. Read operations that would otherwise return an attribute that has this search flag set instead fail with operationsError / ERROR_DS_NON_BASE_SEARCH.
(Same is true for msds-memberTransitive)
So these attributes will only be returned when the scope of the search is BASE.
The only way around this condition would be to loop through each result and perform a second search for each entry, using that entry's DN as the baseDN and a scope of BASE, as sketched below.
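Here's a rough sketch of that two-phase approach using the Node.js ldapjs client (connection details mirror the ldapsearch examples above; the bind password is a placeholder and error handling is omitted):
const ldap = require('ldapjs');

const client = ldap.createClient({ url: 'ldap://10.0.1.100:389' });
const password = '...'; // placeholder

client.bind('CN=Administrator,CN=Users,DC=my,DC=dom', password, () => {
  // Phase 1: subtree search WITHOUT the fBASEONLY attributes.
  client.search('CN=Users,DC=my,DC=dom', { scope: 'sub' }, (err, res) => {
    res.on('searchEntry', (entry) => {
      // Phase 2: re-read each entry with scope BASE to get the attribute.
      client.search(
        entry.dn.toString(),
        { scope: 'base', attributes: ['msds-memberTransitive'] },
        (err2, res2) => {
          res2.on('searchEntry', (e) =>
            console.log(e.dn.toString(), JSON.stringify(e.attributes))
          );
        }
      );
    });
  });
});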