How to deploy a smart contract on KADENA - pact-lang

Please, how do I deploy a smart contract to the testnet or mainnet WITHOUT the Chainweaver web UI? I know I need a YAML file for that, but what do I do with it and where exactly do I send it?
Do I need to run a pact server, a chainweb API, or something else? I couldn't find any guide for that.

Step 0: Install the Prerequisites
Install Pact
Step 1: Create the Pact Module
We will be deploying the following Pact module. For simplicity's sake, the Pact code we are deploying does not use a transaction's data field (read-keyset is one Pact function that makes use of this field); otherwise, the accompanying YAML file would have to change. We also assume that this Pact code is saved as test.pact.
(namespace 'free)
(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2)
  )
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the pact we want to upload is: /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zero'th chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX epoch time is 1585056536 seconds. This value MUST CHANGE, so recalculate it by running date +%s on the command line or by using an online epoch time converter (see the sketch after this list).
That "testing" is the account paying for gas (aka the "sender") on the Testnet network. To create a Testnet account and fund it with some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED25519 public-private key pair to use the faucet. You can use pact -g to generate this key pair (also shown in the sketch after this list). Make sure to save it somewhere safe.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas payer account, which in this example is "testing". This must change from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
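A minimal sketch of the two helper commands mentioned in the assumptions above (output formats may vary slightly between Pact versions):

# Generate a new ED25519 key pair for the gas payer account; store the output somewhere safe.
pact -g

# Print the current UNIX epoch time in seconds, for the creationTime field of the YAML file.
date +%s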
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the yaml file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submit the transaction successfully, we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
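As a convenience, here is a hedged sketch that captures the request key from /send directly into a shell variable (it assumes the file paths from Step 2 and that jq is installed):

REQUEST_KEY=$(pact -a /Users/linda.ortega.cordoves/pact/test.yaml | \
  curl -s -H "Content-Type: application/json" -d @- \
  https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send | \
  jq -r '.requestKeys[0]')
echo "$REQUEST_KEY"    # use this value when polling in Step 4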
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible for a transaction to fail at the Pact level but still get added to the blockchain, with gas still charged. If this happens, the result.status field will be "failure".
If a transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
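Instead of retrying /poll manually, you could also use the Pact API's /listen endpoint, which blocks until the result for a single request key is available. A hedged sketch against the same node and request key used above:

curl -H "Content-Type: application/json" \
  -d '{"listen":"Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"}' \
  -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/listen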
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d

Related

SOLR backup status request not found

I have been trying to figure out a way to know when the SOLR backup is done and what its status is. We have a lot of collections that we are trying to back up. The request returns an error:
status={state=notfound,msg=Did not find [requestId123] in any tasks queue}
When I looked at the SOLR source code, I realized that the status (COMPLETED, FAILED, RUNNING, SUBMITTED) is reported from the request status in the overseer queue. When the request is not found in the overseer queue, or when the queue has been cleared, we get this error.
My question: is there any other way to get the SOLR backup status reliably?
Thanks
Taking backup
I am not sure how you are running the backup process (nor where you are seeing that error). My assumption is that you are checking the logs (it looks like a message that would appear there).
Additionally, you did not mention which Solr version you are using. I will elaborate the answer below for 8.9 (but any version which supports both the v1 and v2 APIs should work similarly).
If you want to run the backup asynchronously, you can use the following:
curl -X POST http://localhost:8983/api/collections -H 'Content-Type: application/json' -d '
{
  "backup-collection": {
    "name": "openaccess-v26-backup",
    "collection": "openaccess-v26",
    "location": "/var/solr/mounted-efs-backup",
    "async": "1000"
  }
}
'
This will start an async backup process with tracking id 1000.
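If you prefer the v1 Collections API, the equivalent call should look roughly like this (same collection, location and async id as above; adjust for your setup):

curl 'http://localhost:8983/solr/admin/collections?action=BACKUP&name=openaccess-v26-backup&collection=openaccess-v26&location=/var/solr/mounted-efs-backup&async=1000'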
Checking action status
You can use the following to check the status of the process:
curl 'http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=1000'
This will return a response like this:
{
  "responseHeader":{
    "status":0,
    "QTime":3},
  "status":{
    "state":"running",
    "msg":"found [1000] in running tasks"}}
Additionally, this is the way to check the status of any async action (not only backups). For example, you can use the same approach to check the status of a RESTORE action if you are restoring a backup into a Solr collection.
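Related to the original "notfound" error: once you have read a terminal (completed/failed) status, the stored response for an async request id can be cleared explicitly with DELETESTATUS; a cleared or evicted id is exactly what later shows up as "notfound". A hedged example for the request id used above:

# Clear the stored status for async request id 1000 (flush=true would clear all stored responses).
curl 'http://localhost:8983/solr/admin/collections?action=DELETESTATUS&requestid=1000'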
Listing backups
If the above approach is not working for you, another useful signal is to list the backups from time to time and check whether your backup appears in the list.
Please note that I am not 100% sure whether a backup can be listed before it is completed, but based on my testing and a purely empirical approach it seems that it cannot.
So, if I start a backup and immediately call the API that lists all of the backups, I get an empty list, for example:
curl -X POST http://localhost:8983/v2/collections/backups -H 'Content-Type: application/json' -d '
{
  "list-backups" : {
    "name": "openaccess-v26-backup",
    "location": "/var/solr/mounted-efs-backup"
  }
}'
{
  "responseHeader":{
    "status":0,
    "QTime":165},
  "backups":[]
}
However, if you execute this after a while (when the backup is completed), the response will be in the following format:
{
  "responseHeader":{
    "status":0,
    "QTime":14},
  "collection":"openaccess-v26",
  "backups":[{
      "indexFileCount":0,
      "indexSizeMB":0.0,
      "shardBackupIds":{
        "shard2":"md_shard2_0.json",
        "shard3":"md_shard3_0.json",
        "shard1":"md_shard1_0.json"},
      "collection.configName":"openaccess-v26",
      "backupId":0,
      "collectionAlias":"openaccess-v26",
      "startTime":"2022-07-05T08:34:53.703175Z",
      "indexVersion":"8.9.0"}]}
This kind of approach works fine for the 8.9 version of Solr I'm using with API v2.
I was able to restore and use backups without any issues once they were listed.
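For completeness, a hedged sketch of the matching v1 RESTORE call (the target collection name and async id here are made-up examples); its progress can be checked with REQUESTSTATUS exactly as shown above:

curl 'http://localhost:8983/solr/admin/collections?action=RESTORE&name=openaccess-v26-backup&collection=openaccess-v26-restored&location=/var/solr/mounted-efs-backup&async=2000'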

Logic App not able to deserialized Azure data factory pipeline error message

I am facing the below issue with Azure Data Factory and a Logic App.
I am using an Azure Data Factory pipeline for migration and a Logic App for sending "Success & Failure" notifications to the technical team.
The success notification works fine because the message is hardcoded, but the failure one does not, since the web activity calling the Logic App is not able to parse the Data Factory pipeline error.
Here is the input that is sent to the Logic App web activity:
Input
{
  "url": "https://xxxxxxxxxxxxxxxxx",
  "method": "POST",
  "headers": {},
  "body": "{\n \"title\": \"PIPELINE RUN FAILED\",\n \"message\":\"Operation on target Migration Validation failed: Execution fail against sql server. Sql error number: 50000. Error Message: The DELETE statement conflicted with the REFERENCE constraint \"FK_cmclientapprovedproducts_cmlinkclientchannel\". The conflict occurred in database \"Core7\", table \"dbo.cmClientApprovedProducts\", column 'linkclientchannelid'.\",\n \"color\": \"Red\",\n \"dataFactoryName\": \"LFC-TO-MCP-ADF\",\n \"pipelineName\": \"LFC TO MCP MIGRATION\",\n \"pipelineRunId\": \"f4f84365-58f0-4da1-aa00-64c3a4daa9e1\",\n \"time\": \"2020-07-31T22:44:01.6477435Z\"\n}"
}
Here is the error the Logic App is throwing:
failures
{
  "errorCode": "2108",
  "message": "{\"error\":{\"code\":\"InvalidRequestContent\",\"message\":\"The request content is not valid and could not be deserialized: 'After parsing a value an unexpected character was encountered: F. Path 'message', line 3, position 202.'.\"}}",
  "failureType": "UserError",
  "target": "Send Failed Notification",
  "details": []
}
I have tried various options, like setting a variable and converting it with various existing functions (string, json, replace, etc.), but no luck. For example:
#string(activity('LOS migration').Error.Message)
I have been struggling with this almost all day... please advise if anyone has faced a similar issue.
Below is the data flow activity
Now it is working...
I pasted the body content into the body text field box WITHOUT clicking on 'Add Dynamic Content' in the web activity calling the Logic App.
For the failure case, pass the error output using #{activity('LOS migration').error.message}.
The email step doesn't know in advance whether it's going to send a failure or a success email, so we have to adapt the body so the activity can use parameters, which we'll define later:
{
  "DataFactoryName": "#{pipeline().DataFactory}",
  "PipelineName": "#{pipeline().Pipeline}",
  "Subject": "#{pipeline().parameters.Subject}",
  "ErrorMessage": "#{pipeline().parameters.ErrorMessage}",
  "EmailTo": "#{pipeline().parameters.EmailTo}"
}
We can reference these parameters in the body by using the format #{pipeline().parameters.parameterName}. For more details, you could refer to this article.
If you want to use the direct error message of the Data Factory activity as an input to the Logic App email expression, you could try:
"ErrorMessage": "#{string(replace(activity('activity_name').Error.Message, '"',''''))}"
Replace 'activity_name' with your failing activity name.
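The underlying problem is that the Data Factory error message contains unescaped double quotes (around the constraint name FK_cmclientapprovedproducts_cmlinkclientchannel), which makes the JSON body invalid, hence the replace of " with '' above. You can reproduce the same deserialization failure against any JSON endpoint; a hedged illustration (the URL is a placeholder for your Logic App HTTP trigger):

# Invalid JSON: the inner double quotes are not escaped, so the request body cannot be deserialized.
curl -X POST 'https://<your-logic-app-trigger-url>' \
  -H 'Content-Type: application/json' \
  -d '{"message": "The DELETE statement conflicted with the REFERENCE constraint "FK_x""}'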

Google AppEngine application log assigned to the wrong request log

When I look at the logs in the Google Log Viewer for my GAE project, I see that often the logs that I write myself in the code are assigned to the wrong request. Most of the time the log is assigned to the request directly after the request that produced the log entry.
As the root of every application log in GAE must be a request, this means that the wrong request is sometimes marked as error, because another request before produced an error, but the log is somehow assigned to the request after that.
I don't really do anything special, I use Ktor as my servlet and have an interceptor that creates a log when an exception occurs before returning status 500.
I use Java logging via SLF4J with the Google Cloud logging handler, but before that I used Logback via SLF4J and had the same problem.
The content of the logs itself is also correct, the returned status of the request, the level of the log entry, the message, everything is ok.
I thought that it may be because I use kotlin and switch coroutine contexts during a single request, but in some cases the point where I write the log and where I send the response are exactly next to each other, so I'm not sure if kotlin has anything to do with it.
My logging.properties:
# To use this configuration, add to system properties : -Djava.util.logging.config.file="/path/to/file"
#
.level = INFO
# it is recommended that io.grpc and sun.net logging level is kept at INFO level,
# as both these packages are used by Stackdriver internals and can result in verbose / initialization problems.
io.grpc.netty.level=INFO
sun.net.level=INFO
handlers=com.google.cloud.logging.LoggingHandler
# default : java.log
com.google.cloud.logging.LoggingHandler.log=custom_log
# default : INFO
com.google.cloud.logging.LoggingHandler.level=INFO
# default : ERROR
com.google.cloud.logging.LoggingHandler.flushLevel=WARNING
# default : auto-detected, fallback "global"
#com.google.cloud.logging.LoggingHandler.resourceType=container
# custom formatter
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n
#optional enhancers (to add additional fields, labels)
#com.google.cloud.logging.LoggingHandler.enhancers=com.example.logging.jul.enhancers.ExampleEnhancer
My logging relevant dependencies:
implementation "org.slf4j:slf4j-jdk14:1.7.30"
implementation "com.google.cloud:google-cloud-logging:1.100.0"
An example logging call:
exception<Throwable> { e ->
    logger().error("Error", e)
    call.respondText(e.message ?: "", ContentType.Text.Plain, HttpStatusCode.InternalServerError)
}
with logger() being:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
inline fun <reified T : Any> T.logger(): Logger = LoggerFactory.getLogger(T::class.java)
Edit:
An example of the log in Google cloud. The first request has the query parameter GAID=cdda802e-fb9c-47ad-0794d394c913, but as you can see the error log for that request is in the one below, marked in red.

How can I read the blockfile_xxxx

I tried to read the blockfile_000000 produced by Fabric, which is in the directory /var/hyperledger/production/ledgersData/chains/chains/mychannel of the peer node.
But I can't read it with a method like:
configtxgen -profile TwoOrgsChannel -inspectBlock ./channel-artifacts/blockfile_000000
the error is
[common/tools/configtxgen] main -> CRIT 004 Error on inspectBlock: Could not read block ./channel-artifacts/blockfile_000000
Using configtxlator:
configtxlator proto_decode --input ./channel-artifacts/blockfile_000000 --type common.Block
the error is
configtxlator: error: Error decoding: error unmarshaling: proto: can't skip unknown wire type 6 for common.Block
I know the blockfile is actually a chunk, i.e. a collection of blocks. How do I handle it?
configtxlator version
configtxlator:
Version: 1.2.0
Commit SHA: f6e72eb
Go version: go1.10
OS/Arch: linux/amd64
Any help would be greatly appreciated.
I used docker exec to get into the peer node and fetched a block with peer channel fetch, then read the block with configtxlator (a sketch of this fetch-and-decode flow is shown after the log excerpt below). But how do I read the transaction information?
Part of the log is (block 6):
"header": {
"data_hash": "kVFRQLFjY7+6l6QsL+jOgt5ICoCUlRG4VedgmBXv/mE=",
"number": "6",
"previous_hash": "GQ4w7x7MQB+Jvsa3neJcTNdU7aXdKVHySA7Va3SktOs="
},
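For reference, a minimal sketch of the fetch-and-decode flow described above (the orderer address and block number are illustrative assumptions):

# Fetch a single block (here block 6) from the channel via the peer CLI, then decode it to JSON.
peer channel fetch 6 block_6.pb -c mychannel -o orderer.example.com:7050
configtxlator proto_decode --input block_6.pb --type common.Block --output block_6.json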
There are APIs which can be used to query blocks for any given channel:
GetChainInfo returns the current block height for a given channel
GetBlockByNumber returns individual blocks by number (get the latest block height from the GetChainInfo API and work backwards from there)
All of the SDKs have methods to invoke these APIs (see the sketch below)
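For example, a hedged sketch of calling these query APIs through the qscc system chaincode from the peer CLI (channel name and block number are assumptions):

# GetChainInfo returns the current chain height for the channel.
peer chaincode query -C mychannel -n qscc -c '{"Args":["GetChainInfo","mychannel"]}'
# GetBlockByNumber returns the requested block as a serialized common.Block
# (decode it, e.g. with configtxlator proto_decode as shown in the question).
peer chaincode query -C mychannel -n qscc -c '{"Args":["GetBlockByNumber","mychannel","6"]}'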
configtxgen and configtxlator are meant for generating configuration transactions and translating configuration transactions into a readable format, respectively. Configuration transactions include creating a channel, updating anchor peers in the channel, setting the readers/writers of a channel, etc. They are not meant for the normal transactions that get stored in blockfile_xxx.
You can use Hyperledger Explorer to view your block data.

How to understand log files downloaded from App Engine?

What is the format of Google App Engine log files, as downloaded by appcfg.sh request_logs?
As far as I've been able to determine, the format of the log file is as follows:
CLIENT_IP_ADDRESS - USERNAME [DATE:TIME TIMEZONE] "METHOD URL HTTP/VERSION" RESPONSE_CODE ??? URL USER_AGENT
LOG_LEVEL:TIMESTAMP MESSAGE
:
LOG_LEVEL:TIMESTAMP MESSAGE
:
LOG_LEVEL:TIMESTAMP MESSAGE
:
Each of the indented lines is associated with the non-indented line above it - that's how you determine which request each of them relates to, I think. The solitary colons are used to separate one log message from the next.
The non-indented lines are in reverse chronological order, but the groups of indented lines below them are in chronological order.
A strange client IP address like 0.1.0.2 indicates a within-App-Engine post from your app to your app's task queue. Actually, though, you can also see that it is within-App-Engine by looking at the user agent for that request.
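For reference, a hedged example of the download command that produces logs in this format (the war directory and flag values are illustrative; --severity=0 is what pulls in the indented application log lines in addition to the request lines):

# Download the last 3 days of logs, including app-level messages at DEBUG and above.
appcfg.sh --num_days=3 --severity=0 request_logs ./war mylogs.txt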
