I'm following https://github.com/thomashoneyman/real-world-pact/ to deploy my contract on a local devnet.
I've updated the deployment script as follows:
const deployK = async () => {
  const detailArgs = ["--local", "k-contract-details"];
  const contractDetails = await parseArgs(detailArgs).then(runRequest);
  if (contractDetails.status === "failure") {
    console.log(
      "K contract not found on local Chainweb node. Deploying contract..."
    );
    const deployArgs = [
      "--send",
      "deploy-k-contract",
      "--signers",
      "kazora",
    ];
    const deployResult = await parseArgs(deployArgs).then(runRequest);
    if (deployResult.status === "success") {
      console.log(`Deployed! Cost: ${deployResult.gas} gas.`);
    } else {
      throw new Error(
        `Failed to deploy contract: ${JSON.stringify(
          deployResult.error,
          null,
          2
        )}`
      );
    }
  }
};
The deploy-k-contract.yaml file is:
# This YAML file describes a transaction that, when executed, will deploy the
# faucet contract to Chainweb.
#
# To execute this request (you must have funded the faucet account):
# faucet-request --send deploy-faucet-contract --signers k
#
# Alternately, to fund the faucet account _and_ deploy the contract:
# faucet-deploy
networkId: "development"
type: "exec"
# To deploy our contract we need to send its entire contents to Chainweb as a
# transaction. When a Chainweb node receives a module it will attempt to
# register it in the given namespace.
codeFile: "../../k.pact"
# The 'data' key is for JSON data we want to include with our transaction. As a
# general rule, any use of (read-msg) or (read-keyset) in your contract
# indicates data that must be included here.
#
# Our contract reads the transaction data twice:
# - (read-keyset "k-keyset")
# - (read-msg "upgrade")
data:
k-admin-keyset:
# On deployment, our contract will register a new keyset on Chainweb named
# 'k-keyset. We'll use this keyset to govern the faucet
# contract, which means the contract can only be upgraded by this keyset.
#
# We want the contract to be controlled by our faucet account, which means
# our keyset should assert that the k.yaml keys were used to
# sign the transaction. The public key below is from the k.yaml
# key pair file.
keys:
- "1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4"
pred: "keys-all"
# Next, our contract looks for an 'upgrade' key to determine whether it should
# initialize data (for example, whether it should create tables). This request
# deploys the contract, so we'll set this to false.
upgrade: false
signers:
# We need the Goliath faucet account to sign the transaction, because we want
# the faucet to deploy the contract. This is the Goliath faucet public key. It
# should match the keyset above.
- public: "1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4"
publicMeta:
# The faucet contract only works on chain 0, so that's where we'll deploy it.
chainId: "0"
# The contract should be deployed by the faucet account, which means the
# faucet account is responsible for paying the gas for this transaction. You
# must have used the 'fund-faucet-account.yaml' request to fund the faucet
# account before you can use this deployment request file.
sender: "k"
# To determine the gas limit for most requests you can simply execute the Pact
# code in the REPL, use (env-gaslog) to measure consumption, and round up the
# result. However, deployment is different; you can't simply measure a call to
# (load "faucet.pact") as it will provide an inaccurate measure.
#
# Instead, I first set the gas limit to 150000 (the maximum) and deploy the
# contract to our local simulation Chainweb. Then, I recorded the gas
# consumption that the node reported and round it up.
gasLimit: 65000
gasPrice: 0.0000001
ttl: 600
The deployment fails complaining about the validate-principal function, even though it's a Pact built-in function:
https://pact-language.readthedocs.io/en/stable/pact-functions.html?highlight=validate-principal#validate-principal
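For reference, here is the kind of call that fails to resolve, as a minimal REPL sketch (the keyset name, key, and account below are just examples, not my actual contract code):

;; illustrative only: validate that a "k:" account name matches its keyset guard
(env-data { "ks": ["1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4"] })
(validate-principal
  (read-keyset "ks")
  "k:1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4")
;; expected to return true, but the deployment instead fails with "Cannot resolve"

Here is the full output from running the deploy script: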
./kazora/run-deploy-contract.js
-----
executing 'local' request: kazora-details.yaml
-----
Kazora account 1b54c9eac0047b10f7f6a6f270f7156fb519ef02c9bb96dc28a4e50c48a468f4 found with 999.9935 in funds.
-----
executing 'local' request: kazora-contract-details.yaml
-----
Kazora contract not found on local Chainweb node. Deploying contract...
-----
executing 'send' request: deploy-kazora-contract.yaml
-----
Received request key: vm4O3YKKj7Ea9nR8D8nPSHuVI7OtHPJzQjk7RA7XZLI
Sending POST request with request key to /poll endpoint.
May take up to 1 minute and 30 seconds to be mined into a block.
Polling every 5 seconds until the transaction has been processed...
Waiting (15 seconds elapsed)...
Waiting (30 seconds elapsed)...
Waiting (45 seconds elapsed)...
/home/ripple/git/web3/kazora/run-deploy-contract.js:66
throw new Error(
^
Error: Failed to deploy contract: {
"callStack": [
"<interactive>:0:102: module"
],
"type": "EvalError",
"message": "Cannot resolve \"validate-principal\"",
"info": "<interactive>:0:8052"
}
at deployKazora (/home/ripple/git/web3/kazora/run-deploy-contract.js:66:13)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async main (/home/ripple/git/web3/kazora/run-deploy-contract.js:81:3)
Make sure you are using version 4.3.1 of Pact or later.
The built-in function was only added in that release:
https://github.com/kadena-io/pact/releases/tag/v4.3.1
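To check which Pact your local tooling uses (note that the devnet node bundles its own Pact, so its image may also need to be recent enough to include 4.3.1):

pact --version
# needs to report 4.3.1 or later for validate-principal to resolve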
This is my first ever question.
I'm using React/Vite and Rails 7 to build a firehouse management web app. I originally set up Rails as an API with --api. Right now I can log in, but when the user clicks Home or any other link on the page, I lose the authorization (or that's what I'm thinking). I'm using the bcrypt gem. The console.log(user) on my other pages returns null, but on the initial login it returns the user object. Now I have another issue with logging in altogether.
I'm getting a 422 'Unprocessable Entity' where my request.base_url doesn't match localhost:3000. I'm assuming that's because Vite is running on 5173?
Here is the error
{status: 422, error: 'Unprocessable Entity', exception: "#<ActionController::InvalidAuthenticityToken: HTTP Origin header (http://127.0.0.1:5173) didn't match request.base_url (http://localhost:3000)>", traces: {…}}
puma.rb
# Specifies the `port` that Puma will listen on to receive requests; default is 3000.
#
port ENV.fetch("PORT") { 3000 }
# Specifies the `environment` that Puma will run in.
#
environment ENV.fetch("RAILS_ENV") { "development" }
# Specifies the `pidfile` that Puma will use.
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }
I tried to convert Rails to the full framework because I thought it was something with the session and cookies. I added a cookie serializer and a session_store.
application.rb
class Application < Rails::Application
  # Adding cookies and session middleware
  config.middleware.use ActionDispatch::Cookies
  config.middleware.use ActionDispatch::Session::CookieStore
  config.api_only = false

  # Initialize configuration defaults for originally generated Rails version.
  config.load_defaults 7.0

  # This will allow any origin to make requests to any resource on your server, using any HTTP method.
  config.middleware.insert_before 0, Rack::Cors do
    allow do
      origins '*'
      resource '*',
        headers: :any,
        methods: %i[get post put patch delete options head]
    end
  end
end
cookie_serializer.rb
Rails.application.config.action_dispatch.cookies_serializer = :hybrid
session_store.rb
if Rails.env === 'production'
  Rails.application.config.session_store :cookie_store, key: '_fire-sphere', domain: '_fire-sphere-json-api'
else
  Rails.application.config.session_store :cookie_store, key: '_fire-sphere'
end
Here is my application_controller.rb
class ApplicationController < ActionController::Base
  include ActionController::Cookies

  rescue_from ActiveRecord::RecordNotFound, with: :render_not_found
  rescue_from ActiveRecord::RecordInvalid, with: :render_unprocessable_entity

  def authorized
    return render json: {error: "Not Authorized"}, status: :unauthorized unless session.include? :current_user
  end

  private

  def render_unprocessable_entity(invalid)
    render json: {errors: invalid.record.errors.full_messages}, status: :unprocessable_entity
  end

  def render_not_found(error)
    # byebug
    render json: {error: "#{error.model} Not Found"}, status: :not_found
  end
end
show method in users_controller.rb
def show
  # using session to find user in question. sessions are in user browser
  # if session for user currently happening, set our user to that user and render json
  # byebug
  current_user = User.find_by(id: session[:current_user])
  render json: current_user
end
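For context, the login action is meant to store the user id in the session along these lines (a simplified sketch, not my exact controller code; assumes has_secure_password from bcrypt):

def create
  user = User.find_by(username: params[:username])
  if user&.authenticate(params[:password])
    # this is the value #show reads back via session[:current_user]
    session[:current_user] = user.id
    render json: user
  else
    render json: { error: "Invalid credentials" }, status: :unauthorized
  end
end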
I think somehow the user isn't getting stored in the session. I was able to check the params on my initial problem and the user was in there, but not when I navigated away. I think I've changed something somewhere and caused a whole other problem now. Thank you for taking a look! I hope it is something simple.
How do I deploy a smart contract to the testnet or mainnet WITHOUT the Chainweaver web UI? I know I need a YAML file for that, but what do I do with it, and where exactly do I send it?
Do I need to run a Pact server, the Chainweb API, or something else? I couldn't find any guide for this.
Step 0: Install the Prerequisites
Install Pact
Step 1: Create the Pact Module
We will be deploying the following Pact module. For simplicity's sake, the Pact code we are deploying does not use the transaction's data field (read-keyset is one Pact function that makes use of this field); otherwise, the accompanying YAML file would have to change. We also assume that this Pact code is saved as test.pact.
(namespace 'free)
(module someModuleName AUTONOMOUS
  (defcap AUTONOMOUS ()
    true)
  (defun dummy ()
    (+ 1 2)
  )
)
Step 2: Create the YAML file
The following YAML file will be used along with pact -a to sign and produce the escaped JSON needed to submit a transaction to Testnet.
codeFile: /Users/linda.ortega.cordoves/pact/test.pact
networkId: testnet04
publicMeta:
  chainId: "0"
  gasLimit: 1000
  ttl: 28000
  creationTime: 1585056536
  sender: "testing"
  gasPrice: 0.00001
keyPairs:
  - public: 1d877a7b4524b6724a6ae708cf9ea7396d6ee9d17b10098b7793800177669c1d
    secret: 33fcd94b8a42057bd4e3190f8983e3a73ec96c3f60df95c9e2aa3f13602c714f
nonce: step02
This file makes a couple of assumptions that might change depending on your specific implementation:
The full path of the pact we want to upload is: /Users/linda.ortega.cordoves/pact/test.pact
We want to submit a transaction to Testnet, whose network id is testnet04
We want to submit to the zero'th chain on Testnet, which has a chain id of "0"
That the current creation time in UNIX Epoch time is 1585056536 seconds. This value MUST CHANGE, so calculate it by either using an online epoch-time converter or running date +%s on the command line.
That "testing" is the account paying for gas (aka the "sender") on the Testnet network. To create a Testnet account and fund it with some coins, navigate to the Testnet Coin Faucet. You will need to have generated an ED25519 public-private key pair to use the faucet. You can use pact -g to generate this key pair (see the sketch after this list). Make sure to save it somewhere safe.
That the key pair specified in "keyPairs" corresponds to the key pair used to create the gas payer account, which in this example is "testing". This must change from the defaults provided.
That we saved this YAML file as /Users/linda.ortega.cordoves/pact/test.yaml.
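As a side note, generating the key pair mentioned above looks something like this (the output file name is just an example):

pact -g > testnet-keypair.yaml
cat testnet-keypair.yaml
# public: <64-character hex public key>
# secret: <64-character hex secret key>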
Step 3: Submit Transaction to Testnet
We will now submit the example pact module we created by hitting the /send endpoint of a Testnet node. In the command line, run the following command:
pact -a /Users/linda.ortega.cordoves/pact/test.yaml | curl -H "Content-Type: application/json" -d @- https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/send
Some of the assumptions we made when creating the YAML file become important here:
The network id must match the node endpoint we submit to. Since the network id we chose is testnet04, we must submit to /chainweb/0.0/testnet04/. And the node we submit to (in this case us1.testnet.chainweb.com) must have this network id.
The chain id must also match. We chose chain id of "0", so we must submit to /chain/0/.
That we saved the yaml file to /Users/linda.ortega.cordoves/pact/test.yaml.
If we submitted the transaction successfully we will see the following:
{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}
This means that our transaction was successfully added to the blockchain's mempool and is waiting to be mined. Make note of the request key returned from /send as we will use it when polling for the result of the transaction.
It is also possible that our transaction will fail node validation when we attempt to submit it. If this happens, you will receive a validation failure message instead of the request key.
Step 4: Verify the Result of the Transaction
We will now try to get the results of the transaction we submitted to the Testnet network by hitting the /poll endpoint. In the command line, run the following command:
curl -H "Content-Type: application/json" -d '{"requestKeys":["Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"]}' -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll
Again, we make a couple of assumptions in this step:
That the Testnet node we want to poll from is us1.testnet.chainweb.com.
That the network id is testnet04. Note that part of the endpoint is /chainweb/0.0/testnet04/.
That the chain id we are polling from is chain "0". Note that part of the endpoint is /chain/0/.
That the request key we are polling for is Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek.
If the transaction was successfully mined and thus added to the blockchain, then /poll will return the following JSON object:
{
  "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek": {
    "gas": 58,
    "result": {
      "status": "success",
      "data": "Loaded module free.linda-test, hash n0g99JhWnO2F7X7f8o_zcAiSHBAWS_QSAfn4yUaqpps"
    },
    "reqKey": "Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek",
    "logs": "0KzZQDJmEgnAKvPnO20UeGoE7KGCIN22nhjraeyp1aw",
    "metaData": {
      "blockTime": 1585056990071469,
      "prevBlockHash": "dIYmpjBQge9yw0Yzhn0Sau-wJFwsLOFBmGbV3_0xYeE",
      "blockHash": "yULpC5C-7tzRcc9sWm-f1bOC3JDvtxwT61hruW0aXrA",
      "blockHeight": 261712
    },
    "continuation": null,
    "txId": 266084
  }
}
Please note that it is possible for a transaction to fail at the Pact level but still get added to the blockchain, with gas charged. If this happens, the result.status field will be failure.
If a transaction has not been mined yet, /poll will return {}. Keep retrying until you receive the JSON object shown above.
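If you want to script the retries, here is a minimal shell sketch (same node, chain, and request key as above; the 15-second interval is arbitrary):

REQUEST_KEY="Vetli41gi_S4-dZlro0visI8QT15brHoPe4vxMmhdek"
until curl -s -H "Content-Type: application/json" \
    -d "{\"requestKeys\":[\"$REQUEST_KEY\"]}" \
    -X POST https://us1.testnet.chainweb.com/chainweb/0.0/testnet04/chain/0/pact/api/v1/poll \
    | grep -q "\"$REQUEST_KEY\""; do
  echo "Not mined yet, retrying in 15 seconds..."
  sleep 15
done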
source: https://gist.github.com/LindaOrtega/1c219f887d9782c6745dbd827bdbfb4d
When I look at the logs in the Google Log Viewer for my GAE project, I see that often the logs that I write myself in the code are assigned to the wrong request. Most of the time the log is assigned to the request directly after the request that produced the log entry.
As the root of every application log in GAE must be a request, this means that the wrong request is sometimes marked as an error: an earlier request produced the error, but the log entry is attached to the request after it.
I don't really do anything special; I use Ktor as my servlet and have an interceptor that creates a log entry when an exception occurs, before returning status 500.
I use Java logging via SLF4J with the Google Cloud Logging handler, but before that I used Logback via SLF4J and had the same problem.
The content of the logs itself is correct: the returned status of the request, the level of the log entry, the message, everything is OK.
I thought it might be because I use Kotlin and switch coroutine contexts during a single request, but in some cases the point where I write the log and the point where I send the response are right next to each other, so I'm not sure Kotlin has anything to do with it.
My logging.properties:
# To use this configuration, add to system properties : -Djava.util.logging.config.file="/path/to/file"
#
.level = INFO
# it is recommended that io.grpc and sun.net logging level is kept at INFO level,
# as both these packages are used by Stackdriver internals and can result in verbose / initialization problems.
io.grpc.netty.level=INFO
sun.net.level=INFO
handlers=com.google.cloud.logging.LoggingHandler
# default : java.log
com.google.cloud.logging.LoggingHandler.log=custom_log
# default : INFO
com.google.cloud.logging.LoggingHandler.level=INFO
# default : ERROR
com.google.cloud.logging.LoggingHandler.flushLevel=WARNING
# default : auto-detected, fallback "global"
#com.google.cloud.logging.LoggingHandler.resourceType=container
# custom formatter
com.google.cloud.logging.LoggingHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS %4$-6s %2$s %5$s%6$s%n
#optional enhancers (to add additional fields, labels)
#com.google.cloud.logging.LoggingHandler.enhancers=com.example.logging.jul.enhancers.ExampleEnhancer
My logging relevant dependencies:
implementation "org.slf4j:slf4j-jdk14:1.7.30"
implementation "com.google.cloud:google-cloud-logging:1.100.0"
An example logging call:
exception<Throwable> { e ->
    logger().error("Error", e)
    call.respondText(e.message ?: "", ContentType.Text.Plain, HttpStatusCode.InternalServerError)
}
with logger() being:
import org.slf4j.Logger
import org.slf4j.LoggerFactory
inline fun <reified T : Any> T.logger(): Logger = LoggerFactory.getLogger(T::class.java)
Edit:
An example of the log in the Google Cloud console: the first request has the query parameter GAID=cdda802e-fb9c-47ad-0794d394c913, but the error log for that request shows up under the request below it.
Background
In my database I have some uniqueness constraints. If the data breaks one of these constraints, I get an error message like Violation of UNIQUE KEY constraint.
I use tryCatch in my code, to capture this error and return a meaningful message to the user. So far so good.
However, if I try to run any new transaction on the server after having captured this error, I get another error message saying that I Cannot begin a nested transaction.
My findings
I traced the error down, and I figured that when dbRollback is called (either explicitly, or within withTransaction) one cannot submit any new dbBegin anymore (either explicitly or implicitly via dbWriteTable and friends).
What I need to get unstuck, is to run a dbCommit, to be allowed to run another dbBegin.
Looking at the code of dbCommit and dbRollback I see that in the former case
setAutoCommit is set to true, which signals dbBegin that we are not nesting transactions. This is not the case for dbRollback:
getMethod("dbCommit", "SQLServerConnection")
# Method Definition:
#
# function (conn, ...)
# {
# rJava::.jcall(conn@jc, "V", "commit")
# rJava::.jcall(conn@jc, "V", "setAutoCommit", TRUE)
# TRUE
# }
# <environment: namespace:RSQLServer>
getMethod("dbRollback", "SQLServerConnection")
# Method Definition:
#
# function (conn, ...)
# {
# rJava::.jcall(conn@jc, "V", "rollback")
# TRUE
# }
# <environment: namespace:RSQLServer>
Question
So my question is: is this the intended behavior? That is, am I supposed to run a manual dbCommit after an operation was rolled back, or is this a bug?
Code
library(DBI)
library(RSQLServer)
db <- dbConnect(...)
dbBegin(db)
dbCommit(db)
dbBegin(db) # works
dbRollback(db)
dbBegin(db) # does not work
dbCommit(db) # my workaround
dbBegin(db) # works again
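The workaround can be wrapped in a small helper (just a sketch based on the behavior above; rollback_and_reset is a made-up name):

# Roll back, then immediately commit so setAutoCommit(TRUE) is restored
# and the next dbBegin() no longer reports a nested transaction.
rollback_and_reset <- function(conn) {
  dbRollback(conn)
  dbCommit(conn)  # workaround: resets autocommit, see the dbCommit method above
  invisible(TRUE)
}

rollback_and_reset(db)
dbBegin(db) # works again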
Is there a way to set all public links to have 'no-cache' in Google Cloud Storage?
I've seen solutions to use gsutil to set the "Cache-Control" upon file-upload, but I'm looking for a more permanent solution.
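For reference, the per-upload approach looks roughly like this (bucket and object names are placeholders):

# set Cache-Control at upload time
gsutil -h "Cache-Control:no-cache" cp index.html gs://my-bucket/
# or change it on objects that are already uploaded
gsutil setmeta -h "Cache-Control:no-cache" gs://my-bucket/index.html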
There was a conversation about providing a cache invalidation feature but I didn't quite follow the reasoning. Any explanations would be greatly appreciated!
it would be difficult to provide a cache invalidation feature because once served with a non-0 cache TTL any cache on the Internet (not just those under Google's control) is allowed (per HTTP spec) to cache the data
Thanks!
For a more permanent one-time-effort solution, with the current offerings on GCP, you can do this with Cloud Functions.
Create a new Function, set the Event type to "On (finalizing/creating) file in the selected bucket" - google.storage.object.finalize. Make sure to select the bucket you want this on. In the body of the function, set the cacheControl / Cache-Control attribute for the blob. The attribute name depends on the language. Here's my version in Python, using cache_control:
main.py (match the function name below to the Entry point):
from google.cloud import storage

def set_file_uncached(event, context):
    file = event  # auto-generated
    print(f"Processing file: {file=}")  # logging, if you want it
    storage_client = storage.Client()
    # we expect just one with that name
    blob = storage_client.bucket(file["bucket"]).get_blob(file["name"])
    if not blob:
        # in case the blob is deleted before this executes
        print("blob not found")
        return None
    blob.cache_control = "public, max-age=0"  # or whatever you need
    blob.patch()
requirements.txt
google-cloud-storage
From the logs: Function execution took 1712 ms, finished with status: 'ok'. This could have been faster, but I've set the minimum to 0 instances so it needs to spin up for each upload. Depending on your usage and cost constraints, you can set it to 1 or something higher.
Other settings:
Retry on failure: No/False
Region: [wherever your bucket is]
Memory allocated: 128 MB (smallest available currently)
Timeout: 5 seconds (smallest available currently, function shouldn't take longer)
Minimum instances: 0
Maximum instances: 1
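For completeness, the same function can also be deployed from the CLI along these lines (a sketch assuming a 1st-gen function with the settings above; replace the bucket and region placeholders):

gcloud functions deploy set_file_uncached \
  --runtime python310 \
  --entry-point set_file_uncached \
  --trigger-event google.storage.object.finalize \
  --trigger-resource YOUR_BUCKET \
  --memory 128MB \
  --timeout 5s \
  --max-instances 1 \
  --region YOUR_BUCKET_REGION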