Format does not accept Amount:Decimal - pact-lang

I was working on a simple payment contract and noticed that I was receiving the warning:
We can only analyze calls to format formatting {string,integer,bool}
(not amount)
Below is my code. I realized that if I remove the amount parameter at the bottom of my code I no longer receive the warning. Is there a way to adjust it?
(defun pay (from:string to:string amount:decimal)
  (with-read payments-table from { "balance":= from-bal, "keyset":= keyset }
    (enforce-keyset keyset)
    (with-read payments-table to { "balance":= to-bal }
      (enforce (> amount 0.0) "Negative Transaction Amount")
      (enforce (>= from-bal amount) "Insufficient Funds")
      (update payments-table from
        { "balance": (- from-bal amount) })
      (update payments-table to
        { "balance": (+ to-bal amount) })
      (format "{} paid {}" [from to]))))

The Pact property checking system doesn't currently support analysis of formatting decimals. Your example as written should actually be okay, but the simple payment example includes this line: (format "{} paid {} {}" [from to amount]), where amount is a decimal.
If you need to check properties of code like this, the easiest way is to use integer instead of decimal, since we can analyze the formatting of integers.
We can't currently analyze formatting of decimals for a technical reason that should be fixable.

Related

How do I create a KDA wallet guard that will allow for an arbitrary number of people to withdraw from it, if they meet specific conditions?

Question: How do I create a wallet guard that will allow for an arbitrary number of people to withdraw from it, if they meet specific conditions?
I am trying to make a wallet with a guard that lets a user withdraw a specific amount from it depending on how long they've owned an NFT, among other factors. Right now I'm doing the most basic check: seeing if the "recipient" address passed into the claim function is the owner of a specific NFT. If they own that NFT, they can withdraw as much as they want.
(defcap WITHDRAW (recipient:string nft-id:string)
  (with-read mledger nft-id
    { 'owner-address := owner-address }
    (enforce (= recipient owner-address) "not the owner of this NFT"))
  (compose-capability (BANK_DEBIT)))

(defun require-WITHDRAW (recipient:string nft-id:string)
  (require-capability (WITHDRAW recipient nft-id)))

(defun create-WITHDRAW-guard (recipient:string nft-id:string)
  (create-user-guard (require-WITHDRAW recipient nft-id)))

(defun create-simple-user-guard (funder:string nft-id:string BANK_KDA_ACCT:string amount:decimal recipient:string)
  (coin.transfer-create funder BANK_KDA_ACCT
    (create-WITHDRAW-guard recipient nft-id) amount))
With my current code, only the very first inputs that I pass into (create-simple-user-guard) impact who can withdraw, but I do not know how to allow the guard to accept many different recipients and NFT-ids. Any advice would be appreciated.
I'm following this "tutorial" https://medium.com/kadena-io/deprecation-notice-for-module-guards-and-pact-guards-2efbf64f488f but it loses detail once it gets to making more robust debit capabilities.
Assuming that your nft-id can reference a collection, and not just an individual NFT, this code should work to allow any individual who owns one to claim something from the contract.
(defcap WITHDRAW (nft-id:string)
  (with-read mledger nft-id
    { 'owner-address := owner-address }
    (enforce (= (read-msg "recipient") owner-address) "not the owner of this NFT"))
  (compose-capability (BANK_DEBIT)))

(defun require-WITHDRAW (nft-id:string)
  (require-capability (WITHDRAW nft-id)))

(defun create-WITHDRAW-guard (nft-id:string)
  (create-user-guard (require-WITHDRAW nft-id)))

(defun create-simple-user-guard (funder:string nft-id:string BANK_KDA_ACCT:string amount:decimal)
  (coin.transfer-create funder BANK_KDA_ACCT
    (create-WITHDRAW-guard nft-id) amount))
Keep in mind that this code requires you to pass in a recipient into the env data of the transaction in this manner:
{ "recipient": "k:abcd..." }
Hope this helps!
Thanks @luzzotica for your answer! It was not the perfect solution for my use-case, but it did help guide me to where I needed to be.
Here's a snippet of my contract on the testnet, if you want to see the whole thing it is called free.simple-user-guard
Basically, what I needed to do was create a wallet that is guarded by the BANK_DEBIT capability, which always returns true. From there I can use regular capabilities to determine ownership of the NFTs. The tutorial linked in my question gave the answer; I just didn't understand it without further discussion. Note: this code has only one restriction, namely that the calling wallet must own an NFT. It does not restrict how much can be taken from the wallet; additional capabilities would need to be implemented for that.
(defcap WITHDRAW (recipient:string nft-id:string)
  (with-read mledger nft-id
    { 'owner-address := owner-address }
    (enforce (= recipient owner-address) "not the owner of this NFT"))
  (compose-capability (BANK_DEBIT))
  (compose-capability (ACCOUNT_GUARD recipient)))

(defun create-simple-user-guard (funder:string BANK_KDA_ACCT:string amount:decimal)
  (coin.transfer-create funder BANK_KDA_ACCT
    (create-BANK_DEBIT-guard) amount))

(defun claim (recipient:string amount:decimal BANK_KDA_ACCT:string nft-id:string)
  @doc "allows a user to withdraw x amount"
  (with-capability (WITHDRAW recipient nft-id)
    (coin.transfer BANK_KDA_ACCT recipient amount)))

;; Capability user guard: capability predicate function
(defun require-BANK_DEBIT ()
  (require-capability (BANK_DEBIT)))

;; Capability user guard: guard constructor
(defun create-BANK_DEBIT-guard ()
  (create-user-guard (require-BANK_DEBIT)))

How do you estimate the amount of gas required for a smart contract invocation?

I want to send a transaction on the RSK network, and I get this message in the logs: Not enough gas for transaction execution.
I got the gas limit parameter from my testing environment, using web3.eth.estimateGas.
RSK nodes have a JSON-RPC for eth_estimateGas,
which is the most reliable way to perform gas estimations.
You can do this from the terminal using curl:
curl \
-X POST \
-H "Content-Type:application/json" \
--data '{"jsonrpc":"2.0","method":"eth_estimateGas","params":[{"from": "0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c", "to": "0x560e6c06deb84dfa84dac14ec08ed093bdd1cb2c", "gas": "0x76c0", "gasPrice": "0x3938700", "value": "0x9184e72a", "data": "" }],"id":1}' \
http://localhost:4444
{"jsonrpc":"2.0","id":1,"result":"0x5208"}
Alternatively, using web3.js:
web3.eth.estimateGas({"to": "0x391ec8a27d29a42c7601651d2f38b1e1895e27a1", "data": "0xe26e496319a16c8ccae126f4aac7e3010123927a4739288cd1ace12feafae9a2"})
23176
While this is the same JSON-RPC method found in geth (Ethereum) and other Ethereum-compatible nodes, note that the gas calculations in RSK and Ethereum are different, so their implementations differ. For example, the prices of certain VM opcodes differ. Another notable difference related to gas estimation is that Ethereum implements EIP-150, whereas RSK does not. This means that the 1/64 reduction in gas estimation does not apply to RSK. (The detailed implications of this for gas estimation are perhaps beyond the scope of this question.)
This means that you should expect incorrect values when running against ganache-cli (previously testrpc), which is used by default in common developer tools such as Truffle. To get the correct gas, using the RSK-specific calculations, the best way is to invoke eth_estimateGas against RSK Regtest for local development and testing. In other scenarios you may also use RSK Testnet and Mainnet.
The following scenarios are not directly related to your question, but are also good to know:
When invoking smart contract functions that have the pure or view modifiers, no gas (and therefore no gas estimation) is necessary.
When performing certain transactions that have a defined invariant gas price, you may simply use that as a hard-coded constant. For example, for a transfer of the native currency (RBTC in this case), the invariant gas price is 21000. This assumes that no data (sometimes referred to as a "message") was sent with the transaction.
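For the invariant-cost case in the last paragraph, the total transaction fee is simply gas × gas price. A minimal Java sketch; the gas price here is the 0x3938700 value from the curl example above, not a live network value:

```java
import java.math.BigInteger;

public class GasFee {
    // 21000 is the invariant gas cost of a plain value transfer (no data field)
    static final BigInteger TRANSFER_GAS = BigInteger.valueOf(21_000);

    // fee in wei = gas used * gas price (wei per gas unit)
    static BigInteger feeInWei(BigInteger gasUsed, BigInteger gasPriceWei) {
        return gasUsed.multiply(gasPriceWei);
    }

    public static void main(String[] args) {
        // 0x3938700 = 60,000,000 wei per gas unit, as in the curl example above
        BigInteger gasPrice = new BigInteger("3938700", 16);
        System.out.println(feeInWei(TRANSFER_GAS, gasPrice)); // 1260000000000 wei
    }
}
```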

Sell specifying the "total" instead of the "amount" parameter

According to the "Place sell order" documentation, you can either use amount as parameter, or total:
amount string Required Sell amount
total string Optional Sell amount with fees (alternative to amount)
Using the Ruby client, I made a call
client.sell(account.id,{
"total" => some_value,
"currency" => "ETH",
"payment_method" => fiat_payment_method_id
});
to sell some ETH back to USD. I got the following error message
.../lib/coinbase/wallet/api_client.rb:402:in `block in sell': Missing parameter: amount (Coinbase::Wallet::APIError)
Am I misreading the documentation? Do I need to specify both amount and total, and will the server use the total and ignore the amount?
Or is the documentation just wrong?
I didn't realise the error was being thrown by the Ruby code and not by the Coinbase server.
It looks like the client code enforces the presence of the amount parameter.
I submitted a pull request to get this fixed. Testing with my own locally modified version of their gem indicates that the server works just fine with "total" instead of "amount".

How much trust should I put in the validity of retrieved data from database?

Another way to ask my question is: "Should I keep the data types coming from the database as simple and raw as I would ask for them from my REST endpoint?"
Imagine this case class that I want to store in the database as a row:
case class Product(id: UUID, name: String, price: BigInt)
It clearly isn't what it claims to be: the type signatures of name and price are a lie.
So what we do is create custom data types that better represent what things are, such as (for the sake of simplicity, imagine our only concern is the price data type):
case class Price(value: BigInt) {
require(value > BigInt(0))
}
object Price {
def validate(amount: BigInt): Either[String,Price] =
Try(Price(amount)).toOption.toRight("invalid.price")
}
//As a result my Product class is now:
case class Product(id: UUID, name: String, price: Price)
So now the process of taking user input for product data would look like this:
//this class would be parsed from e.g. a form:
case class ProductInputData(name: String, price: BigInt)
def create(input: ProductInputData) = {
for {
validPrice <- Price.validate(input.price)
} yield productsRepo.insert(
Product(id = UUID.randomUUID, name = input.name, price = ???)
)
}
Look at the triple question marks (???). This is my main point of concern from an entire application architecture perspective. If I had the ability to store a column as Price in the database (for example, Slick supports these custom data types), then I would have the option to store the price as either price: BigInt = validPrice.value or price: Price = validPrice.
I see so many pros and cons in both of these decisions that I can't decide.
Here are the arguments that I see supporting each choice:
Store data as simple database types (i.e. BigInt) because:
Performance: a simple assertion of x > 0 on the creation of Price is trivial, but imagine you want to validate a custom Email type with a complex regex. It would be detrimental upon retrieval of collections.
Tolerance against corruption: if a BigInt is inserted as a negative value, it wouldn't explode in your face every time your application tried to simply read the column and throw it out onto the user interface. It would, however, cause problems if it got retrieved and then involved in some domain-layer processing such as a purchase.
Store data as its domain-rich type (i.e. Price) because:
No implicit reasoning and trust: other methods somewhere else in the system would need the price to be valid. For example:
//two terrible variations of a calculateDiscount method:
//this version simply trusts that price is already valid and came from db:
def calculateDiscount(price: BigInt): BigInt = {
//apply some positive coefficient to price and hopefully get a positive
//number from it and if it's not positive because price is not positive then
//it'll explode in your face.
}
//this version is even worse. It does retain function totality and purity,
//but the unforgivable culture it encourages is the kind of defensive and
//paranoid programming that causes every developer to write guard
//expressions performing duplicated validation all over!
def calculateDiscount(price: BigInt): Option[BigInt] = {
if (price <= BigInt(0))
None
else
Some{
//Do safe processing
}
}
//ideally you want it to look like this:
def calculateDiscount(price: Price): Price
No constant conversion of domain types to simple types and vice versa: for representation, storage, the domain layer and such, you simply have one representation in the system to rule them all.
The source of all this mess that I see is the database. If the data were coming from the user it'd be easy: you simply never trust it to be valid. You ask for simple data types, cast them to domain types with validation, and then proceed. But not with the db. Does the modern layered architecture address this issue in some definitive, or at least mitigating, way?
Protect the integrity of the database. Just as you would protect the integrity of the internal state of an object.
Trust the database. It doesn't make sense to check and re-check what has already been checked going in.
Use domain objects for as long as you can. Wait till the very last moment to give them up (raw JDBC code or right before the data is rendered).
Don't tolerate corrupt data. If the data is corrupt, the application should crash. Otherwise it's likely to produce more corrupt data.
The overhead of the require call when retrieving from the DB is negligible. If you really think it's an issue, provide 2 constructors, one for the data coming from the user (performs validation) and one that assumes the data is good (meant to be used by the database code).
I love exceptions when they point to a bug (data corruption because of insufficient validation on the way in).
That said, I regularly leave requires in code to help catch bugs in more complex validation (maybe data coming from multiple tables combined in some invalid way). The system still crashes (as it should), but I get a better error message.
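The two-constructor idea from the answer above can be sketched as follows. This is an illustrative Java analogue of the Scala Price type from the question; the factory names are hypothetical, not part of the original code:

```java
import java.math.BigInteger;

// Hypothetical domain type with two construction paths: one that validates
// (for user input) and one that trusts the data (for the database layer).
public final class Price {
    private final BigInteger value;

    private Price(BigInteger value) {
        this.value = value;
    }

    // For data coming from the user: validate on the way in.
    public static Price fromUserInput(BigInteger value) {
        if (value.signum() <= 0) {
            throw new IllegalArgumentException("invalid.price");
        }
        return new Price(value);
    }

    // For data coming from the database: assume it was validated on the way in.
    // If this value is corrupt, downstream code should crash and point at a
    // bug in the write path, per the advice above.
    public static Price fromDatabase(BigInteger value) {
        return new Price(value);
    }

    public BigInteger value() {
        return value;
    }

    public static void main(String[] args) {
        System.out.println(fromUserInput(new BigInteger("42")).value()); // 42
    }
}
```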

DbUnit: Setting tolerance value - compare SQL Server vs SAP HANA

Most important
DB Unit returns a difference for a double value in row 78:
Exception in thread "main" junit.framework.ComparisonFailure: value (table=dataset, row=78, col=DirtyValue) expected:<4901232.27291950[7]> but was:<4901232.27291950[6]>
So I assume that SQL Server returns 4901232.272919507 while HANA returns 4901232.272919506
(Based on the answer to JUnit assertEquals Changes String)
Then I tried to set the tolerated delta according to the FAQ: Is there an equivalent to JUnit's assertEquals(double expected, double actual, double delta) to define a tolerance level when comparing numeric values?
But I do still get the same error - any ideas?
Additional information
Maybe this is the reason?
[main] WARN org.dbunit.dataset.AbstractTableMetaData - Potential problem found: The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database 'Microsoft SQL Server' (e.g. some datatypes may not be supported properly). In rare cases you might see this message because the list of supported database products is incomplete (list=[derby]). If so please request a java-class update via the forums.If you are using your own IDataTypeFactory extending DefaultDataTypeFactory, ensure that you override getValidDbProducts() to specify the supported database products.
[main] WARN org.dbunit.dataset.AbstractTableMetaData - Potential problem found: The configured data type factory 'class org.dbunit.dataset.datatype.DefaultDataTypeFactory' might cause problems with the current database 'HDB' (e.g. some datatypes may not be supported properly). In rare cases you might see this message because the list of supported database products is incomplete (list=[derby]). If so please request a java-class update via the forums.If you are using your own IDataTypeFactory extending DefaultDataTypeFactory, ensure that you override getValidDbProducts() to specify the supported database products.
DbUnit Version 2.5.4
DirtyValue is calculated from 3 double values in both systems
SQL Server
SELECT TypeOfGroup, Segment, Portfolio, UniqueID, JobId, DirtyValue, PosUnits, FX_RATE, THEO_Value
FROM DATASET_PL
order by JobId, TypeOfGroup, Segment, Portfolio, UniqueID COLLATE Latin1_General_bin
HANA
SELECT "TypeOfGroup", "Segment", "Portfolio", "UniqueID", "JobId", "DirtyValue", Pos_Units as "PosUnits", FX_RATE, THEO_Value as "THEO_Value"
FROM "_SYS_BIC"."meag.app.h4q.metadata.dataset.pnl/06_COMPARE_CUBES_AND_CALC_ATTR"
order by "JobId", "TypeOfGroup", "Segment", "Portfolio", "UniqueID"
Work-around
Use a DiffCollectingFailureHandler and handle the differences there (note that the handler must be passed to the assertion):
DiffCollectingFailureHandler diffHandler = new DiffCollectingFailureHandler();
Assertion.assertEquals(expectedTable, actualTable, diffHandler);
List<Difference> diffList = diffHandler.getDiffList();
for (Difference diff: diffList) {
if (diff.getColumnName().equals("DirtyValue")) {
double actual = (double) diff.getActualValue();
double expected = (double) diff.getExpectedValue();
if (Math.abs(actual - expected) > 0.00001) {
logDiff(diff);
} else {
logDebugDiff(diff);
}
} else {
logDiff(diff);
}
}
private void logDiff(Difference diff) {
logger.error(String.format("Diff found in row:%s, col:%s expected:%s, actual:%s", diff.getRowIndex(), diff.getColumnName(), diff.getExpectedValue(), diff.getActualValue()));
}
private void logDebugDiff(Difference diff) {
logger.debug(String.format("Diff found in row:%s, col:%s expected:%s, actual:%s", diff.getRowIndex(), diff.getColumnName(), diff.getExpectedValue(), diff.getActualValue()));
}
The question was "Any ideas?", so maybe it helps to understand why the difference occurs.
HANA truncates if needed, see "HANA SQL and System Views Reference", numeric types. In HANA the following Statement results in 123.45:
select cast( '123.456' as decimal(6,2)) from dummy;
SQL Server rounds if needed, at least if the target data type is numeric; see e.g. "Truncating and rounding results".
The same SQL statement as above results in 123.46 in SQL Server.
The SQL standard seems to leave open whether to round or to truncate; see this answer on SO.
I am not aware of any settings that change the rounding behavior in HANA, but maybe there is.
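The truncate-versus-round behaviour described above can be reproduced in plain Java with BigDecimal, independent of either database; a sketch mirroring the cast of '123.456' to decimal(6,2):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class CastBehavior {
    public static void main(String[] args) {
        BigDecimal value = new BigDecimal("123.456");

        // HANA-style cast to decimal(6,2): truncates the extra digit
        System.out.println(value.setScale(2, RoundingMode.DOWN));    // 123.45

        // SQL-Server-style cast to numeric(6,2): rounds the extra digit
        System.out.println(value.setScale(2, RoundingMode.HALF_UP)); // 123.46
    }
}
```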
