How do you require 2 or more keys to sign a transaction and verify the keyset?

When building keysets there are three built-in predicate options: keys-all, keys-2, and keys-any. I wanted to create a keyset that required 3 keys to sign the transaction, but I was confused about how to force the keyset to require all three keys in the transaction's signature set.

The validity of the keyset is checked through the predicate (pred).
The predicate can be any function that is fed two arguments: the number of keys in the keyset and the count of matching signatures. If the predicate returns true, the transaction can continue. You can construct custom predicates matching this function signature:
(defun keys-3:bool (count:integer matched:integer)
  (>= matched 3))
And then create a keyset using that custom predicate:
(env-data {
  "my-keyset": { "keys": ["bob", "alice", "babena"],
                 "pred": "my-module.keys-3" }
})
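For a quick check in the repl, here is a minimal sketch of the whole flow. It assumes a module named my-module defining keys-3 has already been loaded; env-keys simulates the keys that signed the transaction, and enforce-keyset fails unless the predicate is satisfied:
(env-keys ["bob" "alice" "babena"])
(enforce-keyset (read-keyset "my-keyset"))   ; succeeds: all 3 keys signed

(env-keys ["bob" "alice"])
(expect-failure "only 2 of 3 keys signed"
  (enforce-keyset (read-keyset "my-keyset")))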

Related

How does a keyset get enforced on a module?

The Pact documentation describes governing a module with a keyset. Specifically, given the module (module foo 'foo-keyset ...), if someone tries to upgrade the contract, then 'foo-keyset will be "enforced on the transaction signature set."
I am trying to understand what it means to enforce 'foo-keyset. How does this string relate to one or more specific signatures in the signature set? Does the string mean anything, or could I name it anything I want? If the string is arbitrary, then what exactly is being enforced?
I tried to resolve this with the Pact documentation, to see whether there was a way to relate this keyset string to one or more signatures in the keyset. The docs describe a keyset as a list of public keys plus a keyset predicate, and give some JSON examples such as these two:
{ "keys": ["abc6bab9b88e08d","fe04ddd404feac2"], "pred": "keys-2" }
{ "keys": ["abc6bab9b88e08d","fe04ddd404feac2"], "pred": "my-module.custom-pred" }
I thought perhaps the keyset predicate could be used to relate 'foo-keyset to a particular key or set of keys in the keyset, like requiring a predicate my-module.foo-keyset, but the documentation for keyset predicates seems to focus on counting a particular number of matches from a keyset.
A keyset reference is a unique keyset defined in the environment, either the repl or the blockchain.
In repl you can define a keyset like this:
(env-data {
  "bob-guard":   { "keys": ["bob"],   "pred": "keys-all" },
  "alice-guard": { "keys": ["alice"], "pred": "keys-all" }
})
On the blockchain you need to choose a unique name and define it with define-keyset:
(define-keyset 'my-unique-keyset-name (read-keyset "keyset"))
read-keyset reads the keyset from the "keyset" field supplied in the transaction data.
If you define a governance keyset for your module, the keyset will be validated upon upgrade, i.e. when a module with the same name is redeployed. The validity of the keyset is checked through the predicate (pred).
The predicate can be any function that is fed two arguments: the number of keys in the keyset and the count of matching signatures. If the predicate returns true, the transaction can continue. You can construct custom predicates matching this function signature:
(defun keys-majority:bool (count:integer matched:integer)
  (>= matched (+ (/ count 2) 1)))
And then create a keyset using that custom predicate:
(env-data {
  "my-module-guard": { "keys": ["bob", "alice", "babena"],
                       "pred": "my-module.keys-majority" }
})
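Tying this back to module governance, a sketch of how the pieces fit together (module and keyset names are illustrative, and the module body is reduced to the predicate itself): define the keyset from that payload with define-keyset, then name it as the module's governance keyset. Redeploying a module with the same name will then enforce the keyset, i.e. run keys-majority against the keys that signed the upgrade transaction.
(define-keyset 'my-module-admin (read-keyset "my-module-guard"))

(module my-module 'my-module-admin
  "Upgrading this module enforces 'my-module-admin against the tx signature set."
  (defun keys-majority:bool (count:integer matched:integer)
    (>= matched (+ (/ count 2) 1))))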

Ruby: convert an array of Active Record objects into an array of hashes

I have a Person Active Record model with some fields like :name, :age, etc.
Person has a 1:1 relationship with a model called Account, where every person has an account.
I have some code that does:
Account.create!(person: current_person)
where current_person is a specific existing Person record.
Note: the accounts table has a person_id field, and both models have a has_one declaration for each other.
Now I believe we could do something like this for bulk creation:
Account.create!([{person: person3}, {person: person2}, ...])
I have an array of persons but am not sure of the best way to convert it into an array of hashes all having the same key.
Basically the reverse of Convert array of hashes to array is what I want to do.
Why not just loop over your array of objects?
[person1, person2].each{|person| Account.create!(person: person)}
But if for any reason any of the items you loop over fails Account.create!, you may be left in a bad state, so you may want to wrap this in an Active Record transaction.
ActiveRecord::Base.transaction do
[person1, person2].each{|person| Account.create!(person: person)}
end
The create method actually persists each hash individually, as shown in the source code, so it's probably not what you are looking for. Either way, the following code would do the job:
Account.create!(persons.map { |person| { person_id: person.id } })
If you need to create all the records in a single database operation and are using Rails 6+, you can use the insert_all method (note that it bypasses Active Record validations and callbacks):
Account.insert_all(persons.map { |person| { person_id: person.id } })
For earlier versions of Rails you should consider the activerecord-import gem.
# combination(1).to_a converts [1, 2, 3] into [[1], [2], [3]],
# i.e. one single-element row per id, which is what import expects here
Account.import [:person_id], persons.pluck(:id).combination(1).to_a

How does LevelDB handle sequence number in bloom filter?

I have read the source code of LevelDB. I found that it uses the internal key when calling AddKey() on the filter block. If we call Get() later, it constructs a lookup key using the last sequence number, and that key is passed to KeyMayMatch(). But the last sequence number is different from the sequence number used in AddKey(), so why does the bloom filter return the right result?
In RocksDB, to create the bloom filter you have to specify the number of bits per key. Although the internal key is a combination of the user key and the sequence number, the sequence number is stripped off while creating the filter, so it is not part of the key used to construct the bloom filter.
So when you call Get(), the given key is passed to KeyMayMatch(): if the bloom filter returns true, RocksDB scans the files to fetch the key if it is present (remember, a bloom filter can give false positives); if the bloom filter returns false, the key is not present in the DB.
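For reference, a minimal sketch of how that is configured through the RocksDB C++ API (exact signatures may vary between versions; the database path and the value of 10 bits per key are only illustrative):
#include "rocksdb/db.h"
#include "rocksdb/filter_policy.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::BlockBasedTableOptions table_options;
  // The bloom filter budget is given in bits per key (10 is a common choice,
  // roughly a 1% false-positive rate); only user keys are hashed into it.
  table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));

  rocksdb::Options options;
  options.create_if_missing = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  rocksdb::DB* db = nullptr;
  rocksdb::Status status = rocksdb::DB::Open(options, "/tmp/bloom_example", &db);
  delete db;  // db is nullptr if Open failed, so delete is safe either way
  return status.ok() ? 0 : 1;
}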
No, it is not quite the case that the raw internal key is what ends up in the filter when AddKey() is called on the filter block.
There is a wrapper class called InternalFilterPolicy on top of BloomFilterPolicy, which extracts the user key from the internal key and passes it to BloomFilterPolicy when creating the filter or testing a key match.
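A condensed paraphrase of that wrapper, based on LevelDB's db/dbformat.h and db/dbformat.cc (treat it as illustrative rather than verbatim; member names may differ slightly between versions):
// An internal key is: user_key | 8-byte trailer (sequence number and value type).
// Dropping the trailer recovers the user key.
inline Slice ExtractUserKey(const Slice& internal_key) {
  assert(internal_key.size() >= 8);
  return Slice(internal_key.data(), internal_key.size() - 8);
}

// The wrapper strips the trailer before delegating to the wrapped
// BloomFilterPolicy, so only user keys are hashed into the filter...
void InternalFilterPolicy::CreateFilter(const Slice* keys, int n,
                                        std::string* dst) const {
  Slice* mkey = const_cast<Slice*>(keys);
  for (int i = 0; i < n; i++) {
    mkey[i] = ExtractUserKey(keys[i]);  // user key only, no sequence number
  }
  user_policy_->CreateFilter(keys, n, dst);
}

// ...and the same stripping happens on the read path, so AddKey() and Get()
// agree on what was hashed even though their sequence numbers differ.
bool InternalFilterPolicy::KeyMayMatch(const Slice& key, const Slice& f) const {
  return user_policy_->KeyMayMatch(ExtractUserKey(key), f);
}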

Manipulating Output from an Array of Nested Hashes in Ruby

I've been pulling data from an API in JSON, and am currently stumbling over an elementary problem.
The data is on companies, like Google and Facebook, and is an array of hashes, like so:
[
{"id"=>"1", "properties"=>{"name"=>"Google", "stock_symbol"=>GOOG, "primary_role"=>"company"}},
{"id"=>"2", "properties"=>{"name"=>"Facebook", "stock_symbol"=>FB, "primary_role"=>"company"}}
]
Below are the operations I'd like to try:
For each company, print out the name, ID, and the stock symbol (i.e. "Google - 1 - GOOG" and "Facebook - 2 - FB")
Remove "primary role" key/value from Google and Facebook
Assign a new "industry" key/value for Google and Facebook
Any ideas?
I'm a beginner in Ruby, and I'm running into issues with some methods (e.g. undefined method errors) on arrays and hashes, as this looks to be an array OF hashes.
Thank you!
Ruby provides a couple of tools to help us comprehend arrays, hashes, and nested mixtures of both.
Assuming your data looks like this (I've added quotes around GOOG and FB):
data = [
{"id"=>"1", "properties"=>{"name"=>"Google", "stock_symbol"=>"GOOG", "primary_role"=>"company"}},
{"id"=>"2", "properties"=>{"name"=>"Facebook", "stock_symbol"=>"FB", "primary_role"=>"company"}}
]
You can iterate over the array using each, e.g.:
data.each do |result|
puts result["id"]
end
Digging into a hash and printing the result can be done in a couple of ways:
data.each do |result|
# method 1
puts result["properties"]["name"]
# method 2
puts result.dig("properties", "name")
end
Method #1 uses the hash[key] syntax, and because the first hash value is another hash, the lookups can be chained to get the result you're after. The drawback of this approach is that if one of your results is missing its properties key, you'll get an error.
Method #2 uses dig, which accepts the nested keys as arguments (in order). It'll dig down into the nested hashes and pull out the value, but if any step is missing it will return nil, which can be a bit safer if you're handling data from an external source.
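For example, with a hypothetical record that is missing its "properties" key:
broken = { "id" => "3" }
broken.dig("properties", "name")  # => nil
broken["properties"]["name"]      # => NoMethodError (undefined method `[]' for nil)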
Removing elements from a hash
Your second question is a little more involved. You've got two options:
Remove the primary_role keys from the nested hashes, or
Create a new object which contains all the data except the primary_role keys.
I'd generally go for the latter, and recommend reading up on immutability and immutable data structures.
However, to achieve [1] you can do an in-place delete of the key:
data.each do |company|
company["properties"].delete("primary_role")
end
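If you prefer option [2], here is a sketch that builds a new array and leaves data untouched (plain Ruby, no ActiveSupport assumed):
cleaned = data.map do |company|
  company.merge(
    "properties" => company["properties"].reject { |key, _| key == "primary_role" }
  )
end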
Adding elements to a hash
You assign new hash values simply with hash[key] = value, so you can set the industry with something like:
data.each do |company|
company["properties"]["industry"] = "Advertising/Privacy Invasion"
end
which would leave you with something like:
[
{
"id"=>"1",
"properties"=>{
"name"=>"Google",
"stock_symbol"=>"GOOG",
"industry"=>"Advertising/Privacy Invasion"
}
},
{
"id"=>"2",
"properties"=>{
"name"=>"Facebook",
"stock_symbol"=>"FB",
"industry"=>"Advertising/Privacy Invasion"
}
}
]
To achieve the first operation, you can iterate through the array of companies and access the relevant information for each company. Here's an example in Ruby:
companies = [
  {"id"=>"1", "properties"=>{"name"=>"Google", "stock_symbol"=>"GOOG", "primary_role"=>"company"}},
  {"id"=>"2", "properties"=>{"name"=>"Facebook", "stock_symbol"=>"FB", "primary_role"=>"company"}}
]
companies.each do |company|
name = company['properties']['name']
id = company['id']
stock_symbol = company['properties']['stock_symbol']
puts "#{name} - #{id} - #{stock_symbol}"
end
This will print out the name, ID, and stock symbol for each company.
To remove the "primary role" key/value, you can use the delete method on the properties hash. For example:
companies.each do |company|
company['properties'].delete('primary_role')
end
To add a new "industry" key/value, you can use the []= operator to add a new key/value pair to the properties hash. For example:
companies.each do |company|
company['properties']['industry'] = 'Technology'
end
This will add a new key/value pair with the key "industry" and the value "Technology" to the properties hash for each company.

Optimize "= any" operator using index [duplicate]

I can't find a definite answer to this question in the documentation. If a column is an array type, will all the entered values be individually indexed?
I created a simple table with one int[] column, and put a unique index on it. I noticed that I couldn't add the same array of ints, which leads me to believe the index is a composite of the array items, not an index of each item.
INSERT INTO "Test"."Test" VALUES ('{10, 15, 20}');
INSERT INTO "Test"."Test" VALUES ('{10, 20, 30}');
SELECT * FROM "Test"."Test" WHERE 20 = ANY ("Column1");
Is the index helping this query?
Yes, you can index an array, but you have to use the array operators and the GIN index type.
Example:
CREATE TABLE "Test"("Column1" int[]);
INSERT INTO "Test" VALUES ('{10, 15, 20}');
INSERT INTO "Test" VALUES ('{10, 20, 30}');
CREATE INDEX idx_test on "Test" USING GIN ("Column1");
-- To enforce index usage because we have only 2 records for this test...
SET enable_seqscan TO off;
EXPLAIN ANALYZE
SELECT * FROM "Test" WHERE "Column1" @> ARRAY[20];
Result:
Bitmap Heap Scan on "Test" (cost=4.26..8.27 rows=1 width=32) (actual time=0.014..0.015 rows=2 loops=1)
Recheck Cond: ("Column1" @> '{20}'::integer[])
-> Bitmap Index Scan on idx_test (cost=0.00..4.26 rows=1 width=0) (actual time=0.009..0.009 rows=2 loops=1)
Index Cond: ("Column1" @> '{20}'::integer[])
Total runtime: 0.062 ms
Note
It appears that in many cases the gin__int_ops option is required:
create index <index_name> on <table_name> using GIN (<column> gin__int_ops)
I have not yet seen a case where it would work with the && and @> operators without the gin__int_ops option.
@Tregoreg raised a question in a comment on his offered bounty:
I didn't find the current answers working. Using GIN index on array-typed column does not increase the performance of ANY() operator. Is there really no solution?
@Frank's accepted answer tells you to use array operators, which is still correct for Postgres 11. The manual:
... the standard distribution of PostgreSQL includes a GIN operator class for arrays, which supports indexed queries using these operators:
<@
@>
=
&&
The complete list of built-in operator classes for GIN indexes in the standard distribution is here.
In Postgres indexes are bound to operators (which are implemented for certain types), not data types alone or functions or anything else. That's a heritage from the original Berkeley design of Postgres and very hard to change now. And it's generally working just fine. Here is a thread on pgsql-bugs with Tom Lane commenting on this.
Some PostGIS functions (like ST_DWithin()) seem to violate this principle, but that is not so. Those functions are rewritten internally to use the respective operators.
The indexed expression must be to the left of the operator. For most operators (including all of the above) the query planner can achieve this by flipping operands if you place the indexed expression to the right - given that a COMMUTATOR has been defined. The ANY construct can be used in combination with various operators and is not an operator itself. When used as constant = ANY (array_expression) only indexes supporting the = operator on array elements would qualify and we would need a commutator for = ANY(). GIN indexes are out.
Postgres is not currently smart enough to derive a GIN-indexable expression from it. For starters, constant = ANY (array_expression) is not completely equivalent to array_expression @> ARRAY[constant]. Array operators return an error if any NULL elements are involved, while the ANY construct can deal with NULL on either side. And there are different results for data type mismatches.
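In practice that means rewriting the query from the question to use the indexed containment operator instead of the ANY construct (equivalent here because the arrays contain no NULL elements):
-- not GIN-indexable: element = ANY (array_column)
SELECT * FROM "Test"."Test" WHERE 20 = ANY ("Column1");

-- GIN-indexable rewrite using array containment
SELECT * FROM "Test"."Test" WHERE "Column1" @> ARRAY[20];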
Related answers:
Check if value exists in Postgres array
Index for finding an element in a JSON array
SQLAlchemy: how to filter on PgArray column types?
Can IS DISTINCT FROM be combined with ANY or ALL somehow?
Asides
While working with integer arrays (int4, not int2 or int8) without NULL values (like your example implies), consider the additional module intarray, which provides specialized, faster operators and index support. See:
How to create an index for elements of an array in PostgreSQL?
Compare arrays for equality, ignoring order of elements
As for the UNIQUE constraint in your question that went unanswered: That's implemented with a btree index on the whole array value (like you suspected) and does not help with the search for elements at all. Details:
How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
It's now possible to index the individual array elements. For example:
CREATE TABLE test (foo int[]);
INSERT INTO test VALUES ('{1,2,3}');
INSERT INTO test VALUES ('{4,5,6}');
CREATE INDEX test_index on test ((foo[1]));
SET enable_seqscan TO off;
EXPLAIN ANALYZE SELECT * from test WHERE foo[1]=1;
QUERY PLAN
------------------------------------------------------------------------------------------------------------------
Index Scan using test_index on test (cost=0.00..8.27 rows=1 width=32) (actual time=0.070..0.071 rows=1 loops=1)
Index Cond: (foo[1] = 1)
Total runtime: 0.112 ms
(3 rows)
This works on at least Postgres 9.2.1. Note that you need to build a separate index for each array position; in my example I only indexed the first element.
