Do all modules inherit the Coin contract automatically or is there a required import? - pact-lang

When accepting payment for an NFT I am making, how do I ensure I can call coin.transfer and coin.details from my module?

So the only prerequisites are:
A reference to the coin contract
The TRANSFER capability
You can reference (call) functions on the coin contract by simply calling them like normal functions, e.g. coin.transfer.
You can also "import" the whole coin module by putting (use coin) in your contract. This imports all the functions from the coin contract so you can call them as transfer instead of coin.transfer.
But this can cause unexpected bugs if not used with caution, so I recommend always using the fully qualified form, coin.transfer etc.
You need to make sure the "TRANSFER" capability is granted. This can be passed in / specified by the caller when calling the contract.
If your contract is the "owner" of the account (i.e. there's a balance held by the contract), you can use install-capability, which lets the contract grant itself the capability to do transfers from its own account.

Related

Dropbox's ATF - how are functions/callbacks stored in the database?

I am reading about Dropbox's Async Task Framework (ATF) and its architecture on the Dropbox tech blog: https://dropbox.tech/infrastructure/asynchronous-task-scheduling-at-dropbox
The architecture seems clear to me, but what I can't understand is how the callbacks (or lambdas, in their terminology) can be stored in the database for later execution. They are just normal programming-language functions, right? Or am I missing something here?
Also,
It would need to support nearly 100 unique async task types from the start, again with room to grow.
It seems that they are talking about types of lambdas here. But how is that even possible when the user can provide arbitrary code in the callback function?
Any help would be appreciated. Thanks!
Let me share how this is done in Hangfire, a popular job scheduler in the .NET world. I use it as an example because I have some experience with it and its source code is publicly available on GitHub.
Enqueueing a recurring job
RecurringJob.AddOrUpdate(() => Console.WriteLine("Transparent!"), Cron.Daily);
The RecurringJob class defines several overloads for AddOrUpdate to accept different methodCall parameters:
Expression<Action>: Synchronous code without any parameter
Expression<Action<T>>: Synchronous code with a single parameter
Expression<Func<Task>>: Asynchronous code without any parameter
Expression<Func<T, Task>>: Asynchronous code with a single parameter
The overloads expect not just a delegate (a Func or an Action) but an Expression, because an Expression allows Hangfire to retrieve meta information about:
the type on which the given method should be called
with what parameter(s)
Retrieving metadata
There is a class called Job which exposes several FromExpression overloads. All of them call a private method that does the heavy lifting: it retrieves the type, method and argument metadata.
From the above example, FromExpression retrieves the following data:
type: System.Console, mscorlib
method: WriteLine
parameter type: System.String
argument: "Transparent!"
This information is stored in the Job's properties: Type, Method and Args.
Serializing meta info
The RecurringJobManager receives this job and passes it to a transaction via a RecurringJobEntity wrapper, to perform an update if the definition of the job has changed or it was not registered at all.
Inside its GetChangedFields method is where the serialization is done, via the JobHelper and InvocationData classes. Under the hood they use Newtonsoft's Json.NET to perform the serialization.
Back to our example, the serialized job (without the cron expression) looks something like this:
{
  "t": "System.Console, mscorlib",
  "m": "WriteLine",
  "p": [
    "System.String"
  ],
  "a": [
    "Transparent!"
  ]
}
This is what is persisted in the database, and it is read back whenever the job needs to be triggered.
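In other words, what gets persisted is a description of the call, not the code itself. Here is a minimal TypeScript sketch of the same idea (the registry and names are invented for illustration, not Hangfire's actual API):

// A job is stored as data: which handler to run, with what arguments.
interface StoredJob {
  t: string;      // handler name, analogous to Hangfire's "t"
  a: unknown[];   // arguments, analogous to Hangfire's "a"
}

// The code itself ships with the worker; only its name is stored.
const handlers: Record<string, (...args: any[]) => void> = {
  "console.writeLine": (msg: string) => console.log(msg),
};

function runStoredJob(json: string): void {
  const job: StoredJob = JSON.parse(json);   // read back from the database
  const handler = handlers[job.t];
  if (!handler) throw new Error(`no handler registered for ${job.t}`);
  handler(...job.a);                         // re-create the original call
}

runStoredJob('{"t":"console.writeLine","a":["Transparent!"]}');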
I found the answer in the article itself. The core ATF framework just defines the types of tasks/callbacks it supports (e.g. "send email" is a task type) and creates corresponding SQS queues for them (for each task type there are multiple queues for different priorities).
The user (who schedules the task) does not provide the function definition when scheduling the task; they only provide details of the function/callback they want to schedule. Those details are pushed to the SQS queue, and it is the user's responsibility to run worker machines that listen for the specific task types on SQS and that have the function/callback definition (e.g. the actual logic of sending an email).
Therefore, there is no need to store the function definition in the database. Here's the exact section from the article that describes this: https://dropbox.tech/infrastructure/asynchronous-task-scheduling-at-dropbox#ownership-model
Ownership model
ATF is designed to be a self-serve framework for developers at Dropbox. The design is very intentional in driving an ownership model where lambda owners own all aspects of their lambdas’ operations. To promote this, all lambda worker clusters are owned by the lambda owners. They have full control over operations on these clusters, including code deployments and capacity management. Each executor process is bound to one lambda. Owners have the option of deploying multiple lambdas on their worker clusters simply by spawning new executor processes on their hosts.
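The split can be sketched like this (invented names, not ATF's real API): the scheduler enqueues only a serializable task descriptor, while the worker cluster owned by the lambda owner holds the matching code.

// Only this descriptor crosses the queue -- never any code.
interface TaskDescriptor {
  lambdaName: string;               // a registered task type, e.g. "send_email"
  args: Record<string, unknown>;    // serializable payload
}

const queue: TaskDescriptor[] = [];  // stand-in for the per-lambda SQS queue

function schedule(task: TaskDescriptor): void {
  queue.push(task);
}

// Worker side, deployed and operated by the lambda owner:
const lambdas: Record<string, (args: Record<string, unknown>) => Promise<void>> = {
  send_email: async (args) => {
    console.log(`sending email to ${args.to}`);  // the real logic lives here
  },
};

async function workerTick(): Promise<void> {
  const task = queue.shift();
  if (task) await lambdas[task.lambdaName](task.args);
}

schedule({ lambdaName: "send_email", args: { to: "user@example.com" } });
workerTick();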

How can I create an associated token address with Phantom in JS?

@solana/spl-token has two methods:
getAssociatedTokenAddress
getOrCreateAssociatedTokenAccount
Context: one only has the public address, but has access to Phantom.
If the associated token account already exists, getAssociatedTokenAddress works well, but getOrCreateAssociatedTokenAccount requires the secret key.
Using Phantom, how can one generate that token address via a signature mechanism?
Concrete use case: one wants to send USDT to a public key that does not have the USDT associated token account. I would like Phantom to somehow sign the action and create that address.
So, if this is all you want to do:
Concrete use case: one wants to send USDT to a public key that does not have the USDT associated token account. I would like Phantom to somehow sign the action and create that address.
You don't need to worry about creating the account directly, since you can just send the token to the wallet and fund the account creation from the signer. So just a normal token::transfer should suffice, IIRC.
But to answer your first question about how to do some operation that requires a private key using Phantom: the general approach is to create a Transaction in JS, then use the wallet adapter's signTransaction to sign it, and then send/confirm the signed transaction. (Depending on how you send and confirm it, you might have to add a recent blockhash and a fee payer to the Transaction as well.)
This is similar to what createAssociatedTokenAccount does under the hood -- https://github.com/solana-labs/solana-program-library/blob/48fbb5b7/token/js/src/actions/createAssociatedTokenAccount.ts#L30 -- with the added twist of signing via wallet adapter.
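For example, creating the associated token account for another wallet, with Phantom paying the fees, might look roughly like this. This is a sketch assuming the current @solana/spl-token API (v0.2+) and a connected Phantom provider on window.solana:

import { Connection, PublicKey, Transaction } from "@solana/web3.js";
import {
  getAssociatedTokenAddress,
  createAssociatedTokenAccountInstruction,
} from "@solana/spl-token";

// `phantom` is the injected provider (window.solana), already connected.
async function createAtaWithPhantom(
  connection: Connection,
  phantom: { publicKey: PublicKey; signTransaction(tx: Transaction): Promise<Transaction> },
  mint: PublicKey,    // e.g. the USDT mint
  owner: PublicKey,   // the wallet that should own the new token account
): Promise<PublicKey> {
  const ata = await getAssociatedTokenAddress(mint, owner);

  // Phantom's wallet pays for and signs the account creation.
  const tx = new Transaction().add(
    createAssociatedTokenAccountInstruction(phantom.publicKey, ata, owner, mint),
  );
  tx.feePayer = phantom.publicKey;
  tx.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;

  const signed = await phantom.signTransaction(tx);  // no secret key in our code
  const sig = await connection.sendRawTransaction(signed.serialize());
  await connection.confirmTransaction(sig);
  return ata;
}

This mirrors what the linked createAssociatedTokenAccount action does, with Phantom's signTransaction standing in for the local Signer.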

Can I query a Near contract for its method signatures?

Is there a way to query what methods are offered by a given NEAR contract? (So that one could do autodiscovery of some standard interface, for instance.) Or do you have to just know the method signatures already before you can interact with a contract?
No, not yet. Currently all contract methods have the same signature, () -> (): no arguments and nothing returned. Each method has a wrapper function that deserializes the input bytes from the host, calls the method, serializes the return value, and passes the bytes back to the host.
This is done with the input and value_return host functions.
There are plans to include the actual signatures of the methods in the binary in a special section, which would solve this issue.
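In the meantime, you can at least discover the method names (not the signatures), because they are the Wasm exports of the deployed contract. A sketch, assuming Node 18+ for fetch and querying the public testnet RPC:

// Lists a NEAR contract's exported method names by inspecting its Wasm.
// Names are recoverable this way; signatures are not (they are all () -> ()).
async function listMethodNames(contractId: string): Promise<string[]> {
  const res = await fetch("https://rpc.testnet.near.org", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "query",
      params: { request_type: "view_code", finality: "final", account_id: contractId },
    }),
  });
  const { result } = await res.json();
  const wasm = Buffer.from(result.code_base64, "base64");
  const mod = await WebAssembly.compile(wasm);
  return WebAssembly.Module.exports(mod)
    .filter((e) => e.kind === "function")
    .map((e) => e.name);
}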
NEP-351 was recently approved, which provides a mechanism for contracts to expose all standards they implement. However, it is up to contract developers to follow this NEP. When integrated into the main SDK, I presume most will.
Alternatively, there is a proposal to create a global registry as a smart contract that provides this information.
Currently, there is not.
You will need to know what contract methods are available in order to interact with a smart contract deployed on NEAR. Hopefully, the ability to query available methods will be added in the near future.
I suppose you can just include a method in your own contracts that returns the other method signatures in some useful format: JSON or whatever.
You would have to make sure that it stays current, maybe by writing some unit tests that use this method to exercise all the others.
I suppose this interface (method and unit tests) could be standardized as a NEP in the short term, until our interfaces become discoverable. Any contract that adheres to this NEP must include this "tested reflection method" or "documentation method" or whatever it would be called.
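Calling such a hypothetical documentation method from a client could look like this. Note that __interface is an invented name the contract would have to implement; the RPC shape is NEAR's standard call_function query:

// Calls a hypothetical self-describing view method on a contract.
async function getInterface(contractId: string): Promise<unknown> {
  const res = await fetch("https://rpc.testnet.near.org", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "query",
      params: {
        request_type: "call_function",
        finality: "final",
        account_id: contractId,
        method_name: "__interface",                       // hypothetical method
        args_base64: Buffer.from("{}").toString("base64"),
      },
    }),
  });
  const { result } = await res.json();
  // result.result is the return value as an array of bytes
  return JSON.parse(Buffer.from(result.result).toString("utf8"));
}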

Domain driven design database validation in model layer

I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded-context boundary; each will run in its respective microservice (auth and tweet).
To reference which user created a tweet, but not run into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information the tweet microservice will need about the user.
When I create a tweet I only provide the user id and the relevant fields of the tweet; with that user id I want to be able to retrieve the UserInfo object, via id reference, to use it in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought: "Just provide the UserInfo object in the tweet constructor and it's done, all the child aggregates have access to it." But it's a bit harder for the Mention class, since a Mention contains a dynamic username like "#anyuser". To validate whether anyuser exists as a UserInfo object I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called as a result of using the tweet's constructor. Without this logic, no mentions are extracted, so nothing can "yet" be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach out of its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
You resolve the dependencies in the application layer first, then pass in the required data. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions within the text, it could always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious here not to leak too much logic into the application layer, though. If the orchestration logic becomes intertwined with business logic, you may want to extract such use-case processing logic into a domain service.
Finally, note that any data validated outside the boundary of an AR is always considered stale. The #xyz user could currently exist, but not exist anymore (e.g. deactivated) 1ms after the tweet was sent.
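To make option 1 concrete, here is a minimal TypeScript sketch of the factory variant. All names, such as UserInfoLookupService, are invented for the example:

// Service interface defined in the domain, implemented in infrastructure
// (e.g. by a SQL-backed lookup).
interface UserInfo {
  id: string;
  username: string;
}

interface UserInfoLookupService {
  findByUsernames(usernames: string[]): Promise<UserInfo[]>;
}

class Tweet {
  constructor(
    readonly posterId: string,
    readonly content: string,
    readonly mentions: UserInfo[],
  ) {}

  // The parsing rule stays in the domain model, so the factory can reuse it.
  static extractMentionedUsernames(content: string): string[] {
    return [...new Set([...content.matchAll(/#(\w+)/g)].map((m) => m[1]))];
  }
}

class TweetFactory {
  constructor(private readonly userLookup: UserInfoLookupService) {}

  async create(posterId: string, content: string): Promise<Tweet> {
    const usernames = Tweet.extractMentionedUsernames(content);
    const mentioned = await this.userLookup.findByUsernames(usernames);
    if (mentioned.length !== usernames.length) {
      throw new Error("tweet mentions a user that does not exist");
    }
    return new Tweet(posterId, content, mentioned);
  }
}

As noted above, the result of findByUsernames is stale the moment it returns; the factory only guarantees the mentioned users existed at creation time.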

Software License Project: how to avoid circular dependency

The title may sound a bit unclear, and I am not sure about the terminology. My problem is this: I am implementing the license-verification function of our software, which consists of several modules. The function call is something like License.IsModuleEnabled(m As Module) (the code is in VB.NET).
Now, the thing is, it is common for one module to require another module as a prerequisite; for example, for ModuleA to run, ModuleB must be enabled as well. So in the Module class I have a public member called RequiredModules, which is a list of Module. The IsModuleEnabled() function looks something like this:
Public Function IsModuleEnabled(m As Module) As Boolean
    ...
    For Each required As Module In m.RequiredModules
        If Not IsModuleEnabled(required) Then Return False
    Next
    ...
End Function
The problem is obvious (but the solution is not, to me): if ModuleA requires ModuleB, and ModuleB requires ModuleA, the function gets into an infinite loop.
These modules are parallel to each other, so I don't have a clue how to structure such a verification function. Our current solution is that only some "basic" modules may be listed in RequiredModules, but in the long term it would be better if any module could appear in the list.
Keep a set listing all modules for which the license has already been verified, and check this set before making a potentially redundant verification call. If the module is not in the set, verify it and add its name to the set.
Keep another, similar set listing modules for which verification has already failed, so that we do not get into an endless loop in that situation either.
This also works if verifications happen in parallel, as the knowledge keeps accumulating in the sets until it is sufficient to break the loops. A sketch of this approach follows below.
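A minimal TypeScript sketch of that approach; checkLicense stands in for the real per-module verification, and treating an in-progress module as satisfied is what breaks the A-requires-B-requires-A cycle:

interface Module {
  name: string;
  requiredModules: Module[];
}

const verified = new Set<string>();    // modules whose checks already passed
const failed = new Set<string>();      // modules whose checks already failed
const inProgress = new Set<string>();  // modules currently being checked

function isModuleEnabled(m: Module): boolean {
  if (verified.has(m.name)) return true;
  if (failed.has(m.name)) return false;
  // Already being verified higher up the call stack: don't recurse again.
  if (inProgress.has(m.name)) return true;

  inProgress.add(m.name);
  const ok = checkLicense(m) && m.requiredModules.every((dep) => isModuleEnabled(dep));
  inProgress.delete(m.name);

  (ok ? verified : failed).add(m.name);
  return ok;
}

function checkLicense(m: Module): boolean {
  return true; // stand-in for the real license check of a single module
}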
