How can I create an associated token address with Phantom? - web3js

@solana/spl-token has two methods:
getAssociatedTokenAddress
getOrCreateAssociatedTokenAccount
Context: one only has the public address, but does have access to Phantom.
If the associated token account already exists, getAssociatedTokenAddress works well, but getOrCreateAssociatedTokenAccount requires the secret key.
Using Phantom, how can one generate that token address via a signature mechanism?
Concrete use case: one wants to send USDT to a public key that does not have the USDT associated token address. I would like Phantom to somehow sign the action and create that address.

So, if this is all you want to do:
Concrete use case: one wants to send USDT to a public key that does not have the USDT associated token address. I would like Phantom to somehow sign the action and create that address.
You don't need to worry about creating the account directly, since you can just send the token to the wallet, and fund the account creation from the signer. So just a normal token::transfer should suffice IIRC.
But to answer your first question about how to do some operation that requires a private key using Phantom, the general approach is to create a Transaction in JS, then use the wallet adapter's signTransaction to sign it, and then send/confirm the signed transaction. (Depending on how you send and confirm it, you might also have to add a recent blockhash and payer to the Transaction.)
This is similar to what createAssociatedTokenAccount does under the hood -- https://github.com/solana-labs/solana-program-library/blob/48fbb5b7/token/js/src/actions/createAssociatedTokenAccount.ts#L30 -- with the added twist of signing via wallet adapter.
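To make that concrete, here is a rough TypeScript sketch of that flow. It assumes a recent @solana/web3.js and @solana/spl-token, Phantom injected as window.solana, and made-up names like createRecipientAta; treat it as a starting point rather than a drop-in implementation.

// Rough sketch only: assumes Phantom is available as window.solana and that
// `mint` and `recipient` are PublicKeys you already have.
import { clusterApiUrl, Connection, PublicKey, Transaction } from '@solana/web3.js';
import { createAssociatedTokenAccountInstruction, getAssociatedTokenAddress } from '@solana/spl-token';

async function createRecipientAta(mint: PublicKey, recipient: PublicKey): Promise<PublicKey> {
    const provider = (window as any).solana;            // Phantom's injected provider
    await provider.connect();
    const payer: PublicKey = provider.publicKey;        // the connected wallet pays the rent

    const connection = new Connection(clusterApiUrl('mainnet-beta'));
    const ata = await getAssociatedTokenAddress(mint, recipient);

    const tx = new Transaction().add(
        createAssociatedTokenAccountInstruction(payer, ata, recipient, mint)
    );
    tx.feePayer = payer;
    tx.recentBlockhash = (await connection.getLatestBlockhash()).blockhash;

    const signed = await provider.signTransaction(tx);  // Phantom signs; no secret key in your code
    const signature = await connection.sendRawTransaction(signed.serialize());
    await connection.confirmTransaction(signature);
    return ata;
}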

Related

Pagination for the List secrets action in Logic Apps

I am using the List secrets action to get all the secrets from Key Vault. I am only able to get the first few values because pagination is not working for this action. Is there any other way I can get all the secret values from Logic Apps? Right now I can only get the values from the first page, and per Microsoft there is a limit of a maximum of 25 items.
I've managed to recreate the problem in my own tenant and yes, it is indeed an issue. There should be a paging option in the settings but there's not.
To get around this, I suggest calling the REST APIs directly. The only consideration is how you authenticate, and if it were me, I'd be using a managed identity to do so.
I've mocked up a small example for you ...
The steps are ...
Create a variable that stores the nextLink property. Initialise it with the URL for the first call to the REST API; it looks something like this ... https://my-test-kv.vault.azure.net/secrets?maxresults=25&api-version=7.3 ... and is straight out of the doco ... https://learn.microsoft.com/en-us/rest/api/keyvault/secrets/get-secrets/get-secrets?tabs=HTTP
In the HTTP call as shown, use the Next Link variable given that it will contain the URL. As for authentication, my suggestion is to use a managed identity. If you're unsure how to do that, sorry but it's a whole other question. In simple terms, go to the Identity tab on the Logic App and switch the system-assigned managed identity on. You'll then need to assign it access in the Key Vault itself (Key Vault Secrets User or Officer will do the job).
Next, create an Until action and set the left-hand side to be the Next Link variable, with the "is equal to" value being the expression string(''), which checks for a blank string (that's how I like to do it).
Finally, set the Next Link variable to the nextLink property in the response from the last call; the expression is ... body('HTTP')?['nextLink']
From here, you can choose what you do with the output. I'd suggest creating an array and appending all of the entries to it so you can process them later. I haven't taken the answer that far given I don't know exactly how you want to process the results.
That should get you across the line.
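If it helps to see the same thing outside the designer, here is roughly what that Until loop boils down to, sketched in TypeScript against the Key Vault REST API. It assumes you already have a bearer token for the vault (e.g. obtained via a managed identity), and the names are illustrative.

// Sketch of the nextLink paging loop; `accessToken` is assumed to come from
// a managed identity / credential flow obtained elsewhere.
async function listAllSecrets(vaultName: string, accessToken: string): Promise<unknown[]> {
    const secrets: unknown[] = [];
    let nextLink: string | null =
        `https://${vaultName}.vault.azure.net/secrets?maxresults=25&api-version=7.3`;

    while (nextLink) {
        const response = await fetch(nextLink, {
            headers: { Authorization: `Bearer ${accessToken}` },
        });
        const page = await response.json();
        secrets.push(...(page.value ?? []));   // each item carries the secret's id and attributes
        nextLink = page.nextLink ?? null;      // absent on the last page, which ends the loop
    }
    return secrets;
}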

Custom Entity UUID w/ embedded info

I follow clean architecture / SOLID principles across my entire stack. I'm coming across a situation where I want to embed a UUID in some of my entity id fields in the domain logic, for example:
Create OrganizationEntity id=abc123
Create an ItemEntity and embed the id of the OrganizationEntity that owns it in the id field when it's created, i.e.: Item.id = itm-abc123-sdfnj344
I'm thinking of going this route so that I can reduce the number of DB lookups needed to see if someone has access to an ItemEntity - if the client request belongs to OrganizationEntity then I can pattern match abc123 on both the client request session id and the requested ItemEntity record ... this would greatly improve performance.
Is this a known pattern/implementation? Are there any concerns or gotchas?
Try to keep your domain model as close to the language of the domain experts as you can. So if an Item belongs to an organization, it is OK to have a reference id in the Item. But if an item belongs to another domain object, and that object belongs to an organization, you should not reference the organization in the item domain object just for performance (persistence) reasons.
You said that you want to check if someone has access to the ItemEntity. This means that there is a kind of context in which ItemEntity objects are accessible.
I see 3 options to implement such a context:
a repository API that has an organization id argument
public interface ItemRepository {
    public List<ItemEntity> findItems(/* ... */ UUID organizationId);
}
When you pass the organization id on every repository call, the repository is stateless. But it also means that you must pass the organization id from the controller to the use case and then to the repository.
a repository that is bound to an organization
public class ItemRepository {
    private UUID organizationId; // constructor omitted here

    public List<ItemEntity> findItems(/* ... */) { ... }
}
When you create a repository that is bound to an organization, you must create it when you need it (and also the use case), because it is stateful. But you can be sure that no one can get items that they are not allowed to see.
organization id in a call context
When the controller is invoked, it takes the organization id from the session, puts it in the call context and calls the use case. In Java you would use a ThreadLocal. You can also implement this as an aspect and apply it to every controller (AOP). The repository implementation can then access the call context, get the organization id and use it in its queries, or filter the items before returning them.
This option will allow you to access the organization id in every layer that is in the flow of control, e.g. in all use cases, entities, repositories or when you call an external service.
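The answer above is Java-flavoured (ThreadLocal); as a rough illustration, the same call-context idea in TypeScript/Node would use AsyncLocalStorage. All names below are made up for the sketch.

// Sketch of the "call context" option using Node's AsyncLocalStorage
// (the rough equivalent of a Java ThreadLocal).
import { AsyncLocalStorage } from 'async_hooks';

const callContext = new AsyncLocalStorage<{ organizationId: string }>();

const itemRepository = {
    // The repository reads the organization id from the call context
    // and scopes every query with it.
    async findItems(): Promise<{ id: string }[]> {
        const ctx = callContext.getStore();
        if (!ctx) throw new Error('no call context');
        return queryItemsForOrganization(ctx.organizationId);   // stand-in for the real query
    },
};

// Controller: take the organization id from the session, put it into the
// context, and run the use case inside it.
async function handleRequest(sessionOrganizationId: string) {
    return callContext.run({ organizationId: sessionOrganizationId }, () => findItemsUseCase());
}

async function findItemsUseCase() {
    return itemRepository.findItems();   // no organization id passed through the layers
}

async function queryItemsForOrganization(organizationId: string) {
    return [{ id: 'itm-1' }];            // placeholder instead of a database call
}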
In all three cases you can avoid putting the organization id in the item just for database access reasons.

Domain-driven design: database validation in the model layer

I'm creating a design for a Twitter application to practice DDD. My domain model looks like this:
The user and tweet are marked blue to indicate that they are aggregate roots. Between the user and the tweet I want a bounded context boundary; each will run in its respective microservice (auth and tweet).
To reference which user has created a tweet, but not run into a self-referencing loop, I have created the UserInfo object. The UserInfo object is created via events when a new user is created. It stores only the information the Tweet microservice will need of the user.
When I create a tweet I only provide the user id and the relevant fields to the tweet. With that user id I want to be able to retrieve the UserInfo object, via id reference, to use it in the various child objects, such as Mentions and Poster.
The issue I run into is persistence. At first glance I thought "Just provide the UserInfo object in the tweet constructor and it's done, all the child aggregates have access to it". But it's a bit harder on the Mention class, since the Mention will contain a dynamic username like so: "#anyuser". To validate whether anyuser exists as a UserInfo object I need to query the database. However, I don't know who is mentioned before the tweet's content has been parsed, and that logic resides in the domain model itself and is called as a result of using the tweet's constructor. Without this logic, no mentions are extracted so nothing can "yet" be validated.
If I cannot validate it before creating the tweet, because I need the extraction logic, and I cannot use the database repository inside the domain model layer, how can I validate the mentions properly?
Whenever an AR needs to reach out of its own boundary to gather data, there are two main solutions:
You pass in a service to the AR's method which allows it to perform the resolution. The service interface is defined in the domain, but most likely implemented in the infrastructure layer.
e.g. someAr.someMethod(args, someServiceImpl)
Note that if the data is required at construction time you may want to introduce a factory that takes a dependency on the service interface, performs the validation and returns an instance of the AR.
e.g.
tweetFactory = new TweetFactory(new SqlUserInfoLookupService(...));
tweet = tweetFactory.create(...);
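To make the factory variant a bit more concrete, here is a rough TypeScript sketch; UserInfoLookupService, findByUsernames and the mention-parsing regex are all made up for illustration.

// Sketch of the factory approach: the domain defines the lookup interface,
// the infrastructure layer implements it (e.g. a SQL-backed lookup),
// and the factory parses mentions and validates them before building the AR.
interface UserInfo { id: string; username: string; }

interface UserInfoLookupService {
    findByUsernames(usernames: string[]): Promise<UserInfo[]>;
}

class Tweet {
    constructor(readonly content: string, readonly mentions: UserInfo[]) {}
}

class TweetFactory {
    constructor(private readonly userInfoLookup: UserInfoLookupService) {}

    async create(content: string): Promise<Tweet> {
        // parse "#anyuser"-style mentions out of the content
        const usernames = (content.match(/#\w+/g) ?? []).map(m => m.slice(1));
        const mentions = await this.userInfoLookup.findByUsernames(usernames);
        // validation could reject the tweet here if a mentioned user is unknown
        return new Tweet(content, mentions);
    }
}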
You resolve the dependencies in the application layer first, then pass the required data. Note that the application layer could take a dependency on a domain service in order to perform some reverse resolutions first.
e.g.
If the application layer would like to resolve the UserInfo for all mentions, but can't because it doesn't know how to parse mentions within the text, it could always rely on a domain service or value object to perform that task first, then resolve the UserInfo dependencies and provide them to the Tweet AR. Be cautious here not to leak too much logic into the application layer though. If the orchestration logic becomes intertwined with business logic, you may want to extract such use case processing logic into a domain service.
Finally, note that any data validated outside the boundary of an AR is always considered stale. The #xyz user could currently exist, but not exist anymore (e.g. deactivated) 1ms after the tweet was sent.

PKCS11, OBJECT PIN

I'm making a PKCS#11 module for a web app. It's remote storage for certificates and it provides an API for signing data. The API for signing looks like this:
sign(int CertificateId, char* Password, void* data, int length)
In the PKCS#11 module, the whole storage is represented by one single token. In the C_Initialize section, I authenticate to the server. I find objects with another API call and everything is fine. The problem is, when I call the C_SignInit or C_Sign function, I don't know how to get the secondary password for my object.
Can anyone help me?
In PKCS#11 all objects are protected with a User PIN. They don't have their own PINs. So there's no standard way to ask for a different PIN for the particular object.
The idea of PKCS#11 is to have one password (PIN) to protect the whole token. Secondary authentication on keys located on the same token has been completely left out of the protocol. As stated in the 2.01 specification:
Using a private key protected by secondary authentication uses the same process, and call sequence, as using a private key that is only protected by the login PIN. In fact, applications written for Cryptoki Version 2.01 will use secondary authentication without modification.
Which translates into: "secondary authentication is not our problem. Such mechanisms must be implemented OUTSIDE of our protocol".
However, they describe a trick to expose several PINs when the keys are actually located on the same token here
Link to 2.11 specification: here
If you are calling the password that protects private keys in .pfx or .pvk files a "secondary password", you are wrong. That password is used to protect private keys in those files (.pfx or .pvk), not the HSM one. There is no other password to protect keys in the HSM. If you want to call API functions, you have to log in with the user or admin PIN.
As Eugene Mayevski writes, there is no such concept as an "object PIN" in PKCS#11.
You may implement some variant of the following scheme to get a similar access control model:
Enrolling a key-pair:
Generate a key-pair via C_GenerateKeyPair and ensure the private key is generated as a session-only object (i.e. with CKA_TOKEN==FALSE). An alternative is to import the key pair somehow (not to be discussed here).
Generate a strong password (or use a user-supplied one) and run it through some KDF to get the "unlocking key" (see the sketch after these steps). Keep this "unlocking key" in your application memory.
Generate a new persistent symmetric "derivation key" which allows key derivation only (i.e. CKA_TOKEN==TRUE and CKA_DERIVE==TRUE) using e.g. CKM_AES_KEY_GEN.
Derive a new "wrap key" session-only key object using e.g. CKM_AES_CBC_ENCRYPT_DATA with the "unlocking key" bytes as input diversification data and using the "derivation key" as a master key. The new key should be a session-only object and should allow only key wrapping (i.e. CKA_TOKEN==FALSE and CKA_WRAP==TRUE).
Wrap the private key object from step 1 using the "wrap key" into "key blob".
Store the "key blob" (inside token or outside of it).
Delete the private key from step 1 and the "wrap key". Nuke password and "unlocking key". (Do this step even if some of the previous steps fail)
The private key should not be accessible without knowing the password.
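A small sketch of just the password-to-"unlocking key" step (the PKCS#11 calls themselves are left out); the scrypt parameters and names are only illustrative.

// Derive the "unlocking key" from the user's password; the result is later fed
// into the token as diversification data (e.g. for CKM_AES_CBC_ENCRYPT_DATA).
import { randomBytes, scryptSync } from 'crypto';

function deriveUnlockingKey(password: string, salt: Buffer): Buffer {
    return scryptSync(password, salt, 32);   // 32 bytes of diversification data
}

const salt = randomBytes(16);                // store the salt next to the "key blob"
const unlockingKey = deriveUnlockingKey('user supplied password', salt);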
Using a key-pair:
Run the input password through the same KDF to get the "unlocking key".
Derive the "wrap key" in the same way as during the key enrollment but this time for unwrapping only (i.e. CKA_TOKEN==FALSE and CKA_UNWRAP==TRUE).
Unwrap the "key blob" into a new session-only private key object.
Delete the "wrap key". Nuke password and "unlocking key". (Do this step even if some of the previous steps fail)
Use the key-pair at your will.
Delete the private key. (Do this step even if some of the previous steps fail)
Wiping the key-pair:
Delete the associated "derivation key" and "key blob".
Some additional (random) notes:
The used AES mechanisms are just examples. You would have to store the used IV together with "key blob" if using CKM_AES_CBC_ENCRYPT_DATA.
Pay a lot of attention to all object attribute values (i.e. deny everything what is not needed). If your device supports some vendor defined extensions to control object usage then do use them (e.g. to enforce wrap/unwrap/derive mechanisms allowed).
Remember to wipe/delete passwords and temporary keys from memory/session.
Use vendor specific wrapping mechanisms as they probably provide better protection (if possible).
A convenient way to delete session objects is to close the session.
You may want to protect the integrity of the "key blob" if it is not provided by the wrapping mechanism.
Good luck!
Disclaimer: I am no crypto expert, so please do validate my thoughts.

User information in Nancy

I'm knocking together a demo app based upon Nancy.Demo.Authentication.Forms.
I'm implementing Claims and UserName in my UserIdentity:IUserIdentity class and, as per the demo, I've got a UserModel with UserName.
In the SecureModule class, I can see that the Context.CurrentUser can be used to see who it is that's logged on, but as per the interface, this only supplies the username and the claims. If I then need to get more data (say messages for the logged on user) for a view model, all I can see to use as a filter for a db query is the username, which feels, well, weird. I'd much rather be using the uniqueIdentifier of the user.
I think what I'm trying to get to the bottom of is whether it is better to add the extra fields to my IUserIdentity implementation or to the UserModel? And where to populate these?
Not sure my question is that clear (It's not clear in my head!), but some general basic architecture advice would go down a treat.
Sorry for the delayed reply.. bit hectic at the moment :)
The IUserIdentity is the minimum interface required to use Nancy's built-in authentication helpers; you can implement that and add as much additional information as you like to your class; it's similar to the standard .NET IPrincipal. If you do add your own info, you'll obviously have to cast to your implementation type to access the additional fields. We could add a CurrentUser method to stop you having to do that, but it seems a little redundant.
You can stop reading here if you like, or you can read on if you're interested in how forms auth works..
FormsAuth uses an implementation of IUsernameMapper (which is probably named wrong now) to convert between the GUID user identifier that's stored in the client cookie and the actual user (the IUserIdentity). It's worth noting that this GUID needs to be mapped to the user/id somewhere, but it's not intended to be your database primary key; it is merely a layer of indirection between your (probably predictable) user ids/names and the "token" stored on the client. Although the cookies are encrypted and HMACed (depending on your configuration), if someone does manage to crack open and reconstruct the auth cookie, they would have to guess someone else's GUID in order to impersonate them, rather than changing a username (to "admin" or something similar), or an id (to 1 for the first user).
Hope that makes sense :)
