How to send a web3 or ethers transaction to BSC using JavaScript from the back end?

I would like to send a transaction to the Binance Smart Chain (BSC) that will buy an NFT for a crypto-integrated game.
I've looked at their contract to figure out what the transaction should look like.
Does the data in the image below give me all the information I need to send my own transaction?
If so, can you provide a pattern or toolset that I should use to encode and send my transaction?
Here is the hash of a transaction that I would like to emulate.
https://bscscan.com/tx/0x0430f9964ae70d8993eb45a4e0d89e72594ad109467d6ab8d39fea9115f0e1a3
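The usual web3.js pattern for this is to build the call data from the contract's ABI, sign the transaction locally, and broadcast it to a BSC RPC node. A minimal sketch, assuming a hypothetical buyNFT(tokenId) method; take the real contract address and ABI from the verified contract on BscScan:

// Minimal sketch: sending a contract call to BSC with web3.js.
// The contract address, ABI fragment and method name are placeholders.
const Web3 = require('web3');

const web3 = new Web3('https://bsc-dataseed.binance.org/'); // public BSC RPC node

const abi = [{ name: 'buyNFT', type: 'function', inputs: [{ name: 'tokenId', type: 'uint256' }] }];
const contract = new web3.eth.Contract(abi, '0xYourContractAddress');

async function buy(tokenId, privateKey) {
    const account = web3.eth.accounts.privateKeyToAccount(privateKey);
    const tx = {
        to: contract.options.address,
        data: contract.methods.buyNFT(tokenId).encodeABI(),
        gas: 200000, // estimate with web3.eth.estimateGas in practice
        value: web3.utils.toWei('0.1', 'ether'), // price in BNB, if the method is payable
    };
    const signed = await account.signTransaction(tx);
    return web3.eth.sendSignedTransaction(signed.rawTransaction);
}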

Related

BlueWallet: How to find channel info

I set up an LN wallet with BlueWallet. Then I used a faucet (https://lightningnetworkstores.com/faucet) to get 6 sats.
In BlueWallet I can see I'm connected to lndhub.io.
My question: where is the channel info?
To open a channel, there must be a tx on the blockchain!
Also: is there a way to see the invoice of the LN tx?
On their website, BlueWallet's Lightning experience is described as:
You can use our hosted Lightning wallets or connect to your own node.
Channel
So, if you don't have your own Lightning node (LND only, I guess), you are using a custodial wallet: your balance is just a database entry behind the balance of a large Lightning node controlled by the BlueWallet team. Your deposit transaction probably did not cause a channel to be opened; it is just an incoming LN payment to the BlueWallet node, which uses their available incoming liquidity.
Invoice
The faucet you linked uses LNURL-withdraw to send money. This means that your wallet decoded the LNURL string or QR code, getting something like https://lightningnetworkstores.com/api/lnurl2?id=C9yU9yPVHW37783hoaDABd. Your wallet then sent a GET request to that URL, which returned some info and a callback URL. Finally, your wallet generated an invoice (through the central BlueWallet node) and sent it to the server (via the callback URL) to be paid.
However, I have no clue whether or where you can find invoices in BlueWallet.
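For reference, the LNURL-withdraw handshake described above looks roughly like this from the wallet's side. This is only a sketch: fetch is the standard Fetch API, and createInvoice is a hypothetical stand-in for however the wallet generates an invoice (via lndhub in BlueWallet's case):

// Hypothetical stand-in for the wallet's invoice generation.
async function createInvoice(amountMsat) {
    throw new Error('wire this up to your wallet or node invoice API');
}

async function lnurlWithdraw(decodedLnurl) {
    // Step 1: GET the decoded LNURL; the service answers with withdraw parameters.
    const params = await (await fetch(decodedLnurl)).json();
    // params: { tag: 'withdrawRequest', callback, k1, minWithdrawable, maxWithdrawable, ... }

    // Step 2: generate a BOLT11 invoice for the offered amount (in millisatoshis).
    const invoice = await createInvoice(params.maxWithdrawable);

    // Step 3: hand the invoice back via the callback URL; the service then pays it.
    const url = `${params.callback}?k1=${params.k1}&pr=${invoice}`;
    return (await fetch(url)).json(); // { status: 'OK' } on success
}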

Why should I use .send({from: this.state.account}) in the web3 interface to receive Ether from the contract?

I have sent/deposited Ether to my smart contract, using ReactJS as the front end and web3 as the interface, with the code below.
await this.state.Bank.methods.desposit().send({value: amountInEther.toString(), from: this.state.account})
While trying to receive Ether from the smart contract via the React front end, I used this code:
(React code) await this.state.Bid.methods.withDraw(this.state.account).call()
dbank.sol (Solidity code below):
function withDraw(address payable receiver) public {
    receiver.transfer(accounts[receiver]);
    accounts[receiver] = 0;
}
When I tried this it didn't work, but when I changed the React code from call to send with the address, it worked:
await this.state.Bid.methods.withDraw(this.state.account).send({from: this.state.account})
But WHY? Also, when I use this, MetaMask charges some gas fee to run it. Don't you think the smart contract should give the Ether to us and pay the gas fee itself?
Paying a gas fee while sending Ether to the contract is acceptable, but why should we pay a gas fee to receive our own Ether from the contract?
About your first question: because the web3 library was made that way, it's that simple. Other libraries handle this differently, but what {from: this.state.account} does is pass the library the information it needs to sign the transaction and send it.
About your second question: for the Ethereum network there is currently no way (and there probably never will be) to distinguish a "withdraw" from any other transaction, because they are exactly the same. Remember that you are calling a contract function that will modify the state of the blockchain; that call becomes a transaction, and the transaction has to go through the whole process of mining, verification and all of that. So, as with any transaction, you have to pay gas.
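To make the distinction concrete, here is a minimal sketch of the two web3.js invocation styles (contract and myAddress are placeholders): .call() runs the function locally as a read-only query, while .send() signs and broadcasts a real transaction.

// Read-only query: executed locally by the node, no transaction, no gas,
// no state change. Only meaningful for functions that just read state.
const balance = await contract.methods.accounts(myAddress).call();

// State-changing invocation: builds a transaction, asks the wallet
// (e.g. MetaMask) to sign it with `from`, broadcasts it, and costs gas.
// withDraw() moves Ether, so it has to be a transaction.
await contract.methods.withDraw(myAddress).send({ from: myAddress });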

Uploading images to S3 with React and Elixir/Phoenix

I'm trying to plan how I'm going to accomplish this, and so far I have the following:
I grab a file on the front end, and on submit I send the file name and type to the back end, where it generates a presigned URL. I send that to the FE, and then upload the file from the front end.
The issue here is that when I generate the presigned URL, I want to commit the UUID filename going to S3 to my database via the back end. I don't know whether the front end will successfully complete the upload. I can think of some janky ways to garbage-collect this, but I'm wondering: is there a typically prescribed way to do this that doesn't introduce the possibility of failures the BE isn't aware of?
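For context, the presign step described above typically looks like this on a JavaScript back end (bucket name is a placeholder; a Phoenix app would do the equivalent with ExAws):

// Sketch: generate a presigned PUT URL for a fresh UUID key.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner');
const crypto = require('crypto');

const s3 = new S3Client({ region: 'us-east-1' });

async function presign(contentType) {
    const key = crypto.randomUUID();
    const url = await getSignedUrl(
        s3,
        new PutObjectCommand({ Bucket: 'my-bucket', Key: key, ContentType: contentType }),
        { expiresIn: 300 } // URL validity in seconds
    );
    return { key, url }; // both go to the FE; the open question is when to commit `key`
}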
Yes, there's an alternative way. You can configure your bucket so that it sends an event whenever an object is created or updated. You can send this event either to an SNS topic or to AWS Lambda.
From there you can make a request to a webhook in your Phoenix app, which can insert the record into the database.
The advantage is that the event only fires once the file has actually been created.
For more info, you can read the following: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
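A minimal sketch of that Lambda in Node.js (the webhook URL and route are placeholders; the event shape is the standard S3 notification):

// Lambda triggered by the S3 ObjectCreated event; forwards the new
// object's key to the Phoenix app's webhook so the BE can record it.
const https = require('https');

exports.handler = async (event) => {
    for (const record of event.Records) {
        const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
        const body = JSON.stringify({ key, bucket: record.s3.bucket.name });
        await new Promise((resolve, reject) => {
            const req = https.request('https://example.com/api/s3_webhook', {
                method: 'POST',
                headers: { 'Content-Type': 'application/json' },
            }, (res) => (res.statusCode < 300 ? resolve() : reject(new Error(`HTTP ${res.statusCode}`))));
            req.on('error', reject);
            req.end(body);
        });
    }
};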
The way I'm currently handling this is as follows:
1. Compress the image client side.
2. Send the image to the backend application server.
3. Create a UUID on the backend.
4. Send the image from the backend to S3, using the UUID as the key.
5. On success, put the UUID into the database.
6. Respond to the client with the UUID so it can display the image.
By following these steps, you don't introduce errors into your database. A sketch of the flow follows.
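This is a Node/Express illustration of those steps (the real back end here is Phoenix, but the flow is identical; db.insertImage is a hypothetical stand-in for the database write):

const express = require('express');
const crypto = require('crypto');
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const app = express();
const s3 = new S3Client({ region: 'us-east-1' });
const db = { insertImage: async (uuid) => { /* INSERT INTO images ... */ } }; // stand-in

app.post('/images', express.raw({ type: 'image/*', limit: '10mb' }), async (req, res) => {
    const uuid = crypto.randomUUID();              // step 3: create a UUID
    await s3.send(new PutObjectCommand({           // step 4: backend uploads to S3
        Bucket: 'my-bucket',
        Key: uuid,
        Body: req.body,
        ContentType: req.headers['content-type'],
    }));
    await db.insertImage(uuid);                    // step 5: only runs if the upload succeeded
    res.json({ uuid });                            // step 6: client can now display the image
});

app.listen(4000);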

Is Amazon SQS a good tool for handling analytics logging data to a database?

We have a few Node.js servers where the details and payload of each request need to be logged to SQL Server for reporting and other business analytics.
The volume of requests, and the similarity of needs between servers, has me wanting to approach this with a centralized logging service. My first instinct is to use something like Amazon SQS and let it act as a buffer, either writing to SQL Server directly or via a small logging server that makes database calls as directed by SQS.
Does this sound like a good use for SQS or am I missing a widely used tool for this task?
The solution will really depend on how much data you're working with, as each service has limitations. To name a few:
SQS
First off, since you're dealing with logs, you don't want duplication. With this in mind, you'll want a FIFO (first in, first out) queue.
SQS by itself doesn't really invoke anything. What you'll want to do here is set up the queue, then submit messages to it via the AWS JS SDK (a sketch follows below). Then, when you get the message back in your callback, take the message ID and pass that data to an invoked Lambda function (you can write those in Node.js as well), which stores the info you need in your database.
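A minimal sketch of the submit step with the AWS SDK for JavaScript (the queue URL is a placeholder; FIFO queues require a MessageGroupId, and content-based deduplication is assumed to be enabled on the queue):

// Push one request-log entry onto a FIFO queue.
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({ region: 'us-east-1' });

async function logRequest(entry) {
    const result = await sqs.send(new SendMessageCommand({
        QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/request-logs.fifo',
        MessageBody: JSON.stringify(entry),
        MessageGroupId: 'request-logs', // required for FIFO queues
    }));
    return result.MessageId;
}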
That said it's important to know that messages in an SQS queue have a size limit:
The minimum message size is 1 byte (1 character). The maximum is 262,144 bytes (256 KB).
To send messages larger than 256 KB, you can use the Amazon SQS Extended Client Library for Java. This library allows you to send an Amazon SQS message that contains a reference to a message payload in Amazon S3. The maximum payload size is 2 GB.
CloudWatch Logs
(not to be confused with the high-level CloudWatch service itself, which is more about metrics)
The idea here is that you submit event data to CloudWatch Logs.
It also has a limit here:
Event size: 256 KB (maximum). This limit cannot be changed
Unlike SQS, CloudWatch Logs can be automated to pass log data to Lambda, which can then write it to your SQL server; the AWS docs explain how to set that up. The write itself is a single API call, sketched below.
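A minimal sketch of that write with the AWS SDK for JavaScript (log group and stream names are placeholders and must already exist):

// Write one log event to a CloudWatch Logs stream.
const { CloudWatchLogsClient, PutLogEventsCommand } = require('@aws-sdk/client-cloudwatch-logs');

const logs = new CloudWatchLogsClient({ region: 'us-east-1' });

async function logEvent(entry) {
    await logs.send(new PutLogEventsCommand({
        logGroupName: '/myapp/request-logs',
        logStreamName: 'server-1',
        logEvents: [{ timestamp: Date.now(), message: JSON.stringify(entry) }],
    }));
}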
S3
Simply set up a bucket and have your servers write data out to it. The nice thing here is that since S3 is meant for storing large files, you really don't have to worry about the previously mentioned size limitations. S3 buckets also have events which can trigger Lambda functions, so you can happily go on your way sending out log data.
If your log data gets big enough, you can scale out to something like AWS Batch, which gets you a cluster of containers that can be used to process log data. You also get a data backup for free: if your DB goes down, you've got the log data stored in S3 and can throw together a script to load everything back up. You can also use Lifecycle Policies to migrate old data to lower-cost storage, or remove it altogether.
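A sketch of the write-out step (bucket name is a placeholder; logs are buffered and flushed periodically as newline-delimited JSON):

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' });
let buffer = [];

function logRequest(entry) {
    buffer.push(JSON.stringify(entry)); // one JSON document per line
}

// Flush the buffer to one timestamped S3 object per minute.
setInterval(async () => {
    if (buffer.length === 0) return;
    const body = buffer.join('\n');
    buffer = [];
    await s3.send(new PutObjectCommand({
        Bucket: 'my-request-logs',
        Key: `logs/${Date.now()}.ndjson`,
        Body: body,
    }));
}, 60000);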

Is it possible to set up an IPN listener using client-side scripting?

I'm integrating a web payment flow using AngularJS.
My main goals are:
to let the user top up or pay via PayPal
upon success, to redirect them back to my site
If the transaction is successful, I will then update our DB records.
Glad to say that after 2 days I'm done with the first two steps. Then I read about PDT (Payment Data Transfer) and used it to get the payer's transaction details, but I've read many posts saying PDT isn't reliable enough and that I must also use IPN (Instant Payment Notification). So I googled it, and almost all samples/tutorials about IPN use server-side scripting. Is it possible to implement an IPN listener using JavaScript alone?
No, not on the client side. You can use server-side JavaScript (Node.js) to do this. The purpose of IPN is to let your server know that a payment is completed. The IPN request comes directly from PayPal, behind the scenes, to a URL you give it. There's no way for a client to receive this signal instead, and if there were, it would be a big security flaw, because anyone could forge it.
However, you could update your backend using IPN, then use something like socket.io (WebSockets) or long polling (plain old AJAX) to let your client know that the payment was successful. With long polling, you'd basically be asking your back end every second or two whether the payment has succeeded; with sockets, you have more direct communication. I like socket.io because it falls back to long polling (or Flash) if real WebSockets aren't available.
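A minimal sketch of that arrangement in Node/Express (route, event name and port are placeholders; the cmd=_notify-validate echo is PayPal's documented IPN verification handshake; global fetch assumes Node 18+):

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

app.post('/ipn', express.urlencoded({ extended: false }), async (req, res) => {
    res.sendStatus(200); // acknowledge PayPal immediately

    // Echo the message back to PayPal with cmd=_notify-validate prepended.
    const body = 'cmd=_notify-validate&' + new URLSearchParams(req.body).toString();
    const reply = await fetch('https://ipnpb.paypal.com/cgi-bin/webscr', {
        method: 'POST',
        headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
        body,
    });

    if ((await reply.text()) === 'VERIFIED' && req.body.payment_status === 'Completed') {
        // update your DB records here, then push the news to the browser
        io.emit('payment:completed', { txn_id: req.body.txn_id });
    }
});

server.listen(3000);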
