Difference between the BSC JSON-RPC endpoints https://bsc-dataseed1.binance.org:443 and https://data-seed-prebsc-1-s1.binance.org:8545/? - web3js

I had been doing fine with the JSON-RPC endpoint (https://bsc-dataseed1.binance.org:443) from https://docs.binance.org/smart-chain/developer/rpc.html; normally the estimated gas limit is only about 5X,XXX.
var web3 = new Web3('https://bsc-dataseed1.binance.org:443');
var web3 = new Web3('https://data-seed-prebsc-1-s1.binance.org:8545/');
But today the gas limit is too high (81,344), so I looked around and found several other endpoints, one of them being https://data-seed-prebsc-1-s1.binance.org:8545/
With the same contract, data, and nonce, web3.eth.estimateGas against the new endpoint gives a gas limit of 22,848, which is about the same as the fee for a plain BNB transfer.
Why do the two endpoints give such different estimates? Can someone help me understand?
Is it safe to use the new one?

Actually, data-seed-prebsc-1-s1.binance.org:8545 is a testnet RPC endpoint! The estimates differ most likely because estimateGas runs against whatever chain the node serves, and on the testnet your contract's state (or the contract itself) isn't there, so the call gets estimated roughly like a plain transfer.
Use it if you wanna test stuff without burning "real" BNB for gas (testnet BNB has no value lol).
Have a nice day :D
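A quick way to confirm which network an endpoint serves is to query its chain ID: BSC mainnet is 56 and the testnet (Chapel) is 97. Below is a minimal sketch doing that check, written with Python's web3.py purely for illustration (the question uses web3.js, where web3.eth.getChainId() does the same thing):

# Sketch: print the chain ID of each endpoint to see which network it serves.
# Chain ID 56 = BSC mainnet, 97 = BSC testnet (Chapel).
from web3 import Web3

ENDPOINTS = [
    'https://bsc-dataseed1.binance.org:443',
    'https://data-seed-prebsc-1-s1.binance.org:8545/',
]

for url in ENDPOINTS:
    w3 = Web3(Web3.HTTPProvider(url))
    print(url, '-> chain id', w3.eth.chain_id)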

Can I send an alert when a message is published to a pubsub topic?

We are using pubsub & a cloud function to process a stream of incoming data. I am setting up a dead letter topic to handle cases where a message cannot be processed, as described at Cloud Pub/Sub > Guides > Handling message failures.
I've configured a subscription on the dead-letter topic to retain messages for 7 days; we're doing this using Terraform:
resource "google_pubsub_subscription" "dead_letter_monitoring" {
project = var.project_id
name = "var.dead_letter_sub_name
topic = google_pubsub_topic.dead_letter.name
expiration_policy { ttl = "" }
message_retention_duration = "604800s" # 7 days
retain_acked_messages = true
ack_deadline_seconds = 600
}
We've tested our cloud function thoroughly, so our expectation is that messages will appear on this dead-letter topic very rarely, perhaps never. Nevertheless we're putting it in place to make sure that we catch any anomalies.
Given how rarely we expect messages to appear on the dead-letter topic, we need to set up an alert that sends an email when such a message appears. Is it possible to do this? I've looked through the alerts one can create at https://console.cloud.google.com/monitoring/alerting/policies/create, but I didn't see anything that could accomplish this.
I know that I could write a cloud function to consume a message from the subscription and act upon it accordingly however I'd rather not have to do that, a monitoring alert feels like a much more elegant way of achieving this.
Is this possible?
Yes, you can use Cloud Monitoring for that. Create a new alerting policy with the following configuration:
Select the Pub/Sub Topic resource and the Published messages metric. Observe the value every minute and count it (the aligner, under the advanced options). Then set the condition so that the alert fires when the most recent value is above 0.
To restrict this to your topic, add a filter on topic_id with your topic name.
Finally, configure the alert to send an email. It should work!
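If you prefer to script the policy rather than click it together in the console, the sketch below builds roughly the same configuration with the google-cloud-monitoring Python client. Treat it as a sketch under assumptions: the project ID, the topic_id value, and the notification channel resource name are placeholders, and the metric used is pubsub.googleapis.com/topic/send_message_operation_count (publish operations on the topic).

# Sketch: alert when anything is published to the dead-letter topic.
# Assumes google-cloud-monitoring v2.x and a pre-created email notification
# channel; "my-dead-letter-topic" and the channel name are placeholders.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

project_id = "my-project"                                               # placeholder
email_channel = "projects/my-project/notificationChannels/1234567890"  # placeholder

client = monitoring_v3.AlertPolicyServiceClient()

condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Message published to dead-letter topic",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter=(
            'metric.type = "pubsub.googleapis.com/topic/send_message_operation_count" '
            'AND resource.type = "pubsub_topic" '
            'AND resource.labels.topic_id = "my-dead-letter-topic"'
        ),
        comparison=monitoring_v3.ComparisonType.COMPARISON_GT,
        threshold_value=0,
        duration=duration_pb2.Duration(seconds=0),
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period=duration_pb2.Duration(seconds=60),
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_COUNT,
            )
        ],
    ),
)

policy = monitoring_v3.AlertPolicy(
    display_name="Dead-letter topic received a message",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[condition],
    notification_channels=[email_channel],
)

client.create_alert_policy(name="projects/%s" % project_id, alert_policy=policy)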

Are JWT-signed prices secure enough for PayPal transactions client-side?

I'm using NextJS with Firebase, and PayPal is 100x easier to implement client-side. The only worry I have is somebody tampering with the amount before the token is sent to PayPal. If I JWT-sign with a secret key, is that secure enough (within reason) to dissuade people from attempting to manipulate the prices?
I thought about writing a serverless function, but it would still have to pass the values to the client to finish the transaction (the prices are baked into a statically-generated site). I'm not sure if PayPal's IPN listener is still even a thing, or the NVP (name-value pairs) API. My options as I see them:
Verify the prices and do payment server-side (way more complex)
Sign the prices with a secret key at build time, and reference those prices when sending to PayPal.
I should also mention that I'm completely open to ideas, and in no way think that these are the 'best' as it were.
Pseudo-code:
cart = {
  products: [ obj1, obj2, obj3 ],  // where obj = { price, sale_price, etc. }
  total: cart.products.length
}
Create an order with PayPal using the cart array, mapping over the values:
cart.products.map( prod => prod.sale_price || prod.price )
Someone could easily modify the object to make price '.01' instead of '99.99' (for example)
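For concreteness, here is a minimal sketch of what option 2 could look like: signing a product's price with a secret at build time and checking the signature before the order amount is trusted. It is written in Python with PyJWT purely for illustration (the actual stack is Next.js, where the jsonwebtoken package exposes equivalent sign/verify calls); the secret name and the claim fields are assumptions, not part of the original setup. Note that verifying an HS256 token needs the same secret, so the check has to run somewhere the secret stays private.

# Sketch: JWT-sign a price at build time and verify it before charging.
# SECRET_KEY, the "sku"/"sale_price" claims, and the amounts are made up
# for illustration; requires the PyJWT package (pip install PyJWT).
import jwt

SECRET_KEY = "build-time-secret"  # placeholder; must never ship to the browser

# Build time: emit a signed token alongside (or instead of) the raw price.
price_token = jwt.encode({"sku": "obj1", "sale_price": "99.99"},
                         SECRET_KEY, algorithm="HS256")

# Order time: trust only what the signature vouches for, not the posted amount.
claims = jwt.decode(price_token, SECRET_KEY, algorithms=["HS256"])
print(claims["sale_price"])  # tampering with the token invalidates the signature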

How can I set a deposit tag for XRP transactions using the Coinbase API?

I am playing with the Coinbase API and am attempting to send XRP from my Coinbase wallet to another account (outside of Coinbase). The Coinbase send API (https://developers.coinbase.com/api/v2#send-money) allows me to set the destination address but there is no means of setting the destination tag, which is required for XRP transfers.
How can I set the destination tag?
The Coinbase Pro API documentation hints at a possible solution (https://docs.pro.coinbase.com/?r=1#crypto). Two parameters of interest are destination_tag and no_destination_tag. So if you want to send XRP using Python, you might write the following:
client.send_money(account_id=<account-id>,
                  to=<destination-address>,
                  amount=<amount>,
                  currency='XRP',
                  destination_tag=<destination-tag>,
                  no_destination_tag=False)
If you don't want to use the destination tag, you can just omit the destination_tag parameter and set no_destination_tag to True.
Extremely late to this but figured it'll be useful for someone else stumbling onto this thread.
I've just tested putting deposit_tag into the POST request to Coinbase's API (not Coinbase Pro) and it successfully puts the deposit tag through.
Coinbase also doesn't allow you to send XRP if you don't specify a deposit tag, which is handy.
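For completeness, here is a sketch of what that POST to the regular Coinbase (v2) send-money endpoint might look like with deposit_tag included, using Python's requests. The endpoint and the CB-ACCESS-* signing scheme follow the API docs linked in the question; the placeholder values and the exact behaviour of deposit_tag are assumptions based on the test described above.

# Sketch: send XRP with a deposit tag via POST /v2/accounts/<account-id>/transactions.
# All <...> values are placeholders; CB-ACCESS-SIGN is an HMAC-SHA256 of
# timestamp + method + path + body using the API secret, per the Coinbase API docs.
import hashlib
import hmac
import json
import time

import requests

API_KEY = "<api-key>"
API_SECRET = "<api-secret>"
ACCOUNT_ID = "<account-id>"

path = "/v2/accounts/%s/transactions" % ACCOUNT_ID
body = json.dumps({
    "type": "send",
    "to": "<destination-address>",
    "amount": "<amount>",
    "currency": "XRP",
    "deposit_tag": "<deposit-tag>",  # the XRP destination tag
})
timestamp = str(int(time.time()))
signature = hmac.new(API_SECRET.encode(),
                     (timestamp + "POST" + path + body).encode(),
                     hashlib.sha256).hexdigest()

response = requests.post(
    "https://api.coinbase.com" + path,
    data=body,
    headers={
        "CB-ACCESS-KEY": API_KEY,
        "CB-ACCESS-SIGN": signature,
        "CB-ACCESS-TIMESTAMP": timestamp,
        "CB-VERSION": "2021-06-01",
        "Content-Type": "application/json",
    },
)
print(response.status_code, response.json())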

How to fix memory leak in my application?

In my GAE app I add rows to a Google Spreadsheet:
taskqueue.add(url='/tabletask?u=%s' % (user_id),
              retry_options=taskqueue.TaskRetryOptions(task_retry_limit=0),
              method='GET')


class TableTaskHandler(webapp2.RequestHandler):
    def get(self):
        user_id = self.request.get('u')
        if user_id:
            try:
                tables.add_row(
                    user_id
                )
            except Exception, error_message:
                pass


def get_google_api_service(scope='https://www.googleapis.com/auth/spreadsheets',
                           api='sheets', version='v4'):
    ''' Login to Google API with service account and get the service
    '''
    service = None
    try:
        credentials = AppAssertionCredentials(scope=scope)
        http = credentials.authorize(httplib2.Http(memcache))
        service = build(api, version, http=http)
    except Exception, error_message:
        logging.exception('Failed to get Google API service, exception happened - %s' % error_message)
    return service


def add_row(user_id, user_name, project_id, question, answer, ss_id=SPREADSHEET_ID):
    service = get_google_api_service()
    if service:
        values = [
            [
                user_id, user_name, project_id, question, answer  # 'test1', 'test2'
            ],
            # Additional rows ...
        ]
        body = {
            'values': values
        }
        # https://developers.google.com/sheets/api/guides/values#appending_values
        response = service.spreadsheets().values().append(
            spreadsheetId=ss_id,
            range='A1:E1000',
            valueInputOption='RAW',
            body=body).execute()
I add many tasks with different row values.
As a result I get critical errors 'Exceeded soft private limit of 128 Mb with 158 Mb' after servicing 5 requests in total.
What could be wrong here?
At first glance there's nothing special in your code that might lead to a memory leak.
I don't think anybody can locate it unless they're deeply familiar with the third-party libraries used and their existing bugs. So I'd approach the problem as follows:
1. First, let's find out where exactly the memory is leaking, and whether it's leaking at all. Use tracemalloc, memory_profiler, heapy, or whatever else you're familiar with (a tracemalloc sketch is shown below); most of the available profilers are listed in 'Which Python memory profiler is recommended?'
Expected outcome: you know exactly where the memory is leaking, down to a code line / Python expression.
2. If the problem is in third-party code, try to dig into that code and figure out what's going on there.
3. Depending on the outcome of step 2:
a. Post another SO question like 'why does this Python code excerpt lead to a memory leak' - ideally a standalone code snippet that reproduces the weird behavior locally, free of any third-party libraries. An environment specification (at least the Python version) is appreciated.
b. If the problem is in a third-party library and you've located the cause, open a bug report on GitHub or wherever the target project is hosted.
c. If the problem is clearly in a third-party library and you're unable to find the cause, open a ticket describing the case with the profiler's report attached.
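To make step 1 concrete, here is a minimal sketch of the kind of local profiling run meant above, using the standard library's tracemalloc (Python 3; on the Python 2.7 GAE runtime you would reach for memory_profiler or the pytracemalloc backport instead). It calls the add_row function from the question in a loop and diffs two snapshots; the loop count and argument values are arbitrary.

# Sketch: find the allocation sites that grow while add_row() is called repeatedly.
import tracemalloc

tracemalloc.start(25)            # keep up to 25 stack frames per allocation
before = tracemalloc.take_snapshot()

for i in range(50):              # arbitrary number of calls, enough to see growth
    add_row('user-%d' % i, 'name', 'project', 'question', 'answer')

after = tracemalloc.take_snapshot()

# The code lines whose allocated memory grew the most between the two snapshots.
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)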
It seems that you are running instance class B1 or F1, which have a memory limit of 128 MB.
A possible solution would be to use a higher instance class (the instance_class setting in app.yaml). But please keep in mind that choosing a different instance class will have an impact on your pricing and quotas.

'IntegrityError: column username is not unique' while using Django User model in test

While running some tests, I started to get an IntegrityError in my setUp function. Here is my code:
def setUp(self):
    self.client = Client()
    self.emplUser = User.objects.create_user('employee@email.com', 'employee@email.com', 'nothing')
    self.servUser1 = User.objects.create_user('thebestcompany@email.com', 'thebestcompany@email.com', 'nothing')
    self.servUser2 = User.objects.create_user('theothercompany@email.com', 'theothercompany@email.com', 'nothing')
    self.custUser1 = User.objects.create_user('john@email.com', 'john@email.com', 'nothing')
    self.custUser2 = User.objects.create_user('marcus@email.com', 'marcus@email.com', 'nothing')
    # ... save users here ...
I'm wondering how this IntegrityError keeps getting raised. I delete all the users in the tearDown function and am using sqlite3 as my DB backend. I see no conflicting usernames, and in production I have no issues with using emails as usernames.
This started happening only half an hour ago, out of the blue. Has anyone found a solution to this problem?
I'm sure you're not suffering from this problem anymore since you wrote 18 months ago, but I had this problem too and finally figured out what was happening. When using Postgres for test cases, DB changes are done in a transaction and simply rolled back, so it is not necessary to explicitly clear tables in tearDown(); with SQLite, however, it is necessary.
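A minimal sketch of the explicit cleanup this answer is talking about, assuming the users are created in setUp as in the question:

def tearDown(self):
    # Under SQLite the rows created in setUp survive into the next test in
    # this setup, so remove them explicitly to avoid the IntegrityError.
    User.objects.all().delete()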
Late but more appropriate answer, for the people who land here after a Google search:
When there is interaction with the database in your tests (typically, creating model instances), you should subclass your test class from django.test.TestCase, which flushes the database after each test is run.
Then you don't need to write a tedious tearDown method in all your test classes.
See https://docs.djangoproject.com/en/dev/topics/testing/overview/#writing-tests
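For illustration, here is what the setUp from the question looks like on top of django.test.TestCase; the class name and the extra test are made up, but the create_user calls are the ones from the question:

from django.contrib.auth.models import User
from django.test import Client, TestCase


class UserSetupTests(TestCase):
    # TestCase wraps every test in a transaction and rolls it back afterwards,
    # so no tearDown is needed and usernames never collide between tests.

    def setUp(self):
        self.client = Client()
        self.emplUser = User.objects.create_user('employee@email.com', 'employee@email.com', 'nothing')
        self.custUser1 = User.objects.create_user('john@email.com', 'john@email.com', 'nothing')

    def test_users_created(self):
        self.assertEqual(User.objects.count(), 2)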
