Unclear difference between Tron TRC20 transfer transactions - cryptocurrency

I can't work out the difference between TRC20 transfer transactions on the Tron blockchain. I am investigating the USDT TRC20 contract:
https://tronscan.io/#/token20/TR7NHqjeKQxGTCi8q8ZY4pL8otSzgjLj6t
I see that some transactions are "transfer" method calls, but the Tronscan API and the Tronscan UI behave differently for them.
For example, let's take these two transactions. The first one sends USDT to a Binance cold Tron address:
https://tronscan.io/#/transaction/8df6e8160dfed2d0247bda50b2373cebe8bba35378840297fdb4947a5ce7d754
This transaction is returned by the API endpoint for listing an address's transactions:
https://apilist.tronscan.org/api/transaction?sort=-timestamp&count=true&limit=200&start=0&address=TMuA6YqfCeX8EhbfYEg5y7S4DqzSJireY9
I also noticed that this transaction is shown on tronscan.io both in the Transfers tab and in the Transactions tab:
https://tronscan.io/#/address/TMuA6YqfCeX8EhbfYEg5y7S4DqzSJireY9/transfers
The second one is just a random transaction sending tokens of the same USDT contract to a Tron address:
https://tronscan.io/#/transaction/334992dd56a8b798523b21694605d9fae45d0f6d8fc5afa504cb2dd1814fc8cf
but the transaction list API doesn't return it:
https://api.trongrid.io/v1/accounts/TAFJ9rnRwKtzurDikR5ETEvpQnqxVPtjXW/transactions?limit=200
Interestingly, it is not shown in the Transactions tab on tronscan.io, only in the Transfers tab:
https://tronscan.io/#/address/TAFJ9rnRwKtzurDikR5ETEvpQnqxVPtjXW
I figured out that if I try another API endpoint, the one for retrieving the TRC20 (token contract) transactions of an address, then this transaction is returned:
https://api.trongrid.io/v1/accounts/TAFJ9rnRwKtzurDikR5ETEvpQnqxVPtjXW/transactions/trc20?limit=200
So what is the difference between these transactions? Why is one returned by the general transaction API endpoint while the other is not? Why is one shown in both tabs on Tronscan and the other only in the Transfers list?
I also noticed that another Tron API, TronGrid, behaves the same way.
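For anyone reproducing this, here is a minimal sketch in Python (assuming the public TronGrid API and the second address from the question) that queries both endpoints side by side; comparing the two result sets shows exactly the discrepancy described above:

import requests

ADDRESS = "TAFJ9rnRwKtzurDikR5ETEvpQnqxVPtjXW"
BASE = "https://api.trongrid.io/v1/accounts"

# Top-level transactions recorded for the account itself
txs = requests.get(f"{BASE}/{ADDRESS}/transactions", params={"limit": 200}).json()

# TRC20 token transfers involving the account (token-level view)
trc20 = requests.get(f"{BASE}/{ADDRESS}/transactions/trc20", params={"limit": 200}).json()

print(len(txs.get("data", [])), "entries from /transactions")
print(len(trc20.get("data", [])), "entries from /transactions/trc20")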

Related

Best practice for Apache Flink for user-defined alerts

Let's say my Flink job receives a stream of stock prices (as an example) and issues an alert if, say, a stock drops below a certain price. Users can add or remove these alert criteria. For example, user abc@somemail.com creates a rule to be alerted if the price of GME drops below $100. How can my Flink job dynamically keep track of all these alert criteria in a scalable manner?
I could create an API which my Flink job could call to get all of the updated alert criteria, but that would mean calling the API numerous times to keep everything up to date.
Or I could create a permanent table with the Flink Table API, which another Flink job updates as soon as a user creates a new alert criterion.
What would be the best practice for this use case?
Notes:
Alerts should be issued with minimal latency.
Alert criteria should take effect as soon as the user creates them.
Here's a design sketch for a purely streaming approach:
alertUpdates = alerts
.keyBy(user)
.process(managePreviousAlerts) // uses MapState<Stock, Price>
.keyBy(stock, price)
priceUpdates = prices
.keyBy(stock)
.process(managePriceHistory)
.keyBy(stock, price)
alertUpdates
.connect(priceUpdates)
.process(manageAlertsAndPrices) // uses MapState<User, Boolean>
managePreviousAlerts maintains a per-user MapState from stocks to alert prices. When a new alert arrives, find the existing alert for this stock (for this user), if any. Then emit two AlertUpdates: a RemoveAlert event for this (user, stock, oldAlertPrice) and an AddAlert event for this (user, stock, newAlertPrice).
managePriceHistory keeps some per-stock pricing history in state, and uses some business logic to decide if the incoming price is a change that merits triggering alerts. (E.g., maybe you only alert if the price went down.)
manageAlertsAndPrices maintains a per-stock, per-price MapState, keyed by user.
The keys of this MapState are all of the users w/ alerts for this stock at this price. Upon receiving a PriceUpdate, alert all of these users by iterating over the keys of the MapState.
Upon receiving a RemoveAlert, remove the user from the MapState.
Upon receiving an AddAlert, add the user to MapState.
This should scale well. The latency will be governed by the two network shuffles caused by the keyBys.
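To make the MapState usage in manageAlertsAndPrices concrete, here is a rough PyFlink sketch; the event shapes (the is_add, user, stock, and price/value fields) are assumptions for illustration, not part of the original design:

from pyflink.datastream.functions import KeyedCoProcessFunction, RuntimeContext
from pyflink.datastream.state import MapStateDescriptor
from pyflink.common.typeinfo import Types

class ManageAlertsAndPrices(KeyedCoProcessFunction):
    # Keyed by (stock, price); keeps the set of users with an alert at this price.

    def open(self, runtime_context: RuntimeContext):
        self.users = runtime_context.get_map_state(
            MapStateDescriptor("users-with-alerts", Types.STRING(), Types.BOOLEAN()))

    def process_element1(self, alert_update, ctx):
        # Alert-update stream: AddAlert / RemoveAlert events from managePreviousAlerts.
        if alert_update.is_add:
            self.users.put(alert_update.user, True)
        else:
            self.users.remove(alert_update.user)

    def process_element2(self, price_update, ctx):
        # Price-update stream: alert every user registered at this (stock, price) key.
        for user in self.users.keys():
            yield user, price_update.stock, price_update.price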
I think this depends on how you approach generating alerts in general. My first idea would be to use Kafka to store the new alerts, so that Flink can receive them as a stream. Then, depending on the requirements, you could simply broadcast the stream of alerts and connect it with the stream of stock prices. This should allow you to scale pretty well.
But if you are using the Table API, then using an external table to store the data may also be a good idea; then you could take a look at something along those lines.
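For the broadcast variant, a sketch along these lines could work (assuming a recent PyFlink release with broadcast state support; the alert and price record shapes are made up for illustration):

from pyflink.datastream.functions import BroadcastProcessFunction
from pyflink.datastream.state import MapStateDescriptor
from pyflink.common.typeinfo import Types

# Broadcast state: stock symbol -> alert threshold (simplified to one alert per stock).
alerts_descriptor = MapStateDescriptor("alerts", Types.STRING(), Types.FLOAT())

class CheckPricesAgainstAlerts(BroadcastProcessFunction):
    def process_broadcast_element(self, alert, ctx):
        # Every parallel instance receives each alert and stores it in broadcast state.
        ctx.get_broadcast_state(alerts_descriptor).put(alert.stock, alert.threshold)

    def process_element(self, price, ctx):
        # Regular (price) stream: look up the threshold for this stock, if any.
        threshold = ctx.get_broadcast_state(alerts_descriptor).get(price.stock)
        if threshold is not None and price.value < threshold:
            yield price.stock, price.value, threshold

# Wiring, with alerts and prices as DataStreams (e.g. both read from Kafka):
# result = prices.connect(alerts.broadcast(alerts_descriptor)) \
#                .process(CheckPricesAgainstAlerts())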

Amazon MWS: determine if a product is in the Buy Box

I am integrating the Amazon MWS API. When importing products I need one important field: whether the seller's product is the Buy Box winner or not. I need to set a flag in our DB.
I have checked all the possible product APIs in the Amazon MWS Scratchpad, but had no luck finding how to get this information.
The winner of the buy box can (and does) change very frequently, depending on the number of sellers for the product. The best way to get up-to-the-minute notifications of a product's buy box status is to subscribe to the AnyOfferChangedNotification: https://docs.developer.amazonservices.com/en_US/notifications/Notifications_AnyOfferChangedNotification.html
You can use those notifications to update your database. Another option is the Products API which has a GetLowestPricedOffersForASIN operation which will tell you if your ASIN is currently in the buy box. http://docs.developer.amazonservices.com/en_US/products/Products_GetLowestPricedOffersForASIN.html
Look for IsBuyBoxWinner.
While the question is old, it could still be useful to someone to have a correct answer about the Products API solution.
In the Products API there is GetLowestPricedOffersForSKU (slightly different from GetLowestPricedOffersForASIN), which returns, in addition to "IsBuyBoxWinner", the field "MyOffer". The two values combined can tell you whether you have the Buy Box.
Keep in mind that the API call limits for both operations are very strict (200 requests max per hour), so with a very high number of offers, subscribing to "AnyOfferChangedNotification" is the only real option. It requires further development to consume those notifications, though, so it is by no means simple.
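As a rough illustration of combining those two fields, here is a sketch that scans a GetLowestPricedOffersForSKU response; the exact element layout and namespaces should be verified against the MWS documentation:

import xml.etree.ElementTree as ET

def my_offer_has_buy_box(response_xml):
    # Look for an Offer element that is both MyOffer and IsBuyBoxWinner
    # (element layout assumed, verify against the MWS response docs).
    root = ET.fromstring(response_xml)
    for offer in (el for el in root.iter() if el.tag.split("}")[-1] == "Offer"):
        fields = {child.tag.split("}")[-1]: (child.text or "").strip().lower()
                  for child in offer}
        if fields.get("MyOffer") == "true" and fields.get("IsBuyBoxWinner") == "true":
            return True
    return False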
One thing to consider is that AnyOfferChangedNotification cannot push to a FIFO (first in, first out) SQS queue; you can only push to a standard SQS queue, which delivers messages in no particular order. I thought I was being smart when I set up two threads in my application, one to download the messages and one to process them. However, when you download these messages you can get messages from anywhere in the SQS queue. To be successful you need to at least (a sketch follows after this list):
Download all the messages to your local cache/buffer/DB until Amazon returns 'there are no more messages'.
Run your processing off that local buffer, which is built and current as of the time you got the last 'no more messages' response from Amazon.
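A minimal boto3 sketch of that drain-then-process loop (the queue URL is a placeholder and the message handling is up to you):

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/any-offer-changed"  # placeholder

def drain_queue():
    # Pull messages until SQS reports the queue is empty, then return the local batch.
    batch = []
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=1)
        messages = resp.get("Messages", [])
        if not messages:
            return batch  # 'no more messages' - now process the buffered batch
        for msg in messages:
            batch.append(msg["Body"])
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])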
It is not clear from Amazon's documentation, but I had a concern that I have not proven yet and that is worth looking into. If an ASIN reprices two or three times quickly, it is not clear whether the messages could arrive in the queue out of order (or whether any one message could be delayed). By 'out of order' I mean that, for one SKU/ASIN, it is not clear whether you can get a message with a more recent 'Time of Offer Change' before one with an older 'Time of Offer Change'. If so, that could create a situation where: 1) an ASIN reprices at 12:00:00 and again at 12:00:01 (Time of Offer Change); 2) at 12:01:00 you poll the queue and the later 12:00:01 price change is there but not the earlier one from 12:00:00; 3) you iterate over the SQS queue until you clear it and then do your thing (reprice, send messages, or whatever), and on the next pass you poll the queue again and get the earlier AnyOfferChangedNotification. I added logic in my code to track the 'Time of Offer Change' for every ASIN/SKU and alarm if it rolls backwards.
Other things to consider:
1) If you go out of stock on an ASIN/SKU, you stop getting messages. 2) You don't start getting messages for an ASIN/SKU until you ship the item in for the first time; just adding it to FBA inventory is not enough. If you need pricing to update earlier than that (or when you go out of stock), you also need to poll GetLowestPricedOffersForASIN.

How can I get the number of API transactions used by Watson NLU?

AlchemyLanguage used to return the number of API transactions that took place during any call, which was particularly useful when making a combined call.
I do not see the equivalent way to get those results per REST call.
Is there any way to track or calculate this? I am concerned about sub-requests: for example, when you ask for sentiment on entities, does that count as two transactions, or as one plus an additional transaction per recognized entity?
There's currently no way to track the transactions from the API itself. To track this (particularly for cost estimates), you'll have to go to the usage dashboard in Bluemix. To find it: sign in to Bluemix, click Manage, then select Billing and Usage, and finally select Usage. At the bottom of the page you'll see a list of all your credentialed services. Expanding any of those will show the usage plus total charges for the month.
As far as how the NLU service is billed, it's not necessarily per API request as you mentioned. The service is billed in "units"; from the pricing page (https://console.ng.bluemix.net/catalog/services/natural-language-understanding):
A NLU item is based on the number of data units enriched and the number of enrichment features applied. A data unit is 10,000 characters or less. For example: extracting Entities and Sentiment from 15,000 characters of text is (2 Data Units * 2 Enrichment Features) = 4 NLU Items.
So overall, the best way to understand your transaction usage would be to run a few test requests and then check the Bluemix usage dashboard.
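The quoted billing rule is easy to turn into a quick estimator; this is just the arithmetic from the pricing page, not an official calculator:

import math

def nlu_items(num_characters, num_enrichment_features):
    # Data units of 10,000 characters (rounded up) times the number of features.
    data_units = math.ceil(num_characters / 10000)
    return data_units * num_enrichment_features

# Example from the pricing page: 15,000 characters with Entities + Sentiment
print(nlu_items(15000, 2))  # -> 4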
I was able to do a simple test: I made calls to a set of high-level features and included sub-features, and it appeared to register transactions only for the high-level features.

Google Datastore app architecture questions

I'm working on a Google AppEngine app connecting to the Google Cloud Datastore via its JSON API (I'm using PHP).
I'm reading all the documentation provided by Google and I still have questions:
In the documentation about Transactions, there is the following mention: "Transactions must operate on entities that belong to a limited number (5) of entity groups" (by the way, a few lines later we can find: "All Datastore operations in a transaction can operate on a maximum of twenty-five entity groups"). I'm not sure what an entity group is. Let's say I have an object Country which is identified only by its kind (COUNTRY) and an auto-allocated Datastore key id, so there is no ancestor path, hierarchical relationships, etc. Do all the Country entities count as only one entity group, or does each country count as its own group?
For the Country entity kind I need an incremental unique id (like SQL AUTO_INCREMENT). It has to be absolutely unique and without gaps. Also, this kind of object won't be created more than a few times per minute, so there is no need to handle contention and sharding. I'm thinking about having a unique counter that reflects the auto-increment and using it inside a transaction. Is the following code pattern OK?
Start a transaction, get the counter, and commit the creation of the Country along with the update of the counter; roll back the transaction if the commit fails. Does this pattern prevent two entities from being given the same id? Can you confirm that if two processes read the counter at the same time (and thus get the same value), the first one that commits will make the other fail (so it can restart and get the new counter value)? A sketch of this pattern is shown below.
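For illustration only (the question is about PHP and the JSON API, but the pattern is the same), here is how that counter-in-a-transaction idea looks with the Python client library; the Counter kind and its value field are hypothetical names:

from google.cloud import datastore

client = datastore.Client()

def create_country(name):
    # Allocate the next id from a single counter entity and create the Country
    # in the same transaction; a conflicting concurrent commit raises, and the
    # caller can retry with the fresh counter value.
    with client.transaction():
        counter_key = client.key("Counter", "country")   # hypothetical counter entity
        counter = client.get(counter_key) or datastore.Entity(counter_key)
        next_id = counter.get("value", 0) + 1
        counter["value"] = next_id

        country = datastore.Entity(client.key("Country", next_id))
        country["name"] = name
        client.put_multi([counter, country])
    return next_id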
The documentation also mentions: "If your application receives an exception when attempting to commit a transaction, it does not necessarily mean that the transaction has failed. It is possible to receive exceptions or error messages even when a transaction has been committed and will eventually be applied successfully." How are we supposed to handle that case? If this behavior occurs on the creation of my Country (question #2), I will have an issue with my auto-increment id, won't I?
Since the Datastore requires all the write actions of a transaction to be done in a single call, and since the transaction ensures that either all or none of the transaction's actions will be performed, why do we have to issue a rollback?
Is the limit of 1 write/sec per entity (i.e. something defined by its kind and its key path) and not per whole entity group? (I will be reassured only when I'm sure what exactly an entity group is ;-) see question #1.)
I'm stopping here so as not to make a huge post. I'll probably come back with other (or refined) questions after getting answers to these ones ;-)
Thanks for your help.
[UPDATE] Country is just used as a sample class object.
No, ('Country', 123123) and ('Country', 679621) are not in the same entity group. But ('Country', 123123, 'City', '1') and ('Country', 123123, 'City', '2') are in the same entity group. Entities with the same ancestor are in the same group.
It sounds like a really bad idea to use auto-increment for things like countries. Just generate an ID based on the name of the country.
From the same paragraph:
Whenever possible, structure your Datastore transactions so that the end result will be unaffected if the same transaction is applied more than once.
In internal Datastore APIs like db or ndb you don't have to worry about rolling back; it happens automatically.
It's about 1 write per second per whole entity group; that's why you need to keep groups as small as possible.

App Engine ndb parallel fetch by key

I'm retrieving a batch of items using their keys, with something like this:
from google.appengine.ext.ndb import model
# …
keys = [model.Key('Card', id, namespace=ns) for id in ids]  # one Key per card id
cards = yield model.get_multi_async(keys)  # inside an ndb tasklet
The result of that in Appstats shows a reverse waterfall.
This seems to be caused by the keys being sent one by one in parallel, each in its own RPC.
My question is: is there a way to retrieve multiple objects by key with a single RPC call? (Assuming that would speed up the overall response time of the app.)
Quoting Guido's response in the thread linked by lecstor:
You can always try issuing fewer RPCs by passing max_entity_groups_per_rpc=N to the get_multi_async() call.
Multiple parallel RPCs should be more efficient than a single multi-key RPC:
The engineers responsible for the HRD implementation assure me this is more efficient than issuing a single multi-key Get RPC.
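Applied to the snippet from the question, the knob mentioned in the quote would look something like this (the value 10 is arbitrary and worth tuning against Appstats):

# Fewer, larger batch RPCs; the right N depends on how many entity groups the keys span.
cards = yield model.get_multi_async(keys, max_entity_groups_per_rpc=10)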
