Please Explain Simultaneous In/Out Swaps of Same Coin from Same Wallet - cryptocurrency

I see frequent trades of the same token in the same wallet in the same minute. What is going on?
For example, this pair of swaps in and out of AIOZ are from the same wallet:
Out:
https://etherscan.io/tx/0xf306745eef0ea056d28df0a70a0b78c8544b2cb2421697a55371345486c74320
In:
https://etherscan.io/tx/0x827313545c9fe2c3f5990d11e728c9c9be7f4f1581a1107c5f75a4b97cb5f1ad
Thanks.

I received this answer from a blockchain dev:
The account you're referring to has been doing it since day one with
large amounts, in the order of $20k to $30k, and is almost never in
profit. To me this is wash trading to simulate volume. It is a common
but unspoken practice of new projects until they get up and running
with legitimate volume. Most CEXs require a certain sustained daily
volume, while less reputable ones do not care if it is achieved
through wash trading. With a reputable project this behaviour usually
dies down shortly after CEX listings. If it persists too long, it is a
serious warning sign. In this market I would be concerned if the wash
trading goes on for longer than two months after the IDO.
Pretty important stuff!
IMHO it is a failure of Stack Overflow to allow down-voting without justification (if at all). Clearly this was not down-voted because someone recognized a problem with the question; for all we know they had indigestion. I'm hesitant to post here anymore.

Related

Critic Loss for RL Agent

While I was implementing agents for various problems, I have seen that my actor loss decreases as expected, but my critic loss keeps increasing even though the learned policy is very good. This happens for DDPG, PPO, etc.
Any thoughts on why my critic loss is increasing?
I tried playing with hyperparameters, but that actually makes my policy worse.
In Reinforcement Learning, you really shouldn't typically be paying attention to the precise values of your loss values. They are not informative in the same sense that they would be in, for example, supervised learning. The loss values should only be used to compute the correct updates for your RL approach, but they do not actually give you any real indication of how well or poorly you are doing.
This is because in RL, your learning targets are often non-stationary; they are often a function of the policy that you are modifying (hopefully improving!). It's very well possible that, as the performance of your RL agent improves, your loss actually increases. Due to its improvement, it may discover new parts of its search space which lead to new target values that your agent was previously completely oblivious to.
Your only really reliable metric for how well your agent is doing is the returns it collects in evaluation runs.
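A minimal sketch of what "returns collected in evaluation runs" means in practice, assuming a Gym-style environment interface (`env_reset`, `env_step`) and a `policy` callable; all three names are placeholders for your own agent and environment:

```python
def evaluate(policy, env_step, env_reset, episodes=10, max_steps=200):
    """Run evaluation episodes and return the mean undiscounted return.

    Interfaces assumed (Gym-style, placeholders for your own code):
      env_reset() -> state
      env_step(action) -> (next_state, reward, done)
      policy(state) -> action
    """
    returns = []
    for _ in range(episodes):
        state, total = env_reset(), 0.0
        for _ in range(max_steps):
            state, reward, done = env_step(policy(state))
            total += reward
            if done:
                break
        returns.append(total)
    return sum(returns) / len(returns)
```

Track this number over the course of training instead of the critic loss: it measures exactly the thing you care about, and it is unaffected by the moving targets that make the loss curve misleading.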

Time-order in database transactions

I have always worked with concurrency control, but recently I have been thinking about how non-determinism in the execution of transactions in a database can change the final result from the user's point of view.
Consider that two persons P1 and P2 both want to withdraw 50 euros from a bank account that has precisely 50 euros.
P1 requests the operation from an ATM at time 8:00
P2 requests the operation from an ATM at time 8:02
Both requests eventually arrive at the bank database system, but due to non-deterministic factors (transaction ordering, OS thread scheduling, etc.) P2's request is executed first and the withdrawal succeeds, while P1's request fails because it was executed after P2's and there was no longer enough money to withdraw.
We arrive at a situation where the person who requested the operation first ends up without the money. Are these concerns taken into account in real systems? I hear some people say that these things are not important and the world will go on, and that the only concern is to not violate the consistency constraints (no money disappears, no money magically appears).
Nonetheless, I think that this time-request fairness is also important.
Thanks for your attention.
I decided to write an extended answer so I can refer to these considerations in the future.
Nonetheless, I think that this time-request fairness is also important.
Is victory really unfair?
In fact, the system is fair: time-request fairness is guaranteed based on which request was written to the database first.
The only problem is that this is not made explicit to users.
Solution: set clear rules of the game
Ultimately, this is less about the fairness of the ATM system in providing adequate service than about the SLA it guarantees.
If a withdrawal transaction is guaranteed to complete within a maximum timeout of 5 minutes, then all a person has to do to guarantee victory is to be the first to touch an ATM, and to do so at least 5 minutes ahead of the other person.
Otherwise, in overlapping time intervals, the winner is selected randomly (or so it appears).
Both rules are fair if they are agreed upon.
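The invariant underneath all of this can be sketched in a few lines: whichever request is serialized first wins, but money is never created or destroyed. Here plain Python threads and a lock stand in for the database's concurrency control; the person names are from the question:

```python
import threading

balance = 50
results = []
lock = threading.Lock()

def withdraw(person, amount):
    global balance
    # The lock plays the role of the database's serialization:
    # whichever thread acquires it first wins, regardless of who
    # pressed the ATM button first.
    with lock:
        if balance >= amount:
            balance -= amount
            results.append((person, "ok"))
        else:
            results.append((person, "insufficient funds"))

t1 = threading.Thread(target=withdraw, args=("P1", 50))
t2 = threading.Thread(target=withdraw, args=("P2", 50))
t1.start(); t2.start()
t1.join(); t2.join()

# Exactly one withdrawal succeeds and no money disappears,
# but which person wins depends on scheduling.
```

Run it repeatedly and the winner varies, yet the consistency constraint (the balance never goes negative and the totals add up) holds every time.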
DETAILS
The article "Race Conditions Don't Exist" reviews a similar example in a CQRS context.
Guaranteeing absolute fairness is sometimes prohibitively expensive or completely impossible.
Ultimately, what matters is whether the operational model is adequate for the specific domain.
Is it fair to the system for users to judge it based on their own assumptions?
We deal with multiple views P1 and P2.
Both views show only the money left at the time of the balance request (fair). Neither view was given an explicit guarantee that the entire sum was exclusively locked for that specific view (also fair).
What may get users upset is their own communication (an additional view) outside the system. For example, both persons see each other and know exactly who was first to submit a withdrawal request at their own ATM. The system is not responsible for consistent views outside itself. Users are upset because they wrongly assume the system selects the winner based on who pressed the ATM button first.
How far can technical guarantees go?
The system could lock the entire balance the moment the user touches the ATM. There are many reasons why this is not practical. Most importantly, it may not satisfy those who consider it unfair to lock a balance without eventually withdrawing anything. And how would we guarantee that the lock is acquired first by the user who touched the ATM first?
The system could instead be designed to wait for candidate requests within a timeout adjusted to the maximum allowed delay in request propagation. It might then have several candidates for victory and could use timestamps assigned by the source ATMs. However, even then there would be unfairness due to the limited precision of clock synchronization.
Effectively, "first" is never absolutely defined, thanks to physics; see the uncertainty principle ;)
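The wait-for-candidates idea above can be sketched as follows; the function name, the request tuples, and the window size are all illustrative, not part of any real system:

```python
def pick_winner(requests, window_ms=500):
    """Pick the winning request by ATM-side timestamp.

    Candidates are requests whose timestamps fall within `window_ms` of
    the earliest one (modeling bounded propagation delay); the earliest
    timestamp among them wins. Ties are still possible up to the
    precision of clock synchronization between ATMs.

    Each request is a (person, atm_timestamp_ms) pair.
    """
    earliest = min(ts for _, ts in requests)
    candidates = [(p, ts) for p, ts in requests if ts - earliest <= window_ms]
    return min(candidates, key=lambda r: r[1])[0]
```

Note that this only moves the fairness boundary: within the window, "first" is now decided by clocks at the ATMs, which are themselves only approximately synchronized.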

How to handle this DB schema?

Using SO as a prime example, let's say a question or answer to a question is deleted but garnered a few up-votes before it was deleted. I imagine these points are still awarded to the author (if they aren't, let's suppose they are), then how does SO keep an accurate reputation total?
Are the questions/answers actually not deleted from the DB itself, and perhaps have a status field that is processed and decides whether a question or answer is visible?
Or are they, in fact, deleted, with the reputation total relying on the system being continuously accurate as each vote is counted, without necessarily keeping a history of it (like the question that recorded the vote)?
SO uses a combination of soft and hard deletes, to the best of my knowledge. I can say for sure that I've lost reputation that was gained on questions deleted by either the poster or the moderator. That is not the point of your question, however, so...
If you want to be able to deduce an accurate total, especially if you want to be able to account for that total (the way SO lets you do by looking at your points history) then you need to keep transactional information, not a running total.
If you want to have referential integrity for the transactional log of points then you will need to use a soft-delete mechanism to hide questions that are "deleted".
If you don't keep the transactional log and you don't have soft delete-able questions to back up your transactional points log, then you won't be able to either recalculate or justify point totals. You'll also have a much harder time displaying a graph of points awarded over time and accumulated reputation over time. You could do these graphs by keeping a daily point snapshot, but that would be much more onerous and costly in terms of storage than just tracking up and down votes.
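A minimal sqlite3 sketch of the two pieces the answer recommends, a soft-delete flag plus a transactional reputation ledger; the table and column names are invented for illustration, not SO's actual schema:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE posts (
        id INTEGER PRIMARY KEY,
        author_id INTEGER NOT NULL,
        deleted INTEGER NOT NULL DEFAULT 0   -- soft-delete flag
    );
    CREATE TABLE rep_events (                -- transactional log, not a running total
        post_id INTEGER NOT NULL REFERENCES posts(id),
        author_id INTEGER NOT NULL,
        delta INTEGER NOT NULL               -- e.g. +10 per upvote, -2 per downvote
    );
""")

db.execute("INSERT INTO posts (id, author_id) VALUES (1, 42)")
db.executemany("INSERT INTO rep_events VALUES (1, 42, ?)", [(10,), (10,), (-2,)])

# Soft-delete the post: it disappears from listings...
db.execute("UPDATE posts SET deleted = 1 WHERE id = 1")

# ...but the reputation total (and its full history) remains reproducible,
# and referential integrity from rep_events to posts is preserved.
total = db.execute(
    "SELECT COALESCE(SUM(delta), 0) FROM rep_events WHERE author_id = 42"
).fetchone()[0]
```

Because the ledger is never rewritten, both the running total and a points-over-time graph fall out of simple aggregate queries, exactly the property the answer argues for.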

Fast, high volume data input in SQL Server

I'm currently in the preparatory phase for a project which will involve (amongst other things) writing lots of data to a database, very fast (i.e. images (and associated meta-data) from 6 cameras, recording 40+ times a second).
Searching around the web, it seems that 'Big Data' more often applies to a higher rate of smaller 'bits' (e.g. market data).
So..
Is there a more scientific way to proceed than "try it and see what happens"?
Is "just throw hardware at it" the best approach?
Is there some technology/white papers/search term that I ought to check out?
Is there a compelling reason to consider some other database (or just saving to disk)?
Sorry, this is a fairly open-ended question (maybe better for Programmers?)
Is there a more scientific way to proceed than "try it and see what happens"?
No, given that your requirements are very unusual.
Is "just throw hardware at it" the best approach?
No, but at some point it is the only approach. You won't get a 400-horsepower racing engine just by tuning a Fiat Panda, and you won't get high throughput from any database without appropriate hardware.
Is there some technology/white papers/search term that I ought to check out?
That part isn't really answerable in the context of this question: you asked specifically about SQL Server.
Is there a compelling reason to consider some other database (or just saving to disk)?
No. As long as you stick with a relational database, pretty much the same rules apply: another one may be faster, but not by a wide margin.
Your main problem will be disk I/O and network bandwidth, depending on the size of the images. Size the equipment properly and you should be fine. In the end, this amounts to fewer than 300 images per second. Are you sure you want the images themselves in the database? I normally like that approach, but this is like storing a movie as individual pictures, and that may be stretching it.
Whatever you do, that is a lot of disk I/O and storage, so hardware is the only way to go if you need the IOPS.
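One generic principle worth knowing before throwing hardware at it: batch the inserts per transaction instead of committing row by row, since the commit (a disk flush) is the expensive part. Sketched here with sqlite3 purely to show the shape of the idea; on SQL Server you would reach for the bulk-load paths (`SqlBulkCopy` or `BULK INSERT`) rather than this:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE frames (camera INTEGER, ts REAL, meta TEXT)")

# 6 cameras * 40 frames/s = 240 rows per second of capture,
# matching the rates described in the question.
batch = [(cam, t / 40.0, "meta") for t in range(40) for cam in range(6)]

# One transaction per one-second batch: the commit cost is paid once
# per 240 rows instead of once per row.
with db:
    db.executemany("INSERT INTO frames VALUES (?, ?, ?)", batch)

count = db.execute("SELECT COUNT(*) FROM frames").fetchone()[0]
```

The metadata rows are cheap either way; it is the image blobs that will dominate disk I/O, which is another argument for keeping them on the filesystem and storing only paths plus metadata in the database.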

Transactional counter with 5+ writes per second in Google App Engine datastore

I'm developing a tournament version of a game where I expect 1000+ simultaneous players. When the tournament begins, players will be eliminated quite fast (possibly more than 5 per second), but the process will slow down as the tournament progresses. Depending when a player is eliminated from the tournament a certain amount of points is awarded. For example a player who drops first, gets nothing, while player who is 500th, receives 1 point and the first place winner receives say 200 points. Now I'd like to award and display the amount of points right away after a player has been eliminated.
The problem is that when I push a new row into a datastore after a player has been eliminated, the row entity has to be in a separate entity group so I would not hit the gae datastore limit of 1-5 writes per second for 1 entity group. Also I need to be able to read and write a count of rows consistently so I can determine the prize correctly for all the players that get eliminated.
What would be the best way to implement the datamodel to support this?
Since there's a limited number of players, contention of more than a few writes per second is unlikely to be sustained for very long, so you have two options:
Simply ignore the issue. Clusters of eliminations will occur, but as long as it's not a sustained situation, the retry mechanics for transactions will ensure they all get executed.
When someone goes out, record this independently, and update the tournament status, assigning ranks, asynchronously. This means you can't inform them of their rank immediately, but rather need to make an asynchronous reply or have them poll for it.
I would suggest the former, frankly: Even if half your 1000 person tournament went out in the first 5 minutes - a preposterously unlikely event - you're still looking at less than 2 eliminations per second. In reality, any spikes will be smaller and shorter-lived than that.
One thing to bear in mind is that due to how transaction retries work, transactions on the same entity group that occur together will be resolved in semi-random order - that is, it's not a strict FIFO queue. If you require that, you'll have to enforce it yourself, though that's a far from trivial thing to do in a distributed system of any sort.
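The retry behaviour the first option relies on can be sketched as an optimistic-concurrency loop. The conflict here is simulated with a random "someone else committed" event; real datastore transactions detect the conflict and retry for you, which is also why commit order on one entity group comes out semi-random rather than FIFO:

```python
import random

counter = {"eliminated": 0, "version": 0}

def transactional_increment(max_retries=20):
    """Optimistic transaction: read, compute, and commit only if nobody
    else committed in between; otherwise retry from the top. This
    mirrors how datastore transactions on a single entity group
    resolve contention (simplified, single-process simulation)."""
    for _ in range(max_retries):
        seen = counter["version"]
        new_value = counter["eliminated"] + 1
        if random.random() < 0.3:          # simulate a concurrent commit
            counter["version"] += 1
        if counter["version"] == seen:     # commit only if nothing changed
            counter["eliminated"] = new_value
            counter["version"] += 1
            return new_value
    raise RuntimeError("too much contention")
```

Every elimination still gets counted exactly once, even when individual attempts are forced to retry; what is lost is any guarantee about the order in which concurrent attempts land.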
The existing comments and answers address the specific question pretty well.
At a higher level, take a look at this post and open-source library from the Google Code Jam team. They had a similar problem and ended up developing a scalable scoreboard based on the datastore that handles both updates and requests for arbitrary pages efficiently.