I have a couple of items in a shop that act as power-ups or multipliers, boosting the coins a user earns from work or the XP they earn from messaging (like the Dank Memer bot has).
Now I would like to remove the power-up from the user's inventory in the database after 15 minutes or 1 hour, depending on the item. For normal items I just use $pull to remove the item from the inventory array at the moment the user runs the use command in Discord, but how can I pull the item after 15 minutes or 1 hour?
I thought of doing it with setTimeout, but I am not sure whether that will slow down the bot, since many people might have many power-ups active at once.
I am using discord.js
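One common alternative to one setTimeout per power-up is to store an expiry timestamp alongside the item and sweep expired entries on a short interval. This is a minimal sketch of that idea (in Python for brevity; the same pattern applies in discord.js). The `expires_at` field and the in-memory dict are illustrative assumptions; in practice the data would live in your Mongo inventory array and the sweep would be a $pull on expired entries.

```python
import time

# In-memory stand-in for the inventory collection; in production this
# would be a database query plus a $pull, not a Python dict.
inventory = {
    "user1": [
        {"item": "xp_multiplier", "expires_at": time.time() - 1},      # already expired
        {"item": "coin_multiplier", "expires_at": time.time() + 900},  # 15 min left
    ],
}

def sweep_expired(inventory, now=None):
    """Drop every power-up whose expiry timestamp has passed.

    Running this on a fixed interval (say, once a minute) means one
    cheap scan instead of thousands of pending timers, and pending
    expirations survive a bot restart because the deadline is stored
    in the data, not in memory.
    """
    now = time.time() if now is None else now
    for user_id, items in inventory.items():
        inventory[user_id] = [i for i in items if i["expires_at"] > now]

sweep_expired(inventory)
```

A one-minute sweep granularity is usually acceptable for 15-minute and 1-hour power-ups; the active check ("does the user have an unexpired multiplier?") can also just compare `expires_at` to the current time, which makes the sweep purely housekeeping.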
I'd like to give users the ability to select the times at which they receive notifications.
They should be able to select one or more hour values (0-23) and then get notified daily at the selected hours.
What's the best way to model this?
I was thinking of this solution: adding an ARRAY column to the user table containing the hours, e.g. [1, 6, 23], but I don't know how fast scanning the table each hour to find the users to notify would be.
The array for all users would have to be read every hour just to find any occurrences of that hour. Seems a bit much.
With a single row for each user for each hour, you only select the notifications that you actually need to send. Deletes and updates can easily be garbage-collected out of the DB.
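The one-row-per-user-per-hour model from the answer above can be sketched like this (table and column names are illustrative, using SQLite for a self-contained example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE notification_hours (
        user_id INTEGER NOT NULL,
        hour    INTEGER NOT NULL CHECK (hour BETWEEN 0 AND 23),
        PRIMARY KEY (user_id, hour)
    )
""")
# User 1 wants 1:00, 6:00 and 23:00; user 2 wants 6:00 only.
conn.executemany(
    "INSERT INTO notification_hours (user_id, hour) VALUES (?, ?)",
    [(1, 1), (1, 6), (1, 23), (2, 6)],
)

def users_to_notify(conn, hour):
    # With an index on hour, this is a cheap lookup each hour instead
    # of scanning every user's array column.
    rows = conn.execute(
        "SELECT user_id FROM notification_hours WHERE hour = ? ORDER BY user_id",
        (hour,),
    ).fetchall()
    return [r[0] for r in rows]
```

Removing a user's subscription for a given hour is then a plain `DELETE ... WHERE user_id = ? AND hour = ?`, which is what makes cleanup easy compared to rewriting an array value.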
Let's assume there is an item.
It is purchased by 100 users every second.
Describe one cycle transaction.
Amazon checks its stock. (10ms)
If it is available, send a payment request to the card company. (50ms)
Update item's stock. (50ms)
So 110ms (a figure I chose arbitrarily) is consumed per purchase.
How to handle this on amazon?
Is there one database on large services?
Is there a queue?
Just curiosity; short answers or keywords are fine. Thank you.
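One keyword answer to "is there a queue?" is yes: serializing purchases through a queue prevents the stock check and the stock update of two buyers from interleaving. This is a grossly simplified sketch of that idea (real systems shard the counter and confirm payment asynchronously; the function and field names here are invented for illustration):

```python
from collections import deque

def process_purchases(stock, requests):
    """Serialize purchase requests (order_id, qty) through a FIFO queue,
    so check-stock and update-stock can never interleave between buyers."""
    queue = deque(requests)
    results = {}
    while queue:
        order_id, qty = queue.popleft()
        if stock >= qty:        # 1. check stock (the 10ms step)
            # 2. payment request to the card company would go here,
            #    usually asynchronously so it doesn't block the queue
            stock -= qty        # 3. update stock
            results[order_id] = "confirmed"
        else:
            results[order_id] = "rejected"
    return stock, results
```

At 100 purchases per second, a single serialized counter is the bottleneck, which is why large services also use sharded counters, caching, and optimistic reservation of stock rather than one database row.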
I keep user sessions for 2 hours for statistics reasons, but a 'real' user is one who performed some action in the past half hour.
I need to sort the users by their activity:
First come the real users,
second come the users in the statistics state (i.e., they haven't performed an action for more than half an hour, but their session is under 2 hours old),
and last are all the rest (their statistics state is over).
I'm using 2 columns: IsLogin (bit) and LastAction (DateTime) columns.
My logic is that sorting first by the bit for logged-in users and then by the timespan will suffice.
I'm talking of around 200,000 users, 50% are online.
Every user can do that search so I need to do the search as quickly as possible.
Notice that when a user logs in, he or she is ordered first until the next user logs in, then becomes second, and so on.
I'm using scrolling pagination of 20 per page (scrolling down retrieves the next 20).
The table has 23 columns.
Am I using the right columns?
Need I do something else?
Will selecting by the bit column and then ordering by the timespan be faster than simply checking whether the timespan is less than 30 minutes from now and then ordering?
Say you have a list of events/tasks with time stamps for created, completed.
Examples:
Customer entering a queue, then being served
Business process starting and completing
Order received, order dispatched
Also related are fill/take events:
Pay goes into bank account, pay bills, buy food, go to the movies
Fuel tanker replenishes gas station, customers purchase fuel
Store receives stock, customers purchase stock
Now say I have a huge list of this data. I don't know the starting inventory levels because I've come in after the beginning and I can't view the current inventory either.
How can I query this data so I can tell current inventory levels, queue size, etc at any given time? Or even start to plot the size of inventory on a time line?
edit: I'll explain my specific requirements in more detail
Our warehouse management system contains historical data for each task that occurs. It doesn't capture the state of a pick bin as the event happens, only how much was picked or how much was replenished. We also have cycle count tasks, which do show how much stock is in the pick bin. I am trying to find a way of tying these three processes together (replenishment, picking, cycle counts) so I can plot on a timeline:
how many tasks are currently waiting
current stock levels
At the moment with the data, I have only figured out how to plot how many tasks were created or completed in period of time, or how much was picked, based off the time stamps for the task.
The reason I would like this data is to track performance and under/over-allocation of staff, and to identify config issues that could be causing performance problems.
Now say I have a huge list of this data. I don't know the starting inventory levels because I've come in after the beginning and I can't view the current inventory either.
How can I query this data so I can tell current inventory levels, queue size, etc at any given time?
The short answer is: "You can't."
If all you have are the deposits/withdrawals without an opening balance or a closing balance then there is no way to track the actual current balance because you have no reference point from which to start. You could track the relative balance (or inventory level, or whatever) by assuming a starting balance of 0, or 100, or 1000, or any value you like.
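The relative-balance idea above (and the cycle counts mentioned in the question, which act as reference points) can be sketched as a running sum that is later shifted to match a known count. Function names and the event format are illustrative:

```python
def running_levels(events, opening=0):
    """events: iterable of (timestamp, delta), delta positive for a
    replenishment/deposit and negative for a pick/withdrawal.
    Returns [(timestamp, level)] relative to the assumed `opening`."""
    level = opening
    out = []
    for ts, delta in sorted(events):
        level += delta
        out.append((ts, level))
    return out

def anchor_to_count(levels, count_time, counted):
    """A cycle count of `counted` units at `count_time` pins the curve:
    shift every relative level by (counted - relative level at count)."""
    at_count = 0
    for ts, lvl in levels:
        if ts <= count_time:
            at_count = lvl
    offset = counted - at_count
    return [(ts, lvl + offset) for ts, lvl in levels]
```

So while the absolute level is unknowable from deltas alone, a single cycle count converts the whole relative curve into absolute stock levels for the period between counts; the shifted series is then ready to plot on a timeline.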
As for plotting the data, Excel is the logical place to start for a task like that.
I'm developing a high score web service for my game, and it's running on Google App Engine.
My game has 5 difficulties, so I originally had 5 boards, with entries for each (player_login, score and time). If the player submitted a lower score than the previously recorded one, it was dismissed, so only the highest score is kept for each player.
But to add more fun into this, I'd decided to include daily/weekly/monthly/yearly high score tables. So I've created 5 boards for each difficulty, making it 25 boards. When a score is submitted, it's saved into each board, and the boards are supposed to be cleared on every day/week/month/year.
This happens by a cron job that is invoked and deletes all entries from a specific board.
Here comes the problem: it looks like deleting entries from the datastore is slow. From my test daily cleanups it looks like deleting a single entry takes around 200 ms.
In the worst-case scenario, if the game became quite popular and had, say, 100 000 players, each with an entry in the yearly board, it would take 100 000 * 0.12 seconds = 12 000 seconds (over 3 hours!) to clear that board. I think we are allowed jobs of up to 30 seconds in App Engine, so this wouldn't work.
I'm deleting with following code (thanks to Nick Johnson):
q = Score.all(keys_only=True).filter('b =', boardToClear)
results = q.fetch(500)
while results:
    self.response.out.write("deleting one batch;")
    db.delete(results)
    q = Score.all(keys_only=True).filter('b =', boardToClear).with_cursor(q.cursor())
    results = q.fetch(500)
What do you recommend me to do with this problem?
One approach that comes to mind is to use a task queue and delete scores older than permitted for each board, i.e. ones that have expired, but in smaller quantities. This way I wouldn't hit the CPU limit for a single task, but the cleanup would not be (nearly) instantaneous, so my 12 000-second cleanup would be split into 1 200 tasks, each roughly 10 seconds long.
But I think I'm doing something wrong; this kind of operation would be a lot faster in a relational database. Possibly something is wrong with my approach to the datastore and scoring because I'm locked into an RDBMS mindset.
First, a couple of small suggestions:
Does deletion take 200ms per item even when you delete items in a batch process? The fastest way to delete should be to do a keys_only query and then call db.delete() on an entire list of keys at once.
The 30-second limit was recently relaxed to 10 minutes for background work (like the cron jobs or queue tasks that you're contemplating) as of 1.4.0.
These may not fundamentally address your problem, though. I think there's no way to get around the fact that deleting a large number of records (hundreds of thousands, say) will take some time. I'm not sure this is as big a problem for your use case as it seems, though, as I can see a couple of techniques that would help.
As you suggest, use a task queue to split a long-running task into several smaller tasks. Your use case (deleting a huge number of items that match a particular query) is ideal for a map-reduce task. Nick Johnson's blog post on the Mapper API may be very helpful (so you don't have to write all of that task-management code on your own).
Do you need to delete all the out-of-date board entries immediately? If you had a field that listed which week, month, or year that a particular entry counted for, you could index on that field and then only display entries from the current month on the visible leaderboard. (Disk space is cheap, after all.) And then if you wanted to slowly (over hours, say, instead of milliseconds) remove the out-of-date data, you could do that in the background without ever having incorrect data on your leaderboards.
Delete entities in batches. Although a single delete takes a noticeable amount of time (though 200ms seems very high), batch deletes take no longer, as they delete all the entities in parallel. Task Queue and cron jobs can now run for up to 10 minutes, so timeouts should not be an issue.
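The "index on a period field and filter instead of deleting" suggestion above can be sketched as follows. The tagging scheme and field names are illustrative; in the datastore the `period` value would be an indexed property on each Score entity, and the filter would be a query rather than a list comprehension:

```python
from datetime import date

def period_key(day, granularity):
    """Tag each score with the period it counts for, so the visible
    leaderboard filters on the tag instead of needing an immediate
    mass delete at each rollover."""
    if granularity == "daily":
        return day.isoformat()                      # e.g. "2024-05-15"
    if granularity == "weekly":
        year, week, _ = day.isocalendar()
        return "%d-W%02d" % (year, week)            # e.g. "2024-W20"
    if granularity == "monthly":
        return "%d-%02d" % (day.year, day.month)    # e.g. "2024-05"
    return str(day.year)                            # yearly

def current_board(scores, granularity, today):
    """Show only entries tagged with the current period; stale rows can
    be deleted later in slow background batches without ever showing
    incorrect data."""
    key = period_key(today, granularity)
    live = [s for s in scores if s["period"] == key]
    return sorted(live, key=lambda s: s["score"], reverse=True)
```

With this layout the cron job stops being time-critical: the boards are correct the instant the period rolls over, and the out-of-date entities can be swept out over hours by small batch deletes.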