Similar to Tetris on Facebook: if you're at 100 Energy, playing a game costs 5 units, and energy then recharges 1 unit every 10 minutes. I'm curious how to handle the polling, and possibly how to make it server-side so that there's no "time manipulation" (e.g. setting the device clock forward) to circumvent measures and receive energy early. Thanks in advance!
You will need some kind of persistent infrastructure, and ideally you want to run a check against the server from the client side: just have the server return a JSON string with the current value whenever the client loads. Also, don't use the client clock, because it can differ (and be changed) across machines, which would not yield the results you want.
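One way to make that concrete, with no polling at all: store only a baseline energy value and the server timestamp of the last change, and compute the current value from the server clock whenever the client asks. A minimal sketch in Python (the db object and its get/put methods are hypothetical stand-ins for whatever persistence layer you use):

    import time

    MAX_ENERGY = 100
    RECHARGE_SECONDS = 600  # 1 unit every 10 minutes, per the question

    def current_energy(stored_energy, last_update_ts, now=None):
        # What a "get my energy" endpoint would return (as JSON) on each load.
        now = time.time() if now is None else now
        recharged = int((now - last_update_ts) // RECHARGE_SECONDS)
        return min(MAX_ENERGY, stored_energy + recharged)

    def spend_energy(db, user_id, cost=5):
        # Deduct energy server-side and persist the new baseline.
        energy, ts = db.get(user_id)  # hypothetical persistence layer
        now = time.time()
        recharged = int((now - ts) // RECHARGE_SECONDS)
        energy = min(MAX_ENERGY, energy + recharged)
        if energy < cost:
            raise ValueError("not enough energy")
        # Advance the baseline by whole recharge intervals only, so partial
        # progress toward the next unit isn't lost on every spend.
        new_ts = now if energy == MAX_ENERGY else ts + recharged * RECHARGE_SECONDS
        db.put(user_id, energy - cost, new_ts)
        return energy - cost

Since nothing ever depends on the client clock, setting a device's time forward changes nothing the server computes.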
I have a react-native application that populates pins on a map that have been submitted by users. The front end gets the corners of the window and then the back end goes through each pin to check if it falls within the boundary, and returns the ones that do.
This is taking too long on the backend and I want to ask the community for ideas, because I doubt I have the best one.
My idea is to store tables of pins grouped by quadrant, effectively a cache, so that I can return the pins from the quadrants involved in almost constant time.
Is there a simpler way to do this?
Maybe using NoSQL?
🙏🏻
A month later, it seems geohashing is probably the best way, and AWS has a library for handling this automatically with DynamoDB. Apparently it takes the corners of the screen (lat/lon) and returns the items from the DB that are in view, in what I assume is roughly constant time, since that's the whole point of geohashing: performance that works at scale.
https://www.npmjs.com/package/dynamodb-geo
https://aws.amazon.com/blogs/compute/implementing-geohashing-at-scale-in-serverless-web-applications/
Otherwise, a geohashing library built for serving mobile apps likely already exists for your stack.
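For what it's worth, the mechanics are the same whatever database sits underneath: hash each pin's lat/lon into a short cell key at write time, then look up only the cells that overlap the viewport. A rough sketch using the pygeohash package, with an in-memory dict standing in for the table (a real implementation would enumerate every cell the bounding box overlaps, not just its corners):

    import pygeohash as pgh

    PRECISION = 5  # ~4.9 km cells; tune to your typical viewport size

    def index_pin(store, pin_id, lat, lon):
        # Store each pin under its geohash so nearby pins share a cell key.
        cell = pgh.encode(lat, lon, precision=PRECISION)
        store.setdefault(cell, []).append((pin_id, lat, lon))

    def pins_in_view(store, south, west, north, east):
        # Cells covering the viewport corners (a full implementation would
        # cover every cell the box overlaps, not just the four corners).
        corners = [(south, west), (south, east), (north, west), (north, east)]
        cells = {pgh.encode(lat, lon, precision=PRECISION) for lat, lon in corners}
        # Fetch candidates by cell, then filter exactly against the box.
        return [(pin_id, lat, lon)
                for cell in cells
                for pin_id, lat, lon in store.get(cell, [])
                if south <= lat <= north and west <= lon <= east]

The per-query work scales with the number of cells in view rather than the total number of pins, which is what makes the approach hold up at scale.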
New to Gatling world but an experienced Loadrunner user.
I created a sample simulation to run two scenarios, each with 10 users, and I want it to run for 10 minutes. Below is what I have in my setUp function, but each time I run the simulation it only runs for 136 seconds; the holdFor doesn't seem to take effect.
    setUp(
      scn.inject(rampUsers(10) over (10 seconds)),
      scen.inject(rampUsers(10) over (10 seconds))
    )
      .protocols(httpProtocol)
      .throttle(
        reachRps(2) in (10 seconds),
        holdFor(10 minutes)
      )
I am using the Gatling 2.2.2 bundle.
Output: Simulation computerdatabase.BasicSimulation completed in 136 seconds
The throttle works as a bottleneck: it is effectively an upper bound on how many requests will be sent. If your scenarios and injection profiles can't generate that many requests in the first place, the ones that are generated simply pass through the throttle unhindered. The throttle cannot increase the load to match the desired RPS; it can only decrease it.
You will need to inject enough users into your scenarios for them to be able to generate the 2 RPS you want in the first place, and keep adding more of them over the course of the simulation, in order for the throttle to do what you are looking for.
Try changing your injection profiles to something like the example below (adjusting the constantUsersPerSec value as needed); I believe this should give you a load profile a step closer to what you are looking for:
    scn.inject(constantUsersPerSec(1) during (10 minutes)),
    scen.inject(constantUsersPerSec(1) during (10 minutes))
The example above is just a quick and dirty way to illustrate the point of having to inject users over time, but as chance would have it, injecting 600 users in total over 10 minutes into a scenario is 10 users every ten seconds, which should be exactly what you want, unless I'm falling ass first into a basic arithmetic error and/or a misunderstanding.
It will also naturally ramp up and down to some extent, although you can control the ramp-up more explicitly by chaining injection steps if you need to, for example like this:
    scn.inject(
      rampUsers(10) over (1 minute),
      constantUsersPerSec(1) during (10 minutes)
    )
For another approach to more explicitly control the ramp over time, you could also play around with a configuration like this:
    scn.inject(
      splitUsers(600) into (rampUsers(10) over (10 seconds)) separatedBy (10 seconds)
    )
I have Ardupilot on a plane, using a 3DR Radio back to a Raspberry Pi on the ground that does some advanced geo- and attitude-based maths and provides audio feedback to the pilot (rather than making them look at a screen).
I am using Dronekit-python, which in turn uses Mavproxy and Mavlink. What I am finding is that I am only getting new attitude data to the Pi at about 3 Hz, and I am not sure where the bottleneck is:
The 3DR radio is running at 57.6 kbps and all is happy
I have turned off the automatic push of logs from Ardupilot down to the Pi (part of Mavproxy)
The Pi can ask for attitude data (roll, yaw etc.) through the DroneKit Python API as often as it likes, but only gets new data (i.e., a change in value) about every 1/3 second.
I am not deep enough inside the underlying architecture to understand where the bottleneck may be -- can anyone help? Is it likely a round-trip message response time from base to plane and back (others seem to get around 8 Hz from Mavlink, from what I have read)? Or latency across the combination of Mavproxy, Mavlink and DroneKit? Or is there some setting inside Ardupilot or the telemetry that could be driving this?
I am aware this isn't necessarily a DroneKit issue, but not really sure where it goes as it spans quite a few components.
Requesting individual packets should work, but that mechanism was never meant to be used many times per second.
In order to get a certain packet many times per second, set up streams. A stream triggers a certain number of times per second and automatically sends whichever packets are associated with it. The ATTITUDE message is in the group called EXTRA1.
Let's suppose you want to receive 10 ATTITUDE messages per second. The relevant parameter is called SR0_EXTRA1; it defines the number of ATTITUDE packets sent per second, and the default is 4. Try increasing that parameter to 10.
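With DroneKit you can set that parameter over the same link and then subscribe to attitude updates instead of polling. A sketch, assuming the Pi sees the 3DR radio as a USB serial port (the connection string and baud are assumptions; adjust to your setup):

    from dronekit import connect

    # Connection string and baud are assumptions for a Pi with a 3DR radio
    # on a USB serial port; adjust to match your hardware.
    vehicle = connect('/dev/ttyUSB0', baud=57600, wait_ready=True)

    # Raise the EXTRA1 stream (which carries ATTITUDE) from the default 4 Hz.
    vehicle.parameters['SR0_EXTRA1'] = 10

    # Let DroneKit push each new attitude to a callback instead of polling.
    @vehicle.on_attribute('attitude')
    def attitude_listener(self, attr_name, value):
        print('roll=%.3f pitch=%.3f yaw=%.3f' % (value.roll, value.pitch, value.yaw))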
How would you account for an event whose timing is very important? Most games use frames, and the simulations take place in time-steps. What if an event occurs that needs to take effect at a specific time within a step?
For example, in a game like DOTA, attack speed is very important. Now, let's say the time-step for this game is 50 ms. Two heroes are fighting: HeroA attacks HeroB and lands a "killing blow" 14 ms into the time-step; HeroB attacks HeroA and also lands a "killing blow", only this occurs 40 ms into the time-step. However, neither of these blows will be simulated until the 50 ms time-step completes. Therefore, at 50 ms both heroes will be killed, when really only HeroA should be left standing, because he attacked first.
Is there some way to account for this?
I don't know how your game architecture is designed, but if you are able to get timestamps for your events, you can easily put them into a buffer from another thread and process them in timestamp order every 50 ms.
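Here's a minimal sketch of that buffer idea in Python (a thread-safe queue would replace the plain heap if events really do arrive from another thread):

    import heapq

    class EventBuffer:
        # Collects timestamped events and replays them in order each tick.
        def __init__(self):
            self._heap = []
            self._seq = 0  # tie-breaker so equal timestamps stay FIFO

        def push(self, timestamp_ms, event):
            heapq.heappush(self._heap, (timestamp_ms, self._seq, event))
            self._seq += 1

        def drain_until(self, step_end_ms):
            # Yield every event due by the end of this time-step, oldest first.
            while self._heap and self._heap[0][0] <= step_end_ms:
                yield heapq.heappop(self._heap)[2]

    # Inside the 50 ms simulation loop: HeroA's blow at 14 ms is applied
    # before HeroB's at 40 ms, so the game can notice HeroB is already dead
    # before applying his later blow.
    buf = EventBuffer()
    buf.push(40, "HeroB hits HeroA")
    buf.push(14, "HeroA hits HeroB")
    for event in buf.drain_until(50):
        print(event)  # prints the 14 ms event first, then the 40 ms one

The key point is that the blows still resolve once per 50 ms step, but in the order they actually happened, so a hero killed at 14 ms never lands a blow at 40 ms.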
In Amazon Web Services, their queues allow you to post messages with a visibility delay of up to 15 minutes. What if I don't want messages visible for 6 months?
I'm trying to come up with an elegant solution to the poll/push problem. I can write code to poll SQS (or a database) every few seconds, check for messages that are ready to become visible, then move them to a "visible" queue, or something like that. I wish there were a simpler, more reliable method to have messages become visible in queues far into the future, without me having to worry about my polling application working perfectly all the time.
I'm not married to AWS, SQS or any of that, but I'd prefer to find a cloud-friendly solution that is stable, reliable and will trigger an event far into the future without me having to worry about checking on its status every day.
Any thoughts or alternate trees for me to explore barking up are welcome.
Thanks!
It sounds like you might be misunderstanding the visibility delay. Its purpose is to make sure that the polling application doesn't pull the same item off the queue more than once.
In other words, when an item is pulled off the queue it becomes invisible for a predetermined period of time (the default is 30 seconds; the maximum is 12 hours) in case the polling system has a cluster of machines reading from the queue all at once.
Here's the relevant documentation:
http://docs.amazonwebservices.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/IntroductionArticle.html#AboutVT
...and the sentence in particular that relates to my comment is:
"Immediately after the component receives the message, the message is still in the queue. However, you don't want other components in the system receiving and processing the message again. Therefore, Amazon SQS blocks them with a visibility timeout, which is a period of time during which Amazon SQS prevents other consuming components from receiving and processing that message."
You should be able to use SQS for your purpose, since you can leave an item in the queue for as long as the queue's message retention period allows (up to 14 days).
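For reference, the normal receive/process/delete cycle looks like this with boto3 (the queue URL and the process handler are placeholders); the message stays in the queue, hidden from other consumers by the visibility timeout, until you explicitly delete it:

    import boto3

    sqs = boto3.client('sqs')
    queue_url = '...'  # your queue URL

    # The visibility timeout hides the message from other consumers while
    # this one works on it; only the explicit delete actually removes it.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               VisibilityTimeout=60)
    for msg in resp.get('Messages', []):
        process(msg['Body'])  # hypothetical handler
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])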
7 years later, and Amazon still doesn't support the feature you need!
The two ways you can sort of get it to work are:
have messages contain a delivery-target datetime in their message_attributes, and have the workers that consume the queue's messages simply delete and recreate any message that is consumed before its target, with delay = max(0, min(secs_until_target_datetime, 900)); that would allow you to effectively schedule a message for any arbitrary future time;
or,
(slightly less frequent and less costly:) similarly, if a message isn't due to be handled yet, recreate it and change its visibility timeout to timeout = max(0, min(secs_until_target_datetime, 43200))
The disadvantage of using visibility timeout is that any read will re-trigger it.
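A sketch of the first variant with boto3 (the deliver_at attribute name and the process handler are illustrative, not anything SQS defines):

    import time
    import boto3

    sqs = boto3.client('sqs')
    queue_url = '...'  # your queue URL

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               MessageAttributeNames=['All'])
    for msg in resp.get('Messages', []):
        target = float(msg['MessageAttributes']['deliver_at']['StringValue'])
        remaining = target - time.time()
        if remaining > 0:
            # Not due yet: recreate with the largest allowed delay (900 s max)
            # and drop this copy; the cycle repeats until the target arrives.
            sqs.send_message(QueueUrl=queue_url,
                             MessageBody=msg['Body'],
                             DelaySeconds=int(max(0, min(remaining, 900))),
                             MessageAttributes=msg['MessageAttributes'])
        else:
            process(msg['Body'])  # hypothetical real handler
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg['ReceiptHandle'])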
There has been a direct AWS solution available since 2016-12-01: AWS Step Functions.
Each execution can last (or idle) for up to one year, persists its state between transitions, and doesn't cost you any money while it waits.
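A sketch of what that can look like with boto3 (the role and Lambda ARNs are placeholders): a Wait state reads the delivery timestamp from each execution's input and sleeps until then:

    import boto3

    sfn = boto3.client('stepfunctions')

    # Minimal state machine: wait until a timestamp taken from the input,
    # then invoke a worker. The role and Lambda ARNs are placeholders.
    definition = '''{
      "StartAt": "WaitUntilDue",
      "States": {
        "WaitUntilDue": {"Type": "Wait", "TimestampPath": "$.deliver_at",
                         "Next": "Deliver"},
        "Deliver": {"Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:deliver",
                    "End": true}
      }
    }'''

    machine = sfn.create_state_machine(name='delayed-delivery',
                                       definition=definition,
                                       roleArn='arn:aws:iam::123456789012:role/sfn-role')

    # One execution per scheduled message; it idles until deliver_at.
    sfn.start_execution(stateMachineArn=machine['stateMachineArn'],
                        input='{"deliver_at": "2026-06-01T00:00:00Z"}')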