I have discovered, as have many before me, that App Engine has a 60 second execution deadline:
"Process terminated because the request deadline was exceeded."
My use case is a bit different to the others I've seen. I have a web form that lets you move a toggle switch; there's a page you can GET which represents this toggle state with a 1 or 0. A Raspberry Pi hits this page every 10 seconds and makes a light at my front gate match the state of the toggle. I'm doing all of this over HTTP (the Pi is on a 4G modem which firewalls traffic on other ports).
I had the idea earlier today of making a "has the state changed" handler. The Pi would get and match the state at first boot, but after that it would hit an (often very slow to load) handler that did something like this:
iterations = 0
current_state = get_state()
while iterations < 600:
    if get_state() != current_state:
        return "Change!"
    iterations += 1
    time.sleep(1)
return "No change"
This would reduce my 4G overhead to a single request every ten minutes, but - if the state changed - the page would finish loading immediately and I could act on it straight away. If nothing changed, I would just call the process again - but now I'd be doing it once every 10 min instead of once every 10 sec.
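On the Pi side, the loop might look something like this sketch (the URL, the endpoint paths and the set_gate_light helper are just placeholders, not my real code):

import time
import requests

BASE = "http://example.appspot.com"  # placeholder URL

def set_gate_light(state):
    # Stand-in for the real GPIO code that drives the gate light.
    print("light ->", state)

state = requests.get(BASE + "/state", timeout=15).text  # initial sync
set_gate_light(state)

while True:
    try:
        # Long-poll: returns early only when the toggle changes; the
        # timeout must exceed the handler's maximum wait.
        if requests.get(BASE + "/changed", timeout=630).text == "Change!":
            state = requests.get(BASE + "/state", timeout=15).text
            set_gate_light(state)
    except requests.RequestException:
        time.sleep(10)  # back off on 4G hiccups, then retry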
Even with a 50s upper limit, I can build this and it will save me some overhead + improve my response latency. But is there something I'm missing about how deadlines work which would let me do this in GAE for longer periods of time?
It is possible to reach a 60-minute request timeout by switching to the flexible environment. From Comparing high-level features: the table lists a request timeout of 60 seconds for the standard environment versus 60 minutes for flexible.
I have a Flink streaming job, where I am creating windows based on some keys and adding the incoming data points to them.
.window(SlidingProcessingTimeWindows.of(Time.days(1), Time.minutes(5)))
.trigger(CountTrigger.of(5))
.process(<ProcessWindowFunction>)
I'm using the above piece of code to create a sliding window of size 1 day with a slide of 5 minutes. Also, the count trigger fires the process function once 5 data points have accumulated.
In addition to this, I want to trigger the process function for every slide that happens. That is, until 1 day of data points has accumulated (the window size), CountTrigger shall fire the process function; once a full day's worth of points exists and the window slides every 5 minutes, I want to trigger the process function on every slide instead of waiting for CountTrigger to accumulate 5 data points. Can someone please help me with how to do this?
Be aware that this is going to be pretty painful. Every event is going to be assigned to a total of 288 windows (24 hours / 5 minutes). This means that every event is going to trigger 288 calls to your ProcessWindowFunction.
If you find you need to optimize this, you can probably get better performance with a carefully implemented KeyedProcessFunction.
Extend org.apache.flink.streaming.api.windowing.triggers.CountTrigger and override onProcessingTime method. Implement your processing time logic there. Then use this trigger instead of plain CountTrigger.
I struggle with what seems like an easy thing to do.
I'd like to ramp up the number of concurrent calls to an api over time until it breaks.
From my understanding this means rampUsersPerSec with a low initial rate, increasing to something high enough to break it, with a duration long enough to see when it actually breaks.
This is my code:

val httpProtocol = http
  .baseUrl("http://some.url")
  .userAgentHeader("breaking test")
  .shareConnections()

val scn = scenario("Ralph breaks it")
  .exec(
    http("root page")
      .get("/index.html")
      .check(status.is(200))
  )

setUp(
  scn.inject(
    rampUsersPerSec(1) to 100000 during (10 minutes))
  .protocols(httpProtocol))
Two things happen:

1. I get lots of exceptions when I run it: request timeouts, connection timeouts and so on. How can I stop the test when that happens?
2. Active Users seems much higher than expected. Did I understand `rampUsersPerSec` wrong? What is the correct way to scale the number of calls per second linearly?
When you use rampUsersPerSec you are defining the arrival rate of users. So with `rampUsersPerSec(1) to 100000 during (10 minutes)`, Gatling will inject 1 user per second at the start and gradually increase the rate until, at 10 minutes in, it is injecting 100,000 users per second.
Depending on the time it takes for your call to /index.html to respond, this can very quickly get out of hand, as Gatling isn't waiting for the users already injected to actually finish - it just keeps adding them regardless. So (roughly) in the first second Gatling might inject 1 user, but in the 2nd it might inject 166, in the 3rd 333, and so on. So if your scenario takes a few seconds to respond, the number of concurrent users can increase rapidly.
Unfortunately, I don't think there's any way to have the simulation detect when you've hit a defined error rate and stop. You would be better off with a much slower ramp over a longer duration. Alternatively, you could use the closed-model injection methods that target a given level of concurrency rather than an arrival rate.
I'm doing a similar thing and I managed to solve it using the code below:
private val injection = incrementConcurrentUsers(1) // +1 concurrent user per level
  .times(56)                                        // 56 levels in total
  .eachLevelLasting(1700 millis)
  .startingFrom(10)                                 // begin at 10 concurrent users
I came across this code by Ganssle regarding switch debouncing. The code seems pretty efficient, and the few questions I have may be very obvious, but I would appreciate clarification.
Why does he check 10 msec for button press and 100 msec for button release? Can't he just check 10 msec for press and release?
Is polling this function every 5 msec from main the most efficient way to execute it, or should I check for an interrupt on the pin, and when there is an interrupt, change the pin to GPI, go into the polling routine, and after we deduce the value, switch the pin back to interrupt mode?
#define CHECK_MSEC 5     // Read hardware every 5 msec
#define PRESS_MSEC 10    // Stable time before registering pressed
#define RELEASE_MSEC 100 // Stable time before registering released

// This function reads the key state from the hardware.
extern bool_t RawKeyPressed();

// This holds the debounced state of the key.
bool_t DebouncedKeyPress = false;

// Service routine called every CHECK_MSEC to
// debounce both edges
void DebounceSwitch1(bool_t *Key_changed, bool_t *Key_pressed)
{
    static uint8_t Count = RELEASE_MSEC / CHECK_MSEC;
    bool_t RawState;

    *Key_changed = false;
    *Key_pressed = DebouncedKeyPress;
    RawState = RawKeyPressed();
    if (RawState == DebouncedKeyPress) {
        // Set the timer which will allow a change from the current state.
        if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
        else                   Count = PRESS_MSEC / CHECK_MSEC;
    } else {
        // Key has changed - wait for new state to become stable.
        if (--Count == 0) {
            // Timer expired - accept the change.
            DebouncedKeyPress = RawState;
            *Key_changed = true;
            *Key_pressed = DebouncedKeyPress;
            // And reset the timer.
            if (DebouncedKeyPress) Count = RELEASE_MSEC / CHECK_MSEC;
            else                   Count = PRESS_MSEC / CHECK_MSEC;
        }
    }
}
Why does he check 10 msec for button press and 100 msec for button release?
As the blog post says, "Respond instantly to user input." and "A 100ms delay is quite noticeable".
So, the main reason seems to be to emphasize that the make debounce should be kept short, so that the make is registered "immediately" by human senses, and that the break debounce is less time-sensitive.
This is also supported by a paragraph near the end of the post: "As I described in the April issue, most switches seem to exhibit bounce rates under 10ms. Coupled with my observation that a 50ms response seems instantaneous, it's reasonable to pick a debounce period in the 20 to 50ms range."
In other words, the code in the example is much more important than the example values; the proper values depend on the switches used, and you're supposed to decide those yourself, based on the particulars of your specific use case.
Can't he just check 10 msec for press and release?
Sure, why not? As he wrote, it should work, even though (as quoted above) he prefers somewhat longer debounce periods (20 to 50 ms).
Is polling this function every 5 msec from main the most efficient way to execute it
No. As the author wrote, "All of these algorithms assume a timer or other periodic call that invokes the debouncer." In other words, this is just one way to implement software debouncing, and the shown examples are based on a regular timer interrupt, that's all.
Also, there is nothing magical about the 5 ms; as the author says, "For quick response and relatively low computational overhead I prefer a tick rate of a handful of milliseconds. One to five milliseconds is ideal."
or should I check for an interrupt on the pin, and when there is an interrupt, change the pin to GPI, go into the polling routine, and after we deduce the value, switch the pin back to interrupt mode?
If you implement that in code, you'll find that it is rather nasty to have an interrupt that blocks the normal running of the code for 10-50 ms at a time. It is okay if checking the input pin state is the only thing being done, but if the hardware does anything else, like update a display, or flicker some blinkenlights, your debouncing routine in the interrupt handler will cause noticeable jitter/stutter. In other words, what you suggest is not a practical implementation.
The way the periodic timer interrupt based software debouncing routines (shown in the original blog post, and elsewhere) work, they take only a very short amount of time, just a couple of dozen cycles or so, and do not interrupt other code for any significant amount of time. This is simple, and practical.
You can combine a periodic timer interrupt and an input pin (state change) interrupt, but since the overhead of many of the timer-interrupt-only -based software debounces is tiny, it typically is not worth the effort trying to combine the two -- the code gets very, very complicated, and complicated code (especially on an embedded device) tends to be hard/expensive to maintain.
The only case I can think of (but I'm only a hobbyist, not an EE by any means!) is if you wanted to minimize power use for e.g. battery powered operation, and used the input pin interrupt to bring the device to partial or full power mode from sleep, or similar.
(Actually, if you also have a millisecond or sub-millisecond counter (not necessarily based on an interrupt, but possibly a cycle counter or similar), you can use the input pin interrupt and the cycle counter to update the input state on the first change, then desensitize it for a specific duration afterwards, by storing the cycle counter value at the state change. You do need to handle counter overflow, though, to avoid the situation where a long ago event seems to have happened just a short time ago, due to counter overflowing.)
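To make that overflow handling concrete: the comparison boils down to modular subtraction. Here is a tiny sketch, in Python for brevity, assuming a free-running 16-bit counter (the values are placeholders):

MASK = 0xFFFF        # assumed 16-bit free-running counter
DESENSITIZE = 20000  # ticks to ignore changes for (example value)

def elapsed(now, then):
    # Modular subtraction gives the correct elapsed tick count even if
    # the counter wrapped once between 'then' and 'now' (but no more).
    return (now - then) & MASK

def accept_change(counter_now, last_change_tick):
    # Only accept a new pin state once the desensitivity period is over.
    return elapsed(counter_now, last_change_tick) >= DESENSITIZE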
I found Lundin's answer quite informative, and decided to edit my answer to show my own suggestion for software debouncing. This might be especially interesting if you have very limited RAM, but lots of buttons multiplexed, and you want to be able to respond to key presses and releases with minimum delay.
Do note that I do not wish to imply this is "best" in any sense of the word; I only want to show you one approach I haven't seen often used, but which might have some useful properties in some use cases. Here, too, the number of scan cycles (milliseconds) the input changes are ignored (10 for make/off-to-ON, 10 for break/on-to-OFF) are just example values; use an oscilloscope or trial-and-error to find the best values in your use case. If this is an approach you find more suitable to your use case than the other myriad alternatives, that is.
The idea is simple: use a single byte per button to record the state, with the least significant bit describing the state, and the seven other bits being the desensitivity (debounce duration) counter. Whenever a state change occurs, the next change is only considered a number of scan cycles later.
This has the benefit of responding to changes immediately. It also allows different make-debounce and break-debounce durations (during which the pin state is not checked).
The downside is that if your switches/inputs have any glitches (misreadings outside the debounce duration), they show up as clear make/break events.
First, you define the number of scans the inputs are desensitized after a break, and after a make. These range from 0 to 127, inclusive. The exact values you use depend entirely on your use case; these are just placeholders.
#define ON_ATLEAST 10 /* 0 to 127, inclusive */
#define OFF_ATLEAST 10 /* 0 to 127, inclusive */
For each button, you have one byte of state, variable state below; initialized to 0. Let's say (PORT & BIT) is the expression you use to test that particular input pin, evaluating to true (nonzero) for ON, and false (zero) for OFF. During each scan (in your timer interrupt), you do
if (state > 1) {
    /* Still desensitized: subtracting 2 decrements the counter held in
       the upper bits while preserving the state bit (bit 0). */
    state -= 2;
} else if ( (!(PORT & BIT)) != (!state) ) {
    /* Pin differs from the recorded state: accept the change immediately
       and start a new desensitivity period. */
    if (state)
        state = OFF_ATLEAST*2 + 0;   /* was ON, now OFF */
    else
        state = ON_ATLEAST*2 + 1;    /* was OFF, now ON */
}
At any point, you can test the button state using (state & 1). It will be 0 for OFF, and 1 for ON. Furthermore, if (state > 1), then this button was recently turned ON (if (state & 1) is 1) or OFF (if (state & 1) is 0) and is therefore not yet sensitive to changes in the input pin state.
In addition to the accepted answer, if you just wish to poll a switch from somewhere every n ms, there is no need for all of the obfuscation and complexity from that article. Simply do this:
static bool prev=false;
...
/*** execute every n ms ***/
bool btn_pressed = (PORT & button_mask) != 0;
bool reliable = btn_pressed==prev;
prev = btn_pressed;
if(!reliable)
{
btn_pressed = false; // btn_pressed is not yet reliable, treat as not pressed
}
// <-- here btn_pressed contains the state of the switch, do something with it
This is the simplest way to debounce a switch. For mission-critical applications, you can use the very same code but add a simple median filter over the last 3 or 5 samples, as sketched below.
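With boolean samples, a median filter reduces to a majority vote. A minimal sketch of the idea (in Python for brevity; the C version is analogous):

from collections import deque

last5 = deque([False] * 5, maxlen=5)  # five most recent raw reads

def filtered(raw_pressed):
    last5.append(raw_pressed)
    # Median of five booleans: pressed when at least three reads agree.
    return sum(last5) >= 3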
As noted in the article, the electro-mechanical bounce of switches is most often less than 10ms. You can easily measure the bouncing with an oscilloscope, by connecting the switch between any DC supply and ground (in series with a current-limiting resistor, preferably).
I'm creating a card game in pygame for my college project, and a large aspect of the game is how the game's AI reacts to the current situation. I have a function to randomly generate a number within 2 parameters, and this is how long I want the program to wait.
All of the code for my AI is contained within an if statement, and once called I want the program to wait a generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well, if it didn't pause the rest of the program whilst the AI is waiting, stopping the user from interacting with the program. This means I cannot use while loops to create my timer either.
What is the best way to work around this without going into multi-threading or other complex solutions? My project is due in soon and I don't want to make massive changes. I've tried using pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help and I look forward to your input.
The easiest way around this would be to have a variable within your AI called something like "wait" and set it to a random number (of course, it will have to be tweaked to your program's speed... I'll explain in the code below). Then in your update function, have a conditional that checks whether that wait number is zero or below, and if not, subtract a certain amount of time from it. Below is a basic set of code to explain this...
class AI(object):
    def __init__(self):
        # Put the stuff you want in your AI in here.
        # All you need for the delay is this variable:
        self.currentwait = 100
        # If you want a static number as your wait time, add this variable:
        self.wait = 100  # your number here

    def updateAI(self):
        # If the wait number is zero or less, then do stuff.
        if self.currentwait <= 0:
            pass  # do your AI stuff here
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, you can change the amount removed from
            # your "currentwait" variable.
            self.currentwait -= 100  # your number here
To give you an idea of what is going on above, you have a variable called currentwait. This variable describes the time the program still has to wait. If this number is greater than 0, there is still time to wait, so nothing will get executed. However, time is subtracted from this variable, so every tick there is less time to wait. You can control this rate by using the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
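For example, a hypothetical main loop wiring the class above into the game's tick rate (assuming the AI class lives in the same file):

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()

ai = AI()  # the class defined above; tune currentwait and the amount
           # subtracted in updateAI to this loop's tick rate

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    ai.updateAI()          # counts down (or acts) once per frame
    pygame.display.flip()
    clock.tick(60)         # 60 ticks per second
pygame.quit()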
Like I said this is very basic so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps you and good luck with your project :)
The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?
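A minimal sketch of that approach (the event id and the delay range are placeholders):

import random
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))

AI_TURN = pygame.USEREVENT + 1  # hypothetical custom event id

# set_timer repeats at the given interval, so cancel it after the first
# event to get one-shot behaviour.
pygame.time.set_timer(AI_TURN, random.randint(500, 2000))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == AI_TURN:
            pygame.time.set_timer(AI_TURN, 0)  # cancel the timer
            print("AI decides now")            # decision logic goes here
    pygame.display.flip()
pygame.quit()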
So I have to implement a discrete event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation/textbook I've read puts things in terms a little too abstract for me to be able to figure out how it actually works, nor does it put things in terms of CPU bursts and IO bursts (some did, but still weren't helpful enough).
I'm not posting any of the code I have (I wrote a lot, actually, but I think I'm going to rewrite it after I figure out (in the words of Trump) what is actually going on). Instead, I just want help figuring out a sort of pseudocode I can then implement.
We are given multiple processes with an Arrival Time (AT), Total Cpu (TC), Cpu burst (CB), and Io burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
put all processes into the eventQueue
initialize all processes.state = CREATED

while (eventQueue not empty):
    process = eventQueue.getFront()

    if process.state == CREATED:   # it can transition to READY
        clock = process.AT
        process.state = READY
        then I add it back to the end (?) of the eventQueue

    if process.state == READY:     # it can transition to RUN
        clock = process.AT + process.CPU_time_had + process.IO_time_had (?)
        CPU_burst = process.CB * Rand(b/w 0 and process.CB)
        if (CPU_burst >= process.TC - process.CPU_time_had):
            # then it's done - I don't add it back
            process.finish_time = clock + CPU_burst
            continue
        else:
            process.CPU_time_had += CPU_burst
            # (?) not sure if I put the process into BLOCKED or READY here
            add it to the back of the eventQueue (?)

    if process.state == BLOCKED:
        # no idea what happens (?)
        # or do things never get blocked in FCFS (which would make sense)
Also, how do IO bursts enter into this picture?
Thanks for the help guys!
Look at the arrival time of each thread; you can sort the queue so that threads with earlier arrival times appear before threads with later arrival times. Run the thread at the front of the queue (this is a thread scheduler). Run the thread one burst at a time; when the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's IO time (and sort the queue again on arrival times). This way other threads can execute while a thread is performing IO.
(My answer is assuming you are in the same class as me. [CIS*3110])
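For what it's worth, here is a minimal sketch of that event loop in Python - not the course's reference solution. It fixes the burst lengths (the real assignment randomizes them) and lets IO overlap freely instead of modelling a separate BLOCKED queue:

import heapq

def fcfs_simulate(procs):
    """procs: list of dicts with keys AT, TC, CB, IO."""
    # The event queue holds (time, pid) pairs meaning "pid becomes ready".
    events = [(p["AT"], pid) for pid, p in enumerate(procs)]
    heapq.heapify(events)
    remaining = [p["TC"] for p in procs]  # CPU time each process still needs
    cpu_free_at = 0                       # when the single CPU next goes idle
    finish = {}
    while events:
        ready_at, pid = heapq.heappop(events)
        # FCFS, non-preemptive: start as soon as the CPU frees up.
        start = max(ready_at, cpu_free_at)
        burst = min(procs[pid]["CB"], remaining[pid])
        remaining[pid] -= burst
        cpu_free_at = start + burst
        if remaining[pid] == 0:
            finish[pid] = cpu_free_at     # done: don't requeue it
        else:
            # Block for the IO burst, then become ready again.
            heapq.heappush(events, (cpu_free_at + procs[pid]["IO"], pid))
    return finish

print(fcfs_simulate([
    dict(AT=1,    TC=200, CB=10, IO=20),  # p1 from the question
    dict(AT=1000, TC=200, CB=20, IO=10),  # p2 from the question
]))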