Timer to represent AI reaction times

I'm creating a card game in pygame for my college project, and a large part of the game is how the AI reacts to the current situation. I have a function that randomly generates a number between two parameters, and that is how long I want the program to wait.
All of the code for my AI is contained within an if statement. Once it is called, I want the program to wait the generated amount of time and then make its decision on what to do.
Originally I had:
pygame.time.delay(calcAISpeed(AIspeed))
This would work well if it didn't pause the rest of the program while the AI is waiting, stopping the user from interacting with it. For the same reason I cannot use while loops to create my timer either.
What is the best way to work around this without going into multi-threading or other complex solutions? My project is due soon and I don't want to make massive changes. I've tried using the pygame.time.Clock functions to compare the current time to the generated one, but resetting the clock once the operation has been performed has proved troublesome.
Thanks for the help and I look forward to your input.

The easiest way around this is to give your AI a variable called something like "wait" and set it to a random number (you will have to tweak it to your program's speed; I'll explain in the code below). Then, in your update function, have a conditional that checks whether that wait number is zero or below, and if it isn't, subtract a certain amount of time from it. Below is a basic set of code to explain this:
class AI(object):
    def __init__(self):
        # put the stuff you want in your AI in here
        self.currentwait = 100
        # ^^^ All you need is this variable defined somewhere
        # If you want a static number as your wait time, add this variable
        self.wait = 100  # your number here

    def updateAI(self):
        # If the wait number is zero or below, then do stuff
        if self.currentwait <= 0:
            # Do your AI stuff here
            pass
        else:
            # Based on your game's tick speed and how long you want
            # your AI to wait, you can change the amount removed from
            # your "currentwait" variable
            self.currentwait -= 100  # your number here
To give you an idea of what is going on above, you have a variable called currentwait. This variable describes the time the program still has to wait. If this number is greater than 0, there is still time to wait, so nothing will get executed. However, time is subtracted from this variable every tick, so there is less and less time to wait. You can control this rate using the clock tick rate. For example, if your clock rate is set to 60, you can make the program wait 1 second by setting currentwait to 60 and taking 1 off every tick until the number reaches zero.
Like I said this is very basic so you will probably have to change it to fit your program slightly, but it should do the trick. Hope this helps you and good luck with your project :)
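If counting ticks feels fiddly, a real-time variation of the same idea subtracts the milliseconds returned by clock.tick() instead of a fixed amount, so the wait length no longer depends on the frame rate. This is only a minimal sketch: calcAISpeed here is a stand-in for your own function, and the 0.5 to 2 second range is made up.
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

def calcAISpeed(ai_speed):
    # stand-in for your own function: wait between 0.5 and 2 seconds
    return random.randint(500, 2000)

ai_wait = calcAISpeed(1)  # milliseconds left before the AI acts

running = True
while running:
    dt = clock.tick(60)  # milliseconds since the last frame
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    if ai_wait > 0:
        ai_wait -= dt  # count down while the rest of the game keeps running
    else:
        # make the AI's decision here, then start a new wait
        ai_wait = calcAISpeed(1)

    pygame.display.flip()

pygame.quit()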

The other option is to create a timer event on the event queue and listen for it in the event loop: How can I detect if the user has double-clicked in pygame?
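A minimal sketch of that timer-event approach, assuming a custom user event for the AI's turn (the event name and the delay range are illustrative):
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

AI_TURN_EVENT = pygame.USEREVENT + 1  # illustrative custom event id
pygame.time.set_timer(AI_TURN_EVENT, random.randint(500, 2000))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == AI_TURN_EVENT:
            pygame.time.set_timer(AI_TURN_EVENT, 0)  # one-shot: cancel the repeating timer
            # make the AI's decision here; re-arm with another
            # pygame.time.set_timer call when the AI's next turn starts
    clock.tick(60)

pygame.quit()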

Related

NetLogo - how to show the final sum of ticks passed

I'm currently working on a simulation where I've programmed turtles to move faster when they reach a specific number of ticks. For this reason I have to reset the ticks to make the command run over and over again. What I want to do is see the final sum of ticks that have passed over the entire simulation, but with the reset-ticks command I can only see how many ticks have passed since the last time I used the move command on my turtles. This makes it impossible for me to use a monitor in my interface to show ticks. So how do I see the final count of ticks since I started the simulation, and not only since the last time the ticks were reset?
To do what you describe, you could create a global variable (say, tickTotal), increment it each tick, and add a monitor to the interface.
But what you should do instead is stop resetting ticks. Instead, use the mod command so that the turtles' cyclical movement responds to the tick count, as sketched below.
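To make that concrete, here is the idea in Python for illustration only (in NetLogo itself you would test something like "ticks mod 100 = 0" inside your go procedure):
# Illustration of "count forever, act cyclically with mod":
# the counter is never reset, so its final value is the true total.
tick_total = 0

def go():
    global tick_total
    tick_total += 1              # plays the role of the tickTotal monitor
    if tick_total % 100 == 0:    # hypothetical cycle length of 100 ticks
        speed_up_turtles()       # hypothetical stand-in for the move command

def speed_up_turtles():
    pass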

Process scheduler

So I have to implement a discrete event CPU scheduler for my OS class, but I don't quite understand how it works. Every explanation or textbook I've read puts things in terms a little too abstract for me to figure out how it actually works, and doesn't put things in terms of CPU bursts and I/O bursts (some did, but still not helpfully enough).
I'm not posting any of the code I have (I wrote a lot, actually, but I think I'm going to rewrite it after I figure out, in the words of Trump, what is actually going on). Instead I just want help figuring out a sort of pseudocode I can then implement.
We are given multiple processes with an Arrival Time (AT), Total CPU (TC), CPU burst (CB), and I/O burst (IO).
Suppose that I was given: p1 (AT=1, TC=200, CB=10, IO=20) and p2 (AT=1000, TC=200, CB=20, IO=10). And suppose I am implementing a First Come First Serve scheduler.
I also put question marks (?) where I'm not sure.
Put all processes into eventQueue
initialize all processes.state = CREATED

while (eventQueue not empty):
    process = eventQueue.getFront()

    if process.state == CREATED:   # it can transition to READY
        clock = process.AT
        process.state = READY
        then I add it back to the end (?) of the eventQueue

    if process.state == READY:     # it can transition to RUN
        clock = process.AT + process.CPU_time_had + process.IO_time_had (?)
        CPU_Burst = process.CB * Rand(b/w 0 and process.CB)
        if (CPU_Burst >= process.TC - process.CPU_time_had):
            # then it's done, I don't add it back
            process.finish_time = clock + CPU_Burst
            continue
        else:
            process.CPU_time_had += CPU_Burst
            (?) Not sure if I put the process into BLOCK or READY here
            Add it to the back of the eventQueue (?)

    if process.state == BLOCK:
        No idea what happens (?)
        Or do things never get blocked in FCFS (which would make sense)?

Also, how do I/O bursts enter into this picture?
Thanks for the help guys!
Look at the arrival time of each thread: you can sort the queue so that events with earlier arrival times appear before threads with later arrival times. Run the thread at the front of the queue (this is a thread scheduler). Run the thread one burst at a time; when the burst's CPU time is up, enter a new event at the back of the queue with an arrival time of the current time plus the burst's I/O time (then sort the queue again on arrival times). This way other threads can execute while a thread is performing I/O.
(My answer is assuming you are in the same class as me. [CIS*3110])
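Here is a rough Python sketch of that event loop for the FCFS case, under my own simplifying assumptions; in particular, how the random burst is drawn and how ties are broken will depend on your assignment's spec:
import heapq
import random

class Process:
    def __init__(self, name, at, tc, cb, io):
        self.name, self.AT, self.TC, self.CB, self.IO = name, at, tc, cb, io
        self.cpu_time_had = 0       # CPU time received so far
        self.finish_time = None

def fcfs(processes):
    clock = 0
    events = []                     # min-heap of (ready_time, seq, process)
    seq = 0
    for p in processes:             # every process becomes READY at its arrival time
        heapq.heappush(events, (p.AT, seq, p))
        seq += 1

    while events:
        ready_time, _, p = heapq.heappop(events)
        clock = max(clock, ready_time)            # CPU may sit idle until the process is ready
        burst = min(random.randint(1, p.CB),      # assumed burst rule: random, capped by work left
                    p.TC - p.cpu_time_had)
        clock += burst                            # run the burst on the CPU
        p.cpu_time_had += burst
        if p.cpu_time_had >= p.TC:
            p.finish_time = clock                 # done: do not requeue it
        else:
            # block for I/O; the process becomes ready again at clock + IO,
            # and other processes can use the CPU in the meantime
            heapq.heappush(events, (clock + p.IO, seq, p))
            seq += 1

    for p in processes:
        print(p.name, "finished at", p.finish_time)

fcfs([Process("p1", 1, 200, 10, 20), Process("p2", 1000, 200, 20, 10)])
The I/O bursts enter the picture exactly as the answer above says: blocking for I/O just means re-inserting the process with a later ready time, so nothing ever "runs in the background".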

How do I reliably pause the state of a game?

So I have a couple instances where I want to be able to 'freeze' the state of my game. It's a top-down scroller, and I want to give the player the ability to pause the scrolling of the screen for a short time by using a powerup (if you fall to the bottom of the screen you die). I also want to pause the game as it is starting, and draw a 3, 2, 1, go! to give the player time to get ready, because right now as soon as you hit play, the screen starts scrolling.
I have been using Timer to accomplish this; however, it doesn't work if I want to freeze the screen on consecutive occasions. For example, if a player uses a freeze, the screen successfully freezes, but if they quickly use another freeze, it doesn't work; there seems to be an unintended cool-down. I have a similar problem with the 'intro delay' I mentioned earlier; for some reason it only works on the first 2 levels. Here is how I am using Timer.
if (gameState != STATE.frozen) {
    camera.translate(0, (float) scrollSpeed);
    staminaBar.setPosition(staminaBar.getX(), (float) (staminaBar.getY() + scrollSpeed));
    staminaMeter.setPosition(staminaMeter.getX(), (float) (staminaMeter.getY() + scrollSpeed));
    healthBar.setPosition(healthBar.getX(), (float) (healthBar.getY() + scrollSpeed));
    healthMeter.setPosition(healthBar.getX(), (float) (healthMeter.getY() + scrollSpeed));
    boostBar.setPosition(boostBar.getX(), (float) (boostBar.getY() + scrollSpeed));
    boostMeter.setPosition(boostMeter.getX(), (float) (boostMeter.getY() + scrollSpeed));
    screenCeiling += (float) scrollSpeed;
    screenFloor += (float) scrollSpeed;
}
else {
    Timer.schedule(new Task() { // freeze the screen for 5 seconds
        @Override
        public void run() {
            gameState = STATE.playing;
        }
    }, 5);
}
From what I understand, it waits 5 second before resuming the game to the 'playing' state. But like I said, this only works when activated between large intervals and I don't know why. Is there a better way I can be doing this?
As for the intro delay, this may be a question better asked separately, but I use the same method and it doesn't let me draw sprites over my tiled map, so if anyone knows how to do that, please include it in your response.
Assuming the code you posted is in your render loop, then whenever you are not in the frozen state, you are creating a new timer task on every frame. So if you freeze for 5 seconds and your game is running at 60fps, you will create 300 timer tasks, each of which is going to force the game to go back to playing state. The last one won't fire until 5 seconds after the first one fires, so there will be a five second "cooldown" during which you cannot change the state to anything besides playing, because there will be another timer task firing on every frame during that time.
You need to ensure that you only create one timer task, only when you first enter frozen state.
I do have a suggestion...instead of using a state to freeze the game, use a variable that's multiplied by scrollSpeed. Change that variable from one to zero when the player uses the powerup. Then you can do fancy stuff like quickly interpolating from one to zero so the speed change isn't so abrupt. And it will probably make your code simpler since there would be one less state that must be handled differently in the algorithm.
Check your gameState variable in the render method and if the game is playing, then update the game as usual and draw it.
If the game is not playing then skip the game's update method and create a time delay from the current time:
endTime = TimeUtils.millis()+5000;
Then each time through the render method check to see if current time is greater than the end time. When the current time is past your delay time, set gameState back to playing and have the game go back to updating.
You'll have to have another boolean flag so you only set endTime once (you don't want to keep resetting it each time through the render loop). Alternatively, if "STATE" is an enum, include an option such as "justPaused": on the exact frame that you pause the game, set the end time, then set STATE to "notPlaying".
You can also use this to create an alternative "update" method where you can update your countdown sprites, but not update the game itself. When the game is playing this other update method will be skipped.
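The pattern itself is framework-independent; here is a compact sketch of it in Python (in libGDX you would use TimeUtils.millis() and your render() method, and the 5-second delay is just an example):
import time

PLAYING, JUST_FROZEN, FROZEN = range(3)
state = PLAYING
end_time = 0.0

def use_freeze_powerup():
    # call this when the player uses the powerup
    global state
    state = JUST_FROZEN

def render():
    global state, end_time
    if state == JUST_FROZEN:
        end_time = time.time() + 5.0       # set the end time exactly once
        state = FROZEN
    elif state == FROZEN and time.time() >= end_time:
        state = PLAYING                    # unfreeze once the delay has elapsed
    if state == PLAYING:
        pass                               # scroll the camera / move the HUD bars here
    # drawing (including a 3, 2, 1 countdown) can still happen every frame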

LabVIEW PID.vi continues when event case is False

I'm looking for a way to stop the PID.vi from running in LabVIEW when the event case container is false.
The program controls motor position to maintain constant tension on a cable, using target force and actual force as the input parameters. The output is motor position. Note that reinitialize is set to false, since the VI needs its previous state to spool the motor.
Currently, when the event case is true, the motor spools as expected and maintains the cable tension. But when the event case is toggled to false, the PID.vi seems to keep acting in the background, causing the motor to spool sporadically.
Is there a way to freeze the PID controls so that it continues from where it left off?
The PID VI does not run in the background. It only executes when you call it. That said, PID is a time-based calculation. It calculates the difference from the last time you called the VI and uses that to calculate the new values. If a lot of time passed, it will just try to fix it using that data.
If you want to freeze the value and then resume fixing smoothly, you can use the limits input on the top and set the max and min to your desired output. This will cause the PID VI to always output that value. You will probably need a feedback node or shift register to remember the last value output by the PID.
What Yair said is not entirely true - the integral and derivative terms are indeed time dependent, but the proportional is not. A great reference for understanding PIDs and how they are implemented in LabVIEW can be found here (not sure why it is archived). Also, the PID VIs are coded in G so you can simply open them to see how they operate.
If you take a closer look at the PID VI, you can see what is happening and why you might not get the response you expect. In the VI itself, dt will be either 1) what you set it to, or 2) an accumulation of time based on a tick count stored in the VI (the default). Since you have not specified a dt, the PID algorithm uses the accumulated time between calls. If you have "paused" calculation for some time, this will have an impact on the integral and derivative output.
The derivative output will kick in when there is a change in the process variable (using the process variable prevents derivative kick). The effect of a large accumulated time between calls is to reduce the response of this term. The pause has a more significant impact on the integral term: since the response of the integral portion of the controller is proportional to the integral of the error over dt, the longer you pause, the larger the response, simply because the algorithm is performing a trapezoidal integration over dt.
My first suggestion is don't pause the controller - let the PID do what it is supposed to do. If you are using it properly, then you should not have to stop the controller action. But, if you must pause the controller action, consider re-initializing the controller. This will force the controller to reset the accumulated time term and the response in the first iteration will be purely proportional.
Hope this helps.
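To see the dt effect in code rather than in G, here is a generic discrete PID sketch; it is not the LabVIEW PID VI, just an illustration of the time-based terms:
import time

class SimplePID:
    # Generic discrete PID showing why a long gap between calls inflates
    # the integral term: the error is integrated over the elapsed dt.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.reinitialize()

    def reinitialize(self):
        # clears the accumulated state, as suggested above after a pause;
        # the first call afterwards is then effectively proportional-only
        self.integral = 0.0
        self.prev_pv = None
        self.prev_time = None

    def update(self, setpoint, pv, now=None):
        now = time.time() if now is None else now
        error = setpoint - pv
        if self.prev_time is None:
            self.prev_time, self.prev_pv = now, pv
            return self.kp * error
        dt = max(now - self.prev_time, 1e-6)       # grows if the loop was "paused"
        self.integral += error * dt                # a big dt means a big integral kick
        derivative = -(pv - self.prev_pv) / dt     # on the PV, to avoid derivative kick
        self.prev_time, self.prev_pv = now, pv
        return self.kp * error + self.ki * self.integral + self.kd * derivative
In this form it is easy to see why re-initializing after a pause removes the accumulated integral and leaves a purely proportional response on the next call.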

Timing while a value is true in LabVIEW

I have been making a LabVIEW program for kids to monitor energy production from various types of power sources. I have a condition where, if they are underproducing, a warning will fire, and if they are overproducing beyond a certain threshold, another warning will fire.
I would like to time how long each type of warning is fired throughout the activity, so each group will have a score at the end. This is just to simulate how the eventual program will behave.
Currently I have a timer which can derive the amount of time the warning is true, but it overwrites itself each time the warning goes off and on again.
So basically I need to sum up the total time that the value has been true, even when it has flipped between true and false.
One method of tabulating the total time spent "True" would be exporting the Warning indicator from the While-loop using an indexed tunnel. If you also export from the loop a millisecond counter value of when the indicator was triggered, you can post process what will be an array of True/False values with the corresponding time at which the value transitioned.
The post processing could be a for-loop that keeps a running total of time spent true.
P.s. if you export your code as a VI snippet, others will be able to directly examine and modify the code without needing to remake it from scratch. See the NI webpage on the subject:
http://www.ni.com/white-paper/9330/en/
I would suggest going another way. Personally, I found the code you used confusing, since you subtract the tick count from the value in the shift register, which may work, but doesn't make any logical sense.
Instead, I would suggest turning this into a subVI which does the following:
Keep the current boolean value, the running total and the last reset time in shift registers.
Initialize these SRs on the first call using the first call primitive and a case structure.
If the value changes from F to T (compare the input to the SR), update the start time.
If it changes from T to F, subtract the start time from the current time and add that to the total.
I didn't actually code this now, so there may be holes there, but I'm leaving that as an exercise. Also, I would suggest making the VI reentrant. That way, you can simply call it a second time to get the same functionality for the second timer.
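Since the coding was left as an exercise, here is a plain-Python sketch of that shift-register logic (only an illustration of the algorithm; the names are made up and it is obviously not G code):
import time

class WarningTimer:
    # The three attributes play the role of the three shift registers.
    def __init__(self):
        self.prev = False        # previous boolean value
        self.total = 0.0         # accumulated seconds spent True
        self.started = 0.0       # time of the last False-to-True transition

    def update(self, warning_active, now=None):
        now = time.time() if now is None else now
        if warning_active and not self.prev:      # F -> T: remember the start time
            self.started = now
        elif self.prev and not warning_active:    # T -> F: add the elapsed span to the total
            self.total += now - self.started
        self.prev = warning_active
        # include the in-progress span so the score keeps counting while still True
        return self.total + (now - self.started if warning_active else 0.0)
Creating one instance per warning mirrors the suggestion to make the subVI reentrant.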
