How do I keep the output to two decimal places?
No decimal places:

import time

print("60 SECOND TIMER")
run = input('click ENTER to run')
secs = 0
while secs < 60:
    print(60 - secs)
    time.sleep(1)
    secs = secs + 1
Two decimal places:

import time

print("60 SECOND TIMER")
run = input('click ENTER to run')
secs = 0
while secs < 60:
    print(60 - secs)
    time.sleep(0.01)
    secs = secs + 0.01
Quick note: the two-decimal-places version starts going mad (the printed value ends up with 8 or 9 decimal places).
Use the round() function like this:
print(round(60 - secs, 2))
to output the remaining time to two decimal places.
Incidentally, printing it every 10 ms may be a bit optimistic considering that your display is probably only updated 60 times a second, i.e. at 16.67 ms intervals.
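The drift itself comes from repeatedly adding 0.01, which has no exact binary float representation. A minimal sketch that sidesteps it by counting integer ticks instead (the variable names are my own):

import time

print("60 SECOND TIMER")
input('click ENTER to run')
ticks = 0                          # hundredths of a second elapsed
while ticks < 6000:
    print(round(60 - ticks / 100, 2))
    time.sleep(0.01)
    ticks += 1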
Try time.sleep(.01 - timer() % .01) to lock the sleep to the timer. It won't help if either time.sleep() or timer() lacks 10 ms granularity, though. It may also depend on how the Python interpreter switches between threads (GIL acquire/release) and on the OS scheduler (how busy the system is, and how fast the OS can switch between processes/threads).
To pause for a short duration, you could try a busy loop instead:

from time import monotonic as timer

deadline = timer() + .01
while timer() < deadline:
    pass
For example, trying to do something every 10 ms for a minute using plain time.sleep() would probably fail:
import time
from time import monotonic as timer

now = timer()
deadline = now + 60  # a minute
while now < deadline:  # do something until the deadline
    time.sleep(.01 - timer() % .01)  # sleep until the next 10 ms boundary
    now = timer()
    print("%.06f" % (deadline - now,))
but the solution based on a busy loop should be more precise:
import time
from time import monotonic as timer

dt = .01  # 10 ms
time.sleep(dt - timer() % dt)  # align the start with a 10 ms boundary
deadline = now = timer()
outer_deadline = now + 60  # a minute
while now < outer_deadline:  # do something until the outer deadline
    print("%.06f" % (outer_deadline - now,))
    # pause until the next 10 ms boundary
    deadline += dt
    while now < deadline:
        now = timer()
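A common compromise between plain time.sleep() and a pure busy loop (which burns a full core while it waits) is to sleep through most of the interval and spin only at the very end. A minimal sketch; the 2 ms spin margin is my assumption, tune it for your system:

from time import monotonic as timer, sleep

def wait_until(deadline, spin=0.002):
    # coarse sleep first, then busy-wait the final `spin` seconds for precision
    remaining = deadline - timer()
    if remaining > spin:
        sleep(remaining - spin)
    while timer() < deadline:
        pass

wait_until(timer() + .01)  # pause ~10 ms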
C Time Difference
printf("Enter Start Time : ");
scanf("%2d %2d", &shours, &sminutes);
printf("Enter End Time : ");
scanf("%2d %2d", &ehours, &eminutes);

printf("\nTIME DIFFERENCE\n");
tohours = ehours - shours;
printf("Hour(s) : %2d", tohours);
tominute = eminutes - sminutes;
printf("\nMinute(s): %2d ", tominute);
How can I make my output come out right? When I run my code, the minutes output is -59 instead of 1, and it's the hours output that ends up with the "1".
P.S. Preferably without using if/else statements.
Use (some sort of) timestamps, by turning your hours and minutes variables into one, e.g.:

stime = shours * 60 + sminutes;
etime = ehours * 60 + eminutes;

then calculate the difference of that:

totime = etime - stime;

then convert that back into hours and minutes:

tominutes = totime % 60;
tohours = (totime - tominutes) / 60;

(integer division will take care of rounding down)

Not the most elaborate solution, but I guess you're looking for a beginner-friendly one.
Edit
Speaking of beginner-friendly: % is the modulus operator, which returns the remainder of a division, so 119 % 60 returns 59. And yes, you could also get the hours by dividing totime by 60 directly and letting integer division do the job, but it's nicer (read: clearer what's going on) to divide (totime - tominutes), because that line reads as the counterpart of the line with the modulus.
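If it helps to sanity-check the arithmetic, here is the same idea sketched in Python with example values of my own (// is integer division):

# hypothetical example: start 10:59, end 12:01
shours, sminutes = 10, 59
ehours, eminutes = 12, 1

stime = shours * 60 + sminutes        # 659
etime = ehours * 60 + eminutes        # 721
totime = etime - stime                # 62 minutes

tominutes = totime % 60               # 2
tohours = (totime - tominutes) // 60  # 1
print(tohours, "hour(s)", tominutes, "minute(s)")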
I want to control TPH/TPS using HP LoadRunner. In JMeter we can do it with the Constant Throughput Timer; if anyone has alternative ways, please share.
For example:
Transaction A-Login (100 TPH)
Transaction B-Search Product (1000 TPH)
Transaction C-Add Product in cart (200 TPH)
Transaction D-Payment (200 TPH)
Transaction E-Logout (100 TPH)
If all of these transactions are in different scripts, there's no problem, since you can set different pacing and run-time settings for each script.
I assume your problem is that all of these transactions are in the same script. In this case, the only solution is to create a parameter in your script, let's call it iterator, and set its type to Iteration Number. That way the parameter will have the value 1 in the first iteration, 2 in the second, and so on.
Now you can use this parameter before calling each transaction.
Let's say your maximum TPH is 1,000. Then set the script's pacing in the run-time settings to 1,000 TPH. If you want a certain transaction to run less often than that, say only 100 TPH, then run it on every 10th iteration only (1,000 / 100 = 10).
To do that, in your script, you can use iterator % 10:
// Cast the iterator parameter to an int
int i;
i = atoi(lr_eval_string("{iterator}"));

// This will run 100 TPH
if ((i % 10) == 0)
{
    lr_start_transaction("Login");
    // Do login
    ...
    lr_end_transaction("Login", LR_AUTO);
}
And another example, to run 200 TPH, you can use iterator % 5:
// This will run 200 TPH
if ((i % 5) == 0)
{
    lr_start_transaction("Add Product");
    // Do Add Product
    ...
    lr_end_transaction("Add Product", LR_AUTO);
}
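If it helps to plan the divisors, here is a quick back-of-the-envelope check in Python (the rates are the ones from the question; the helper itself is mine):

max_tph = 1000  # the busiest transaction sets the pacing
targets = {"Login": 100, "Search Product": 1000,
           "Add Product": 200, "Payment": 200, "Logout": 100}
for name, tph in targets.items():
    # run the transaction on every (max_tph / tph)-th iteration
    print("%s: run when i %% %d == 0" % (name, max_tph // tph))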
My Python code goes like this:
def a():
    ...
    ...
    subprocess.call()
    ...
    ...

def b():
    ...
    ...
and so on.
My task:
1) If subprocess.call() returns within 3 seconds, my execution should continue the moment subprocess.call() returns.
2) If subprocess.call() does not return within 3 seconds, the subprocess.call() should be terminated and my execution should continue after 3 seconds.
3) Until subprocess.call() returns or 3 seconds finishes, the further execution should not take place.
This can be done with threads but how?
Relevant part of the real code goes like this:
...
cmd = ["gcc", "-O2", srcname, "-o", execname];
p = subprocess.Popen(cmd,stderr=errfile)//compiling C program
...
...
inputfile=open(input,'w')
inputfile.write(scanf_elements)
inputfile.close()
inputfile=open(input,'r')
tempfile=open(temp,'w')
subprocess.call(["./"+execname,str(commandline_argument)],stdin=inputfile,stdout=tempfile); //executing C program
tempfile.close()
inputfile.close()
...
...
I am trying to compile and execute a C program using Python.
I execute the C program using subprocess.call(), and if the C program contains an infinite loop, the subprocess.call() should be terminated after 3 seconds and the program should continue. I should be able to tell whether subprocess.call() was forcefully terminated or ran to completion, so that I can print the corresponding message in the code that follows.
The back-end gcc is on Linux.
On *nix, you could use a signal.alarm()-based solution:
import signal
import subprocess

class Alarm(Exception):
    pass

def alarm_handler(signum, frame):
    raise Alarm

# start process
process = subprocess.Popen(*your_subprocess_call_args)

# set signal handler
signal.signal(signal.SIGALRM, alarm_handler)
signal.alarm(3)  # produce SIGALRM in 3 seconds

try:
    process.wait()   # wait for the process to finish
    signal.alarm(0)  # cancel alarm
except Alarm:  # subprocess does not return within 3 seconds
    process.terminate()  # terminate subprocess
    process.wait()
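Note that signal.alarm() only accepts whole seconds, and the SIGALRM handler can only be installed in the main thread, which is part of why the threading.Timer() variant below is the portable choice.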
Here's a portable threading.Timer()-based solution:
import subprocess
import threading

# start process
process = subprocess.Popen(*your_subprocess_call_args)

# terminate process in 3 seconds
def terminate():
    if process.poll() is None:
        try:
            process.terminate()
        except EnvironmentError:
            pass  # ignore

timer = threading.Timer(3, terminate)
timer.start()
process.wait()
timer.cancel()
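To tell afterwards whether the child finished on its own or was killed, you can inspect the return code; on POSIX a negative value -N means the process was terminated by signal N. A small sketch continuing the code above:

if process.returncode < 0:
    print("terminated after timeout")  # killed by our SIGTERM
else:
    print("exited normally with code %d" % process.returncode)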
Finally, the code below worked:
import subprocess
import threading
import time

def process_tree_kill(process_pid):
    # kill the whole process tree (Windows taskkill)
    subprocess.call(['taskkill', '/F', '/T', '/PID', process_pid])

def main():
    cmd = ["gcc", "-O2", "a.c", "-o", "a"]
    p = subprocess.Popen(cmd)
    p.wait()
    print "Compiled"

    start = time.time()
    process = subprocess.Popen("a", shell=True)
    print(str(process.pid))

    # terminate process in timeout seconds
    timeout = 3  # seconds
    timer = threading.Timer(timeout, process_tree_kill, [str(process.pid)])
    timer.start()
    process.wait()
    timer.cancel()

    elapsed = (time.time() - start)
    print elapsed

if __name__ == "__main__":
    main()
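Note that taskkill is Windows-only, while the gcc part of the question suggests Linux. On POSIX the usual whole-tree equivalent is to start the child in its own process group and kill the group; a sketch under that assumption (the sleep command is a placeholder):

import os
import signal
import subprocess
import threading

def process_tree_kill_posix(pid):
    # kill the child's whole process group; requires the child to have
    # been started in its own group via preexec_fn=os.setsid
    os.killpg(os.getpgid(pid), signal.SIGTERM)

process = subprocess.Popen("sleep 100", shell=True, preexec_fn=os.setsid)
timer = threading.Timer(3, process_tree_kill_posix, [process.pid])
timer.start()
process.wait()
timer.cancel()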
If you're willing to convert your call to a Popen constructor instead of call (same way you are running gcc), then one way to approach this is to wait 3 seconds, poll the subprocess, and then take action based on whether its returncode attribute is still None or not. Consider the following highly contrived example:
import sys
import time
import logging
import subprocess

logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s', level=logging.INFO)

if __name__ == '__main__':
    logging.info('Main context started')
    procCmd = 'sleep %d' % int(sys.argv[1])
    proc = subprocess.Popen(procCmd.split())
    time.sleep(3)
    if proc.poll() is None:
        logging.warning('Child process has not ended yet, terminating now')
        proc.terminate()
    else:
        logging.info('Child process ended normally: return code = %s' % str(proc.returncode))
    logging.info('Main context doing other things now')
    time.sleep(5)
    logging.info('Main context ended')
And this results in different logging output depending upon whether the child process completed within 3 seconds or not:
$ python parent.py 1
2015-01-18 07:00:56,639 INFO Main context started
2015-01-18 07:00:59,645 INFO Child process ended normally: return code = 0
2015-01-18 07:00:59,645 INFO Main context doing other things now
2015-01-18 07:01:04,651 INFO Main context ended
$ python parent.py 10
2015-01-18 07:01:05,951 INFO Main context started
2015-01-18 07:01:08,957 WARNING Child process has not ended yet, terminating now
2015-01-18 07:01:08,957 INFO Main context doing other things now
2015-01-18 07:01:13,962 INFO Main context ended
Note that this approach above will always wait 3 seconds even if the subprocess completes sooner than that. You could convert the above into something like a loop that continually polls the child process if you want different behavior - you'll just need to keep track of how much time has elapsed.
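For instance, a polling loop along those lines might look like this (the sleep child and the 100 ms poll interval are arbitrary choices of mine):

import time
import subprocess

proc = subprocess.Popen(['sleep', '10'])  # hypothetical slow child
deadline = time.time() + 3
while proc.poll() is None and time.time() < deadline:
    time.sleep(0.1)        # check the child every ~100 ms
if proc.poll() is None:
    proc.terminate()       # still running past the deadline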
#!/usr/bin/python
import thread
import threading
import time
import subprocess
import os

ret = -1

def b(arg):
    global ret
    ret = subprocess.call(arg, shell=True)

thread.start_new_thread(b, ("echo abcd",))
start = time.time()
while (not (ret == 0)) and ((time.time() - start) <= 3):
    pass

if not (ret == 0):
    print "failed"
    elapsed = (time.time() - start)
    print elapsed
    thread.exit()
elif ret == 0:  # ran before 3 sec
    print "successful"
    elapsed = (time.time() - start)
    print elapsed
I have written the above code, which works and satisfies all my constraints.
The link https://docs.python.org/2/library/thread.html says:
thread.exit()
Raise the SystemExit exception. When not caught, this will cause the thread to exit silently.
So I suppose there should be no problem with orphan processes, blocked resources, etc. Please suggest.
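As a side note, on Python 3.3+ the standard library covers this case directly: subprocess.call() accepts a timeout argument, kills the child when it expires, and raises subprocess.TimeoutExpired. A minimal sketch (the command is a placeholder):

import subprocess

try:
    ret = subprocess.call(["./a"], timeout=3)  # placeholder command
    print("successful, return code %d" % ret)
except subprocess.TimeoutExpired:
    print("failed: terminated after 3 seconds")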
I'm trying to determine how long certain statements take to run in my Lua code.
My code looks like this:
function test(self)
    local timer1
    local timer2
    local timer3

    timer1 = os.time()
    print('timer1 start time is:' .. timer1)
    -- do some stuff.
    print('Timer1 end time is:', os.difftime(os.time(), timer1))

    timer2 = os.time()
    print('timer2 start time is:' .. timer2)
    -- do a lot of stuff
    print('Timer2 end time is:', os.difftime(os.time(), timer2))

    timer3 = os.time()
    print('timer3 start time is:' .. timer3)
    -- a lot of processing...
    print('Timer3 end time is:', os.difftime(os.time(), timer3))
end
This is what the output looks like:
timer1 start time is:1401798084
Timer1 end time is: = 0
timer2 start time is:1401798084
Timer2 end time is: = 0
timer3 start time is:1401798084
Timer3 end time is: = 2
Other things I've tried:
Lua - Current time in milliseconds
In the above post, I found this snippet of code:
local x = os.clock()
local s = 0
for i=1,100000 do s = s + i end
os.execute("sleep "..1)
print(string.format("elapsed time: %.2f\n", os.clock() - x))
I added the sleep time... but when it runs, I get the output:
elapsed time: 0.00
I'm clearly doing something wrong. If you have suggestions on how I can fix or improve this, I'm all ears. In the interim, I'm going to revisit the Lua site to read up on os.difftime() in case I'm using it incorrectly.
EDIT 1
I changed the test code to look like this:
local x = os.clock()
local s = 0
for i=1,100000 do
    s = s + i
    os.execute("sleep "..1)
end
print(string.format("elapsed time: %.2f\n", os.clock() - x))
and now I'm getting some values that make sense!
os.clock measures CPU time, not wall time. CPU time does not include time spent in sleep. So the script below still prints zero elapsed time:
local x = os.clock()
os.execute("sleep 60")
print(string.format("elapsed time: %.2f\n", os.clock() - x))
When you move os.execute into the loop, what you're probably measuring is the time it takes to fork a shell. The script below prints nonzero elapsed time, even though it is a short loop:
local x = os.clock()
for i=1,1000 do os.execute("true") end
print(string.format("elapsed time: %.2f\n", os.clock() - x))
Finally, you got zero elapsed time in the first loop because Lua is fast. Try changing the limit to 1000000:
local x = os.clock()
local s = 0
for i=1,1000000 do s = s + i end
print(string.format("elapsed time: %.2f\n", os.clock() - x))
This snippet does many rounds of addition and then executes one sleep call for one second.
for i=1,100000 do s = s + i end
os.execute("sleep "..1)
This snippet does the same amount of addition but sleeps for one second each time through the loop.
for i=1,100000 do
    s = s + i
    os.execute("sleep "..1)
end
That is a big difference: the second version forks a shell 100,000 times, so the CPU time it measures is dominated by those os.execute calls rather than by the additions.
How can I make the timeout 1 second for the wait_event_timeout function?

Function: wait_event_timeout(wq, condition, timeout);

How can I make the timeout 1 second? And if the function is called like this:

wait_event_timeout(queue, flag != 'n', 30*HZ);

what is the timeout?
The function wait_event_timeout takes its timeout value in jiffies. Use the constant HZ (the number of timer ticks per second) to specify the time in jiffies: HZ is the equivalent of one second, and 30 * HZ is the equivalent of 30 seconds. So for a 1-second timeout:

wait_event_timeout(wq, condition, HZ);
wait_event_timeout takes its timeout in jiffies, and HZ is an identifier defined in Linux that corresponds to 1 second, so n * HZ means n seconds. From there you can convert real-world time to jiffies, e.g. n milliseconds = n * HZ / 1000.
Just a remark: take into account that HZ differs from system to system. On most systems/kernels I know, HZ is set to 100, so HZ / 1000 evaluates to 0 in integer arithmetic; multiply first (n * HZ / 1000), and note that with HZ = 100 anything under 10 ms still rounds down to 0 jiffies.