I want to measure my program's (.exe) startup time: the time from the moment of the click or [Enter] press (if started from a console, or any other way) until the moment when program startup is done and its code can execute with all structures set up (which probably means the point when the first statements in main() are executed?).
I want to measure it programmatically, from inside my own code. So I think I would need to read the exact time at which the program was launched and then subtract that from the time at which the first line of code executes. (Sorry for my bad English.) How can I do this?
Do you want to measure from the click or [Enter], which includes the shell overhead (cmd.exe or Windows Explorer), or from program startup?
Programmatically you can only measure time from process creation, so that won't include finding, reading and mapping the .exe file or any DLLs. The timing will vary depending on what is already mapped in virtual memory. It will include initialisation of the C RTL, but not much else.
Probably the best you can do is GetProcessTimes.
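For example, a minimal sketch (Windows-specific, assuming the measurement is taken right at the top of main()): ask the kernel for the process creation time via GetProcessTimes and subtract it from the current time.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* First statements of main(): compare "now" against the
       kernel-recorded process creation time. */
    FILETIME creation, exitTime, kernelTime, userTime, now;
    GetSystemTimeAsFileTime(&now);

    if (GetProcessTimes(GetCurrentProcess(), &creation, &exitTime,
                        &kernelTime, &userTime)) {
        ULARGE_INTEGER c, n;
        c.LowPart = creation.dwLowDateTime; c.HighPart = creation.dwHighDateTime;
        n.LowPart = now.dwLowDateTime;      n.HighPart = now.dwHighDateTime;
        /* FILETIME values are in 100-nanosecond units */
        printf("Startup took ~%.3f ms\n", (n.QuadPart - c.QuadPart) / 10000.0);
    }

    /* ... rest of the program ... */
    return 0;
}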
The problem is that even the parent process does not necessarily wait for the child process initialisation to complete -- it could, using WaitForInputIdle, but if you are using standard tools like Windows Explorer then you're stuck with that. I can't see any way to measure the shell overhead without writing your own launcher.
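If writing your own launcher is an option, a rough sketch might look like this (child.exe is a placeholder for the program being measured; WaitForInputIdle is only meaningful for GUI processes with a message loop):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    STARTUPINFOA si = { sizeof si };
    PROCESS_INFORMATION pi;
    char cmd[] = "child.exe";   /* placeholder: the program to measure */
    DWORD t0 = GetTickCount();

    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return 1;

    /* Returns once the process has started and is waiting for input;
       for console programs it returns immediately. */
    WaitForInputIdle(pi.hProcess, INFINITE);
    printf("Startup took ~%lu ms\n", (unsigned long)(GetTickCount() - t0));

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}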
I have a Tcl/Tk desktop application with C code, and one of the requirements is to change the system time. In the background there are threads running from the C code, and after commands from the Tcl code. Whenever I change the time to an earlier time, the system hangs,
e.g. 05:50:12 -> 05:45:12; I also get weird behavior when going forward in time. I'm running Lubuntu. I'm not sure what to do in this situation; I ran some tests and it seems the after command keeps on waiting after I set the time back.
To change the time I use exec date --set="STRING" from the Tcl code.
Tcl depends on the system time (converted to seconds from the start of the Unix epoch) increasing fairly close to monotonically for the correct behaviour of a number of things, but most particularly anything in the after command. Internally, after computes the absolute time that an event should happen and only triggers things once that time is reached, so that things being triggered early (which can happen because of various OS events) don't cause problems. If you set the system time back a long way, Tcl will wait until the absolute time is reached anyway, which will look a lot like a hang.
Just synch your clock with NTP (i.e., switch on ntpd) and stop fiddling with the system clock by hand.
I'm working on a program which may spawn multiple child processes, and I need to get precise information about the CPU time used by each child process, even if there are several child processes running simultaneously. I'm doing this using wait4(2) on a separate thread of the parent process, which works quite well.
However, this approach provides the total time spent by a specific child process, and I'm only interested in the amount of time spent after a particular event, namely the child process' first output to stdout. I've looked into other ways of getting the CPU time of child processes, such as getrusage(2) and times(3), but these don't seem to be able to distinguish between multiple child processes' times, and instead provide the sum of all child processes' times.
I'm working on a text editor application that lets users run scripts and code in a variety of different languages, and the app has a built-in code timing feature. The app relies on bash scripts to run the user's code, and the first thing my bash scripts do is output a start-of-heading byte (0x02). After this the bash script does whatever it needs to do to run the user's code, and that is the thing I want to time. Bash may do a bit of initialization (to set up PATH variables etc.) which may take 30 or 40 ms to complete, and I don't want that initialization to be timed along with the rest. If the user's code is, for instance, a simple Hello World program in C, the timing feature might display something like 41 ms instead of the actual 1 ms it took to run their code.
Any ideas on how this might be done?
Thanks :)
A couple of possible solutions come to mind. They don't get CPU time after first output exactly, but they may avoid the problem you're dealing with.
The first is to get rid of the bash scripts and just do the equivalent work in your program before running the user's code (between fork() and exec(), for example). That way the child process' CPU time from wait4() doesn't include your extra setup.
Another possibility is to write a simple application that does nothing but run the user's application and report its CPU time back to your main application. That runner application can then be called from your scripts to run the user's program, rather than calling the user's program directly. The runner application might itself use fork()/exec()/wait4() to run the user's program, and could report the information from wait4() to your main program through any of a variety of means such as a named pipe, message queue, socket, or even just writing the information to a file your main program can open afterward. That way your bash scripts can do work both before and after running the user's program that won't be included in the CPU time reported by the runner application. You'd probably want the runner to accept an argument like the name of a pipe or an output file in addition to the user's program's path and arguments so that you can control how the information is reported -- that way you could run more than one instance of the runner application and still keep the information they report separate.
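A minimal sketch of such a runner (reporting to stdout here, rather than to a pipe or file as described above) might look like this:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        execvp(argv[1], &argv[1]);   /* run the user's program */
        perror("execvp");
        _exit(127);
    }

    int status;
    struct rusage ru;
    if (wait4(pid, &status, 0, &ru) < 0) {   /* rusage is for this child only */
        perror("wait4");
        return 1;
    }

    double user_ms = ru.ru_utime.tv_sec * 1000.0 + ru.ru_utime.tv_usec / 1000.0;
    double sys_ms  = ru.ru_stime.tv_sec * 1000.0 + ru.ru_stime.tv_usec / 1000.0;
    printf("user=%.3f ms sys=%.3f ms\n", user_ms, sys_ms);
    return 0;
}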
If you do want to include the work done by the script, but not the time taken to load bash, then you could signal the main program by echoing something to a pipe from the bash script before and after the parts you want to time. The main program can then measure the time between the start and stop signals, which will at least get you wall-clock time (though not actual CPU time). Otherwise I'm not sure there's a way to perfectly measure the CPU time for just part of the script without using a modified bash (which I'd avoid if possible).
I am designing a file system in user space and need to test it. I do not want to use the available benchmarking tools as my requirements are different. So to test the file system I wish to simulate file access operations. To do this, I first use the ftw() function to walk through one of my existing (experimental) file systems and list all the files and directories in a file.
Then I invoke a simulator to simulate file access by a number of processes. The simulator randomly starts a process, i.e. it forks a thread which does what a real process would have done. The thread randomly selects a file operation (read, write, rename etc.) and selects arguments for this operation from the list (generated by ftw()). The thread performs a number of such file operations and then exits, marking the end of a process. The simulator continues to spawn threads; thread execution can overlap just as real processes do. As operations are performed by threads, files get inserted, deleted and renamed, and this is updated in the list of files.
I have not yet started coding. Does the plan seem sane? I am also not sure how to code the simulator... how will it spawn threads over a period of time? Should I be using some random delay to do this?
Thanks
Yep, that seems fairly reasonable to me. I would consider attempting to impose a statistical distribution over your file operations (and accesses to particular files) that is somehow matched to your expected workload. You might be able to find some statistics about typical filesystem workloads as a starting point.
That sounds about right for a decent test case just to make sure it's working. You could use sleep() to wait between spawning threads, or just spawn them all at once and have them do an operation, wait a bit, do another operation, and so on. IMO if you hit it hard with a lot of requests and it works, then there's a good chance your filesystem will do just fine. Take an example from PostMark, which does nothing but append like crazy to different files, and from other benchmarks that do random-access reads/writes in different locations to make sure the page has to be read from disk.
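A minimal sketch of the spawning loop (assuming pthreads; do_random_file_op() is a placeholder for picking an entry from the ftw() list and applying a random operation to it):

#define _DEFAULT_SOURCE   /* for usleep/rand_r on glibc */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NUM_PROCESSES   10
#define OPS_PER_PROCESS 5

/* Placeholder: choose a file from the list and read/write/rename it. */
static void do_random_file_op(int proc_id)
{
    printf("simulated process %d: performing a file operation\n", proc_id);
}

static void *process_sim(void *arg)
{
    int id = (int)(intptr_t)arg;
    unsigned int seed = (unsigned int)id;
    for (int i = 0; i < OPS_PER_PROCESS; i++) {
        do_random_file_op(id);
        usleep((rand_r(&seed) % 100) * 1000);   /* "think time" between operations */
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_PROCESSES];
    for (int i = 0; i < NUM_PROCESSES; i++) {
        pthread_create(&threads[i], NULL, process_sim, (void *)(intptr_t)(i + 1));
        usleep((rand() % 500) * 1000);          /* random inter-arrival delay */
    }
    for (int i = 0; i < NUM_PROCESSES; i++)
        pthread_join(threads[i], NULL);
    return 0;
}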
Is there any way to determine when a context switch takes place without using profilers? I have written a C program to monitor the time taken for different processes in a program to finish execution. I want to show the process/thread context switching as well: the time at which the switch takes place, and from prev_id -> curr_id. These three pieces of information would be helpful.
You can observe voluntary_ctxt_switches and nonvoluntary_ctxt_switches values from the /proc/self/status file.
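For example, a small sketch (Linux-specific) that reads those two counters for the current process; note that this only gives cumulative counts, not the time of each switch or the IDs involved:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    while (fgets(line, sizeof line, f)) {
        /* Lines look like "voluntary_ctxt_switches:   123" */
        if (strncmp(line, "voluntary_ctxt_switches", 23) == 0 ||
            strncmp(line, "nonvoluntary_ctxt_switches", 26) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}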
I have a project written in C and I would like to know if there is a simple way to profile its execution time and memory usage under Windows.
Thanks in advance.
You can launch Process Monitor, run your program, and then go back to ProcMon and use Tools / Process Activity Summary to get an overview of the time and memory used by your program.
Timing information is easy enough: create a .bat file invoking the program, and output the current system time before the program starts and after it ends... something like this (with your_program.exe as a placeholder):
echo %time%
your_program.exe
echo %time%
As for memory consumption, I would say it's a bit more involved, although definitely doable. You might try something along these lines...
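For instance, one possible approach (a minimal sketch, assuming it's acceptable to query the counters from inside the program itself) is GetProcessMemoryInfo from psapi (link with psapi.lib):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    PROCESS_MEMORY_COUNTERS pmc;

    /* ... run the interesting part of the program first ... */

    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("WorkingSetSize:     %llu bytes\n", (unsigned long long)pmc.WorkingSetSize);
        printf("PeakWorkingSetSize: %llu bytes\n", (unsigned long long)pmc.PeakWorkingSetSize);
        printf("PagefileUsage:      %llu bytes\n", (unsigned long long)pmc.PagefileUsage);
    }
    return 0;
}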