I use the time command to measure the execution time of a C program, and I see that it spends a lot of time in kernel mode, although I expect it to run mostly in user mode. I don't know why, and I don't have any clue where to look for the problem.
Here is an example:
real 0m44.548s
user 0m19.956s
sys 1m19.944s
Here is some information about the test program; it is streamcluster from the PARSEC benchmark suite.
Application domain: data mining
Data sharing: low
Data exchange: medium
Parallelization model: data-parallel
Contains many pthread_mutexes and pthread_conditions
CPU bound
Few memory allocations and little writing to files
I run this program on a virtual machine.
Related
I'm doing some investigation into profiling scalability tests on OpenMPI, on Linux Ubuntu 18.04.
I can profile the benchmark with some useful MPI profiling tools like mpiP, Scalasca, etc. However, there is still an open question for me:
What is the kernel usage (time, memory, I/O, etc.) of an MPI job? I need to see kernel activity to profile the MPI tasks (processes) across different ranks. How can I profile kernel usage? I think everything the profilers mentioned above provide is from the user point of view, right?
I'm currently working on an IoT project and I want to log the execution of my software and hardware.
I want to log the events and then send them to some DB in case I need to have a look at my device remotely.
The work-in-progress IoT device will have to be as minimal as possible, so having to write very often to a flash memory module seems questionable to me.
I know that it will run the Nucleus RTOS on a Cortex-M4 with some modules connected through SPI.
Can someone with more expertise enlighten me?
Thanks.
You will have to estimate your hourly/daily/whatever data volume that needs to go into the log and extrapolate to the expected lifetime of your product. Microcontroller flash usually isn't made for logging, and thus it features neither enduring flash cells (usually some 10K-100K write cycles, compared to 1M or more for dedicated data chips; look it up in the MCU spec sheet) nor wear leveling. Wear leveling is any method that prevents software from writing to the same physical cell too frequently (which would, e.g., be the directory in a simple file system).
For your log you will have to create a fairly clever or complex scheme to work around these flash lifetime problems.
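One common scheme is a circular log spread evenly over a handful of reserved sectors, so that no single cell is erased more often than the others. Below is a minimal C sketch of the idea; the sector and record sizes are made-up example values, and the two flash_* functions are host-side stand-ins for whatever flash driver your HAL or RTOS actually provides:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LOG_SECTORS         8u      /* sectors reserved for the log (example)   */
#define SECTOR_SIZE         4096u   /* bytes per erasable sector (example)      */
#define RECORD_SIZE         64u     /* fixed-size log record (example)          */
#define RECORDS_PER_SECTOR  (SECTOR_SIZE / RECORD_SIZE)

/* Stand-ins for the MCU flash driver; replace them with your vendor's flash
 * API. Here the "flash" is just a RAM array so the sketch runs on a host. */
static uint8_t fake_flash[LOG_SECTORS][SECTOR_SIZE];

static void flash_erase_sector(uint32_t sector)
{
    memset(fake_flash[sector], 0xFF, SECTOR_SIZE);   /* erased flash reads as 0xFF */
}

static void flash_write(uint32_t sector, uint32_t offset, const void *data, uint32_t len)
{
    memcpy(&fake_flash[sector][offset], data, len);
}

static uint32_t cur_sector = 0;     /* sector currently being filled            */
static uint32_t cur_record = 0;     /* next free record slot in that sector     */

/* Append one record; writes rotate over all LOG_SECTORS, so every cell is
 * erased only once per full pass over the log area (simple wear leveling). */
static void log_append(const void *record)
{
    if (cur_record == 0)
        flash_erase_sector(cur_sector);              /* the expensive, wearing step */

    flash_write(cur_sector, cur_record * RECORD_SIZE, record, RECORD_SIZE);

    if (++cur_record >= RECORDS_PER_SECTOR) {
        cur_record = 0;
        cur_sector = (cur_sector + 1) % LOG_SECTORS; /* oldest sector gets overwritten */
    }
}

int main(void)
{
    char rec[RECORD_SIZE] = "boot: device started";
    log_append(rec);
    printf("logged one %u-byte record into sector %u\n",
           RECORD_SIZE, (unsigned)cur_sector);
    return 0;
}
```

With these example numbers (8 sectors of 64 records each, 10K erase cycles per sector), the reserved area survives roughly 8 x 64 x 10 000, i.e. about 5 million records; scale that against your expected log volume and product lifetime.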
But the problems don't stop there: usually the MCU isn't able to read from flash memory while writing to it, where "writing" means a prolonged (several microseconds up to milliseconds, depending on the chip) sequence of instructions controlling the internal flash state machine (programming voltage, saturation times, etc.) until the new values have reliably settled in the memory. And, maybe you guessed it, "reading" in this context also means fetching instructions; that is, you have to make sure that whatever code and interrupts may run during the flash write execute only from RAM, cache or other memories, and not from the normal instruction memory. It is doable, but the more complex the SW system that you are running above the HW layer, the less likely it will work reliably.
I have to measure the latency between a user-space program and the driver it interacts with. I basically send a packet through this application. The latency is between the write in user space and the corresponding write function in the kernel.
I used clock_gettime with CLOCK_MONOTONIC in user space and
getrawmonotonic in the kernel (driver), and when I look at the difference, it is huge (around 4 ms). So I am definitely using the wrong approach.
So, what are the best ways to do this?
To measure just a single context switch from user to kernel space, try using the TSC (Time Stamp Counter). It is available on x86 and ARM, in both user and kernel space.
More info about TSC on Wikipedia: https://en.wikipedia.org/wiki/Time_Stamp_Counter
A BSD-licensed implementation for x86 can be found here, and one for 64-bit ARM here.
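For instance, on x86 the counter can be read from user space with the rdtsc instruction; here is a minimal sketch using the __rdtsc() intrinsic that GCC and Clang expose via x86intrin.h (the timed operation is just a placeholder comment):

```c
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

int main(void)
{
    uint64_t start = __rdtsc();

    /* ... the operation you want to time, e.g. the write() to your driver ... */

    uint64_t end = __rdtsc();
    printf("elapsed: %llu TSC ticks\n", (unsigned long long)(end - start));
    return 0;
}
```

On the kernel side you can read the same counter (e.g. via get_cycles() or rdtsc_ordered() on x86) and take the difference of the raw tick values. Note that the result is in TSC ticks, so you still need the TSC frequency to convert it to nanoseconds, and on older CPUs the TSC may not be invariant across frequency changes.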
Also, as the comments suggested, consider using any standard tool available to measure round-trip latency, i.e. user-to-kernel and back.
If I were doing this, I would use ftrace, which is a tracing facility provided by the Linux kernel.
It can trace almost every function in the kernel.
It first logs the information into a ring buffer in memory, so it costs very little.
There is a very good document in the Linux kernel source code, "Documentation/trace/ftrace.txt"; you can also find it here.
1. Prepare the environment and configure ftrace.
2. Run the application.
3.0. In the application, bind to a CPU and raise the application's priority.
3.1. In the application, write something to the trace_marker (see the sketch after this list).
3.2. In the application, call the function which you want to test.
4. Get the log from the ring buffer.
5. Calculate the latency.
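Steps 3.0-3.1 from user space might look roughly like this (a minimal sketch; the debugfs mount point /sys/kernel/debug/tracing, the CPU number and the priority are assumptions you should adjust to your system):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* 3.0: pin the process to CPU 0 and give it a real-time priority */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);
    sched_setaffinity(0, sizeof(set), &set);

    struct sched_param sp = { .sched_priority = 50 };
    sched_setscheduler(0, SCHED_FIFO, &sp);        /* needs root/CAP_SYS_NICE */

    /* 3.1: drop a marker into the ftrace ring buffer */
    int marker = open("/sys/kernel/debug/tracing/trace_marker", O_WRONLY);
    if (marker >= 0) {
        const char *msg = "before write\n";
        write(marker, msg, strlen(msg));
    }

    /* 3.2: call the function under test, e.g. the write() to your driver */
    /* do_the_write_under_test(); */

    if (marker >= 0) {
        write(marker, "after write\n", 12);
        close(marker);
    }
    return 0;
}
```

The marker lines then show up in the trace output with the same timestamps as the kernel function entries, so the user-to-kernel latency can be read directly from the trace.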
My C program, which sorts data, runs 10x slower the first time than on subsequent runs. It sorts a file of integers, and even if I change the numbers, the program still runs fast. When I restart the PC, the very first run is again 10x slower. I use time to measure the time.
The operating system keeps the data in RAM even if it's not needed anymore (this is called "caching"), so when the program runs again, it gets all the data from there and there's no disk I/O. Even when you change the data, that change happens in RAM first, and it stays there even after it's written to the file.
It doesn't stay in RAM forever though, mind you. If the memory is needed for something else, the cache is deleted. At that point, a disk access is needed (and it's cached in RAM again at that point.)
This is why first access after a reboot is always slow; the data hasn't been cached yet since it was never read from the file.
You have to make hypotheses and confront them with reality. The first you can reasonably make is that this smells a lot like a caching issue!
Ask yourself these questions:
Does my data fit in free RAM (i.e. is my file cached by the OS FS cache)?
Does my data fit in the CPU data cache?
Does my data fit in the HDD's internal cache?
The easiest hypothesis to discard is the FS cache. Under Linux, just issue sync; echo 3 > /proc/sys/vm/drop_caches between each call to your program. The first command makes sure the cached data makes it to the physical medium (hard drive); the second drops the content of the filesystem cache from memory.
The 'physical medium' might be the HDD cache itself, so beware... Under Linux you can disable this "write-back" cache with the command hdparm -W 0 <device>; for instance, if you are working with drive sda, hdparm -W 0 /dev/sda will do the job. You might want to re-enable it after you are finished with your tests :)
Another hypothesis is the CPU cache; have a look at How can I do a CPU cache flush in x86 Windows? and How to clear CPU L1 and L2 cache
Well, it may or may not be one of those, but it doesn't hurt trying :)
If your program does network access, that could be the reason for the initial delay. Many network protocols need time to set things up. Some examples:
DNS: if your program does any network access, chances are it needs to resolve a hostname to an IP address. The first time, it needs at least one network round trip to populate a local cache; subsequent requests will be faster.
Networked filesystems (NFS, CIFS and others): opening files can happen through the network.
Even some seemingly innocuous library functions can require network access: the users list for the host can be on a remote directory server.
Apart from this, you could use some low-level tracing tool to see where the time is spent. On Linux a basic tool is strace -r. There is probably some similar tool for other systems. Your compiler most likely also comes with a profiler (e.g. gprof for GCC), or you could use Valgrind.
I had a very similar issue, but I wasn't loading a large file, so I was baffled by the long first execution time (caching couldn't have been the issue).
This answer pointed me in the right direction: it was my real-time anti-virus protection. Every time I recompiled the program, it would re-scan it as potentially malicious. I added my project path as an "Exception" to Avira's (in my case) real-time virus protection.
Program execution on the first run is now lightning quick!
This is nothing new; not just your program, many popular commercial software packages face this problem.
To start with, check this MATLAB article about slow first-time execution.
In the case of other programming languages that run on a virtual machine, like C# or Java, this is quite common.
http://en.wikipedia.org/wiki/Just-in-time_compilation#Startup_delay_and_optimizations
Caching is a good reason for this to happen in C, but 10x is still quite a large factor. It might also be that your system was loading other resources after the restart.
You should run the program, say, 10 minutes after the restart for better results; all the startup applications will have been loaded by then. (The 10 minutes depend on the number of startup applications and the time each of them takes to start.)
This is because of compiler optimization: it caches the result for temporal locality and the activation record is saved; time is also saved because the bound objects do not have to be reloaded again during the linking stage.
There are two components to the time measured.
If you are reading a file from disk, loading it into memory, and sorting:
1) Time to read the file and store it in an array
2) Time to sort
Were these measured separately?
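If not, a quick way to separate them is to time each phase with clock_gettime; a minimal sketch (the file name and capacity are placeholders):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Seconds elapsed since *t0, using the monotonic clock. */
static double seconds_since(const struct timespec *t0)
{
    struct timespec t1;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0->tv_sec) + (t1.tv_nsec - t0->tv_nsec) / 1e9;
}

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    enum { MAX_N = 1000000 };               /* placeholder capacity */
    int *data = malloc(MAX_N * sizeof *data);
    FILE *f = fopen("numbers.txt", "r");    /* placeholder file name */
    if (!data || !f) {
        perror("setup");
        return 1;
    }

    struct timespec t0;

    /* Phase 1: read the integers from disk into the array. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t n = 0;
    while (n < MAX_N && fscanf(f, "%d", &data[n]) == 1)
        n++;
    fclose(f);
    printf("read %zu ints: %.3f s\n", n, seconds_since(&t0));

    /* Phase 2: sort the array. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    qsort(data, n, sizeof *data, cmp_int);
    printf("sort:          %.3f s\n", seconds_since(&t0));

    free(data);
    return 0;
}
```

If only the read phase is 10x slower on the first run, the slowdown comes from disk I/O and caching rather than from the sort itself.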
Can you check this out?
Invalidating Linux Buffer Cache
Instead of doing a restart, if repeating the experiment after clearing the cache gives the same result, then you can infer that file buffer caching effects were not a factor.