I am using C on Linux, and my program is both CPU-intensive and I/O-intensive. The time command shows that my program has a lot of overhead:
real 1m4.639s
user 0m53.929s
sys 0m9.747s
Is it possible to find out what accounts for the 'sys 0m9.747s' and to reduce it?
=================================================
Excuse me if this question isn't easy to answer without the code; my code is too long to post here, so any tips or clues will do. Thank you.
The system CPU time is the time spent in the kernel on behalf of your process, executing system calls. You could use strace to find out which system calls your process is making.
Maybe you have very many small read system calls (or write ones). You might lower their number by increasing your buffer sizes, so each read and write transmits more bytes. See this answer and that one to related questions.
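As a minimal sketch of that idea (the filename data.bin and the 64 KiB buffer size are arbitrary placeholders, not from the question): reading in large chunks means each read system call transfers far more data, so strace would show far fewer calls.

```c
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char buf[64 * 1024];                 /* bigger buffer => fewer syscalls */
    int fd = open("data.bin", O_RDONLY); /* hypothetical input file */
    if (fd < 0)
        return 1;

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        /* process n bytes of buf here */
    }

    close(fd);
    return 0;
}
```

Even simpler, stdio's fread already buffers for you, and setvbuf lets you enlarge that buffer if needed.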
You could also use mmap to map files into (virtual) memory; this can be a better approach for some kinds of disk I/O.
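A minimal mmap sketch under the same assumptions (hypothetical filename, read-only, file assumed non-empty): the kernel pages the file in on demand instead of servicing many read calls.

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);  /* hypothetical input file */
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    /* Map the whole file; pages are faulted in as they are touched. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)  /* touch every byte */
        sum += p[i];

    munmap(p, st.st_size);
    close(fd);
    return sum != 0;  /* use sum so the loop isn't optimized away */
}
```

Whether this beats buffered reads depends on the access pattern; for a single sequential scan, large read calls are often just as good.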
But I wouldn't focus specifically on system time. It accounts for only about 15% of the total time, which is a reasonable ratio.
I would suggest profiling your program (using gprof or oprofile) to find where the bottlenecks are.
That is an extremely open-ended question, and there is no specific right answer without more information. That said, I would recommend using something like valgrind to profile your application, figure out which functions in your code take the most time, and work on optimizing them.
With that out of the way, you should really concentrate on the time spent in user land. With 53 seconds of user time versus 9 seconds of system time, you'll probably find far more to optimize there; your optimization effort is better spent on it.
Has anyone used this tool? http://www.pcwintech.com/about-cleanmem
With a simple C program that calls malloc and then sleeps forever on Windows, I could see the process's memory usage go down when I ran CleanMem.
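For reference, the test program described presumably looks something like this hedged reconstruction (the allocation size is arbitrary; this is not the asker's exact code):

```c
#include <stdlib.h>
#include <string.h>
#include <windows.h>

int main(void)
{
    size_t size = 100 * 1024 * 1024;  /* 100 MB, arbitrary */
    char *p = malloc(size);
    if (p == NULL)
        return 1;
    memset(p, 0xAB, size);  /* touch the pages so they are actually committed */

    for (;;)
        Sleep(30000);       /* sleep forever; watch the process in Task Manager */
}
```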
Questions:
Is this tool creating an illusion by moving process memory to the system cache (using the Windows API)?
If that is the case, won't everyone using C prefer to run CleanMem instead of calling free? (I don't agree with this: a memory leak is a memory leak unless you call free.)
Does any similar tool exist for Linux?
This program doesn't actually do anything. The author knows just enough to be dangerous but doesn't really know how memory works in Windows. This is probably my favorite line on the page you linked:
Warning: Memory Terminology in Windows is completely screwed. System Cache could mean something else, perhaps Memory Cache is better? as proof of this confusing way the memory has been labeled in windows, in Windows XP the PF usage in the task manager is actually commit charge, not page file usage
If you really could prevent Windows from writing to the page file, all you would succeed in doing is making programs run out of memory and crash.
This line is also hilarious:
CleanMem WILL NOT make your system faster. What CleanMem does, again, is help avoid the use of the page file on the hard drive, which is where your slow down comes from. There have been users including my self who have noticed a smoother system. A placebo effect perhaps? Who knows. I do know that CleanMem hurts nothing, and does help, to a point.
Edit
One more:
I think I should also clarify, I am no memory expert.
I have a network program which involves several interacting threads and a thread pool for overlapped network I/O. I'm compiling with MinGW, which is gcc for Windows.
It works 100% fine without compiler optimization, across several different machines; however, when I turn optimization on, it breaks.
Should this be expected to happen, or is this revealing a bug that I need to fix?
The most likely explanation is that it is revealing a bug that you need to fix. It is most likely a race condition in the threading, but it's also possible that it's an aliasing violation.
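As an illustration of the first kind of bug (a classic pattern, not taken from your code): a thread that busy-waits on a plain int flag can hang only when optimization is on, because the compiler is allowed to hoist the load out of the loop when nothing in the loop can change the flag.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* 'done' is a plain int, so at -O2 the compiler may read it once and
 * spin forever, since nothing inside the loop writes to it. */
static int done = 0;   /* should be volatile, an atomic, or lock-protected */

static void *worker(void *arg)
{
    (void)arg;
    while (!done)      /* optimizer may hoist this load out of the loop */
        ;              /* spin */
    puts("worker saw done");
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);
    done = 1;          /* may never be observed by worker at -O2 */
    pthread_join(t, NULL);
    return 0;
}
```

Built with -O0 this usually terminates; with -O2 the worker may spin forever. Declaring done volatile, or better, using atomics or proper locking, restores correct behavior.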
One trick that might help you localize the problem, especially if you can reproduce it easily, is to do a binary search for the affected file. Compile half your files with optimization and half without, and see whether the code works or crashes; that narrows the problem to half your code. Repeat, narrowing down the files at fault, until you localize it to a single file. If needed, split that file in two and move code from one half to the other to find the chunk that fails when optimized but not when unoptimized.
I'm quite interested in getting "stuck in" to some Unix source code, say Fedora or Ubuntu.
In practical terms, how would one "rewrite" some part of the Unix OS? I presume you would need two machines, a dev machine and a test machine? Would you need to reinstall the OS on each modification of a .c file? How would I edit a file, recompile it, and so on?
What resources are there for learning which parts of a Unix OS/kernel correspond to which C files (I presume there is no C++), and how to find them?
Thanks in advance for any help.
PS: My motivation for doing this is to eventually learn more about the lower-level fundamentals of the Unix OS, so that I can try to get into programming high-frequency trading systems.
I think it would probably be a good idea to have some kind of virtual machine to experiment with; that way you can take a snapshot, apply your changes, and still go back without much effort. It also allows you to simulate communication between PCs in a simple fashion.
First you need to know what you're looking for: you want to download and look at the Linux kernel, which is the same for Fedora and Ubuntu (and all other GNU/Linux distributions). Second, you might want to start with something easy, like downloading the kernel, configuring and compiling it, and booting it. Once you can do that, you can move up from there.
Can anybody help? How can I load big files (2-5 MB) into SynEdit/UniSynEdit without the application getting stuck, so that it stays fast? Is there a virtual mode?
Thanks!!!
If resizing is slow, the problem is not loading but rendering. The text is already in memory, but the component has to compute where each line begins on screen. If this part of the editor is not optimized, it can be slow (especially if it allocates a lot of small strings for each line or word on screen).
The bottleneck of this component is when you use text word wrapping: the TSynWordWrapPlugin.DoWrapLine method (which does all the work) relies on the highlighter and tokenizes all the text. I suspect that with a profiler you'll see that most of the time is spent here, but I don't see any other way of handling it without a major code modification. There is no so-called "virtual mode" in SynEdit: it loads everything and renders all lines in memory.
You could try the Letterpress version, which claims to be faster than the original SynEdit. But it uses the same wrapping logic, so I suspect there won't be a huge difference.
If you are using a Delphi 6 or 7 version of the compiler, please use FastMM4 as your memory manager: SynEdit does a lot of memory allocation, and the older BorlandMM is much slower than FastMM4. With modern versions of Delphi, FastMM4 is the default memory manager.
Kindly tell me: is it possible that a program I run at an interval of 30 seconds works for some time and then gives an error that it did not give before? Thanks.
How do I stop this?
If my question is wrong, kindly don't vote me down; just tell me. My intention is not to hurt anyone or to ask a stupid question, as I really appreciate you guys.
Thanks.
Yes, it is possible for a program to work for a period of time and then fail.
Have you tried Valgrind?
Yes, it's possible. Just because your program appears to "work" doesn't mean that it does not have bugs.
There are many types of mistakes that you can make, especially when working with memory (pointers, arrays, and the like), that may sometimes silently work anyway, and at other times may completely crash. It's largely arbitrary, based on whatever values happen to be present in memory at the addresses that you are erroneously accessing.
Use a tool like Valgrind and/or GDB to debug these sorts of issues.
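For example, here is a deliberately buggy sketch of the kind of mistake described above (not the asker's code): the off-by-one write lands past the end of the array, may appear to work for a long time, and is flagged immediately by Valgrind as an invalid write.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *a = malloc(10 * sizeof *a);
    if (a == NULL)
        return 1;

    /* Off-by-one: the loop writes one element past the end of the array.
     * Depending on what happens to live at that heap address, this may
     * silently "work" for a long time before it eventually crashes. */
    for (int i = 0; i <= 10; i++)
        a[i] = i;

    printf("%d\n", a[5]);
    free(a);   /* heap corruption may only surface here, or never */
    return 0;
}
```

Running this under valgrind reports the invalid write at the exact line, even on runs where the program otherwise exits normally.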
That is not specific enough to answer without more details. However, the best thing for you to do would be to run your program in a debugger so you can examine what it's doing when it dies or when a problem occurs, or just to help you walk through it. GDB is a popular free debugger.