Can anybody help? How can I load big files (2-5 MB) into SynEdit/UniSynEdit without the application getting stuck, so that it stays fast? Is there a virtual mode?
Thanks!!!
If resizing is slow, the problem is not loading, but rendering. The text is already in memory, but the component has to compute where each line begins on screen. If this part of the editor is not optimized, it can be slow (especially if it allocates a lot of small strings for each line or word on screen).
The bottleneck of this component is when you use word wrapping: the TSynWordWrapPlugin.DoWrapLine method (which does all the work) relies on the highlighter and will tokenize all the text. I suspect that with a profiler you'll see that most of the time is spent there. But I don't see any other way of handling it without a major code modification. There is no so-called "virtual mode" in SynEdit: it loads everything and holds all lines in memory.
You could try the Letterpress version, which claims to be faster than the original SynEdit. But it uses the same wrapping logic, so I guess there won't be a huge difference.
If you are using a Delphi 6 or 7 compiler, please use FastMM4 as your memory manager: SynEdit does a lot of memory allocation, and the older BorlandMM is much slower than FastMM4. With modern versions of Delphi, FastMM4 is the default memory manager.
http://www.pcwintech.com/about-cleanmem
Has anyone used this tool?
With a simple C program that calls malloc and then sleeps forever on Windows, I could see the process's memory usage go down when I ran CleanMem.
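My test program was roughly the following minimal sketch (the 200 MB size is an arbitrary choice for the experiment):

```c
/* Minimal test: allocate a block, touch it so it lands in the working set,
   then sleep forever so its memory usage can be watched in Task Manager.
   (Sketch of my experiment; the 200 MB figure is arbitrary.) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <windows.h>

int main(void)
{
    size_t size = 200u * 1024 * 1024;          /* 200 MB */
    char *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "malloc failed\n");
        return 1;
    }
    memset(p, 1, size);                        /* touch every page so it is really resident */
    printf("allocated %lu MB, sleeping...\n", (unsigned long)(size >> 20));
    Sleep(INFINITE);                           /* keep the process alive while CleanMem runs */
    return 0;
}
```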
Questions:
Is this tool just giving an illusion by moving process memory to the system cache (using the Windows API)?
If that is the case, then when using C everyone would prefer to run CleanMem instead of calling free (I don't agree with this; a memory leak is a memory leak unless you call free).
Does any similar tool exist for Linux?
This program doesn't actually do anything. The author knows just enough to be dangerous but doesn't really know how memory works in Windows. This is probably my favorite line on the page you linked:
Warning: Memory Terminology in Windows is completely screwed. System Cache could mean something else, perhaps Memory Cache is better? as proof of this confusing way the memory has been labeled in windows, in Windows XP the PF usage in the task manager is actually commit charge, not page file usage
If you really could prevent Windows from writing to the page file, all you could succeed in doing is making programs run out of memory and crash.
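For what it's worth, utilities like this typically just call the documented working-set APIs against every process. I haven't inspected CleanMem, so this is an assumption, but the core of such a tool is roughly this sketch (applied only to the current process here):

```c
/* Sketch of what "memory cleaner" tools generally do: trim a process's working
   set so its pages move to the standby list or the page file. Nothing is freed;
   the cost is simply deferred to the next page fault.
   (Assumption: CleanMem works roughly this way; I have not inspected it.) */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* A real tool would enumerate all processes; this only trims itself. */
    HANDLE h = GetCurrentProcess();

    /* Passing (SIZE_T)-1 for both limits asks Windows to remove as many
       pages as possible from the working set. */
    if (!SetProcessWorkingSetSize(h, (SIZE_T)-1, (SIZE_T)-1))
        fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n",
                (unsigned long)GetLastError());

    return 0;
}
```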
This line is also hilarious:
CleanMem WILL NOT make your system faster. What CleanMem does, again, is help avoid the use of the page file on the hard drive, which is where your slow down comes from. There have been users including my self who have noticed a smoother system. A placebo effect perhaps? Who knows. I do know that CleanMem hurts nothing, and does help, to a point.
Edit
One more:
I think I should also clarify, I am no memory expert.
For a few years now I have been working on a system that currently stores its data in a database. It is under quite high demand, with millions of transactions.
There is no need to do this, but purely for fun I have long wondered just how fast I could make it if I wrote the whole thing in C, reading from and writing to disk directly. I know this is a little crazy.
All of the data fits in memory, so the biggest issue is going to be somehow storing a transaction log that can be replayed if the system crashes.
I am wondering what people with more experience in C than I think about this.
If I understand the question correctly, I can see two options:
You could look at something like SQLite, which gives both the "written in C" and fast execution parts, in addition to handling your storage to disk. It is a file-based database and is very fast and resilient against system/program crashes.
You could keep the live copy in memory and log all your data to disk yourself, but if you store it as SQL transactions the log will be larger than the equivalent raw data. The trade-off is that something like SQLite will likely have more processing overhead than your hand-coded in-RAM storage, while your own raw (non-SQL) log may have less to write to disk.
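As a rough illustration of the second option, here is a minimal sketch of an append-only log that fsyncs at each commit point. The record layout, file name, and use of POSIX I/O are assumptions for the example, not a complete design:

```c
/* Minimal append-only transaction log: append a fixed-size record, then fsync
   so the record survives a crash. Recovery would replay the file in order.
   (Illustrative sketch only; record format and file name are made up.) */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

typedef struct {
    uint64_t txn_id;        /* monotonically increasing transaction id */
    uint32_t op;            /* application-defined operation code */
    uint32_t payload_len;   /* number of payload bytes actually used */
    char     payload[256];
} log_record;

/* Append one record and force it to stable storage before acknowledging. */
static int log_append(int fd, const log_record *rec)
{
    if (write(fd, rec, sizeof *rec) != (ssize_t)sizeof *rec)
        return -1;
    return fsync(fd);        /* durability point: a crash after this keeps the record */
}

int main(void)
{
    int fd = open("txn.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return 1;

    log_record rec = { .txn_id = 1, .op = 42 };
    const char *data = "example payload";
    rec.payload_len = (uint32_t)strlen(data);
    memcpy(rec.payload, data, rec.payload_len);

    int rc = log_append(fd, &rec);
    close(fd);
    return rc == 0 ? 0 : 1;
}
```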
I have a network program which involves several interacting threads and a thread pool for overlapped network I/O. I'm compiling with MinGW, which is gcc for Windows.
It works 100% fine without compiler optimization, across several different machines, however when I turn optimization on it breaks.
Should this be expected to happen, or is this revealing a bug that I need to fix?
The most likely explanation is that it is revealing a bug that you need to fix. It is most likely a race condition in the threading, but it's also possible that it's an aliasing violation.
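As an illustration of the kind of race that only surfaces with optimization, consider this hypothetical fragment (not your code). Unoptimized it usually "works"; with -O2 the compiler is entitled to read the flag once and spin forever:

```c
/* Hypothetical data race that often breaks only under optimization:
   `done` is a plain int, so the optimizer may cache it in a register
   and never see the other thread's write. Fix: use an interlocked/atomic
   flag or a proper synchronization object. */
#include <windows.h>
#include <stdio.h>

static int done = 0;                     /* unsynchronized shared flag (the bug) */

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    Sleep(100);                          /* simulate some work */
    done = 1;                            /* unsynchronized write */
    return 0;
}

int main(void)
{
    HANDLE t = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    if (t == NULL)
        return 1;

    while (!done)                        /* unsynchronized read: may become while(1) when optimized */
        ;

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    puts("finished");
    return 0;
}
```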
One trick that might help you localize the problem, especially if you can reproduce it easily, is to do a binary search to find the affected file. Basically, compile half your files with optimization and half without, and see whether the code works or crashes. That localizes the problem to half your code. Repeat, narrowing down the files with the issue, until you localize it to a single file. If needed, split that file in two and move code from one file to the other to figure out which chunk of code fails when optimized but not when unoptimized.
I am using C on Linux, and my program is both CPU-intensive and I/O-intensive. Running it under the time command shows that it has a lot of overhead:
real 1m4.639s
user 0m53.929s
sys 0m9.747s
Is it possible to find out what accounts for the 'sys 0m9.747s' and to reduce it?
=================================================
Excuse me if this question isn't easy to answer without the code, but my code is too long to post here, so any tips or clues will do. Thank you.
The system CPU time is the time spent in the kernel on behalf of your process, doing system calls. You could use strace (for example, strace -c ./yourprogram prints a per-syscall summary) to find out which system calls your process is making.
Maybe you have many small read (or write) system calls. You might lower their number by increasing your buffer size, so that each read and write transfers more bytes, as discussed in answers to related questions.
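A sketch of the buffering idea; the 1 MB buffer size is just an assumption to illustrate, and you would tune it for your workload:

```c
/* Read a file in large chunks: one read() per megabyte instead of many tiny
   calls, which reduces the number of system calls (and the sys time). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)               /* 1 MB per read() call (arbitrary choice) */

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    static char buf[BUF_SIZE];           /* static so it does not live on the stack */
    long long total = 0;
    ssize_t n;

    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;                      /* process buf[0..n-1] here */

    printf("read %lld bytes\n", total);
    close(fd);
    return 0;
}
```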
You could also use mmap to map files into (virtual) memory; this can be a better approach for some kinds of disk I/O.
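And a sketch of the mmap approach (error handling trimmed; assumes a regular, non-empty file on Linux):

```c
/* Map the whole file: one mmap system call replaces a long series of read()
   calls; pages are faulted in lazily as the mapping is walked. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) != 0)
        return 1;

    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
        return 1;

    size_t newlines = 0;                 /* example workload: count lines */
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n')
            newlines++;

    printf("%zu lines\n", newlines);
    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}
```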
But I wouldn't focus specifically on the system time. It seems to account for only about 15% of the CPU time, which is a reasonable ratio.
I would suggest profiling your program (using gprof or oprofile) to find where the bottlenecks are.
This is an extremely open-ended question, and there is no specific right answer without more information. That said, I would recommend using something like valgrind to profile your application, figure out which functions in your code are taking the most time, and work on optimizing them.
With that out of the way, you should really concentrate on the time spent in user land. With 53 seconds of user time versus 9 seconds of system time, there is probably a lot more to gain there, and your optimization effort would be better spent on it.
I've just recently begun programming in C. I currently have an application that reads values from a COM port and writes them to a file. It reads about 500 data points per second. I want to produce a real-time 2D plot of the data points with respect to time. Can someone please point me in the right direction?
I've tried post-processing the data in Excel, and the built-in capabilities give me a great graph. However, I would like something that is computed in real time rather than post-processed. I am using Windows XP.
Thanks in advance !
You can use KST to plot your graphs in real time. You can probably keep your existing application as is (I assume you are writing to a CSV file if you are reading it in Excel) as KST will read the data from the file as it gets updated, and update its chart.
Here are some options for you to explore:
You can use OpenGL and, in particular, GLUT. I have some C code for this if you are interested.
You can pipe commands to gnuplot (see the sketch after this list).
You can use GNU Octave from a C/C++ program; you can read more about this here.
You can create your own bitmaps of your graphs in real time. This isn't as hard as it sounds.
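Here is a minimal sketch of the gnuplot option, assuming gnuplot is installed and on the PATH; the generated sine wave stands in for your COM-port samples:

```c
/* Drive gnuplot through a pipe and redraw as new samples arrive.
   The sine wave below is a stand-in for real COM-port data. */
#include <stdio.h>
#include <math.h>

#ifdef _WIN32
#define popen  _popen
#define pclose _pclose
#endif

int main(void)
{
    FILE *gp = popen("gnuplot", "w");
    if (gp == NULL) {
        fprintf(stderr, "could not start gnuplot\n");
        return 1;
    }

    fprintf(gp, "set title 'COM port data'\n");
    fprintf(gp, "set xlabel 'sample'\n");

    /* Acquisition loop: each iteration replots the latest window of data. */
    for (int frame = 0; frame < 100; frame++) {
        fprintf(gp, "plot '-' with lines title 'signal'\n");
        for (int i = 0; i < 500; i++)
            fprintf(gp, "%d %f\n", i, sin((frame + i) * 0.05));
        fprintf(gp, "e\n");              /* terminates the inline data block */
        fflush(gp);                      /* push the frame to gnuplot now */
    }

    pclose(gp);
    return 0;
}
```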