What free tools can I use to debug multithreaded programs created using the pthread library in Linux? (Besides pen & paper, of course...)
The usual way of printing debug messages isn't working out very well.
Debugging concurrent programs is inherently difficult, because the debugging tool tends to change the scheduling (often making it tamer so that the bug disappears).
One technique I've had some success with is to log to a data structure that is not protected by a lock. Then, once the system is idle, print the data structure or look at it in a debugger. The important thing is to avoid making a system call or invoking a synchronization primitive when logging, so that logging has minimal influence on the scheduler.
#define LOG_BUFFER_LENGTH 1024  /* any convenient size */
static char *log_buffer[LOG_BUFFER_LENGTH];
static size_t log_index;
/* no lock, no system call; wrap the index to avoid overflowing the buffer */
#define LOG(message) (log_buffer[log_index++ % LOG_BUFFER_LENGTH] = (message))
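Once the system is idle, a minimal dump routine might look like this (a sketch; it assumes the entries are string literals, as above, and needs <stdio.h>):

static void dump_log(void)
{
    /* print in slot order; with the wrapping index, the oldest entry
       may not be in slot 0 once the buffer has filled */
    for (size_t i = 0; i < LOG_BUFFER_LENGTH; i++)
        if (log_buffer[i])
            printf("%zu: %s\n", i, log_buffer[i]);
}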
If your thread is interrupted in the middle of logging, the log buffer will become inconsistent. This is improbable enough to be useful for debugging, though it must be kept in mind. I've never tried this on a multiprocessor machine; I would expect the lack of synchronization to make the log buffer inconsistent very quickly in practice¹.
¹
Which is one more reason not to do multithreaded programming on multiprocessor machines. Use message passing instead.
Both the GNU gdb debugger and its ddd graphical front-end support threads.
I've been looking into how I could embed languages (let's use Lua as an example) in Erlang. This of course isn't a new idea, and there are many libraries out there that can do this. However, I was wondering if it is possible to start a GenServer whose state is modified by Lua. This means that once you start the GenServer, it will start a (long-running) Lua process to manipulate the GenServer's state. I know this is possible as well, but I was wondering if I could spawn 1,000, 10,000, or even 100,000 of these processes.
I'm not really familiar with this topic but I have done some research.
(Please correct me if I'm wrong on any of these options).
TLDR; Skip to the last paragraph.
First option: NIFs:
This doesn't seem like an option since it will block the Erlang scheduler running the current process. If I want to spawn a large number of these, it will freeze the entire runtime.
Second option: Port Driver:
It's like a NIF but communicates by sending data to a specified port, which can also send data back to Erlang. This is nice, although it also seems to block the scheduler. I've tried a library which does the boilerplate for you as well, but that seemed to block the scheduler after spawning 10 processes. I've also looked into the postgresql example in the Erlang documentation, which is said to be async, but I couldn't get the example code to work (R13?). Is it even possible to run that many port driver processes without blocking the runtime?
Third option: C Nodes:
I thought this was very interesting and wanted to try it out, but apparently the project "erlang-lua" already does this. It's nice because it won't crash your Erlang VM if something goes wrong and the processes are isolated. But in order to actually spawn a single process you need to spawn an entire node. I have no idea how expensive this is. Nor am I sure what the limit is for connecting nodes in a cluster, but I don't see myself spawning 100,000 C nodes.
Fourth option: Ports:
At first I thought this was the same as a port driver, but it's actually different. You spawn a process which executes an application and communicates through STDIN and STDOUT. This would work well for spawning a large number of processes, and (I think?) they aren't a threat to the Erlang VM. But if I'm going to communicate through STDIN/STDOUT, why even bother with an embeddable language to begin with? Might as well use any other scripting language.
And so, after much research in a field I'm not familiar with, I've come to this: you could use a GenServer as an "entity" whose AI is written in Lua, which is why I'd like to have a process for each entity. My question is: how do I spawn many GenServers that communicate with long-running Lua processes? Is this even possible? Should I be tackling my problem differently?
If you can make the Lua code — or more accurately, its underlying native code — cooperate with the Erlang VM, you have a few choices.
Consider one of the most important functions of the Erlang VM: managing the execution of a potentially large number of Erlang's lightweight processes across a relatively small set of scheduler threads. The VM uses several techniques to know when a process has used up its timeslice or is waiting, and so should be scheduled out to give another process a chance to run.
You seem to be asking how you can get native code to run however it likes within the VM, but as you've already hinted, the reason native code can cause problems for the VM is that it has no practical way to stop the native code from completely taking over a scheduler thread and thus preventing regular Erlang processes from executing. Because of this, native code has to cooperatively yield the scheduler thread back to the VM.
For older NIFs, the choices for such cooperation were:
Keep the amount of time NIF calls ran on a scheduler thread to 1ms or less.
Create one or more private threads. Transition each long-running NIF call from its scheduler thread over to a private thread for execution, then return the scheduler thread to the VM.
The problems here are that not all calls can complete in 1ms or less, and that managing private threads can be error-prone. To get around the first problem, some developers would break the work down into chunks and use an Erlang function as a wrapper to manage a series of short NIF calls, each of which completed one chunk of work. As for the second problem, well, sometimes you just can't avoid it, despite its inherent difficulty.
NIFs running on Erlang 17.3 or later can also cooperatively yield the scheduler thread using the enif_schedule_nif function. To use this feature, the native code has to be able to do its work in chunks such that each chunk can complete within the usual 1ms NIF execution window, similar to the approach mentioned earlier but without the need to artificially return to an Erlang wrapper. My bitwise example code provides many details about this.
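As a sketch of what that looks like in practice, here is a hypothetical NIF that sums a range of integers one chunk per call, rescheduling itself between chunks (the chunk size and the summing work are made up for illustration):

#include <erl_nif.h>

#define CHUNK 4096  /* sized so one call stays within ~1ms */

static ERL_NIF_TERM sum_nif(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    unsigned long i, n, acc;
    if (argc != 3 ||
        !enif_get_ulong(env, argv[0], &i) ||
        !enif_get_ulong(env, argv[1], &n) ||
        !enif_get_ulong(env, argv[2], &acc))
        return enif_make_badarg(env);

    unsigned long end = i + CHUNK < n ? i + CHUNK : n;
    for (; i < end; i++)
        acc += i;

    if (i < n) {
        /* more work remains: yield the scheduler thread and continue
           in a later call with the updated loop state */
        ERL_NIF_TERM args[3] = { enif_make_ulong(env, i),
                                 enif_make_ulong(env, n),
                                 enif_make_ulong(env, acc) };
        return enif_schedule_nif(env, "sum_nif", 0, sum_nif, 3, args);
    }
    return enif_make_ulong(env, acc);
}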
Erlang 17 also brought an experimental feature, off by default, called dirty schedulers. This is a set of VM schedulers that do not have the same native code execution time constraints as the regular schedulers; work there can block for essentially infinite periods without disrupting normal VM operation.
Dirty schedulers come in two flavors: CPU schedulers for CPU-bound work, and I/O schedulers for I/O-bound work. In a VM compiled to enable dirty schedulers, there are by default as many dirty CPU schedulers as there are regular schedulers, and there are 10 I/O schedulers. These numbers can be altered using command-line switches, but note that to try to prevent regular scheduler starvation, you can never have more dirty CPU schedulers than regular schedulers. Applications use the same enif_schedule_nif function mentioned earlier to execute NIFs on dirty schedulers. My bitwise example code provides many details about this too. Dirty schedulers will remain an experimental feature for Erlang 18 as well.
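With dirty schedulers the chunking disappears: the exported NIF simply hands the whole job off to a dirty CPU scheduler. A sketch (function names are illustrative):

#include <erl_nif.h>

/* free to block or run long: this executes on a dirty CPU scheduler,
   not on a regular scheduler thread */
static ERL_NIF_TERM do_work(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    /* ... arbitrarily long computation ... */
    return enif_make_atom(env, "ok");
}

/* the NIF visible to Erlang only reschedules onto a dirty scheduler
   (use ERL_NIF_DIRTY_JOB_IO_BOUND for I/O-bound work) */
static ERL_NIF_TERM work_nif(ErlNifEnv *env, int argc, const ERL_NIF_TERM argv[])
{
    return enif_schedule_nif(env, "do_work", ERL_NIF_DIRTY_JOB_CPU_BOUND,
                             do_work, argc, argv);
}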
Native code in linked-in port drivers is subject to the same on-scheduler execution time constraints as NIFs, but drivers have two features NIFs don't:
Driver code can register file descriptors into the VM polling subsystem and be notified when any of those file descriptors becomes I/O-ready.
The driver API supports access to a non-scheduler async thread pool, the size of which is configurable but by default has 10 threads.
The first feature allows native driver code to avoid blocking a thread for I/O. For example, instead of performing a blocking recv call, driver code can register the socket file descriptor so the VM can poll it and call the driver back when the file descriptor becomes readable.
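A sketch of that pattern (the per-port state struct and buffer handling are illustrative):

#include <erl_driver.h>
#include <sys/socket.h>

typedef struct { int fd; } state_t;  /* hypothetical per-port state */

/* call once the socket exists (e.g. from the driver's start callback):
   ask the VM to poll fd for readability instead of blocking in recv */
static void watch(ErlDrvPort port, state_t *st)
{
    driver_select(port, (ErlDrvEvent)(size_t)st->fd,
                  ERL_DRV_READ | ERL_DRV_USE, 1);
}

/* ErlDrvEntry.ready_input: invoked on a scheduler thread when the fd
   is readable, so this recv returns immediately */
static void ready_input(ErlDrvData data, ErlDrvEvent event)
{
    char buf[512];
    ssize_t n = recv((int)(size_t)event, buf, sizeof buf, 0);
    /* pass buf up to Erlang, e.g. with driver_output() */
    (void)n; (void)data;
}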
The second feature provides a separate thread pool useful for driver tasks that can't conform to the scheduler thread native code execution time constraints. You can achieve the same in a NIF but you have to set up your own thread pool and write your own native code to manage and access it. But regardless of whether you use the driver async thread pool, your own NIF thread pool, or dirty schedulers, note that they are all regular operating system threads, and so trying to start a huge number of them simply isn't practical.
Native driver code does not yet have dirty scheduler access, but this work is ongoing and it might become available as an experimental feature in an 18.x release.
If your Lua code can make use of one or more of these features to cooperate with the Erlang VM, then what you're attempting may be possible.
I'm developing a program that will need to run on Internet servers (a back-end component to be used by several cross-platform programs). I'm familiar with the security precautions to take (to prevent buffer overflows and SQL Injection attacks, for instance), but have never written a server program before, or any program that will be used on this scale.
The program needs to be able to serve hundreds or thousands of clients simultaneously. The protocols are designed for processing speed and to minimize the amount of data that must be exchanged, and the server side will be written in C. There will be both a Windows and a Linux version from the same code.
Questions:
How should the program handle communications -- multiple threads, a single thread handling all the sockets in turn, or a new process spawned for every so many incoming connections (or for each one)?
Do I need to worry about things like memory fragmentation, since this program will need to run for months at a time?
What other design issues, specific to this kind of programming, might an experienced developer of cross-platform programs for desktop and mobile systems not be aware of?
Please, no suggestions to use a different language. That decision has already been made, for reasons I'm not at liberty to go into.
I'd use libevent or libev and non-blocking I/O. This way the operating system will take care of most of your scheduling problems. I'd also use a thread pool for processing tasks that are blocking by nature, so they don't block the main loop. And if you ever need to read or write large amounts of data to or from the disk, use mmap, again to let the OS handle as much as possible.
The basic advice is use the OS, as much as possible. If you want a good example of a program which does this look at Varnish, it is very well written, and performs fantastic.
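A minimal sketch of the libevent style mentioned above (an echo server; the port number and buffer handling are arbitrary):

#include <event2/event.h>
#include <event2/listener.h>
#include <event2/bufferevent.h>
#include <event2/buffer.h>
#include <netinet/in.h>

/* echo whatever arrives back to the client */
static void read_cb(struct bufferevent *bev, void *ctx)
{
    bufferevent_write_buffer(bev, bufferevent_get_input(bev));
}

static void event_cb(struct bufferevent *bev, short events, void *ctx)
{
    if (events & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
        bufferevent_free(bev);
}

static void accept_cb(struct evconnlistener *l, evutil_socket_t fd,
                      struct sockaddr *addr, int len, void *ctx)
{
    struct event_base *base = evconnlistener_get_base(l);
    struct bufferevent *bev =
        bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
    bufferevent_enable(bev, EV_READ | EV_WRITE);
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct sockaddr_in sin = { 0 };
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9000);

    evconnlistener_new_bind(base, accept_cb, NULL,
                            LEV_OPT_CLOSE_ON_FREE | LEV_OPT_REUSEABLE,
                            -1, (struct sockaddr *)&sin, sizeof sin);
    event_base_dispatch(base);  /* single-threaded event loop */
    return 0;
}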
From my experience running multiple servers with over 3 years of uptime, and programs with a little over a year of uptime, I can still recommend setting things up so that the system gracefully recovers from a program error and from a server reboot.
Even though performance takes a hit when a program is restarted, you need to be able to handle that, as external circumstances can force the program into such a restart.
Don't try to reinvent the wheel when it's not needed; have a look at zeromq or something like it to handle distribution of incoming communications. (If you are allowed to, prototype the backend in a more forgiving language than C, such as Python, then reimplement it in C while keeping the communications protocol.)
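For a sense of what that buys you, a minimal ZeroMQ REP echo server looks like this (a sketch; the endpoint is illustrative):

#include <zmq.h>

int main(void)
{
    /* REP socket: zeromq handles framing, queuing, and client
       connects/disconnects behind a single socket */
    void *ctx = zmq_ctx_new();
    void *sock = zmq_socket(ctx, ZMQ_REP);
    zmq_bind(sock, "tcp://*:5555");

    for (;;) {
        char buf[256];
        int n = zmq_recv(sock, buf, sizeof buf, 0);
        if (n < 0)
            break;
        zmq_send(sock, buf, n, 0);  /* echo the request back */
    }
    zmq_close(sock);
    zmq_ctx_destroy(ctx);
    return 0;
}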
I am working on a message queue used for communication among processes on embedded Linux. I am wondering why I'm not using the message queues provided by Linux, namely:
msgctl, msgget, msgrcv, msgsnd,
instead of creating shared memory and synchronizing with a semaphore.
What are the disadvantages of using this set of functions directly in a commercial embedded product?
The functions msgctl(), msgget(), msgrcv(), and msgsnd() are the 'System V IPC' message queue functions. They'll work for you, but they're fairly heavyweight. They are standardized by POSIX.
POSIX also provides a more modern set of functions, mq_close(), mq_getattr(), mq_notify(), mq_open(), mq_receive(), mq_send(), mq_setattr(), and mq_unlink() which might be better for you (such an embarrassment of riches).
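A minimal sketch of the POSIX flavor (queue name and sizes are illustrative; link with -lrt on Linux):

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { 0 };
    attr.mq_maxmsg = 10;    /* queue depth */
    attr.mq_msgsize = 128;  /* max bytes per message */

    mqd_t mq = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(mq, "hello", 5, 0);  /* priority 0 */

    char buf[128];  /* must be >= mq_msgsize */
    unsigned prio;
    ssize_t n = mq_receive(mq, buf, sizeof buf, &prio);
    printf("got %zd bytes: %.*s\n", n, (int)n, buf);

    mq_close(mq);
    mq_unlink("/demo_queue");
    return 0;
}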
However, you will need to check which, if either, is installed on your target platforms by default. Especially in an embedded system, it could be that you have to configure them, or even get them installed because they aren't there by default (and the same might be true of shared memory and semaphores).
The primary advantage of either set of message facilities is that they are pre-debugged (probably) and therefore have concurrency issues already resolved - whereas if you're going to do it for yourself with shared memory and semaphores, you've got a lot of work to do to get to the same level of functionality.
So, (re)use when you can. If it is an option, use one of the two message queue systems rather than reinvent your own. If you eventually find that there is a performance bottleneck or something similar, then you can investigate writing your own alternatives, but until then — reuse!
System V message queues (the ones manipulated by the msg* system calls) have a lot of weird quirks and gotchas. For new code, I'd strongly recommend using UNIX domain sockets.
That being said, I'd also strongly recommend message-passing IPC over shared-memory schemes. Shared memory is much easier to get wrong, and tends to go wrong much more catastrophically.
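A minimal sketch of the socket approach (SOCK_SEQPACKET preserves message boundaries, so it behaves queue-like):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) < 0)
        return 1;

    if (fork() == 0) {  /* child: consumer */
        char buf[64];
        ssize_t n = read(fds[1], buf, sizeof buf);
        printf("got %zd bytes: %.*s\n", n, (int)n, buf);
        _exit(0);
    }
    write(fds[0], "hello", 5);  /* parent: producer */
    wait(NULL);
    return 0;
}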
Message passing is great for small data chunks and where immutability needs to be maintained, as message queues copy data.
A shared memory area does not copy data on send/receive and can be more efficient for larger data sets at the tradeoff of a less clean programming model.
The disadvantages of message queues are minuscule - some system call and copying overhead - which amounts to nothing for most applications. The benefits far outweigh that overhead. Synchronization is automatic, and they can be used in a variety of ways: blocking, non-blocking, and, since in Linux the message queue types are implemented as file descriptors, they can even be used in select() calls for multiplexing. In the POSIX variety, which you should be using unless you have a really compelling need for SysV queues, you can even automatically spawn threads or raise signals to process the queue items. And best of all, they are fully debugged.
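For example, the thread-spawning variant via mq_notify looks roughly like this (a sketch; it assumes the queue was opened with O_NONBLOCK so the drain loop terminates):

#include <mqueue.h>
#include <signal.h>
#include <stdio.h>

static mqd_t mq;  /* opened elsewhere with mq_open(..., O_RDONLY | O_NONBLOCK) */

static void arm(void);

/* runs on a fresh thread each time the queue goes from empty to non-empty */
static void on_message(union sigval sv)
{
    char buf[8192];  /* must be >= the queue's mq_msgsize */
    unsigned prio;
    arm();  /* re-register first: the notification is one-shot */
    while (mq_receive(mq, buf, sizeof buf, &prio) >= 0)
        printf("message with priority %u\n", prio);
    (void)sv;
}

static void arm(void)
{
    struct sigevent sev = { 0 };
    sev.sigev_notify = SIGEV_THREAD;
    sev.sigev_notify_function = on_message;
    mq_notify(mq, &sev);
}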
Message queues and shared memory are different, and it is up to the programmer and their requirements to select which to use. With shared memory you have to be a bit careful about reading and writing, and the processes must be synchronized, so the order of execution is very important. There is no way to tell whether a value read from shared memory is the newly written value or an older one, and there is no explicit mechanism to wait.
Message queues, by contrast, come with predefined functions that make your life easy.
I am dealing with an existing project (in C) that currently runs on a single thread, and we would like it to run on multiple platforms AND with multiple threads. Hopefully there is a library for this, because, IMHO, the Win32 API is like poking yourself in the eye repeatedly. I know about Boost.Thread for C++, but this must be C (and compilable with MinGW and gcc). Cygwin is not an option, sorry.
Try the OpenMP API; it's multi-platform and you can compile it with GCC.
Brief description from Wikipedia:

OpenMP (Open Multi-Processing) is an application programming interface (API) that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran, on most platforms, processor architectures and operating systems, including Solaris, AIX, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
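A minimal example (compile with gcc -fopenmp):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    /* the pragma forks a team of threads and splits the iterations */
    #pragma omp parallel for
    for (int i = 0; i < 8; i++)
        printf("iteration %d on thread %d\n", i, omp_get_thread_num());
    return 0;
}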
I would use the POSIX thread API - pthread. This article has some hints for implementing it on Windows, and a header-file-only download (BSD license):
http://locklessinc.com/articles/pthreads_on_windows/
Edit: I used the sourceforge pthreads-win32 project in the past for multi-platform threading and it worked really nicely. Things have moved on since then and the above link seems more up-to-date, though I haven't tried it. This answer assumes of course that pthreads are available on your non-Windows targets (for Mac / Linux I should think they are, probably even embedded)
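The payoff is that one pthread source builds on every target; a minimal sketch:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)1L);
    pthread_join(t, NULL);
    return 0;
}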
Windows threading differs enough in functionality from Linux threading that you should perhaps consider two different implementations, at least if application performance could be an issue. On the other hand, simply implementing multi-threading may well make your app slower than it was before. Let's assume that performance is an issue and that multi-threading is the best option.
With Windows threads I'm specifically thinking of I/O Completion Ports (IOCPs) which allow implementing I/O-event driven threads that make the most efficient use of the hardware.
Many "classic" applications are constructed along one thread/one socket (/one user or similar) concept where the number of simultaneous sessions will be limited by the scheduler's ability to handle large numbers of threads (>1000). The IOCP concept allows limiting the number of threads to the number of cores in your system which means that the scheduler will have very little to do. The threads will only execute when the IOCP releases them after an I/O event has occurred. The thread services the IOC, (typically) initiates a new I/O and returns to wait at the IOCP for the next completion. Before releasing a thread the IOCP will also provide the context of the completion such that the thread will "know" what processing context the IOC belongs to.
The IOCP concept completely does away with polling which is a great resource waster although "wait on multiple object" polling is somewhat of an improvement. The last time I looked Linux had nothing remotely like IOCPs so a Linux multi-threaded application would be constructed quite differently compared to a Windows app with IOCPs.
In really efficient IOCP apps there is a risk that so many IOs (or rather Outputs) are queued to the IO resource involved that the system runs out of non-paged memory to store them. Conversely, in really inefficient IOCP apps there is a risk that so many Inputs are queued (waiting to be serviced) that the non-paged memory is exhausted when trying to temporarily buffer them.
If someone needs a portable and lightweight solution for threading in C, take a look at the plibsys library. It provides thread management and synchronization, as well as other useful features like a portable socket implementation. All the major operating systems (Windows, Linux, OS X) are supported, along with various less popular ones (e.g. AIX, HP-UX, Solaris, QNX, IRIX, etc.). On every platform only the native calls are used, to minimize overhead. The library is fully covered with unit tests, which are run on a regular basis.
GLib threads can be compiled cross-platform.
The "best"/"simplest"/... answer here is definitely pthreads. It's the native threading architecture on Unix/POSIX systems and works almost as good on Windows. No need to look any further.
Given that you are constrained to C, I have two suggestions:
1) I have seen a project (similar to yours) that had to run on Windows and Linux with threads. The way it was written, the same codebase used pthreads on Linux and Win32 threads on Windows. This was achieved with a conditional #ifdef wherever threads needed to be created, such as:
#ifdef WIN32
    /* Win32 threads; the worker fn is DWORD WINAPI fn(LPVOID) (names are placeholders) */
    HANDLE thread = CreateThread(NULL, 0, win32_worker, arg, 0, NULL);
#else
    /* POSIX threads; the worker fn is void *fn(void *) */
    pthread_t thread;
    pthread_create(&thread, NULL, posix_worker, arg);
#endif
2) The second suggestion might be to use OpenMP. Have you considered OpenMP at all?
Please let me know if I missed something or if you want more details. I am happy to help.
Best,
Krishna
From my experience, multithreading in C on Windows is heavily tied to the Win32 APIs. Other languages like C# and Java, which are supported by a runtime framework, also tie into these core libraries while offering their own thread classes.
However, I did find the OpenThreads API on SourceForge, which might help you:
http://openthreads.sourceforge.net/
The API is modeled on the Java and POSIX thread standards.
I have not tried this myself as I currently do not have a need to support multiple platforms on my C/C++ projects.
I understand the differences between a multithreaded program and a program relying on inter-machine communication. My problem is that I have a nice multithreaded program written in 'C' that works and runs really well on an 8-core machine. There is now opportunity to port this program to a cluster to gain access to more cores. Is it worth the effort to rip out the pthread stuff and retrofit MPI (which I've never used) or are we better off recoding the whole thing (or most of it) from scratch? Assume we are "stuck" with C so a wholesale change of language isn't an option.
Depending on how your software is written, there may or may not be advantages to going to MPI over keeping your pthread implementation.
Unfortunately (or fortunately), message passing is a very different beast than pthreading - the basic assumption is quite different. I love this quote from Joshua Phillips of the Maestro team: "The difference between message-passing and shared-state communication is equivalent to the difference between sending a colleague an e-mail requesting her to complete a task and opening up her organizer to write down the task directly in her to-do list. More than just being rude, the latter is likely to confuse her – she might erase it, not notice it, or accidentally prioritize it incorrectly."
Unfortunately, the way you share data is very different. There is no direct access to data in other threads (since it can be on other machines), so it can be a very daunting task to migrate from pthreads to MPI. On the other hand, if the code is written so each thread is isolated, it can be an easy task, and definitely worthwhile.
In order to determine how useful this will be, you'll need to understand the code, and what you hope to achieve by switching. It can be worthwhile as a learning experience (you learn a LOT about synchronization and threading by working in MPI), but may not be practical if the gains will be minor.
Re. your comment to Reed -- this sounds like an easy, low-overhead conversion to MPI. Just be careful: not all MPI APIs support dynamic creation of processes, i.e., you start your program with N processes (specified at startup) and you're stuck with N processes throughout the lifetime of the program.
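For reference, a minimal MPI program showing that fixed-N model; the process count comes from the launcher (e.g. mpirun -np 8), not from the program:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* N, fixed for the whole run */
    printf("process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}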