I want to record the synchronization operations (locks, semaphores, barriers, etc.) of a multithreaded application, so that I can replay the recorded run later for debugging purposes.
One way is to supply my own lock, semaphore, condition variable, etc. functions that also do logging, but I think that is overkill, because underneath they must all be built on some common synchronization operations.
So my question is: which synchronization operations should I log so that I need only minimal modifications to my program? In other words, which functions or macros in glibc, and which system calls, are all these synchronization operations built on, so that I only have to modify those for logging and replaying?
The best I can think of is debugging with gdb in 'record' mode:
Gdb process record/replay execution log
According to this page (GDB Process Record), threading support is underway, but it might not be complete yet.
Less strictly answering your question, may I suggest
Helgrind
DRD
On other platforms, several other threading checkers exist, but I haven't got much experience with them.
In your case, an effective method of "logging" system calls on Linux may be to use the LD_PRELOAD trick and override the actual calls with your own versions that log the use of the call and then forward to the real implementation.
A more extensive example is given here in Linux Journal.
As you can see at these links, the basic gist of the "trick" is that you can make the system load your own dynamic library before any other system libraries, such as pthreads, and then mask the calls to those library functions by placing your own versions of those functions ahead of them in the lookup order. Inside your overriding function, you can then log the use of the original function and pass the arguments on to the actual call you're attempting to log.
The nice thing about this method is that it will catch pretty much any call you can make, whether the function remains entirely in userland or ends up making a kernel call.
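To make the trick concrete, here is a minimal sketch of such an interposing library (the wrapper, file names, and log format are my own choices for illustration): it overrides pthread_mutex_lock, logs the call, then looks up and forwards to the real implementation via dlsym(RTLD_NEXT, ...).

#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>

/* Interposed version of pthread_mutex_lock: log, then forward.
 * Note: a real logger must be careful here, since stdio itself
 * may take locks and recurse into this wrapper. */
int pthread_mutex_lock(pthread_mutex_t *mutex)
{
    static int (*real_lock)(pthread_mutex_t *);
    if (!real_lock)  /* resolve the real symbol once */
        real_lock = (int (*)(pthread_mutex_t *))
            dlsym(RTLD_NEXT, "pthread_mutex_lock");
    fprintf(stderr, "lock %p in thread %lu\n",
            (void *)mutex, (unsigned long)pthread_self());
    return real_lock(mutex);
}

Build it as a shared object (e.g., gcc -shared -fPIC -o liblocklog.so locklog.c -ldl) and run your program with LD_PRELOAD=./liblocklog.so.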
So GDB's record mode doesn't support multithreading, but the rr record/replay system absolutely does: https://rr-project.org/ .
For a commercial solution with fewer technical restrictions, there's also UDB: https://undo.io/solutions/ .
I've worked on debuggers for some years now, and from what I've seen, the GDB record+replay stuff is really not ready for prime time, for this and other reasons (e.g., slowdown and huge memory requirements).
If you can get it to work in your dev environment, record+replay/reversible debugging can be pretty game-changing for your workflow; I hope you find a way to leverage it.
I am a beginner C/C++ programmer first of all, but I am curious about it.
My question is more theoretical.
I have heard that C does not have explicit multithreading (MT) support; however, there are libraries which implement it. I found the "process.h" header, which has to be included to build MT programs, but the thing I don't understand is how MT itself works.
I know the CPU runs threads (assume it's single-core for simplicity) and that only one thread runs at any given moment. The CPU switches between threads really fast, so the user sees it as simultaneous work (correct me if not).
But - what really happens when I write the following
_beginthread( Thread, 0, NULL ); // or whatever function/class method we use
keeping in mind that C does not have MT support. I mean, how does the code tell the PC to run two functions multithreaded when that is not possible by the language's explicit methods? I guess there is some "cheat" inside the library behind "process.h", but what is that cheat? I just can't find it on the web.
To be more specific - I am not asking about how to use MT, but about how it is built.
Sorry if was answered earlier, or question is too complicated :)
UPD:
Imagine we have the C language. It has functions, variables, pointers, etc. I don't know of any "special" function type that can run concurrently with others. Unless there are calls to some other functions from it - but then the caller function stops and waits?
Is it so that when I run MT applications, there is a special "global" function that calls my f1() and f2() repeatedly, making it look like they are working simultaneously?
First of all, C11 does actually add multithreading support to the standard, so the premise that C does not support multithreading is no longer entirely correct.
However, I'm assuming your question is more about how multithreading can be implemented by a C library when standard C does(/did) not provide the necessary tools. The answer lies in the word “standard” – compilers and platforms can provide additional functionality beyond that required by the standard. Using such extra features makes the program/library less portable (i.e., more is required than is specified in the C standard), but the language and function call semantics can still be C.
Perhaps it is helpful to consider a standard library function such as fopen – somewhere inside that function code must eventually be called which could not be written in standard C, i.e., the implementation of the standard library itself must also rely on platform-specific code to access operating system functionality such as the file system. Every implementation of the standard library must thus implement the non-portable parts in a way specific to that platform (this is kind of the point of having a standard library instead of all code being platform-specific). But likewise a multithreading library can be implemented with non-standard features provided by that platform, but using such a library makes the code portable only to the platforms for which the same (or compatible) multithreading library is available.
As for how multithreading itself works, it is certainly outside the scope of what can be answered here, but as a simplified conceptual model on a single processor core, you can imagine the operating system managing “concurrent” processes by running one process for a short time, interrupting it, saving its state (current instruction, registers, etc.), loading the saved state of another process, and repeating this. This gives the illusion of concurrent execution, though in actual fact it is switching rapidly between different processes. On multi-core systems the execution on different cores can actually be concurrent, but there are typically more processes than there are cores, so this kind of switching will still happen on individual cores. Things are further complicated by processes waiting for something (I/O, another process, a timer, etc.). Perhaps it suffices to say that the scheduler is a piece of software managing all of this inside the operating system, and the multithreading library communicates with it.
(Note that there are many different ways to implement multithreading and multitasking, and statements in the above paragraph do not apply to all of them.)
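If it helps to see the save-state/restore-state idea in miniature, here is a toy sketch using the (old, but still available on glibc) ucontext API: two flows of control on one core, each explicitly saving its own state and restoring the other's. This is the cooperative version of what a preemptive scheduler does with timer interrupts.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void)
{
    for (int i = 0; i < 3; i++) {
        printf("task: step %d\n", i);
        swapcontext(&task_ctx, &main_ctx);  /* save self, resume main */
    }
}

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp = task_stack;   /* give the task its own stack */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;           /* where to go if task() returns */
    makecontext(&task_ctx, task, 0);

    for (int i = 0; i < 3; i++) {
        printf("main: step %d\n", i);
        swapcontext(&main_ctx, &task_ctx);  /* save self, resume task */
    }
    return 0;
}

The output interleaves "main" and "task" lines even though nothing runs concurrently; an operating system does essentially this switching for you, driven by timer interrupts instead of explicit swapcontext calls.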
It's platform-specific. On Windows, it eventually goes down to NtCreateThread, which uses the syscall assembly instruction to call into the operating system. So you can qualify it as a cheat.
On Linux it's the same, except the function that issues the syscall is called clone instead.
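For illustration, here is roughly what a threading library does under the hood on Linux, using the glibc clone() wrapper. This is a toy sketch; a real pthread_create also sets up TLS, a guard page, scheduling attributes, and so on, so don't use it as-is.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static int thread_fn(void *arg)
{
    /* use write(), not stdio: this toy thread has no proper TLS setup */
    const char msg[] = "child: running in the shared address space\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}

int main(void)
{
    const size_t stack_size = 64 * 1024;
    char *stack = malloc(stack_size);
    if (!stack)
        return 1;

    /* These flags are roughly what pthread_create passes: share memory,
     * files, and signal handlers with the parent. The stack grows down
     * on x86, so we pass the top of the allocation. */
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND |
                CLONE_THREAD | CLONE_SYSVSEM;
    if (clone(thread_fn, stack + stack_size, flags, NULL) == -1) {
        perror("clone");
        return 1;
    }
    sleep(1);  /* crude: give the child time to run before main exits */
    return 0;
}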
So here's my problem. I have a Linux program running on a VM which uses OpenCL via dlopen to execute some commands. About halfway through the program's execution it will sleep, and upon resume it can make no assumptions about any state on the GPU (in fact, the drivers may have been reloaded and the physical GPU may have changed). Before sleeping, dlclose is called, and this does unload the OpenCL memory regions (which is a good thing), but the libraries OpenCL uses (CUDA and NVIDIA libraries in this case) are NOT unloaded. Thus when the program resumes and tries to reinitialize everything, things fail.
So what I'm looking for is a method to effectively unlink/unload the shared libraries that OpenCL was using so it can properly "restart" itself. As a note, the process itself can be paused (stopped) during this transition for as long as needed, but it may not be killed.
Also, assume I'm working with root access and have more or less no restrictions on what files I can modify/touch.
For a less short-term practical solution, you could probably try forcing a re-initialization sequence on a loaded library, and/or forcing it to unload.
libdl is just an ordinary library that you can copy, modify, and link against instead of using the system one. One option is disabling the RTLD_NODELETE handling. Another is running the DT_FINI/DT_INIT sequence for a given library.
dl_iterate_phdr, if available, can probably be used to do the same without the need to modify libdl.
Poking around in libdl and/or using LD_DEBUG (if supported) may shed some light on why it is not being unloaded in the first place.
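As a starting point for the dl_iterate_phdr route, this small sketch just enumerates what is currently mapped into the process, which is also handy for confirming that the offending libraries really are still resident after dlclose:

#define _GNU_SOURCE
#include <link.h>
#include <stdio.h>

static int show(struct dl_phdr_info *info, size_t size, void *data)
{
    (void)size; (void)data;
    printf("%s loaded at %p (%d segments)\n",
           info->dlpi_name[0] ? info->dlpi_name : "(main executable)",
           (void *)info->dlpi_addr, (int)info->dlpi_phnum);
    return 0;  /* keep iterating */
}

int main(void)
{
    dl_iterate_phdr(show, NULL);
    return 0;
}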
If they don't unload, this is either because something is keeping a reference to them, because they have the nodelete flag set, or because the implementation of dlclose does not support unloading them for some other reason. Note that there is no requirement that dlclose ever unload anything:
An application writer may use dlclose() to make a statement of intent on the part of the process, but this statement does not create any requirement upon the implementation. When the symbol table handle is closed, the implementation may unload the executable object files that were loaded by dlopen() when the symbol table handle was opened and those that were loaded by dlsym() when using the symbol table handle identified by handle.
Source: http://pubs.opengroup.org/onlinepubs/9699919799/functions/dlclose.html
The problem you're experiencing is a symptom of why it's an utterly bad idea to have graphics drivers loaded in the userspace process, but fixing this is both a technical and political challenge (lots of people are opposed to fixing it).
The only short-term practical solution is probably either factoring the code that uses these libraries out into a separate process that you can kill and restart, talking to it via some IPC mechanism, or simply having your whole program serialize its state and re-exec itself on resume.
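A sketch of the re-exec variant, assuming the program can serialize its state to a file first (the --resume-from flag and the paths here are made up for illustration):

#include <unistd.h>

extern char **environ;

/* Replace the running process with a fresh copy of itself.
 * /proc/self/exe is Linux-specific; execve only returns on failure. */
static void reexec_self(void)
{
    char *argv[] = { "/proc/self/exe", "--resume-from=/tmp/app.state", NULL };
    execve("/proc/self/exe", argv, environ);
}

The new instance starts with a clean address space, so no stale driver libraries are mapped, and it restores its state from the file before continuing.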
I'm implementing a pthread replacement library which is extremely lightweight. There are several reasons why I want to disable __thread completely.
It's a waste of memory. If I create a thousand threads that have nothing to do with the context that declares a variable with __thread, the program will still have allocated 1000 times the size of that data in bytes and will never use it. It's simply not compatible with the mass-concurrency model. If we need extremely lightweight fibers with only 8K of stack, a TLS block of just 4K would be an overhead of 50% of the memory used by each thread. In some scenarios the TLS overhead would be enormous.
TLS is a complex standard and I simply don't have the time/resources to support it. It's too expensive. Personally, I think the standard is poorly designed. It should have defined standard functions that had to be provided by the linker, so that thread libraries can take control over where TLS allocation takes place and insert the relevant offsets and addresses they require. Also, the standard ELF implementation has been infected with pthread, expecting pthread-sized structs to calculate offsets, making it really hard to adapt to anything else.
It's just a bad pattern. It encourages using globals and writing functions with static state/side effects. This is not territory we want to be in if we're creating correct programs that are easy to analyze.
If we really need a "thread context" for some magic that tracks thread state behind the scenes (like for allocation or cancellation tracking), why not just expose the magic that TLS uses to find that context in the first place? Personally, I'm just going to use the %fs register directly. This would not be possible in libraries for obvious reasons, but why should they be thread-aware to begin with? Why not just design them correctly, so that they get the context-related data they need right in the argument list?
My question is simply: What is the simplest way to disable __thread support and make clang emit errors if you accidentally used it? How can I get errors if I load a dynamic library which happens to require TLS?
I believe the simplest way is adding something like this unconditionally to your CFLAGS (perhaps from clang's equivalent of the gcc specfile if you want it to be system-global):
-D__thread='^-^'
where the righthand side can be anything that's syntactically invalid (a constraint violation) at any point in a C program.
As for preventing loading of libraries with TLS, you'd have to patch the linker and/or dynamic linker to reject them. If you're just talking about dlopen, your program could first read the file and parse the ELF headers for TLS relocations, then reject the library (without passing it to dlopen) if it has any. This might even be possible with an LD_PRELOAD wrapper.
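A sketch of that dlopen-side check (this variant looks for a PT_TLS program header, which marks a library's own TLS segment; 64-bit ELF only, error handling trimmed):

#include <elf.h>
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Returns 1 if the file has a TLS segment, 0 if not, -1 on error. */
static int has_tls_segment(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    struct stat st;
    fstat(fd, &st);
    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);
    if (map == MAP_FAILED)
        return -1;

    Elf64_Ehdr *eh = map;
    int found = -1;
    if (memcmp(eh->e_ident, ELFMAG, SELFMAG) == 0) {
        Elf64_Phdr *ph = (Elf64_Phdr *)((char *)map + eh->e_phoff);
        found = 0;
        for (int i = 0; i < eh->e_phnum; i++)
            if (ph[i].p_type == PT_TLS) {
                found = 1;
                break;
            }
    }
    munmap(map, st.st_size);
    return found;
}

Your dlopen wrapper would call has_tls_segment(path) first and refuse to load anything that returns nonzero.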
I agree with you that, especially in its current implementation, TLS is something whose use should generally be avoided, but may I ask if you've measured the costs? I think stamping it out completely will be fairly difficult on a system that's designed to use it, and there's much lower-hanging fruit for cutting bloat. Which libc are you using? If it's glibc, I'm pretty sure it uses lots of TLS internally these days... Of course, if you're writing your own threads implementation, that's going to entail a lot of interaction with the rest of the standard library, so perhaps you're already patching it out...?
By the way (shameless plug follows), we have an extremely light-weight threads implementation in musl libc which presently does not have TLS. I don't think it would be easy to integrate with another libc (and I'm sure if you're writing your own you will find it difficult to integrate with glibc, especially glibc's dynamic linker, which expects TLS to be supported) but if you can use the whole library as-is, it may meet your needs for specific projects, or have useful code you can borrow (license is MIT).
I was wondering if I could make a large number of system calls at the same time with only one switch overhead. I need this because I have to make many (128) system calls at a time. If I could do this without switching between kernel and userland 256+ times, I think it could make my (speed-sensitive) library significantly faster.
You really can't do that from an application program. What you could do is build a loadable kernel module that implements those operations and presents a simple API -- then you can change context once, do all the work, and return.
However, as with most of these sorts of optimization questions, the first thing to ask is "why do you think it's going to be necessary?" Do you have timing information etc? Have you profiled? How much of a performance issue do you really have, and is the additional complexity going to be worth the speedup?
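For what it's worth, a bare-bones sketch of that module shape (all names here — the device, the ioctl number, and struct batch_req — are invented for illustration, and a real module needs locking and validation that this one omits):

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/uaccess.h>

struct batch_req {      /* hypothetical description of one operation */
    int op;
    unsigned long arg;
};

#define BATCH_IOC_RUN _IOW('b', 1, struct batch_req)

static long batch_ioctl(struct file *f, unsigned int cmd, unsigned long uptr)
{
    struct batch_req reqs[128];  /* a real module would kmalloc this */
    int i;

    if (cmd != BATCH_IOC_RUN)
        return -ENOTTY;
    if (copy_from_user(reqs, (void __user *)uptr, sizeof(reqs)))
        return -EFAULT;
    for (i = 0; i < 128; i++) {
        /* perform reqs[i] here: all 128 operations execute inside
         * this single user-to-kernel transition */
    }
    return 0;
}

static const struct file_operations batch_fops = {
    .owner = THIS_MODULE,
    .unlocked_ioctl = batch_ioctl,
};

static struct miscdevice batch_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name = "syscall_batch",
    .fops = &batch_fops,
};

static int __init batch_init(void) { return misc_register(&batch_dev); }
static void __exit batch_exit(void) { misc_deregister(&batch_dev); }
module_init(batch_init);
module_exit(batch_exit);
MODULE_LICENSE("GPL");

Userspace then opens /dev/syscall_batch once and issues a single ioctl per batch of 128 operations.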
I don't think Linux will support syscall chaining anytime soon. You might have more luck implementing this on another kernel and porting your application.
That said, it's not difficult to write a proxy to do the job in kernel space for you, but don't expect it to be merged upstream. I've worked on real-time stuff and we had a solution like that, but it was never used in production because of support issues :/
There are multiple sections in the manpages. Two of them are:
2 Unix and C system calls
3 C Library routines for C programs
For example, there are getmntinfo(3) and getfsstat(2); both look like they do the same thing. When should one use which, and what is the difference?
System calls are operating system functions; on UNIX, for example, the malloc() function is built on top of the sbrk() system call (which resizes the process's memory space).
Libraries are just application code that's not part of the operating system and will often be available on more than one OS. They're basically the same as function calls within your own program.
The line can be a little blurry but just view system calls as kernel-level functionality.
Libraries of common functions are built on top of the system call interface, but applications are free to use both.
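A tiny demonstration of the split: printf(3) is library code, write(2) is a thin wrapper over a kernel entry, and syscall(2) shows what that wrapper hides.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    printf("via stdio (section 3)\n");            /* buffered library call */
    write(STDOUT_FILENO, "via write (2)\n", 14);  /* thin syscall wrapper  */
    syscall(SYS_write, STDOUT_FILENO, "raw syscall\n", 12);  /* no wrapper */
    return 0;
}

All three end up in the same place in the kernel; the differences are buffering, error handling, and portability.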
System calls are like authentication keys which grant access to kernel resources.
(Image from Advanced Linux Programming, showing how user applications interact with the kernel.)
System calls are the interface between user-level code and the kernel. C Library routines are library calls like any other, they just happen to be really commonly provided (pretty much universally). A lot of standard library routines are wrappers (thin or otherwise) around system calls, which does tend to blur the line a bit.
As to which one to use, as a general rule, use the one that best suits your needs.
The calls described in section 2 of the manual are all relatively thin wrappers around actual calls to system services that trap to the kernel. The C standard library routines described in section 3 of the manual are client-side library functions that may or may not actually use system calls.
This posting has a description of system calls and trapping to the kernel (in a slightly different context) and explains the underlying mechanism behind system calls with some references.
As a general rule, you should always use the C library version. The wrappers often handle esoteric things like restarts on a signal (if you have requested that). This is especially true if you have already linked with the library. All rules have reasons to be broken, though. Reasons to use the direct calls:
You want to be libc-agnostic, maybe with an installer. Such code can run on Android (Bionic), uClibc, and more traditional glibc/eglibc systems, regardless of the library used. Combined with dynamic loading, wrappers can provide a run-time glibc/Bionic layer, allowing a dual Android/Linux binary.
You need extreme performance. This is probably rare and most likely misguided; rethinking the problem will probably give better performance benefits, and not calling into the kernel at all is often a performance win, which the libc can occasionally do for you.
You are writing some initramfs or init code without a library; to create a smaller image or boot faster.
You are testing a new kernel/platform and don't want to complicate life with a full blown file system; very similar to the initramfs.
You wish to do something very quickly on program startup, but eventually want to use the libc routines.
To avoid a known bug in the libc.
The functionality is not available through libc.
Sorry, most of the examples are Linux-specific, but the rationale should apply to other Unix variants. The last item is quite common when new features are introduced into a kernel: for example, when kqueue or epoll were first introduced, there was no libc support for them. This may also happen if the system has an older library but a newer kernel and you wish to use the new functionality; a sketch of that case follows below.
If your process hasn't used the libc, then most likely something else in the system will have. By coding your own variants, you negate that caching by providing two paths to the same end goal. Also, Unix will share libc's code pages between processes. Generally, there is no reason not to use the libc version.
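As an illustration of the "not available through libc" case: calling a system call for which your libc has no wrapper yet, via syscall(2). (gettid is a convenient example; glibc only gained a wrapper for it in version 2.30.)

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    /* No libc wrapper? Trap into the kernel directly by syscall number. */
    long tid = syscall(SYS_gettid);
    printf("kernel thread id: %ld\n", tid);
    return 0;
}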
Other answers have already done a stellar job on the difference between libc and system calls.