Thread Creation - is it dynamically allocated? - c

Is a thread dynamically allocated memory?
I have been researching and have a fair understanding of threads and how they are used. I have specifically looked at the POSIX API for threads.
I am trying to understand thread creation and how it differs from a simple malloc call.
I understand that threads share certain memory segments with the parent process, but each thread has its own stack.
Any resources I can read through on this topic are appreciated. Thanks!

Thread creation and a malloc() call are completely different concepts. A malloc() call dynamically allocates a requested chunk of bytes from the heap for the program's use.
A thread, by contrast, can be considered a 'light-weight process'. A thread is an entity within a process, and every process has at least one thread to carry out its execution. The threads of a process share the process's virtual address space and all of its resources. When you create new threads in a process, each new thread gets its own user stack and is scheduled independently for execution by the scheduler. For threads to run concurrently, each also has its own context, which stores the state of the thread just before preemption, i.e., the contents of all its registers.
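To make the contrast concrete, here is a minimal sketch (the names demo_malloc and worker are just illustrative): malloc() hands back passive heap bytes, while pthread_create() produces a new, independently scheduled flow of control.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* A plain allocation: just reserves passive bytes on the heap. */
static void demo_malloc(void)
{
    int *buf = malloc(100 * sizeof *buf);   /* nothing runs; memory just sits there */
    if (buf == NULL)
        return;
    buf[0] = 42;
    free(buf);
}

/* Thread start routine: a new, independently scheduled flow of control. */
static void *worker(void *arg)
{
    printf("worker running, arg=%d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    demo_malloc();

    int arg = 7;
    pthread_t tid;
    /* Creates a new schedulable entity sharing this process's address
       space; the implementation sets up the new thread's stack itself. */
    int err = pthread_create(&tid, NULL, worker, &arg);
    if (err != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", err);
        return 1;
    }
    pthread_join(tid, NULL);                /* wait for the worker to finish */
    return 0;
}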

Is a thread dynamically allocated memory?
No, nothing of the sort. Threads have memory uniquely associated with them -- at least a stack -- but such memory is not the thread itself.
I am trying to understand thread creation and how it differs from a simple malloc call.
New thread creation is not even the same kind of thing as memory allocation. The two are not at all comparable.
Threading implementations that have direct OS support (not all do) are unlikely to rely on the C library to obtain memory for their stack, kernel data structures, or any other thread-implementation-associated data. On the other hand, those that do not have OS support, such as Linux's old "green" threads, are more likely to allocate memory via the C library. Even threading implementations without direct OS support have the option of using a system call to obtain the memory they need, just as malloc() itself must do. In any case, the memory obtained is not itself the thread.
Note also that the difference between threading systems with and without OS support is orthogonal to the threading API. For example, Linux's green threads and the now-ubiquitous, kernel-supported NPTL threads both implement the POSIX thread API.
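The stack-is-not-the-thread distinction is visible in the pthreads API itself: thread attributes let you tune (or even supply) the stack, yet the thread remains a separate entity from that memory. A minimal sketch, assuming a platform where 1 MiB satisfies PTHREAD_STACK_MIN:

#include <pthread.h>

static void *task(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    /* Request a 1 MiB stack; the implementation, not a malloc() call in
       our program, decides where that memory actually comes from. */
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    pthread_t tid;
    if (pthread_create(&tid, &attr, task, NULL) == 0)
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}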

Related

How do jemalloc and tcmalloc track threads?

I am currently studying the code of the memory managers jemalloc and tcmalloc, but I can't understand how these two managers track threads.
If I understand correctly, a new thread can be detected during memory allocation, after which a new thread cache is created. But how does tcmalloc/jemalloc detect when a thread is destroyed, so that the thread cache attached to it can be freed for future reuse?
Searching has not turned up even a minimum of useful information.
I can only answer for jemalloc, but the way it works is that when the thread cache is created, it is associated with the thread-specific data for that thread.
When you create thread-specific data, you can give it a 'destructor', which is invoked when the thread is being destroyed. If you're using pthreads, it's the pthread_key_create routine, which is the C way of creating thread-specific data.
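A minimal, hypothetical sketch of that mechanism (the 4 KiB 'cache' and the function names are placeholders, not jemalloc's actual code):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t cache_key;

/* Invoked automatically when a thread that set the key exits. */
static void cache_destructor(void *cache)
{
    printf("thread exiting, releasing its cache\n");
    free(cache);
}

static void *worker(void *arg)
{
    (void)arg;
    void *cache = malloc(4096);            /* stand-in for a thread cache */
    pthread_setspecific(cache_key, cache);
    return NULL;                           /* destructor fires on thread exit */
}

int main(void)
{
    pthread_key_create(&cache_key, cache_destructor);

    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);

    pthread_key_delete(&cache_key);
    return 0;
}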
In the case of jemalloc, there is a bit of code in tcache.h that hooks tcache_thread_cleanup up to the tcache data (my source: jemalloc-3.0.0, around line 143 of tcache.h):
malloc_tsd_funcs(JEMALLOC_INLINE, tcache, tcache_t *, NULL,
    tcache_thread_cleanup)
So when the thread exits, the destructor gets called: it is given the pointer to that thread's cache and runs the tcache_thread_cleanup routine at that time.

pthread_create(3) and memory synchronization guarantee in SMP architectures

I am looking at section 4.11 of The Open Group Base Specifications Issue 7 (IEEE Std 1003.1, 2013 Edition), which spells out the memory synchronization rules. This is the most specific treatment of the POSIX/C memory model I have managed to find in the POSIX standard.
Here's a quote:
4.11 Memory Synchronization
Applications shall ensure that access to any memory location by more
than one thread of control (threads or processes) is restricted such
that no thread of control can read or modify a memory location while
another thread of control may be modifying it. Such access is
restricted using functions that synchronize thread execution and also
synchronize memory with respect to other threads. The following
functions synchronize memory with respect to other threads:
fork() pthread_barrier_wait() pthread_cond_broadcast()
pthread_cond_signal() pthread_cond_timedwait() pthread_cond_wait()
pthread_create() pthread_join() pthread_mutex_lock()
pthread_mutex_timedlock()
pthread_mutex_trylock() pthread_mutex_unlock() pthread_spin_lock()
pthread_spin_trylock() pthread_spin_unlock() pthread_rwlock_rdlock()
pthread_rwlock_timedrdlock() pthread_rwlock_timedwrlock()
pthread_rwlock_tryrdlock() pthread_rwlock_trywrlock()
pthread_rwlock_unlock() pthread_rwlock_wrlock() sem_post()
sem_timedwait() sem_trywait() sem_wait() semctl() semop() wait()
waitpid()
(exceptions to the requirement omitted).
Basically, paraphrasing the above document: when applications read or modify a memory location that another thread or process may modify, they must synchronize thread execution and memory with respect to other threads by calling one of the listed functions. Among those functions, pthread_create(3) is listed as providing that memory synchronization.
I understand this to mean that each of the listed functions must imply some sort of memory barrier (although the standard does not seem to use that concept). So, for example, on returning from pthread_create(), we are guaranteed that the memory modifications made by that thread before the call appear to other threads (possibly running on a different CPU/core) after they, too, synchronize memory. But what about the newly created thread: is there an implied memory barrier before the thread starts running the thread function, so that it unfailingly sees the memory modifications synchronized by pthread_create()? Is this specified by the standard? Or should we provide memory synchronization explicitly in order to trust the correctness of any data we read, according to the POSIX standard?
Special case (which would, as a special case, answer the above question): does a context switch provide memory synchronization? That is, when the execution of a process or thread is started or resumed, is memory synchronized with respect to any memory synchronization performed by other threads of execution?
Example:
Thread #1 creates a constant object allocated from the heap. Thread #1 then creates a new thread #2 that reads data from the object. If we can assume the new thread #2 starts with memory synchronized, then everything is fine. However, if the CPU core running the new thread has a copy of previously allocated but since-discarded data in its cache instead of the new value, it may have a wrong view of the state, and the application may function incorrectly.
More concretely...
Previously in the program (this is the value cached on CPU #0):
int i = 0;
Thread T0 running in CPU #0:
pthread_mutex_lock(...);
int tmp = i;
pthread_mutex_unlock(...);
Thread T1 running in CPU #1:
i = 42;
pthread_create(...);
Newly created thread T2 running in CPU #0:
printf("i=%d\n", i); /* First step in the thread function */
Without a memory barrier, that is, without synchronizing thread T2's memory, the output could be
i=0
(the previously cached, unsynchronized value).
Update:
A lot of applications using the POSIX thread library would not be thread-safe if this implementation craziness were allowed.
is there implied memory barrier before the thread starts running the thread function so that it
unfailingly sees the memory modifications synchronized by pthread_create()?
Yes. Otherwise there would be no point in pthread_create acting as memory synchronization (a barrier).
(As far as I know, this is not explicitly stated by POSIX, nor does POSIX define a standard memory model, so you'll have to decide whether you trust your implementation to do the only sane thing it possibly could: ensure synchronization before the new thread is run. I would not worry particularly about it.)
Special case (which would, as a special case, answer the above question): does a context switch provide memory synchronization? That is, when the execution of a process or thread is started or resumed, is memory synchronized with respect to any memory synchronization performed by other threads of execution?
No, a context switch does not act as a barrier.
Thread #1 creates a constant object allocated from the heap. Thread #1 then creates a new thread #2 that reads data from the object. If we can assume the new thread #2 starts with memory synchronized, then everything is fine. However, if the CPU core running the new thread has a copy of previously allocated but since-discarded data in its cache instead of the new value, it may have a wrong view of the state, and the application may function incorrectly.
Since pthread_create must perform memory synchronization, this cannot happen. Any old data residing in a CPU cache on another core must be invalidated. (Luckily, the commonly used platforms are cache-coherent, so the hardware takes care of that.)
Now, if you change your object after you've created your second thread, you need memory synchronization again so that all parties can see the changes and race conditions are avoided; pthread mutexes are commonly used to achieve that.
Cache-coherent architectures guarantee, from the architectural design point of view, that even separate CPUs (ccNUMA: cache-coherent Non-Uniform Memory Architecture) with independent memory channels will not, when accessing a memory location, run into the incoherency you describe in the example.
This comes with a significant performance penalty, but the application will function correctly.
Thread #1 runs on CPU0 and holds the object's memory in its L1 cache. When thread #2 on CPU1 reads the same memory address (or, more exactly, the same cache line; look up false sharing for more information), it forces a cache miss that pulls the up-to-date line from CPU0 before loading it.
You've turned the guarantee pthread_create provides into an incoherent one. The only thing the pthread_create function could possibly do is establish a "happens before" relationship between the thread that calls it and the newly-created thread.
There is no way it could establish such a relationship with existing threads. Consider two threads, one calls pthread_create, the other accesses a shared variable. What guarantee could you possibly have? "If the thread called pthread_create first, then the other thread is guaranteed to see the latest value of the variable". But that "If" renders the guarantee meaningless and useless.
Creating thread:
i = 1;
pthread_create (...)
Created thread:
if (i == 1)
...
Now, this is a coherent guarantee -- the created thread must see i as 1 since that "happened before" the thread was created. Our code made it possible for the standard to enforce a logical "happens before" relationship, and the standard did so to assure us that our code works as we expect.
Now, let's try to do that with an unrelated thread:
Creating thread:
i = 1;
pthread_create (...)
Unrelated thread:
if (i == 1)
...
What guarantee could we possibly have, even if the standard wanted to provide one? With no synchronization between the threads, we haven't tried to establish a logical happens-before relationship. So the standard can't honor it; there's nothing to honor. There is no particular behavior that is "right", so there is no way the standard can promise us the right behavior.
The same applies to the other functions. For example, the guarantee for pthread_mutex_lock means that a thread that acquires a mutex sees all changes made by, or seen by, any threads that have unlocked the mutex. We logically expect our thread to get the mutex "after" any threads that got the mutex "before", and the standard promises to honor that expectation so our code works.
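As a minimal sketch of that mutex guarantee (variable names are illustrative): whichever thread acquires the lock second is guaranteed to see everything the first thread wrote while holding it.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared = 0;

static void *writer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    shared = 42;                   /* happens before the unlock... */
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);     /* ...which happens before this lock succeeds */
    /* Prints 0 or 42 depending on which thread locked first; whichever
       thread takes the mutex second sees all of the first thread's writes. */
    printf("shared=%d\n", shared);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}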

Freeing memory across threads

Is it bad practice to free memory across threads? That is, a thread allocates memory and, after exiting, passes the pointer to the main thread, which frees it. I feel like the answer is yes, but I'm just wondering.
The purpose of this in my code is so that the main thread can do some other things with the memory before it gets freed. There are plenty of workarounds in my case, which I'm totally fine with using, but having a thread return a void * to a block of memory can, in my case, make the code quite convenient.
EDIT: I know there are no technical faults in doing this.
It's not wrong for a thread to pass control of memory it has allocated to another thread. For example, in a producer/consumer model, it would be very reasonable for the producer thread to allocate memory for whatever it is that it produces, and then hand control over that memory to the consumer thread for the consumer thread to use and release.
It's not "bad practice" as long as it makes sense to your data flow model, and particular to the requirements your program has on object lifetimes, but it can incur costs. Many modern allocators use thread-local arenas, where allocating and freeing an object in the same thread incurs no synchronization penalty, but freeing it in a different thread forces synchronization or incurs other costs. I would not change your design for this reason unless it's a major bottleneck, but with this implementation-detail in mind you could also consider other designs, such as having the thread store its output in a buffer provided by the parent thread in the argument to the thread start function.
All threads share a common heap. It doesn't matter which thread allocates or frees the memory, as long as the other threads are done using the memory when it gets freed.
Dynamic memory usage comes with the responsibility of keeping complete control over it: it is the user's responsibility to explicitly manage the lifetime of a dynamically allocated object and ensure its deallocation once the object's expected lifetime ends. There is nothing wrong with dynamically allocated memory blocks being used across different threads; all the threads in the same process share the same heap area. The only care one needs to take is that object lifetimes are clearly defined and scoped.

C - explicit memory reclamation

I have a number of data structures (trees, queues, lists) created using dynamic allocation routines (malloc, calloc). Under some critical conditions, the program should terminate, and traversing all the objects to free their memory takes a long time.
Is it safe to skip traversing all the data structures just before the program stops? If so, does this apply to all operating systems and environments (e.g., with multiple threads)?
All the memory dynamically allocated by a process is released back to the OS on process termination, whether it terminates intentionally or via a crash. The same happens with files and sockets: reference counts inside the kernel get decremented, and resources get released once there are no more references.
An exception to the above might be shared memory.
When a program (i.e., a process) terminates, all local and heap memory is automatically reclaimed. Note that these memory regions are specific to a process, so you may skip the traversal and deallocation just before program termination. However, if the program uses shared/global memory, then you need to reclaim that explicitly. Finally, this applies at least to Linux/Unix and Windows; I believe it applies to all modern operating systems.
Short answer: yes. In any modern OS, memory is private to each process, and once the process exits, all memory is reclaimed by the OS (unless the OS itself is broken).
You don't have to free() all your dynamically-allocated memory before terminating the program. The operating system releases all the memory that was owned by the process anyway. It also closes any network connections that you had open.

Why are threads called lightweight processes?

A thread is "lightweight" because most of the overhead has already been accomplished through the creation of its process.
I found this in one of the tutorials.
Can somebody elaborate what it exactly means?
The claim that threads are "lightweight" is - depending on the platform - not necessarily reliable.
An operating system thread has to support the execution of native code, e.g. written in C, so it has to provide a decent-sized stack, usually measured in megabytes. If you started 1000 threads (perhaps in an attempt to support 1000 simultaneous connections to your server), you would therefore have a memory requirement of 1 GB in your process before you even start to do any real work.
This is a real problem in highly scalable servers, so they don't use threads as if they were lightweight at all. They treat them as heavyweight resources. They might instead create a limited number of threads in a pool, and let them take work items from a queue.
As this means the threads are long-lived and few in number, it might be better to use processes instead: that way you get address-space isolation, and there isn't really an issue with running out of resources.
In summary: be wary of "marketing" claims made on behalf of threads. Parallel processing is great (increasingly it's going to be essential), but threads are only one way of achieving it.
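A deliberately simplified sketch of that pooled-worker pattern (a real pool would block on a condition variable rather than hand out job IDs from a counter; NWORKERS and NJOBS are arbitrary):

#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NJOBS    16

/* A deliberately tiny work queue: job IDs handed out under a mutex. */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static int next_job = 0;

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        int job = next_job < NJOBS ? next_job++ : -1;
        pthread_mutex_unlock(&qlock);
        if (job < 0)
            break;                     /* queue drained, worker exits */
        printf("worker %ld handling job %d\n", id, job);
    }
    return NULL;
}

int main(void)
{
    pthread_t pool[NWORKERS];
    /* A fixed, small number of long-lived threads... */
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);
    /* ...drain the whole queue, instead of one thread per work item. */
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(pool[i], NULL);
    return 0;
}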
Process creation is "expensive" because it has to set up a complete new virtual memory space for the process, with its own address space. "Expensive" means it takes a lot of CPU time.
Threads don't need to do this; they just change a few pointers around, so creating one is much "cheaper" than creating a process. The reason threads don't need this is that they run in the address space and virtual memory of the parent process.
Every process must have at least one thread, so if you think about it, creating a process means creating the process AND creating a thread. Obviously, creating only a thread takes less time and work from the computer.
In addition, threads are "lightweight" because they can interact without the need for inter-process communication. Switching between threads is "cheaper" than switching between processes (again, just moving some pointers around), and communication between threads is less expensive than inter-process communication.
Threads within a process share the same virtual memory space, but each has a separate stack, and possibly "thread-local storage" if implemented. They are lightweight because a context switch is simply a case of switching the stack pointer and program counter and restoring the other registers, whereas a process context switch involves switching the MMU context as well.
Moreover, communication between threads within a process is lightweight because they share an address space.
process:
process ID
environment
working directory
registers
stack
heap
file descriptors
shared libraries
interprocess communication mechanisms (pipes, semaphores, queues, shared memory, etc.)
other OS-specific resources
thread:
stack
registers
attributes (for the scheduler, such as priority and policy)
thread-specific data
other OS-specific resources
A process contains one or more threads, and a thread can do anything a process can do. Threads within a process share the same address space, so the cost of communication between threads is low: they use the same code section, data section, and OS resources. All these features make a thread a "lightweight process".
Simply because threads share a common memory space: the memory allocated in the main thread is shared by all the other child threads.
In the case of processes, by contrast, each child process needs to have a separate memory space allocated.

Resources