Implementing a user-level threads library: starting a new thread [Homework] - C

I have seen this: Implementing a User-Level Threads Package and it doesn't apply.
While implementing Thread_new(int func(void*)), which allocates a thread and creates its stack, I am unable to think of a way to set the program counter (%eip, if I am correct) so that when the thread is started by the scheduler, it begins at the given function's (func) entry point.
Although I have seen many C-only (no assembly) implementations, we have been given the following code (x86):
_thrstart:
    pushl %edi
    call *%esi
    pushl %eax
    call Thread_exit
Is there a specific reason to push %edi to the stack? I can't seem to find another use for %esi/%edi apart from byte copying.
I realize that the indirect call to *%esi is probably used to call the function from the context of the new thread, but beyond that I don't understand how %esi comes to hold a valid function address by the time _thrstart is called from Thread_new.
NOTES:
Thread_exit is the cleanup function, implemented in C.
This is HOMEWORK

In general, you can break a "scheduler" down into four parts.
The first part is the mechanics of switching from one thread to another. This mostly involves storing the previous thread's state somewhere and loading the next thread's state from somewhere. Here, "somewhere" could be some sort of thread control block, or it could be the thread's stack, or both, or something else. A thread's state may include the contents of general-purpose registers, its stack top (esp), its instruction pointer (eip), and anything else (MMX/SSE/AVX registers). However, for co-operative scheduling a thread's state could be much less (e.g. most of a thread's state is trashed by thread switching, and co-operative scheduling is used so that the thread itself knows when its state is going to be trashed and can prepare for that).
The second part is deciding when to do a thread switch and which thread to switch to. This varies widely for different schedulers.
The third part is starting a thread. This mostly involves constructing the data that would be loaded during a thread switch. However, it's possible to do this in a "lazy" way, where you only create the minimal amount of state when first creating a thread, and then finish creating the remainder of the thread's state after it has been given CPU time.
The fourth part is terminating a thread. This involves destroying/freeing the data that would be loaded during a thread switch; but can also mean cleaning up any resources that the thread failed to release (e.g. file handles, network connections, thread local storage, whatever) so that you don't end up with "resource leaks".
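As a concrete starting point for the first and fourth parts, a minimal thread control block in C might look like the following sketch (the field set and names are illustrative assumptions, not the only way to do it):

typedef enum { READY, RUNNING, BLOCKED, DEAD } ThreadState;

typedef struct Thread {
    void          *sp;      /* saved stack top; callee-saved registers and the
                               resume address live on the stack itself        */
    void          *stack;   /* base of the allocated stack, so that thread
                               termination (part four) can free it            */
    ThreadState    state;   /* used by part two to decide who runs next       */
    struct Thread *next;    /* run-queue link                                 */
} Thread;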

Typically, in simple RTOSes, threads are not started by being called or jumped to - they are started by being returned or interrupt-returned to.
The trick is to assemble data at the top of the new stack so that it looks as if the thread has been running before and has either called the scheduler or entered it via an interrupt. At the bottom of this 'frame' should be the address of the thread function. You can then load the stack pointer with the address of the frame, enable interrupts and perform a RET or IRET to start the thread function.
It's convenient to also first push on a parameter that the new thread can retrieve, and the address of a 'TerminateThread' or 'Thread_exit' routine, so that if the thread function returns, the scheduler can terminate it.

It seems the problem wasn't as complicated as I thought.
Based on the answer given by @Martin James, the stack is prepared so that the return address is the _thrstart function.
Based on the assembly used to perform a context switch, the registers edi and esi are stored in specific locations on the stack while the thread is inactive. Used here as general-purpose registers, edi contains the void* argument and esi contains the address of the function to be called in the new thread.
_thrstart:
    pushl %edi         # push func's void* argument onto the new thread's stack
    call *%esi         # indirect call to func
    pushl %eax         # func's return value arrives in eax; push it
    call Thread_exit   # call thread cleanup
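Putting it together, Thread_new can build the initial stack so that the first context switch into the thread pops func into esi and the argument into edi, then "returns" into _thrstart. A minimal sketch, assuming a context switch that saves ebp, ebx, esi, edi (in that order) below the return address - your handout's switch code may order the slots differently:

#include <stdlib.h>

extern void _thrstart(void);             /* the assembly stub above */

void *Thread_new(int func(void *), void *arg, size_t stack_bytes)
{
    unsigned long *stack = malloc(stack_bytes);
    if (stack == NULL)
        return NULL;
    unsigned long *sp = stack + stack_bytes / sizeof *sp;  /* stack grows down */

    *--sp = (unsigned long)_thrstart;    /* the switch's ret pops this into %eip */
    *--sp = 0;                           /* saved-%ebp slot (value unused)       */
    *--sp = 0;                           /* saved-%ebx slot (value unused)       */
    *--sp = (unsigned long)func;         /* restored into %esi by the switch     */
    *--sp = (unsigned long)arg;          /* restored into %edi by the switch     */

    /* record sp in the new thread's control block so the scheduler can
       switch to it; the TCB bookkeeping is omitted from this sketch */
    return sp;
}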

Related

Context Switching

I'm trying to follow a tutorial implementing a task scheduler on an STM32F407 Discovery board.
There are four functions, each executed for 1 ms at a time before switching to the next.
The tutorial describes the flow like this: save each task's registers (xPSR, PC, LR, R0...R12 - R13, the stack pointer, is what gets loaded into PSP) on that task's stack, then at context-switch time load the next task's saved stack-pointer value into PSP (the process stack pointer). This happens inside the SysTick handler, which triggers at a 1 ms interval.
What I don't understand is: I thought the registers are global, not private like variables inside a function. So how is he saving these register values for each function? This is the given code: https://github.com/niekiran/CortexMxProgramming/blob/master/Source_code/015_task_scheduler/Src/main.c - if anyone can brief me on just the context-switching part, I will be much more confident about what I'm doing.
Thank You
Imagine you could take a photograph of the CPU at some point in time, and that the photograph could show you the individual 1s and 0s in the CPU at that instant. If you had a way to restore the 1s and 0s from your photograph back into the CPU at some point in the future, and you could then let the CPU run, then assuming RAM and ROM contents were unaltered it would continue doing what it had been doing at the point the photograph was taken.
This is essentially what the context switch is doing. It is saving all of the "volatile context" of the CPU: the contents of all of the general purpose registers (including the program counter which tells it which instruction it was executing, roughly speaking, and the stack pointer) as well as the processor status register (PSR). This is sufficient information to allow the CPU to resume again from this exact point at some future time.
On the Cortex-M, there are two stack pointers, and these exist to make this process easier. One or the other of them is always accessible as sp (r13). The way this example is configured, handler-mode code uses the MSP (main stack pointer) and thread-mode code uses the PSP (process stack pointer). The registers r0-r3, r12, lr (r14), pc (r15) and the PSR are pushed to the active stack on entry to handler mode. That just leaves r4-r11 and the stack pointer (r13 in thread mode, but now accessed via the special-purpose PSP register because the handler is using the MSP).
So the context switch grabs the value of PSP, and then pushes r4-r11 to the task's own stack before saving the updated value of the task's stack pointer in its task control block. Now the entire volatile context of the CPU at the point where it entered handler mode has been saved to the stack of the task that was running, and the stack pointer has been saved in the TCB. All that remains is to find a new task to run, get its stack pointer out of its TCB, use it to pop r4-r11, and then update PSP before returning. On exit from handler mode, r0-r3, r12, lr, pc and the PSR will all be popped automatically by the hardware.
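A sketch of what such a handler can look like (C with GCC inline assembly, Cortex-M4; the TCB layout and the scheduler_pick_next() helper are my assumptions, not the tutorial's exact code):

#include <stdint.h>

struct tcb { uint32_t psp; /* saved process stack pointer */ };
struct tcb *current;                    /* the task now running          */
struct tcb *scheduler_pick_next(void);  /* hypothetical round-robin pick */

__attribute__((naked)) void SysTick_Handler(void)
{
    __asm volatile(
        "mrs   r0, psp             \n" /* hardware already pushed xPSR,PC,LR,r12,r3-r0 */
        "stmdb r0!, {r4-r11}       \n" /* save the remaining registers ourselves       */
        "ldr   r1, =current        \n"
        "ldr   r2, [r1]            \n"
        "str   r0, [r2]            \n" /* current->psp = updated task stack pointer    */
        "push  {r3, lr}            \n" /* keep EXC_RETURN (and 8-byte alignment)       */
        "bl    scheduler_pick_next \n" /* next task's TCB returned in r0               */
        "pop   {r3, lr}            \n"
        "ldr   r1, =current        \n"
        "str   r0, [r1]            \n" /* current = next                               */
        "ldr   r0, [r0]            \n" /* r0 = next->psp                               */
        "ldmia r0!, {r4-r11}       \n" /* restore next task's r4-r11                   */
        "msr   psp, r0             \n"
        "bx    lr                  \n" /* exception return; hardware pops the rest     */
    );
}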
So yes, the registers are 'global', kind of, in that the same registers are used by every task. But when a task isn't running, the contents of those registers are stored on its stack, and restored back into the registers when it is next ready to run. That's the purpose of a context switch.

Where is The Value of the Current Stack Pointer Register Stored Before Context Switching In POSIX C Threads

If I were to use pthreads in a POSIX environment and a context switch is about to happen, the current value of the esp register has to be stored somewhere so it can be retrieved when control switches back to this thread, since the esp register will be overwritten with another thread's saved SP value. I think it is impossible to have a separate esp register for every thread (correct me if I am wrong). That said, I would like to know in which data structure the SP value of the current thread is stored right before the context switch happens.
I tried examining the struct pthread* value cast from a pthread_t, but nothing in it changed when I called a test function that should alter the thread's current SP (i.e. comparing the structure before and after the call).
This depends entirely upon how the POSIX library is implemented. If the threads are implemented by the OS, the values of all registers are stored in the thread's [process] context block before a context switch.
If the threads are implemented in a library, the registers have to be stored in some data structure managed by the library. Such a library implementation needs to save all the general registers, but does not need (and is not able) to save the process-specific kernel registers.
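For the library case, the standard <ucontext.h> API makes that "some data structure" concrete: swapcontext() stores the stack pointer (along with the other registers) inside a ucontext_t, which is exactly the kind of per-thread structure a user-level threads library keeps. A minimal sketch:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;
static char thr_stack[64 * 1024];

static void thr_func(void)
{
    puts("in green thread");
    /* saves this context's SP and registers into thr_ctx.uc_mcontext,
       then restores main's saved SP from main_ctx */
    swapcontext(&thr_ctx, &main_ctx);
}

int main(void)
{
    getcontext(&thr_ctx);
    thr_ctx.uc_stack.ss_sp   = thr_stack;
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link          = &main_ctx;
    makecontext(&thr_ctx, thr_func, 0);

    swapcontext(&main_ctx, &thr_ctx);  /* main's esp is saved into main_ctx here */
    puts("back in main");
    return 0;
}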

Implementing switching to a new thread's context in a virtual machine

For a (pet) project, a virtual machine written in pure C, I am developing a threading mechanism. A few notes to better understand the problem:
the virtual machine interprets a sequence of bytecode, more or less similar to the x86 instructions
it has a set of registers, a stack, an IP, etc., which are all grouped into the execution context of the current thread.
each thread has its own execution context, so threads do not mess with each other's data (however, each thread's local stack starts with the part of the global stack that had been filled, up to the point the thread began its own life, with variables from the global context).
the VM has a list of execution contexts, one per thread, and also a current execution context.
the code (the bytes of the bytecode) is stored in a common place.
the threading mechanism is implemented by cycling through the execution contexts, always executing bytecode starting from the current thread's execution context's instruction pointer (IP) (yes, it is a fake multithreaded system for now).
threads are (will be) put in a priority queue, which is updated whenever a thread requires a new priority.
when a new thread is (will be) created, a new execution context is created, populated by the VM with data, and switched to; this thread then runs until the thread scheduler decides it is time to switch to another thread.
And now comes the question:
Based on what should the thread scheduler decide that it is time to switch to another thread automatically (not counting the cases where a thread yields, finishes, or is created)?
I was thinking at the following solutions:
switch to the next thread, chosen by priority, at the completion of each full (CPU-level atomic) instruction (full instruction: mov ax, 13 is always completed as a unit; the scheduler will never switch after just mov ax).
allocate each thread a specific time slice; when the slice expires, switch to the next thread at the first completed instruction after expiry.
What are your suggestions?
Some random thoughts... It depends on why your VM was created. If it simulates some real or imaginary hardware with cycle precision or the like, you have to follow that hardware's specification (though I guess you wouldn't be asking this question in that case :) ). Otherwise, I'd consider performance of the VM a top priority, and for that reason the second solution sounds reasonable, since it is more cache-friendly. But instead of a literal time slice, I'd consider some count-based limit (e.g. a budget of executed instructions), since that, again, is closer to cache efficiency.
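A sketch of that second approach as a dispatch loop with a fixed instruction budget per slice (the opcode names and ExecContext fields are placeholders for whatever your VM defines):

#include <stdint.h>

enum { OP_NOP = 0, OP_HALT = 1 /* ... the rest of your opcode set ... */ };
enum { SLICE = 10000 };          /* instructions per slice; tune to taste */

typedef struct {
    uint8_t *ip;                 /* this thread's instruction pointer */
    /* registers, stack, priority, ... */
} ExecContext;

void run(ExecContext *ctx, int nthreads)
{
    int cur = 0;
    for (;;) {
        ExecContext *c = &ctx[cur];
        for (int budget = SLICE; budget > 0; budget--) {
            uint8_t op = *c->ip++;   /* fetch one full instruction...  */
            switch (op) {            /* ...and execute it atomically   */
            case OP_NOP:  break;
            case OP_HALT: return;
            /* ... decode operands, execute, etc. ... */
            }
        }
        cur = (cur + 1) % nthreads;  /* budget exhausted: next context */
    }
}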

kernel stack & process switching? [duplicate]

I want to learn and fill gaps in my knowledge with the help of this question.
So, a user is running a thread (kernel-level) and it now calls yield (a system call I presume).
The scheduler must now save the context of the current thread in the TCB (which is stored somewhere in the kernel), choose another thread to run, load its context, and jump to its CS:EIP.
To narrow things down, I am working on Linux running on top of x86 architecture. Now, I want to get into the details:
So, first we have a system call:
1) The wrapper function for yield pushes the system-call arguments onto the stack, pushes the return address, and raises an interrupt, with the system-call number loaded into some register (say EAX).
2) The interrupt changes the CPU mode from user to kernel and jumps, via the interrupt vector table, to the actual system-call handler in the kernel.
3) I guess the scheduler gets called now, and it must save the current state into the TCB. Here is my dilemma: since the scheduler will use the kernel stack, not the user stack, for its operation (which means SS and SP have to change), how does it store the user's state without modifying any registers in the process? I have read on forums that there are special hardware instructions for saving state, but then how does the scheduler get access to them, and who runs these instructions, and when?
4) The scheduler now stores the state into the TCB and loads another TCB.
5) When the scheduler runs the original thread again, control gets back to the wrapper function, which clears the stack, and the thread resumes.
Side questions: Does the scheduler run as a kernel-only thread (i.e. a thread which can run only kernel code)? Is there a separate kernel stack for each kernel-thread or each process?
At a high level, there are two separate mechanisms to understand. The first is the kernel entry/exit mechanism: this switches a single running thread from running usermode code to running kernel code in the context of that thread, and back again. The second is the context switch mechanism itself, which switches in kernel mode from running in the context of one thread to another.
So, when Thread A calls sched_yield() and is replaced by Thread B, what happens is:
Thread A enters the kernel, changing from user mode to kernel mode;
Thread A in the kernel context-switches to Thread B in the kernel;
Thread B exits the kernel, changing from kernel mode back to user mode.
Each user thread has both a user-mode stack and a kernel-mode stack. When a thread enters the kernel, the current values of the user-mode stack pointer (SS:ESP) and instruction pointer (CS:EIP) are saved to the thread's kernel-mode stack, and the CPU switches to the kernel-mode stack - with the int $0x80 syscall mechanism, this is done by the CPU itself. The remaining register values and flags are then also saved to the kernel stack.
When a thread returns from the kernel to user-mode, the register values and flags are popped from the kernel-mode stack, then the user-mode stack and instruction pointer values are restored from the saved values on the kernel-mode stack.
When a thread context-switches, it calls into the scheduler (the scheduler does not run as a separate thread - it always runs in the context of the current thread). The scheduler code selects a process to run next, and calls the switch_to() function. This function essentially just switches the kernel stacks - it saves the current value of the stack pointer into the TCB for the current thread (called struct task_struct in Linux), and loads a previously-saved stack pointer from the TCB for the next thread. At this point it also saves and restores some other thread state that isn't usually used by the kernel - things like floating point/SSE registers. If the threads being switched don't share the same virtual memory space (i.e. they're in different processes), the page tables are also switched.
So you can see that the core user-mode state of a thread isn't saved and restored at context-switch time - it's saved and restored to the thread's kernel stack when you enter and leave the kernel. The context-switch code doesn't have to worry about clobbering the user-mode register values - those are already safely saved away in the kernel stack by that point.
What you missed in step 2 is that the stack is switched from the thread's user-level stack (where you pushed the arguments) to the thread's protected-level stack. The context of the thread interrupted by the syscall is actually saved on this protected stack. Inside the ISR, just before entering the kernel proper, this protected stack is switched again to the kernel stack you are talking about. Once inside the kernel, kernel functions such as the scheduler's eventually use the kernel stack. Later, when a thread is elected by the scheduler and the system returns to the ISR, it switches back from the kernel stack to the newly elected thread's protected-level stack (or the former thread's, if no higher-priority thread is active), which by then contains the new thread's context. The context is therefore restored from this stack automatically (the details depend on the underlying architecture). Finally, a special instruction restores the last sensitive registers, such as the stack pointer and the instruction pointer. Back in userland...
To sum up, a thread (generally) has two stacks, and the kernel itself has one. The kernel stack is wiped at the end of each entry into the kernel. It's interesting to point out that since 2.6, the kernel itself is threaded for some processing, so a kernel thread has its own protected-level stack besides the general kernel stack.
Some resources:
Section 3.3.3, "Performing the Process Switch", of Understanding the Linux Kernel (O'Reilly).
Section 5.12.1, "Exception- or Interrupt-Handler Procedures", of Intel's manual 3A (system programming). The chapter number may vary from one edition to another, so a lookup of "Stack Usage on Transfers to Interrupt and Exception-Handling Routines" should get you to the right one.
Hope this helps!
The kernel itself has no stack at all. The same is true for a process: it has no stack either. Threads are the only system citizens considered execution units; consequently, only threads can be scheduled, and only threads have stacks. But there is one point that kernel-mode code exploits heavily: at every moment, the system works in the context of the currently active thread. Because of this, the kernel can reuse the stack of the currently active thread. Note that only one of them can execute at any given moment: either kernel code or user code. So when the kernel is invoked, it just reuses the thread's stack and performs a cleanup before returning control to the interrupted activity in the thread. The same mechanism works for interrupt handlers, and the same mechanism is exploited by signal handlers.
In turn, the thread's stack is divided into two isolated parts: one called the user stack (because it is used when the thread executes in user mode), and the other called the kernel stack (because it is used when the thread executes in kernel mode). Once a thread crosses the border between user and kernel mode, the CPU automatically switches it from one stack to the other. Both stacks are tracked by the kernel and the CPU, but differently. For the kernel stack, the CPU permanently keeps a pointer to the top of the thread's kernel stack. This is easy, because that address is constant for the thread. Each time a thread enters the kernel it finds an empty kernel stack, and each time it returns to user mode it leaves the kernel stack clean. At the same time, the CPU does not keep a pointer to the top of the user stack while the thread runs in kernel mode. Instead, on entry to the kernel, the CPU creates a special "interrupt" stack frame on top of the kernel stack and stores the user-mode stack pointer in that frame. When the thread exits the kernel, the CPU restores ESP from the previously created "interrupt" stack frame, immediately before clearing it. (On legacy x86, the int/iret pair of instructions handles entry to and exit from kernel mode.)
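For concreteness, here is the "interrupt" frame the CPU itself builds on the kernel stack when a user-mode thread is interrupted on 32-bit x86, shown as a C struct with the lowest address first (this reflects the architecture, not any particular kernel's headers):

#include <stdint.h>

struct x86_interrupt_frame {
    uint32_t eip;     /* user-mode return address */
    uint32_t cs;      /* user-mode code segment   */
    uint32_t eflags;  /* user-mode flags          */
    uint32_t esp;     /* user-mode stack pointer  */
    uint32_t ss;      /* user-mode stack segment  */
};  /* iret consumes this frame on the way back to user mode */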
On entry to kernel mode, immediately after the CPU has created the "interrupt" stack frame, the kernel pushes the contents of the rest of the CPU registers onto the kernel stack. Note that it saves values only for those registers which can be used by kernel code. For example, the kernel doesn't save the contents of the SSE registers, simply because it will never touch them. Similarly, just before asking the CPU to return control to user mode, the kernel pops the previously saved contents back into the registers.
Note that systems such as Windows and Linux have a notion of a system thread (frequently called a kernel thread; I know it is confusing). System threads are a kind of special thread, because they execute only in kernel mode and therefore have no user part of their stack. The kernel employs them for auxiliary housekeeping tasks.
Thread switching is performed only in kernel mode. That means both the outgoing and the incoming thread run in kernel mode, both use their own kernel stacks, and both kernel stacks have "interrupt" frames with pointers to the tops of the user stacks. The key point of the thread switch is the switch between the threads' kernel stacks, as simple as:
pushad                              ; save context of outgoing thread on the top of its kernel stack
; here the kernel uses the kernel stack of the outgoing thread
mov [TCB_of_outgoing_thread], esp
mov esp, [TCB_of_incoming_thread]
; here the kernel uses the kernel stack of the incoming thread
popad                               ; restore context of incoming thread from the top of its kernel stack
Note that there is only one function in the kernel that performs the thread switch. Because of this, each time the kernel switches stacks it can find the context of the incoming thread on top of the stack, precisely because, every time before a stack switch, the kernel pushes the context of the outgoing thread onto its stack.
Note also that every time after the stack switch, and before returning to user mode, the kernel reloads the CPU's record of the kernel-stack top with the new value (on x86 this is kept in the TSS). Doing this ensures that when the newly active thread tries to enter the kernel in the future, the CPU will switch it to its own kernel stack.
Note also that not all registers are saved on the stack during a thread switch; some, like the FPU/MMX/SSE registers, are saved in a specially dedicated area in the TCB of the outgoing thread. The kernel employs a different strategy here for two reasons. First, not every thread in the system uses them; pushing their contents to, and popping them from, the stack for every thread would be inefficient. Second, there are special instructions for "fast" saving and loading of their contents, and those instructions don't use the stack.
Note also that the kernel part of a thread's stack has a fixed size and is allocated as part of the TCB. (This is true for Linux, and I believe for Windows too.)

Access ebp when given eip

I am trying to develop a runtime stack tracer. I have a function that returns the EIP address whenever the program being traced segfaults. How can I get back to the ebp of the current function (the one during which the program under observation crashed) so that I can start tracing up?
There is no way to convert an instruction pointer to a stack frame pointer. The same function may be invoked many times (even recursively) with different stack addresses; that's the whole point of having a call stack. If you have a crash dump file (core file, etc.) it should contain a dump of all the registers; if you want the register values, you must read them from there.
The current ebp and esp (and all the other registers) at the time of the segfault are available in the ucontext, which is passed as the third argument to the signal handler. The details of what's where in the ucontext are OS- and CPU-specific.
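For example, on 32-bit x86 Linux with glibc, a handler installed with SA_SIGINFO can pull eip and ebp out of the ucontext like this (a sketch; the gregs indices are Linux/x86-specific, and on x86-64 you would use REG_RIP/REG_RBP instead):

#define _GNU_SOURCE            /* exposes REG_EIP/REG_EBP in <sys/ucontext.h> */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static void on_segv(int sig, siginfo_t *si, void *uc_void)
{
    ucontext_t *uc = uc_void;
    unsigned long eip = uc->uc_mcontext.gregs[REG_EIP];
    unsigned long ebp = uc->uc_mcontext.gregs[REG_EBP];
    fprintf(stderr, "SIGSEGV at eip=%#lx ebp=%#lx (fault address %p)\n",
            eip, ebp, si->si_addr);
    /* from here, walk the saved-ebp chain to print a backtrace */
    _exit(1);
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    *(volatile int *)0 = 42;   /* force a segfault to demonstrate */
    return 0;
}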
