I have recently become interested in fibers on Windows, but I am having a hard time using them. The documentation provides function definitions and some examples, but some things are still not clear to me. I see that CreateFiber is defined as:
LPVOID CreateFiber(
  SIZE_T                dwStackSize,
  LPFIBER_START_ROUTINE lpStartAddress,
  LPVOID                lpParameter
);
So, we specify the stack size, the function for the fiber and possibly a parameter for the function. Now, my questions are:
1) Once a fiber is created, I assume the provided function's execution doesn't start immediately, right? I believe one needs to call ConvertThreadToFiber first. But is there anything else that needs to be done? I mean, in the simplest case, what does defining, initiating, running and deleting a simple fiber look like?
2) Is it possible to somehow check whether we are actually in a fiber? I mean, from some other part of the app, can we tell whether a fiber is currently executing? If yes, how?
3) Is it possible to get the memory location of the fiber's stack and the actual content of the fiber's stack at any moment we wish? If yes, how?
(Disclaimer: I've only written a few test programs that use fibers in order to verify that they were working properly while running under a performance profiler that I was working on at the time.)
1) As you say, a fiber does not run by itself. It only runs when another thread explicitly switches to it by calling SwitchToFiber. Execution then continues on that fiber until it calls SwitchToFiber and switches back to the original thread or another fiber.
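In the simplest case, the whole lifecycle can look something like this (a minimal sketch; error handling omitted, and MyFiberProc is just a name I made up):
#include <windows.h>
#include <stdio.h>

LPVOID g_mainFiber;  /* fiber identity of the original thread */

/* Fiber entry point: runs only while another fiber switches to us. */
VOID CALLBACK MyFiberProc(LPVOID lpParameter)
{
    printf("hello from the fiber, parameter = %p\n", lpParameter);
    /* A fiber must not return from its start routine (that exits the
       thread); switch back to the fiber that resumed us instead. */
    SwitchToFiber(g_mainFiber);
}

int main(void)
{
    /* 1) Turn the current thread into a fiber so it can switch. */
    g_mainFiber = ConvertThreadToFiber(NULL);

    /* 2) Create a second fiber; it does NOT start running yet. */
    LPVOID fiber = CreateFiber(0, MyFiberProc, NULL);

    /* 3) Run it: execution continues inside MyFiberProc until it
       calls SwitchToFiber(g_mainFiber). */
    SwitchToFiber(fiber);

    /* 4) Clean up. */
    DeleteFiber(fiber);
    ConvertFiberToThread();
    return 0;
}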
2) It's unclear to me what you are asking here. If the fiber is the only one calling a particular function, it can set some variable or call a function and you'll know it was there. If multiple fibers are calling the same function, they could each record an identifier (for example, the value returned by GetCurrentFiber()) and you'd be able to infer which fiber called the function. What's the use case here?
3) If the fiber is executing, it has access to its stack/registers in the normal way. I am not aware of a way to arbitrarily access the stack of a fiber that isn't currently scheduled to run on a thread, but I suppose you could record the address of the stack from within the fiber itself.
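For example, the fiber could fill in a record like this (a sketch; the struct and names are made up for illustration):
struct fiber_info {
    LPVOID fiber;        /* identity: value of GetCurrentFiber() */
    void  *stack_probe;  /* address of a local, i.e. somewhere on this fiber's stack */
};

VOID CALLBACK RecordingFiberProc(LPVOID lpParameter)
{
    struct fiber_info *info = (struct fiber_info *)lpParameter;
    int local;                       /* lives on the fiber's stack */
    info->fiber = GetCurrentFiber(); /* which fiber is running right now */
    info->stack_probe = &local;      /* an address inside this fiber's stack */
    /* ... do work, then switch back to whoever resumed us ... */
}
Comparing GetCurrentFiber() against recorded fiber handles also gives you an answer to question 2 from inside the fiber itself.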
For what it's worth, I don't think the fiber support in the Windows API is used much.
I am designing a custom scheduler for Linux 6.0.19. I am a beginner in Linux kernel development. I tried to implement it and am currently facing a difficulty that I cannot figure out.
I want to know how the existing Linux scheduler classes deal with dead tasks. The issue I am facing right now is that when a process terminates, the kernel crashes (null pointer dereference).
Consider the following situation: suppose a process is currently on my scheduler's runqueue substructure within the actual runqueue. Now this process terminates. What happens after that?
Does some code set p->__state to TASK_DEAD, after which the process is dequeued by a call to dequeue_task? Or, based on p->__state, is the necessary dequeue done in pick_next_task_my_sched or put_prev_task_my_sched? (If some other part of the kernel does the dequeue, then my pick_next_task_my_sched should not have to worry about picking a dead task; also, on_rq would be set to 0, and put_prev_task_my_sched should not put the task back on the queue.)
In the file kernel/sched/core.c I found the following comment in the function finish_task_switch:
A task struct has one reference for the use as "current". If a task dies, then it sets TASK_DEAD in tsk->state and calls schedule one last time. The schedule call will never return, and the scheduled task must drop that reference.
I am unable to figure out the code flow that accomplishes this. Can anyone point me to the code regions that do this part? "Drop that reference": is the task dequeued there? When I tried to trace the code flow, I only saw certain fields of the CFS-related data structures being reset; I did not find an explicit call to dequeue_task.
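For context, the surrounding code paths look roughly like this (my paraphrase, heavily simplified from kernel/sched/core.c in a 6.x tree, so details may differ in your version):
/* paraphrased from __schedule(), not literal kernel source */
prev_state = READ_ONCE(prev->__state);
if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
    /* A task that is no longer runnable (including TASK_DEAD) is
       taken off the runqueue here: deactivate_task() sets
       p->on_rq = 0 and calls dequeue_task(). */
    deactivate_task(rq, prev, DEQUEUE_SLEEP | DEQUEUE_NOCLOCK);
}
next = pick_next_task(rq, prev, &rf); /* never sees the dead task */

/* paraphrased from finish_task_switch(), after the switch */
if (unlikely(prev_state == TASK_DEAD)) {
    if (prev->sched_class->task_dead)
        prev->sched_class->task_dead(prev); /* class-specific cleanup */
    /* "drop that reference": the final put on the task struct */
    put_task_struct_rcu_user(prev);
}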
I am trying to implement functionality in a Linux 2.6.32.60 x86 kernel that would allow me to block all system calls based on a field I added to the task struct. This would basically be of the form:
/* current points to the calling task's task_struct */
if (current->added_field == 0) {
    /* do system call normally */
} else {
    /* don't do system call */
}
I was wondering if I should do this directly in entry_32.S or if I would be able to modify the way the syscall table is called elsewhere. The problem with directly modifying entry_32.S is that I don't know if I can access the task struct that is making the call.
Thanks for the help!
The kernel already has a very similar feature, called seccomp (LWN article). You may want to consider basing your feature off of this, rather than implementing something new.
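For example, strict-mode seccomp (the only mode that exists in 2.6.32) confines a task to read, write, _exit and sigreturn; a minimal sketch:
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
    /* After this call, any syscall other than read/write/_exit/sigreturn
       kills the process with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }
    write(1, "still alive\n", 12);
    _exit(0);
}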
If I were to do this, I'd hook into __kernel_vsyscall() and just stop the dispatch if the task structure so indicated per your logic above.
Specifically, arch/i386/kernel/vsyscall-sysenter.S is shared among every process's address space and is the entry point through which all syscalls go. This is the spot just before the actual syscall is dispatched and, in my opinion, the place to put your hook. You are in the process's address space, so you should have access to current for your task structure. (See also arch/sh/kernel/vsyscall/vsyscall.c)
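As a sketch of what the check itself could look like once you can reach the task structure (added_field is the hypothetical field from the question, and the helper name is made up):
#include <linux/linkage.h>
#include <linux/sched.h>
#include <linux/errno.h>

/* Hypothetical helper, called from the syscall entry path just before
   the dispatch through the syscall table. */
asmlinkage long syscall_gate_check(void)
{
    if (current->added_field == 0)
        return 0;    /* proceed with normal dispatch */
    return -EPERM;   /* entry code should skip dispatch and return this */
}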
I'm programming a multithreaded application in C under Windows XP.
I'm looking for a way to run a function right after each context switch between threads of the application (and just before the new thread starts running).
To be more precise, I want to write a different value to a certain memory address, depending on which thread was switched in.
Any suggestions?
Running a function right after each context switch
This is doomed from the beginning: you do not have control over context switches. What would happen if the OS switched again just as your function was called? And then switched back? Would the function run a second time?
If what you want is just to have variables with specific content for each thread, look into Thread Local Storage, as others suggested.
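For example, a minimal sketch using the Win32 TLS API, where each thread reads its own value at the same call site:
#include <windows.h>
#include <stdio.h>

DWORD g_tlsIndex;  /* one slot; a distinct value per thread */

DWORD WINAPI Worker(LPVOID param)
{
    TlsSetValue(g_tlsIndex, param);  /* this thread's private value */
    printf("my value: %p\n", TlsGetValue(g_tlsIndex));
    return 0;
}

int main(void)
{
    g_tlsIndex = TlsAlloc();
    HANDLE h1 = CreateThread(NULL, 0, Worker, (LPVOID)1, 0, NULL);
    HANDLE h2 = CreateThread(NULL, 0, Worker, (LPVOID)2, 0, NULL);
    WaitForSingleObject(h1, INFINITE);
    WaitForSingleObject(h2, INFINITE);
    TlsFree(g_tlsIndex);
    return 0;
}
With MSVC you can often get the same effect with a __declspec(thread) variable instead.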
If what you need is fine-grained and absolute control over sub-process computations and scheduling, use fibers. But this is NOT something you do with a light heart...
I want to alter the Linux kernel so that every time the current PID changes - i.e., a new process is switched in - some diagnostic code is executed (detailed explanation below, if curious). I did some digging around, and it seems that every time the scheduler chooses a new process, the function context_switch() is called, which makes sense (this is just from a cursory analysis of sched.c/schedule() ).
The problem is, the Linux scheduler is basically black magic to me right now, so I'd like to know if that assumption is correct. Is it guaranteed that, every time a new process is selected to get some time on the CPU, the context_switch() function is called? Or are there other places in the kernel source where scheduling could be handled in other situations? (Or am I totally misunderstanding all this?)
To give some context, I'm working with the MARSS x86 simulator trying to do some instrumentation and measurement of certain programs. The problem is that my instrumentation needs to know which executing process certain code events correspond to, in order to avoid misinterpreting the data. The idea is to use some built-in message passing systems in MARSS to pass the PID of the new process on every context switch, so it always knows what PID is currently in execution. If anyone can think of a simpler way to accomplish that, that would also be greatly appreciated.
Yes, you are correct.
schedule() calls context_switch(), which is responsible for switching from one task to another once the new process has been selected by schedule().
context_switch() basically does two things: it calls switch_mm() and switch_to() (sketched below).
switch_mm() - switch to the virtual memory mapping for the new process
switch_to() - switch the processor state from the previous process to the new process (save/restore registers, stack info and other architecture specific things)
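In heavily simplified pseudocode (a paraphrase, not literal kernel source):
/* paraphrase of kernel/sched/core.c:context_switch() */
static inline void context_switch(struct rq *rq,
                                  struct task_struct *prev,
                                  struct task_struct *next)
{
    prepare_task_switch(rq, prev, next);

    /* switch the virtual memory mapping to the new process
       (kernel threads borrow the previous mm instead) */
    switch_mm(prev->active_mm, next->mm, next);

    /* a diagnostic hook here would see next->pid, the PID of the
       task about to be switched in */

    /* switch registers, stack and other per-CPU state */
    switch_to(prev, next, prev);
}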
As for your approach, I guess it's fine. It's important to keep things nice and clean when working with the kernel, and to keep them relatively simple until you gain more knowledge.
I am studying Windows programming, and I have a question.
I saw a security module that protects memory data:
if one process tries to change another process's memory, the module detects this and shuts the offending process down.
This is often used in anti-cheat engines in games and in banking applications. (I live in Korea, so I think this is the best example: almost every online game or banking application here has such a self-defense mechanism.)
My question is: are there any APIs or functions that detect this?
Thanks.
P.S.
To give an example:
say the data at address 0x01000000 is 'A', and some other process changes it to 'B'.
When I first thought about this, I figured I would have to create a thread that polls the data and, if it changes, shuts the process down.
But I don't think this is a good idea. Any suggestions?
General answer to your question: no, there are no such APIs or functions.
But there are different methods by which you can achieve the same result.
1. API hooking. You can hook system functions (such as WriteProcessMemory) and then check whether somebody is trying to change something in your process. More on this here.
2. Debugging. You can set debug breakpoints on functions or on memory changes.
There's an API that allows you to track write operations within a specific memory region, GetWriteWatch:
UINT GetWriteWatch(
  DWORD      dwFlags,
  PVOID      lpBaseAddress,
  SIZE_T     dwRegionSize,
  PVOID      *lpAddresses,
  ULONG_PTR  *lpdwCount,
  LPDWORD    lpdwGranularity
);
When you call the API, it fills the array you provide with the addresses of the pages that have been written to since tracking started (or since the last reset), up to the capacity you pass in *lpdwCount.
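Note that GetWriteWatch only tracks regions you allocated yourself with the MEM_WRITE_WATCH flag; a minimal usage sketch:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 4096 * 16;

    /* The region must be allocated with MEM_WRITE_WATCH for
       GetWriteWatch to track it. */
    BYTE *base = VirtualAlloc(NULL, size,
                              MEM_RESERVE | MEM_COMMIT | MEM_WRITE_WATCH,
                              PAGE_READWRITE);

    base[0] = 'A';      /* touch two pages */
    base[4096] = 'B';

    PVOID addresses[16];
    ULONG_PTR count = 16;  /* in: array capacity, out: entries filled */
    DWORD granularity;     /* receives the page size */

    /* WRITE_WATCH_FLAG_RESET also clears the tracking state. */
    if (GetWriteWatch(WRITE_WATCH_FLAG_RESET, base, size,
                      addresses, &count, &granularity) == 0) {
        for (ULONG_PTR i = 0; i < count; i++)
            printf("written page at %p\n", addresses[i]);
    }

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
This won't catch writes to arbitrary memory in your process, only to regions you set up this way; catching WriteProcessMemory calls from another process generally needs the hooking or debugging approaches above.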