Linux: how to know what tasks and processes are running between 2 points in an application - C

If I have a program written as follows:

#include <stdio.h>

int main(void) {
    // some code
    printf(" hi \n");
    checkpoint1();   // <-------- checkpoint1
    // some code with no dependency on other programs/tasks or hardware I/O
    checkpoint2();   // <-------- checkpoint2
    printf(" test end \n");
    // some code
    return 0;
}
Is there any Linux API, or any other way in Linux, to find out which tasks and processes (including background programs, daemons and services) are running on all cores between the two checkpoints, and for how long they run?
Edit: basically the problem is that in a program running on Linux, the section between the two checkpoints randomly takes too long to complete, even though the program does not depend on any other task/process or on hardware I/O, so I would like to debug this issue.
It would be nice if there were a detailed solution, or links to an API or websites describing one.

Related

Nested Functions lead to Segmentation fault in C on kernel versions > 5.7

First SO question, so here it goes.
I'm not asking for someone to review the code; I want to get to the bottom of this.
It would be helpful if someone knew what change in the kernel could be responsible for the following.
At university we were tasked with implementing extended functionality in a model operating system written in C by my professor, which models each core with a pthread.
Project github forked by me.
We had to implement the necessary functionality by implementing the required syscalls (multithreading, sockets, pipes, MLFQ, etc.).
After implementing each functionality we had to confirm that it was working using the validate_api program.
Problem time:
The validate_api.c contains a lot of tests to check the functionality of the OS.
BOOT_TEST: bare-boots the machine and tests something.
A simple test for creating a new thread inside a process:
BOOT_TEST(test_create_join_thread,
    "Test that a process thread can be created and joined. Also, that "
    "the argument of the thread is passed correctly."
    )
{
    int flag = 0;

    int task(int argl, void* args) {
        ASSERT(args == &flag);
        *(int*)args = 1;
        return 2;
    }

    Tid_t t = CreateThread(task, sizeof(flag), &flag);

    /* Success in creating thread */
    ASSERT(t != NOTHREAD);

    int exitval;
    /* Join should succeed */
    ASSERT(ThreadJoin(t, &exitval) == 0);

    /* Exit status should be correct */
    ASSERT(exitval == 2);

    /* Shared variable should be updated */
    ASSERT(flag == 1);

    /* A second Join should fail! */
    ASSERT(ThreadJoin(t, NULL) == -1);

    return 0;
}
As you can see, there is a nested function called task(), which is the entry point of the thread created with the CreateThread() syscall we implemented.
The problem is that, although the thread is created correctly, when it is scheduled to run the program exits with a segmentation fault: it cannot access the memory of the task function, and gdb doesn't even recognize it as a variable (in the thread struct field pointing to it). The weird thing is that this happens ONLY when using a kernel version newer than 5.7. I opened an issue in the original project's repo.
Running the actual OS and its programs is fine with no issues whatsoever; only validate_api fails, due to that nested function. If I move the task function into global scope, the test finishes successfully. The same goes for every other test that has a nested function inside.
Note: the project has been finished for a month now; I downgraded to 5.4 just to test my implementation.
Note 2: I don't need help with the implementation of any functionality (the project is finished anyway); I just want to figure out why it doesn't work on kernels > 5.7.
Note 3: I'm here because my professor doesn't respond to my repeated emails regarding the issue.
I tried compiling with -fno-stack-protector and with -z execstack, with no luck. Also, simple nested functions like:

int main() {
    void foo() {
        puts("Hello there");
    }
    foo();
}

work with any kernel.
Machine Details:
Arch Linux - 5.10 / 5.4 LTS
GCC 10.2
Thank you
UPDATE:
The test joins the thread, so it never goes out of scope.

STM32 - Why is my function stopped and taken over by another?

I have a main file running on an STM32F4 chip with ThreadX running in the background (part of the WICED SDK, for those of you who know it). In this case I'm not using any of the ThreadX APIs.
My code is set up like this:
void func_a(void) {
    ...
    interact_over_spi();
}

void func_b(void) {
    ...
}

int main() {
    func_a();
    func_b();
}
In func_a(), it interacts with a serial NOR flash chip over SPI. But every time it executes, it seems to be terminated early after a second or so. I noticed this because interact_over_spi() prints messages to the serial console, and then all of a sudden it stops and the messages from func_b() start to print.
I don't even know where to begin troubleshooting this. Is there a limit on how long a function can run? As I said, I'm using ThreadX by default but not using any thread-management functions yet. I also noticed that the problem goes away when I put the device under GDB: having breakpoints in the middle of interact_over_spi() seems to force it to run to completion. Any ideas why?

How to control/restrict another process to run for only a very small amount of time in Linux

The solution in the linked question blocking-and-resuming-execution-of-an-independent-process-in-linux says I can use ptrace to achieve my goal.
I have tried to run the code from "how ptrace work between 2 processes", but I am not getting any output.
Specifically, I am asking how to use ptrace to make a program execute only a few instructions at a time, as a debugger does.
How can I use ptrace in the following situation to restrict another process to a few instructions?
I have two independent C programs under Linux. Program 1 is running on CPU core 1 and Program 2 is running on CPU core 2.
Program 2 executes a shared-library function, func_2, consisting of 200 lines of instructions that perform an operation (add+shift) on data.

shared library function:

func_2()
{
    // code to perform operation (add+shift) on data
}

Program 2:

main()
{
    while(1)
        func_2();
}

Program 1:

main()
{
    while(1)
    {
        // ptrace
        // OR
        // kill -STOP <pid of program2>
        // kill -CONT <pid of program2>
    }
}
I want to restrict Program 2 from inside Program 1 or from bash, so that Program 2 can execute only a few instructions, or is restricted to running for 1-2 microseconds; I can't add any code inside Program 2.
Program 1 knows the PID of Program 2 and the base address of func_2.
I have heard of ptrace, which can be used to control another process, and that with ptrace it is possible to restrict a process to executing only one instruction at a time. For me, even restricting the process for 1-2 microseconds (5-10 instructions) would be sufficient.
How can I control Program 2, which is running on another CPU core? Any links to relevant documents are highly appreciated. Thanks in advance.
I am using gcc under Linux.

Why doesn't Linux prevent spawning infinite number of processes and crashing?

With the very simple code below, my system (Ubuntu Linux 14.04) simply crashes, not even letting my mouse respond. I had to force a shutdown with the power button. I thought Linux was a stable OS, tolerant of such basic program errors. Did I miss something?
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <semaphore.h>
void check(int isOkay){
    if(!isOkay){
        printf("error\n");
        abort();
    }
}

int main(void){
    #define n 1000000
    int array[n];
    sem_t blocker;
    int i;
    while(1){
        if(!fork()){
            for(i = 0; i < n; ++i){
                array[i] = rand();
            }
            check(sem_init(&blocker, 0, 0) == 0);
            check(sem_wait(&blocker) == 0);
        }
    }
    return 0;
}
Congratulations, you've discovered the fork bomb. There are shell one-liners that can wreak the same sort of havoc with a lot less typing on your part.
It is in fact possible to limit the number of processes that a user can spawn using ulimit; see the bottom of the linked Wikipedia article for details.
A desktop install of Ubuntu is not exactly a hardened server, though. It's designed for usability first and foremost. If you need a locked-down system that can't crash, there are better options.
The command ulimit -u shows the maximum number of processes that you can start. However, do not actually start that many processes in the background: your machine would spend all its time switching between processes and never get around to doing actual work.
Linux does its job of processing your request to create a process; it is up to the user to write code that respects this limit.
The main problem here is determining the best limit. A lot of software doesn't use fork() at all, so do you set the limit to something small like 5? Some software might create a new process whenever it receives a request from network, so do you set the limit to "max. number of network packets"? If you assume most software isn't buggy, then you'd be tempted to set the limit relatively high so that correct software works properly.
The other problem is one of scheduling priorities. In a well designed system things like the GUI would be "high priority" and if it wants CPU time it'd preempt normal/lower priority work immediately. If this was the case, a massive fork bomb running at normal/lower priority would have no effect on the system's ability to respond to the user, and the user would be able to kill the fork bomb without much problem.
Sadly, for a variety of reasons, the scheduler in Linux doesn't work like that. It does support priorities, but to use them you have to be a "real-time" process running as root (which is a massive security disaster). Without sane priorities, Linux assumes that every forked process is as important as everything else, so the CPUs end up busy doing the forking and there's no CPU time left to respond to the user.

How to print a string without it getting disordered in a vxWorks multitask environment?

void print_task(void)
{
    for(;;)
    {
        taskLock();
        printf("this is task %d\n", taskIdSelf());
        taskUnlock();
        taskDelay(0);
    }
}

void print_test(void)
{
    taskSpawn("t1", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
    taskSpawn("t2", 100, 0, 0x10000, (FUNCPTR)print_task, 0,0,0,0,0,0,0,0,0,0);
}
The above code shows:
this is task this is task126738208 126672144 this is task this is
task 126712667214438208
this is task this is task 1266721441 26738208 this is task 126672144
this is task
What is the right way to print a string from multiple tasks?
The problem lies in taskLock().
Try a semaphore or mutex instead.
The main idea for printing in a multi-threaded environment is to use a dedicated task that does the printing.
Normally in vxWorks there is a log task that receives log messages from all tasks in the system and prints to the terminal from that one task only.
The main problem with the vxWorks logger mechanism is that the logger task runs at a very high priority and can change your system timing.
Therefore, you should create your own low-priority task that receives messages from the other tasks (using a message queue, shared memory protected by a mutex, …).
In that case there are two great benefits:
First, all system output is printed from one single task.
Second, and most important, the real-time tasks in the system do not lose time in the printf() function.
As you know, printf is a very slow function that uses system calls and will certainly change the timing of your tasks, depending on how much debug output you add.
About taskLock():
taskLock() is a directive to the kernel: it keeps the currently running task in the CPU by disabling preemptive rescheduling (the task remains READY and other tasks are not switched in).
As you can see in the example code, taskUnlock() takes no arguments. The basic reason is to enable the kernel and interrupts to perform a task unlock in the system.
There are many system calls that perform a task unlock (and sometimes interrupt service routines do it as well).
Rather than invent a home-brew solution, just use logMsg(). It is the canonical safe and sane way to print stuff. Internally, it pushes your message onto a message queue; a separate task then pulls messages off the queue and prints them. By using logMsg(), you gain the ability to print from ISRs, avoid interleaved prints from multiple tasks printing simultaneously, and so on.
For example:
printf("this is task %d\n", taskIdSelf());
becomes
logMsg("this is task %d\n", taskIdSelf(), 0,0,0,0,0,0);
