Changing global variable using stat.c - c

I am trying to print logs dynamically. What I have done is define a debug variable which I set in my own stat_my.c file. Below is the show_stat function.
extern int local_debug_lk;

static int show_stat(struct seq_file *p, void *v)
{
    int temp = 0;

    if (local_debug_lk == 0) {
        seq_printf(p, "local_debug_lk=0, enabling,int_num=%d\n", int_num);
        local_debug_lk = 1;
    } else {
        seq_printf(p, "local_debug_lk=%d, int_num=%d\n", local_debug_lk, int_num);
        while (temp < int_num) {
            seq_printf(p, "%d\n", intr_list_seq[temp]);
            temp++;
        }
        local_debug_lk = 0;
        int_num = 0;
    }
    return 0;
}
Driver file
int local_debug_lk, int_num;

isr_root(...)
{
    /* logic to extract the IRQ number, saved in the vect variable */
    if (local_debug_lk && (int_num < 50000)) {
        intr_list_seq[int_num] = vect;
        int_num++;
    }
}
What I expect is that on the first "cat /proc/stat_my", the local_debug_lk flag gets enabled, and whenever an interrupt occurs in the driver, its vector is stored in the intr_list_seq[] array. On the second "cat /proc/stat_my", the recorded IRQ sequence should be printed and IRQ recording disabled by setting local_debug_lk=0.
But what's happening is that I always get the
"local_debug_lk=0, enabling,int_num=0" log on cat; i.e. local_debug_lk is always zero; it never gets enabled.
Also, when my driver is not running, it works fine!
On two consecutive "cat /proc/stat_my", the value is first set to 1 and then back to 0.
Is it possible my driver is not picking up the latest updated value of the local_debug_lk variable?
Could you please let me know what I am doing wrong here?

There can be more calls to the .show function than actual reads of the file (with "cat /proc/stat_my"). Moreover, the underlying seq_file machinery expects stable results from .show: if called with the same parameters, the function should print the same information into the seq_file.
Because of that, switching a flag inside the .show function makes little sense, and making the function's output depend on that flag is simply wrong.
Generally, changing any kernel state when a file is read is not what a user expects. It is better to use write functionality for that, as sketched below.
.show actually prints its information into a temporary kernel buffer. If everything goes well, the information from that buffer is transferred into the user buffer and eventually printed by cat. But if the kernel buffer is too small, whatever was printed into it is discarded; the seq_file machinery then allocates a bigger buffer and calls .show again.
.show is also rerun if the user buffer is too small to accommodate all the information printed.
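A minimal sketch of that approach, assuming a kernel recent enough for struct proc_ops (5.6+); the entry name /proc/stat_my_ctl and the handler names are illustrative, not taken from the question:

#include <linux/proc_fs.h>
#include <linux/uaccess.h>

extern int local_debug_lk;
extern int int_num;

/* Writing "1" enables IRQ recording, anything else disables it. */
static ssize_t stat_my_ctl_write(struct file *file, const char __user *ubuf,
                                 size_t count, loff_t *ppos)
{
    char c;

    if (count < 1)
        return -EINVAL;
    if (copy_from_user(&c, ubuf, 1))
        return -EFAULT;

    local_debug_lk = (c == '1');
    if (!local_debug_lk)
        int_num = 0;            /* reset the recorded sequence when disabling */

    return count;               /* consume the whole write */
}

static const struct proc_ops stat_my_ctl_ops = {
    .proc_write = stat_my_ctl_write,
};

/* In the module init function:
 *     proc_create("stat_my_ctl", 0200, NULL, &stat_my_ctl_ops);
 * After that, "echo 1 > /proc/stat_my_ctl" starts recording, and a plain
 * "cat /proc/stat_my" can dump the sequence without mutating any state.
 */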

Related

Why does printing to stderr cause segmentation fault when dealing with ucontext?

I was working on a project for a course on Operating Systems. The task was to implement a library for dealing with threads, similar to pthreads but much simpler. Its purpose is to practice scheduling algorithms. The final product is a .a file. The course is over and everything worked just fine (in terms of functionality).
Still, I got curious about an issue I faced. In three different functions of my source file, if I add the following line, for instance:
fprintf(stderr, "My lucky number is %d\n", 4);
I get a segmentation fault. The same doesn't happen if stdout is used instead, or if the format string doesn't involve any variables.
That leaves me with two main questions:
Why does it only happen in three functions of my code, and not the others?
Could the creation of contexts using getcontext() and makecontext(), or the switching of contexts using setcontext() or swapcontext(), mess with the standard file descriptors?
My intuition says those functions could be responsible, all the more so since the three functions of my code in which this happens are functions whose contexts other parts of the code switch to, usually by setcontext(), though swapcontext() is used to go to the scheduler for choosing another thread to execute.
Additionally, if that is the case, then:
What is the proper way to create threads using those functions?
I'm currently doing the following:
/*------------------------------------------------------------------------------
 Funct:  Creates an execution context for the function and arguments passed.
 Input:  uc    -> Pointer where the context will be created.
         funct -> Function to be executed in the context.
         arg   -> Argument to the function.
 Return: If the function succeeds, 0 will be returned. Otherwise -1.
------------------------------------------------------------------------------*/
static int create_context(ucontext_t *uc, void *funct, void *arg)
{
    if (getcontext(uc) != 0)            // Gets a context "model"
    {
        return -1;
    }
    void *sp = malloc(STACK_SIZE);      // Stack area for the execution context
    if (!sp)                            // A stack area is mandatory
    {
        return -1;
    }
    uc->uc_stack.ss_sp = sp;            // Sets stack pointer
    uc->uc_stack.ss_size = STACK_SIZE;  // Sets stack size
    uc->uc_link = &context_end;         // Sets the context to run after this one ends
    makecontext(uc, (void (*)(void))funct, 1, arg);  // "Makes everything work" (can't fail)
    return 0;
}
This code is slightly modified, but it originally comes from an online example of how to use ucontext.
Assuming glibc, the explanation is that fprintf with an unbuffered stream (such as stderr by default) internally creates an on-stack buffer which has a size of BUFSIZ bytes. See the function buffered_vfprintf in stdio-common/vfprintf.c. BUFSIZ is 8192, so you end up with a stack overflow because the stack you create is too small.
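A minimal sketch of the corresponding fix, assuming the STACK_SIZE macro from the question: make the context's stack comfortably larger than BUFSIZ so fprintf's on-stack buffer fits.

#include <stdio.h>   /* BUFSIZ */

/* The context's stack has to hold fprintf's internal BUFSIZ-byte buffer
 * (8192 in glibc) plus the frames of fprintf itself, so leave generous
 * headroom; 64 KiB is an arbitrary but safe choice here. */
#define STACK_SIZE (64 * 1024)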

where is file descriptor stored in process memory?

When a function A is called, internally it is a jump to the address of function A: the current point of execution is saved onto the stack, the PC loads the address of the called function, and execution continues there.
To get back to the point of execution after the function call, the function body should have matching pushes and pops on the stack. Normally in C, on exiting a function, the local variables defined in it are destroyed (which I presume means popped off the stack), but I decided to define a file descriptor variable inside my function. The code is below:
#include <stdio.h>
#include <fcntl.h>

void func_call(void);

int main(void) {
    printf("In the beginning there was main()\n");
    func_call();
    printf("func_call complete\n");
    while (1);
}

void func_call(void) {
    int fp;
    // Opening a file to get a handle to it.
    fp = open("stack_flush.c", O_RDONLY);
    if (fp < 0) {
        perror("fp could not open stack_flush.c");
        return;
    }
}
On running this program and checking lsof, I can see that the fd is still open after func_call() has exited:
stack_flu 3791 vvdnlt260 0u CHR 136,1 0t0 4 /dev/pts/1
stack_flu 3791 vvdnlt260 1u CHR 136,1 0t0 4 /dev/pts/1
stack_flu 3791 vvdnlt260 2u CHR 136,1 0t0 4 /dev/pts/1
stack_flu 3791 vvdnlt260 3r REG 8,3 526 24660187 /home/vvdnlt260/Nishanth/test_space/stack_flush.c
I checked the wikipedia entry for file descriptors and I found this:
To perform input or output, the process passes the file descriptor to the kernel through a system call, and the kernel will access the file on behalf of the process. The process does not have direct access to the file or inode tables.
From the above statement it's obvious that the file descriptor's integer value is stored in process memory. But although it was defined in a function, the file descriptor was apparently not local to the function, since it did not get removed on function exit.
So my question is 2 fold:
1) If the file descriptor is part of the func_call() stack, then how does the code return to its pre-function-call execution point although it has not been popped off? Also, in this case, why does it persist after the function call exits?
2) If not part of the func_call() stack where does the file descriptor reside in the process memory?
The variable int fp; is only visible from the function func_call(), and after this function finishes executing it will be popped off the stack; its memory will probably be overwritten when a new function is entered. The fact that you discard some int value referring to the file does not mean that you close said file. What if you did something like:
int global_fd;

void foo() {
    int local_fd = open("bar.txt", O_RDONLY);
    global_fd = local_fd;
}
and called foo()? Would you expect not to be able to use global_fd anymore after foo exits?
It is helpful in this case to think of the file descriptor as a kind of pointer. You ask the kernel to give you the file, and it gives you a value you can use as a token for this specific file; this token is what you use to tell the kernel which file a function like read or lseek should act on. When the token is passed around or destroyed, the file remains open, just as destroying a pointer does not free the allocated memory.
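A short sketch building on that example (it assumes a readable bar.txt exists): the descriptor obtained inside foo() remains usable afterwards, and only an explicit close() releases the open file.

#include <fcntl.h>
#include <unistd.h>

int global_fd = -1;

void foo(void) {
    int local_fd = open("bar.txt", O_RDONLY);
    global_fd = local_fd;   /* the token escapes; the local variable is about to vanish */
}

int main(void) {
    char buf[64];
    ssize_t n;

    foo();
    n = read(global_fd, buf, sizeof(buf));  /* still works: the file is still open */
    close(global_fd);                       /* only this releases the kernel-side state */
    return n < 0;
}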
When you open a file, there's a table in the kernel where file descriptors are stored; so, when you opened your file, you created an entry in that table. If you don't close the file (via its descriptor), the entry is never deleted (which doesn't mean you cannot open the file again).
If the file descriptor is part of the func_call() stack, then how does the code return to its pre-function-call execution point although it has not been popped off? Also, in this case, why does it persist after the function call exits?
As far as I know, there's one stack per process (per thread, strictly speaking), not one per function. So the fp variable is stored on the process's stack, and its slot is simply reclaimed when the function ends.
File descriptors are special. As you know, they're just ints. But they "contain" a fair amount of information about the file being read (the location of the file on disk, the position within the file of the read/write pointer, etc.), so where is that information stored? The answer is that it's stored somewhere in the OS kernel, because it's the kernel's job to manage file I/O for you. When we say that the int referring to the open file is a "file descriptor", we mean that the int refers to information stored somewhere else, sort of like a pointer. That word "descriptor" is important. Another word sometimes used for this sort of situation is "handle".
As you know, the memory for local variables is generally stored on the stack. When you return from a function, releasing the memory for the function's local variables is very simple -- they basically disappear along with the function's stack frame. And when they disappear, they do just disappear: there's no way (in C) to have some action associated with their disappearing. In particular, there's no way to have the effect of a call to close() for variables that happen to be file descriptors.
(If you want to have a cleanup action take place when a variable disappears, one way is to use C++, and use a class variable, and define an explicit destructor.)
A similar situation arises when you call malloc. In this function:
void f()
{
    char *p = malloc(10);
}
we call malloc to allocate 10 bytes of memory and store the returned pointer in the local pointer variable p, which disappears when function f returns. So we lose the pointer to the allocated memory, but since there's no call to free(), the memory remains allocated. (This is an example of a memory leak.)
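For completeness, a corrected sketch: the resource must be released explicitly before the local variable that names it goes out of scope.

#include <stdlib.h>

void f(void)
{
    char *p = malloc(10);
    /* ... use p ... */
    free(p);   /* release the memory while we still hold the pointer */
}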

Why is the data write not reflected to the file using fprintf file stream

This is my program:
#include <stdio.h>

int main() {
    FILE *logh;

    logh = fopen("/home/user1/data.txt", "a+");
    if (logh == NULL) {
        printf("error creating file \n");
        return -1;
    }
    // write some data to the log handle and check if it gets written..
    int result = fprintf(logh, "this is some test data \n");
    if (result > 0)
        printf("write successful \n");
    else
        printf("couldn't write the data to filesystem \n");
    while (1) {
    };
    fclose(logh);
    return 0;
}
When I run this program, I see that the file is created but it does not contain any data. What I understand is that the data is cached in memory before it is actually written to the filesystem, to avoid multiple IOs and increase performance. I also know that I can call fsync/fdatasync inside the program to force a sync, but can I force the sync from outside without having to change the program?
I tried running the sync command from the Linux shell, but it does not make the data appear in the file. :(
Please help if anybody knows an alternative way to do the same.
One useful piece of information: I was researching some more on this and finally found that, to remove the internal buffering altogether, the FILE stream's mode can be set to _IONBF using int setvbuf(FILE *stream, char *buf, int mode, size_t size).
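A minimal sketch of that approach; note that setvbuf must be called after fopen but before any other operation on the stream.

#include <stdio.h>

int main(void)
{
    FILE *logh = fopen("/home/user1/data.txt", "a+");
    if (logh == NULL)
        return -1;

    setvbuf(logh, NULL, _IONBF, 0);              /* must precede any other I/O on logh */
    fprintf(logh, "this is some test data \n");  /* now reaches the kernel immediately */

    while (1) {
    }
}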
The IO functions using FILE pointers cache the data to be written in an internal buffer within the program's memory until they decide to perform a system call to 'really' write it (for normal files, usually when the amount of cached data reaches BUFSIZ).
Until then, there is no way to force the writing from outside the program.
The problem is that your program never closes the file, because of your while statement. Remove these lines:
while (1) {
};
If the intent is to wait forever, then close the file with fclose before entering the while statement.
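A variant of that fix, assuming the stream should stay usable while the process keeps running: flush instead of closing (fflush pushes the stdio buffer into the kernel, after which the data is visible to other processes).

int result = fprintf(logh, "this is some test data \n");
fflush(logh);   /* push the stdio buffer into the kernel right away */

while (1) {
    /* data is now visible in /home/user1/data.txt; logh is still open */
}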

Global variable increments, but when I try to decrement it, it doesn't work

I have a program that communicates through a TCP socket, with a server and a client.
Among other things, I have a buffer of pending requests from the client, and I also have one thread that prints the requests being placed in the buffer by the main thread.
So, for example, I have 3 requests to print 3 files, and the printer_thread has to print the 3 files one after the other. For doing this, I have a function "get" that gets the file to print and a function "put" that puts the files in the buffer. When I get something from the buffer it works pretty well, and the printing of the file works too.
The problem arises when the client wants to know how many files are in the buffer waiting to be printed. I need a counter that is incremented every time I put something in the buffer and decremented every time I get something; something easy.
But it doesn't work: my program only increments the variable and never decrements it.
int count = 0;
struct prodcons buffer;

/* some other code that is not important for now and works well */

void main_thread(int port_number) {
    /* more code */
    put(&buffer, f_open);
    count++;   /* it increments every time I do a put */
    nw = myWriteSocket(sc, "File has been Queued.", ARGVMAX);
    /* more code */
}

void *printing(void *arg) {
    /* variables and other code that works */
    file_desc = get(&buffer);
    count--;   /* it never decrements, but the get works because the files are printed */
}

int main(int argc, char *argv[]) {
    /* more code */
    pthread_create(&printer_thread, NULL, printing, (void *)terminal);
    main_thread(port_number);
}
What can be the problem? Why does the get work, and everything else too, while the count-- doesn't?
Sorry if the question is not well structured.
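A shared counter touched by two threads needs synchronization: without it, each thread may cache count in a register and never observe the other thread's updates, and the ++/-- read-modify-write can be lost outright. A minimal sketch of a mutex-protected counter (the helper names are illustrative, not from the code above):

#include <pthread.h>

static int count = 0;
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;

/* Call with +1 after put(), with -1 after get(). */
static void count_add(int delta)
{
    pthread_mutex_lock(&count_lock);
    count += delta;
    pthread_mutex_unlock(&count_lock);
}

/* Read the current value safely, e.g. to answer the client's query. */
static int count_get(void)
{
    int v;

    pthread_mutex_lock(&count_lock);
    v = count;
    pthread_mutex_unlock(&count_lock);
    return v;
}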

change thread name on linux (htop)

I have a multithreaded application, and I would like htop (for example) to show a different name for each running thread; at the moment it shows the "command line" used to run the main program.
I have tried using
prctl(PR_SET_NAME, .....)
but it works only with top, and with that call it is only possible to specify names up to 16 bytes.
I guessed the trick would be to modify the /proc/PID/cmdline content, but that is a read-only field.
Does anyone know how to achieve this?
Since version 0.8.4, htop has an option: Show custom thread names
Press F2 and select the Display options menu; you should see the Show custom thread names entry there.
You have to distinguish between per-thread and per-process setting here.
prctl(PR_SET_NAME, ...) sets the name (up to 16 bytes) on a per-thread basis, and you can force ps to show that name with the c switch (ps Hcx, for example). You can do the same with the c switch in top, so I assume htop has similar functionality (see the sketch after the code below).
What ps normally shows you (ps Hax, for example) is the command-line name and arguments you started your program with (indeed, what /proc/PID/cmdline tells you), and you can modify those by directly modifying argv[0] (up to its original length); but that is a per-process setting, meaning you cannot give different names to different threads that way.
Following is the code I normally use to change the process name as a whole:
// procname is the new process name
char *procname = "new process name";
// Then let's directly modify the arguments
// This needs a pointer to the original arvg, as passed to main(),
// and is limited to the length of the original argv[0]
size_t argv0_len = strlen(argv[0]);
size_t procname_len = strlen(procname);
size_t max_procname_len = (argv0_len > procname_len) ? (procname_len) : (argv0_len);
// Copy the maximum
strncpy(argv[0], procname, max_procname_len);
// Clear out the rest (yes, this is needed, or the remaining part of the old
// process name will still show up in ps)
memset(&argv[0][max_procname_len], '\0', argv0_len - max_procname_len);
// Clear the other passed arguments, optional
// Needs to know argv and argc as passed to main()
//for (size_t i = 1; i < argc; i++) {
// memset(argv[i], '\0', strlen(argv[i]));
//}
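For the per-thread half, a minimal sketch (to be called from inside each thread; the helper name is illustrative). The kernel truncates the name to 15 characters plus the terminating NUL; glibc's pthread_setname_np() is an alternative that can also name another thread via its handle.

#include <sys/prctl.h>
#include <string.h>

// Sets the calling thread's name, as shown by "ps Hcx", by top with the
// "c" switch, and by htop with "Show custom thread names" enabled.
static void set_thread_name(const char *name)
{
    char buf[16];                     // kernel limit: 15 characters + NUL

    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    prctl(PR_SET_NAME, buf, 0, 0, 0);
}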
