How to portably share a variable between threads/processes? - c

I have a server that spawns a new process or thread for every incoming request, and I need to read and write a variable defined in this server from both the threads and the processes. Since the server program needs to work on both UNIX and Windows, I need to share the variable in a portable way, but how do I do it?
I need to use the standard C library or the native syscalls, so please don’t suggest third party libraries.

Shared memory is operating-system specific. On Linux, consider reading shm_overview(7) and (since with shared memory you always need some way to synchronize) sem_overview(7).
You will of course need to find the corresponding (but probably not equivalent) Windows API calls.
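To make the Linux side concrete, here is a minimal sketch of a shared counter kept in a shm_open(3) segment and guarded by a process-shared unnamed semaphore. The segment name "/demo_shm" and the layout are illustrative only (on older glibc, link with -lrt -pthread):

#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Layout of the shared segment: one semaphore plus the shared variable. */
struct shared {
    sem_t lock;
    int   counter;
};

int main(void)
{
    /* The first process to O_EXCL-create the segment initializes it; a real
     * program must also make late-comers wait until that is finished. */
    int fd = shm_open("/demo_shm", O_CREAT | O_EXCL | O_RDWR, 0600);
    int creator = (fd != -1);
    if (!creator)
        fd = shm_open("/demo_shm", O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    if (creator && ftruncate(fd, sizeof(struct shared)) == -1) {
        perror("ftruncate");
        return 1;
    }

    struct shared *sh = mmap(NULL, sizeof *sh, PROT_READ | PROT_WRITE,
                             MAP_SHARED, fd, 0);
    if (sh == MAP_FAILED) { perror("mmap"); return 1; }

    if (creator)
        sem_init(&sh->lock, 1 /* shared between processes */, 1);

    sem_wait(&sh->lock);            /* synchronize access across processes */
    sh->counter++;
    printf("counter is now %d\n", sh->counter);
    sem_post(&sh->lock);

    munmap(sh, sizeof *sh);
    close(fd);
    return 0;
}

On Windows you would build the same structure with CreateFileMapping()/MapViewOfFile() and a named mutex or semaphore instead.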
Notice that threads are not the same as processes. Threads by definition share a single common address space, so with threads the main issue is mostly synchronization, often using mutexes (e.g. pthread_mutex_lock etc.). On Linux, read a pthreads tutorial and pthreads(7).
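For the threads case, a minimal pthreads sketch of a mutex-protected shared variable (illustrative; compile with -pthread):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter;              /* the variable shared by all threads */

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);          /* serialize access to the variable */
    shared_counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("shared_counter = %d\n", shared_counter);
    return 0;
}

On Windows the equivalent primitives would be CreateThread() and a CRITICAL_SECTION or SRW lock.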
Recall that several libraries (glib, QtCore, Poco, ...) provide useful abstractions above operating-system specific functionality, but you seem to want to avoid them.
Finally, I am not at all sure that sharing a variable as you ask is the best way to achieve your goals. I would definitely consider some message-passing approach with an event loop (see pipe(7) and poll(2)), perhaps with a textual protocol à la JSON.
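To give the message-passing idea some shape, here is a minimal pipe(2) sketch in which a child sends a textual message that the parent reads; a real server would watch many such descriptors with poll(2) in its event loop (the JSON-ish payload is purely illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: writes one message */
        close(fds[0]);
        const char *msg = "{\"counter\": 42}\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        _exit(0);
    }

    close(fds[1]);                         /* parent: reads the message */
    char buf[128];
    ssize_t n = read(fds[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent got: %s", buf);
    }
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}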

Related

Is it possible for one process to change the value of a variable in another process at runtime?

Is it possible for one executable (process) to modify the value of a variable of another executable (process) during runtime?
Yes, it is possible. For example, Linux provides the ptrace system call, by which you can not only examine but also change the tracee's memory. From ptrace(2) [emphasis added]:
The ptrace() system call provides a means by which one process (the "tracer") may observe and control the execution of another process (the "tracee"), and examine and change the tracee's memory and registers. It is primarily used to implement breakpoint debugging and system call tracing.
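To make that concrete, here is a minimal sketch that attaches to a running process and overwrites one word of its memory. The target address is an assumption passed on the command line; in reality you would obtain it from the tracee's symbol table or via some agreed-upon channel:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

/* Usage: ./poke <pid> <hex-address> <value>   (Linux-specific sketch) */
int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s pid addr value\n", argv[0]);
        return 1;
    }
    pid_t pid  = (pid_t)strtol(argv[1], NULL, 10);
    void *addr = (void *)strtoul(argv[2], NULL, 16);
    long value = strtol(argv[3], NULL, 10);

    if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
        perror("PTRACE_ATTACH");
        return 1;
    }
    waitpid(pid, NULL, 0);                  /* wait for the tracee to stop */

    errno = 0;
    long old = ptrace(PTRACE_PEEKDATA, pid, addr, NULL);
    printf("old word at %p: %ld\n", addr, old);
    if (ptrace(PTRACE_POKEDATA, pid, addr, (void *)value) == -1)
        perror("PTRACE_POKEDATA");

    ptrace(PTRACE_DETACH, pid, NULL, NULL); /* let the tracee run again */
    return 0;
}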
Possible, yes, but not guaranteed.
In normal operation, the address space of each process is completely separate.
Processes can use shared memory to allow access to the same memory region in different processes (noting that the address at which the region is visible in each process is not necessarily the same). In most operating systems, you can even map files that way.
Operating systems provide various facilities to help with development and debugging. In Linux and BSDs (originally from Unix System V), the ptrace interface is probably the most powerful. In general, the interface works between processes running under the same user ID, and requires superuser privileges to use otherwise. (In Linux, depending on the kernel configuration, it may also be possible to manipulate the memory contents each process sees directly, via /proc/PID/mem. This too has similar security considerations.)
In Linux, a process can call prctl(PR_SET_DUMPABLE, 0uL) to make itself and its children not ptraceable. This is common for example when a privileged service starts a helper process that does something on behalf of an unprivileged user, but that helper process needs to be secure against manipulation by that user (for example, the helper process returns some privileged data the user should not be able to spoof or fake).
(In fact, if a process changes its identity via the seteuid(), setegid(), setfsuid(), setfsgid(), or related calls, or the process was executed as a setuid/setgid binary, or gained extra capabilities based on filesystem capabilities for its binary, the kernel automatically does an equivalent of the above prctl() call, disallowing ptracing such processes.)
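The opt-out call mentioned above is a one-liner; a minimal sketch:

#include <stdio.h>
#include <sys/prctl.h>

int main(void)
{
    /* After this, same-UID processes can no longer ptrace-attach to us,
     * and no core dump is produced if we crash. */
    if (prctl(PR_SET_DUMPABLE, 0) == -1)
        perror("prctl");
    /* ... do work that must not be observable or manipulable ... */
    return 0;
}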
The dynamic linker in Linux can also be used to interpose or inject code into any (non-setuid/setgid/capability-gaining) process the user starts, by specifying the paths to the extra dynamic libraries in the LD_PRELOAD environment variable. This allows things like replacing standard C library functions with your own wrappers. The ELF executable format also supports "constructors" and "destructors": functions that get called automatically when the binary starts and exits (before and after main(), that is). Together these let you essentially inject a small service into any process you start (running with your own user account and privileges), which you can remotely connect to and use to do mischief to the process, like redirecting files or changing data stored at some memory addresses.
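A sketch of such an injected library; the file name and messages are illustrative. Build it with gcc -shared -fPIC -o inject.so inject.c and start any program with LD_PRELOAD=./inject.so in its environment:

#include <stdio.h>

/* Runs automatically before the host program's main()... */
__attribute__((constructor))
static void injected_init(void)
{
    fprintf(stderr, "inject.so: loaded before main()\n");
}

/* ...and this runs after it exits. */
__attribute__((destructor))
static void injected_fini(void)
{
    fprintf(stderr, "inject.so: host process is exiting\n");
}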
As you can see, the better question is how and when one process can change the value of a variable in another process at runtime. And the answer to that depends on the situation. The most common case is to have the two processes talk to each other -- interprocess communication --, so that the target process actually does the modification when asked to do so by the other process. The solutions vary depending on the exact situation -- and the OS used, of course; my answer here is specific to Linux, but similar or related features are available in all operating systems, they just vary a bit.

passing variables to a process from linux kernel

I want to make a program that will gather information about a user's keystrokes (keycode, press and release times) and use them as a biometric for authenticating the user continuously. My approach is to gather the keystrokes using a kernel module (because you can't just kill a kernel module); the kernel module will then send the information to another process, which will analyze the gathered data, save it to a database, and return an answer to the kernel (the user is authenticated or not), and the kernel will lock the computer if the user is not authenticated. The module will not be distributed.
My questions are:
1. How can I call a process from the kernel and also send it the data?
2. How can I return a message to the kernel from the process?
Basile Starynkevitch's answer and his arguments notwithstanding, there is an approach you can take that is perfectly correct and technically allowed by the Linux kernel.
Register a keyboard notifier callback function using the kernel call register_keyboard_notifier() in your kernel module. As a matter of fact, it is designed for exactly this!
Your notifier callback function will look something like this (the second argument is the event type, e.g. KBD_KEYCODE; the keycode itself arrives in the keyboard_notifier_param):

int keysniffer_callback(struct notifier_block *nb,
                        unsigned long action,
                        void *data)
{
        struct keyboard_notifier_param *param = data;

        /* do something with param->value (keycode) and param->down */
        return NOTIFY_OK;
}
See https://www.kernel.org/doc/Documentation/input/notifier.txt for starters.
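For completeness, a sketch of wiring the callback into a module; kernel-side code, error handling trimmed, with keysniffer_callback being the function shown above:

#include <linux/init.h>
#include <linux/keyboard.h>
#include <linux/module.h>
#include <linux/notifier.h>

static struct notifier_block keysniffer_blk = {
    .notifier_call = keysniffer_callback,
};

static int __init keysniffer_init(void)
{
    register_keyboard_notifier(&keysniffer_blk);
    return 0;
}

static void __exit keysniffer_exit(void)
{
    unregister_keyboard_notifier(&keysniffer_blk);
}

module_init(keysniffer_init);
module_exit(keysniffer_exit);
MODULE_LICENSE("GPL");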
I want to make a program that will gather information about the keystrokes of a user
That should in practice go into your display server, which you did not mention (Xorg, Wayland, Mir, ...?). Details matter a great deal!
My approach is to gather the keystrokes using a kernel module
I strongly believe this is a wrong approach, you don't need any kernel module.
I want to make a program that gathers data about the user keystrokes
Then use ordinary Unix machinery. The keyboard is some character device (and you could have several keyboards, or none, or some virtual one...), and you could read(2) from it. If you want to write a keylogger, please say so explicitly.
(Be aware that a keylogger or any other cyber-spying activity can be illegal when used without consent and permission; in most countries it could send you to jail. In France, Article 323-1 du Code Pénal punishes it with at least two years of jail, and most other countries have similar laws.)
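For illustration, this is what reading key events from an evdev character device looks like in userland; /dev/input/event0 is an assumption (find your keyboard under /dev/input/by-id/), and you need read permission on the device node:

#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type == EV_KEY)   /* value: 1 = press, 0 = release, 2 = repeat */
            printf("code=%u value=%d\n", ev.code, ev.value);
    }
    close(fd);
    return 0;
}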
the kernel module will send the information to another process [....] it will save it to a database
This is very difficult to achieve in practice (and you seem confused). Databases live in user-land (e.g. some RDBMS like PostgreSQL, or some library accessing files like SQLite). Notice that a kernel driver cannot (easily and reliably) even access files.
All application programs (and most daemons and servers) on Linux are started with execve(2) (e.g. by some Unix shell process, or by some daemon, etc.), and I see no reason for you to make an exception. A few programs (mostly init, but also e.g. /sbin/hotplug) are started by the kernel, but this is exceptional (and should be avoided; you don't need it).
How can I call a process from the kernel
You should not do that. I see no reason for your program to avoid being started by execve from some other process (perhaps your init, e.g. systemd).
and also send him the data?
Your process, like all other processes, interacts with the kernel through system calls (listed in syscalls(2)). So your application program could use read(2), write(2), poll(2), etc. Also be aware of netlink(7).
how can I return a message to the kernel from the process?
You don't. Use system calls, initiated by application code.
the kernel will lock the computer if the user is not authenticated.
This does not make sense. Screen locking is a GUI artefact, so it is done not by kernel code but by ad-hoc daemon processes. Of course some processes continue to run while locking is enabled, and many processes are daemons or servers which don't belong to "the" user (and keep running when "the computer is locked"). At heart, Linux and POSIX form a multi-user, multi-tasking operating system. Even on a desktop Linux system used by a single physical person, there are dozens of users (i.e. uid-s, many of them specialized for a particular feature; look into your /etc/passwd file, see passwd(5)) and more than a hundred processes (each with its own pid); use top(1) or ps(1) (as ps auxw) to list them.
I believe you have the wrong approach. First take several days or weeks to understand more about Linux from the application point of view: read some book about Linux programming, e.g. ALP or something newer, and also something like Operating Systems: Three Easy Pieces.
Be aware that in practice, most Linux systems with a desktop environment use some display server, so the (physical) keyboard is handled by the X11 or Wayland server. You need to read more about your display server (with X11, things like EWMH).
Hence, you need to be much more specific. You are likely to need to interact with the display server, not the kernel directly.
Finally, a rule of thumb is to avoid bloating your kernel with extra and useless driver code. You can very probably do your thing entirely in userland.
So, spend a week or more reading about OSes and Linux before writing a single line of code. Avoid kernel modules: they will bite you, and you probably don't need them (but you might need to hack your display server, or simply your window manager; of course the details differ between X11 and Wayland). Read also about multiseat Linux systems.
Lastly, most Linux distributions are made of free software whose source code you can study, so take the time to look into the source code of software relevant to your (ill-defined) goals. Also use strace(1) to understand the system calls actually made by commands and processes.

How does N<->1 threading model work?

In continuation of a previous question, this is an additional query on the N:1 threading model.
It is taught that, before designing an application, care must be taken in selecting the threading model.
In the N:1 threading model, a single kernel thread is available to work on behalf of each user process, and the OS scheduler gives a single CPU time slice to this kernel thread.
In user space, the programmer would use either POSIX pthreads or Windows CreateThread() to spawn multiple threads within a user process. Since the programmer used POSIX pthreads or Windows CreateThread(), the kernel is aware of the user-land threads, and each thread is considered for processor time assignment by the scheduler. So that means every user thread gets a kernel thread.
My question:
So how can the N:1 threading model exist at all? It would be a 1:1 threading model. Please clarify.
In user space, the programmer would use either POSIX pthreads or Windows CreateThread() to spawn multiple threads within a user process. Since the programmer used POSIX pthreads or Windows CreateThread(), the kernel is aware of the user-land threads, and each thread is considered for processor time assignment by the scheduler. So that means every user thread gets a kernel thread.
That's how 1-to-1 threading works.
It doesn't have to be that way, though. A platform can implement pthread_create, CreateThread, or whatever other "create a thread" function it offers however it wants.
My question:
So how can the N:1 threading model exist at all? It would be a 1:1 threading model.
Please clarify.
Precisely as you explained at the beginning of your question: when the programmer creates a thread, instead of creating a thread the kernel is aware of, the library creates a thread that the userland scheduler is aware of, still using a single kernel thread for the entire process.
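To illustrate that, here is a minimal N:1 sketch using the (old, glibc-provided) ucontext(3) primitives: two flows of control multiplexed onto one kernel thread, with yields done entirely in userland. All names are illustrative:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];      /* stack for the user-level thread */

static void task(void)
{
    puts("user-level thread: first slice");
    swapcontext(&task_ctx, &main_ctx);  /* "yield" back to the scheduler */
    puts("user-level thread: second slice");
}                                       /* returning continues at uc_link */

int main(void)
{
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;
    makecontext(&task_ctx, task, 0);

    swapcontext(&main_ctx, &task_ctx);  /* "schedule" the user-level thread */
    puts("scheduler: task yielded");
    swapcontext(&main_ctx, &task_ctx);  /* resume it */
    puts("scheduler: task finished");
    return 0;
}

The kernel sees a single thread throughout; the "context switches" are just userland stack switches, which is also why a blocking system call in one user-level thread stalls all of them.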
Short answer: there is more out there than Windows and Linux.
Slightly longer answer:
Many programming languages and frameworks expose multithreading to the programmer. At the same time, they aim to be portable, i.e., it is not known whether a given target platform supports threads at all. Here, the best way is to implement N:1 threading, either in general or at least for the backends without threading support.
The classic example is Java: the language supports multithreading, while JVMs exist even for very simple embedded platforms that do not support threads. However, there are JVMs (actually, most of them) that do use kernel threads (e.g., AFAIK, the JVM by Sun/Oracle).
Another reason a language or platform may not want to hand threading control entirely to the operating system is particular implementation features, such as reactor models or global language locks. Here, the objective is to exploit knowledge about special execution patterns in the user-level runtime system (which does the local scheduling) that the OS scheduler has no access to.
Does [1:1 threading] add more space occupancy in the user process's virtual address space because of these kernel threads?
Well, in theory, execution flows (processes, threads, etc.) and address spaces are independent concepts. One can find all kinds of mappings between processes (used here as a general term) and memory spaces: 1:1, n:1, 1:n, n:n. However, the classic approach to threading is that the several threads of a process share the memory space of the task that owns it. Thus there is usually no difference between user threads and kernel threads regarding the memory space. (One exception is, e.g., the Erlang VM: there, user threads have isolated memory spaces.)

What are the disadvantages of Linux's message queues?

I am working on a message queue used for communication among processes on embedded Linux. I am wondering why I shouldn't use the message queues provided by Linux, namely:
msgctl, msgget, msgrcv, msgsnd,
instead of creating shared memory and synchronizing with a semaphore.
What is the disadvantage of using this set of functions directly in a commercial embedded product?
The functions msgctl(), msgget(), msgrcv(), and msgsnd() are the 'System V IPC' message queue functions. They'll work for you, but they're fairly heavy-weight. They are standardized by POSIX.
POSIX also provides a more modern set of functions, mq_close(), mq_getattr(), mq_notify(), mq_open(), mq_receive(), mq_send(), mq_setattr(), and mq_unlink() which might be better for you (such an embarrassment of riches).
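For a flavor of the POSIX API, a minimal send-and-receive sketch; the queue name is illustrative, and on older glibc you must link with -lrt:

#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *msg = "hello";
    if (mq_send(mq, msg, strlen(msg) + 1, 0) == -1)   /* priority 0 */
        perror("mq_send");

    char buf[64];                   /* must be at least mq_msgsize bytes */
    if (mq_receive(mq, buf, sizeof buf, NULL) >= 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mq");          /* remove the queue name */
    return 0;
}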
However, you will need to check which, if either, is installed on your target platforms by default. Especially in an embedded system, it could be that you have to configure them, or even get them installed because they aren't there by default (and the same might be true of shared memory and semaphores).
The primary advantage of either set of message facilities is that they are pre-debugged (probably) and therefore have concurrency issues already resolved - whereas if you're going to do it for yourself with shared memory and semaphores, you've got a lot of work to do to get to the same level of functionality.
So, (re)use when you can. If it is an option, use one of the two message queue systems rather than reinvent your own. If you eventually find that there is a performance bottleneck or something similar, then you can investigate writing your own alternatives, but until then — reuse!
System V message queues (the ones manipulated by the msg* system calls) have a lot of weird quirks and gotchas. For new code, I'd strongly recommend using UNIX domain sockets.
That being said, I'd also strongly recommend message-passing IPC over shared-memory schemes. Shared memory is much easier to get wrong, and tends to go wrong much more catastrophically.
Message passing is great for small data chunks and where immutability needs to be maintained, as message queues copy data.
A shared memory area does not copy data on send/receive and can be more efficient for larger data sets at the tradeoff of a less clean programming model.
The disadvantages of message queues are minuscule: some system-call and copying overhead, which amounts to nothing for most applications. The benefits far outweigh that overhead. Synchronization is automatic, and they can be used in a variety of ways: blocking or non-blocking, and, since on Linux the message queues are implemented as file descriptors, they can even be used in select() calls for multiplexing. In the POSIX variety, which you should be using unless you have a really compelling need for the SysV queues, you can even have threads spawned or signals raised automatically to process queue items. And best of all, they are fully debugged.
Message queues and shared memory are different, and it is up to the programmer and their requirements to select which to use. With shared memory you have to be a bit careful about reading and writing, and the processes must be synchronized, so the order of execution is very important. There is also no way to tell whether a value you read from shared memory was freshly written or is stale, and no explicit mechanism to wait.
Message queues, by contrast, come with predefined functions that make your life easy.

Inter-program communication for an arbitrary number of programs

I am attempting to have a bunch of independent programs intelligently allocate shared resources among themselves. However, I could have only one program running, or could have a whole bunch of them.
My thought was to mmap a virtual file in each program, but the concurrency is killing me. Mutexes are obviously ineffective because each program could have a lock on the file and be completely oblivious of the others. However, my attempts to write a semaphore have all failed, since the semaphore would be internal to the file, and I can't rely on only one thing writing to it at a time, etc.
I've seen quite a bit about named pipes, but they don't seem to be a practical solution for what I'm doing, since I don't know how many other programs there will be, if any, nor have any way of identifying which programs are participating in my resource-sharing operation.
You could use a UNIX-domain socket (AF_UNIX) - see man 7 unix.
When a process starts up, it tries to bind() a well-known path. If the bind() succeeds then it knows that it is the first to start up, and becomes the "resource allocator". If the bind() fails with EADDRINUSE then another process is already running, and it can connect() to it instead.
You could also use a dedicated resource allocator process that always listens on the path, and arbitrates resource requests.
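A minimal sketch of that bind-or-connect dance; the socket path is illustrative, and note that a stale socket file left by a crashed allocator also yields EADDRINUSE (connect() then fails with ECONNREFUSED, which a real program should handle by unlinking and retrying):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define SOCK_PATH "/tmp/resource-allocator.sock"

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return 1; }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof addr.sun_path - 1);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) == 0) {
        listen(fd, 8);
        puts("first instance: acting as resource allocator");
        /* accept() peers here and arbitrate resource requests */
    } else if (errno == EADDRINUSE) {
        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
            puts("allocator already running: connected as a client");
        else
            perror("connect");
    } else {
        perror("bind");
    }
    close(fd);
    return 0;
}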
It's not entirely clear what you're trying to do, but personally my first thought would be to use D-Bus. It should be easy enough within that framework for your processes/programs to register/announce themselves and enumerate/signal other registered processes, and/or to create a central resource arbiter and communicate with it. It is also readily available on any system with GNOME or KDE installed.
