I have become interested in C programming lately. I like how you only have a 'minimal' set of functions and data types (the C standard library) and yet you can build almost everything with it.
But now to my question:
How do you do simple event handling in C? I have read about the signal.h header, and it would be what I am looking for... if there were signals exclusively reserved for the user. But I can never be sure that the environment won't unexpectedly raise one of the signals that I could use from the C standard library.
Okay... there are the POSIX extensions on Linux/Unix with two signals reserved for the user (SIGUSR1 and SIGUSR2)... but I can imagine situations where you need more...
Besides, I want to learn to write platform-independent C. I have heard about "emulating signals" by listening on a socket... but that would not be platform independent either.
Is there any way to write a C program that has to handle events without becoming platform dependent, using only the standard C library?
Thank you for any hints.
Yep, that is exactly what Unix designed them for: two user signals. Ultimately it all depends on what you use signals for. If you just want to relay some events asynchronously, sockets will do. Look up "event loop". You can build almost unlimited complexity behind that. Signals are a very special group of functions for OS-specific reasons, such as somebody trying to kill your process. In that respect, the options should be limited in order to keep the overhead of OS operations down.
My suggestion is to stay away from signals unless you know very specifically what you are using them for. Signals are how the OS communicates with you, not how you communicate with yourself from many different places in your own program. And there are only a few defined reasons why the OS would want to give you a call. Hence, I tend to think the original two user-defined signals are more than enough.
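To make the "event loop" idea concrete, here is a minimal sketch in plain C99 using only the standard library; the event type, the queue size, and the handler code are all invented for the example (a real program would also need some way to block until the next event arrives).

#include <stdio.h>

/* Invented event type for the example. */
enum event_type { EV_NONE, EV_MESSAGE, EV_QUIT };

struct event {
    enum event_type type;
    int payload;
};

/* Fixed-size ring buffer acting as the event queue. */
#define QUEUE_SIZE 32
static struct event queue[QUEUE_SIZE];
static int head, tail;

static int post_event(struct event ev)
{
    int next = (tail + 1) % QUEUE_SIZE;
    if (next == head)
        return -1;                     /* queue full */
    queue[tail] = ev;
    tail = next;
    return 0;
}

static int next_event(struct event *ev)
{
    if (head == tail)
        return 0;                      /* queue empty */
    *ev = queue[head];
    head = (head + 1) % QUEUE_SIZE;
    return 1;
}

int main(void)
{
    post_event((struct event){ EV_MESSAGE, 42 });
    post_event((struct event){ EV_QUIT, 0 });

    /* The event loop: fetch and dispatch events until EV_QUIT arrives. */
    for (;;) {
        struct event ev;
        if (!next_event(&ev))
            break;                     /* a real program would block here */
        switch (ev.type) {
        case EV_MESSAGE:
            printf("message event, payload %d\n", ev.payload);
            break;
        case EV_QUIT:
            return 0;
        default:
            break;
        }
    }
    return 0;
}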
Unfortunately I think you are going to run into platform dependencies here. You can write a multithreaded application, where one thread waits for some input and then sends a message / makes a call when that input has arrived (such as waiting for an input string on a console). But that is not baked into C99, and you would have to rely on platform dependent third party libraries. Here is a useful post on that subject. I know this isn't the answer you want, but I hope it helps.
C: Multithreading
Edit: C11 supports multithreading natively; see
http://en.cppreference.com/w/c/header
I haven't used this yet.
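For what it's worth, here is a rough sketch of what the C11 threads API looks like (error handling mostly omitted; the worker function that waits for console input is just an invented example):

#include <stdio.h>
#include <threads.h>

/* Invented worker: blocks waiting for a line of input, then reports it. */
static int wait_for_input(void *arg)
{
    char buf[128];
    (void)arg;
    if (fgets(buf, sizeof buf, stdin))
        printf("got input: %s", buf);
    return 0;
}

int main(void)
{
    thrd_t t;
    if (thrd_create(&t, wait_for_input, NULL) != thrd_success)
        return 1;
    /* The main thread could do other work here while the worker waits. */
    thrd_join(t, NULL);
    return 0;
}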
I'm trying to write a C program that can test the performance of other programs by passing in input and checking the output, without having to restart the program under test every time. Co-workers and I are writing sudoku solvers, and I'm writing the program that measures how fast each one runs by solving numerous puzzles. The solvers could be in different languages, and I don't want to penalize people for using languages, like Java, that are slow to start up. Ideally, this program will start the sudoku solver program, keep it running, and continually pass in new puzzles via stdin while checking the output on stdout.
Here's pseudocode of what I want to do:
start a sudoku solver in another process
once the process is running:
    pass a puzzle string into the child's stdin
    wait until output arrives on its stdout
    repeat until the time limit ends
close the process
I've messed around with popen, but I couldn't figure out how to write to the child process's stdin. I've done a bunch of poking around on the internet, and I haven't been able to figure it out.
Any suggestions on how to accomplish this? I'm running this on a Linux box. It doesn't have to be stdin and stdout for communication, but that would be the easiest for everyone else.
This is more a long comment than an answer, but your question is really too broad and ill-defined, and I'm just giving some hints.
You first need to understand how to start, manage, and communicate with child processes. An entire Unix programming book is needed to explain that. You could read ALP or some newer book. You need to be able to write a Unix shell-like program. Become familiar with many syscalls(2) including fork(2), pipe(2), execve(2), dup2(2), poll(2), waitpid(2) and a dozen others. See also signal(7) & time(7).
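To give a feel for how those syscalls fit together, here is a heavily simplified sketch (most error handling omitted; /path/to/solver and the puzzle string are placeholders) that starts a child process and talks to it over its stdin/stdout:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_child[2], from_child[2];
    if (pipe(to_child) == -1 || pipe(from_child) == -1) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                        /* child: become the solver */
        dup2(to_child[0], STDIN_FILENO);   /* read puzzles from the pipe */
        dup2(from_child[1], STDOUT_FILENO);/* write answers to the pipe */
        close(to_child[1]);
        close(from_child[0]);
        execl("/path/to/solver", "solver", (char *)NULL); /* placeholder path */
        perror("execl");
        _exit(127);
    }

    /* parent: keep the write end to the child and the read end from it */
    close(to_child[0]);
    close(from_child[1]);

    const char *puzzle = "....puzzle text....\n";   /* placeholder puzzle */
    write(to_child[1], puzzle, strlen(puzzle));

    /* A real tester would loop and buffer here; one read() may only
     * return part of the answer. */
    char buf[4096];
    ssize_t n = read(from_child[0], buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("solver answered: %s", buf);
    }

    close(to_child[1]);                    /* child sees EOF and can exit */
    waitpid(pid, NULL, 0);
    return 0;
}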
You also need to discuss with your colleagues some conventions and protocol about these sudoku programs and how your controlling program would communicate with them (and the devil is in the details). For example, your pseudo-code mentions "pass puzzle string", but you don't define what that exactly means (what if the string contains newlines, or weird characters?). Read also about inter-process communication.
(You might want to have more than one sudoku process running. You probably don't want a buggy sudoku client to break your controlling program. This is unclear in your question)
You could want to define a text-based protocol (they are simpler to debug and use than binary protocols). Details matter a lot, so document it precisely (probably using some EBNF notation). You might want to use textual formats like JSON, YAML, S-expressions. You could take inspiration from SMTP, HTTP, JSONRPC etc (or perhaps choose to use one of them).
Remember that pipe(7)s, fifo(7)s, and tcp(7) socket(7)s are just streams of bytes without any message boundaries. Any message organization on top of them has to be a documented convention (and messages may arrive fragmented, so you need careful buffering). See also this.
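As a sketch of what that buffering can look like, assuming the (arbitrary) convention that each message ends with a newline:

#include <string.h>
#include <unistd.h>

/* Accumulate bytes from fd until a full newline-terminated message is
 * available; a single read() may return a fragment or several messages
 * glued together, so leftover bytes are kept for the next call. */
static char inbuf[4096];
static size_t inlen;

static int read_message(int fd, char *msg, size_t msgsize)
{
    for (;;) {
        char *nl = memchr(inbuf, '\n', inlen);
        if (nl) {
            size_t len = (size_t)(nl - inbuf);
            if (len >= msgsize)
                len = msgsize - 1;     /* truncate overly long messages */
            memcpy(msg, inbuf, len);
            msg[len] = '\0';
            inlen -= (size_t)(nl - inbuf) + 1;
            memmove(inbuf, nl + 1, inlen);
            return 1;
        }
        ssize_t n = read(fd, inbuf + inlen, sizeof inbuf - inlen);
        if (n <= 0)
            return 0;                  /* EOF or error */
        inlen += (size_t)n;
    }
}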
(I recommend making some free software sample implementation of your protocol)
Look also into similar work, perhaps the SAT competition (or chess contest programs, I don't know the details).
Read also something about OSes, like Operating Systems: Three Easy Pieces
I'm trying to implement a communication protocol in C. I need to implement a timer (so that if after some time an ACK has not been received yet, the sender will assume the packet has been lost and will send it again).
In a C-looking-pseudocode I would like to have something like this:
if (!ack_received(seqn) && timer_expired(seqn)) {
    send_packet(seqn);
    start_timer(seqn);
}
Note: seqn is the sequence number of the packet being sent. Each packet needs a personal timer.
How to implement timer_expired and start_timer? Is there a way to do it without using several threads?
Can I implement a single-threaded timer in C?
Probably not in pure portable C99 (or single-threaded C11, see n1570).
But in practice, you'll often code for some operating system, and you'll then get some ways to have timers. On Linux, read time(7) first. You'll probably also want to use a multiplexing call such as poll(2) (to which you give a delay). And learn more about other system calls, so read intro(2), syscalls(2) and some good Linux programming book (perhaps the old ALP, freely downloadable).
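For example, a tiny helper along those lines (a sketch assuming POSIX poll(2); the descriptor and the delay are whatever the caller decides, and error handling is minimal):

#include <poll.h>

/* Wait up to timeout_ms for data on fd.  Returns 1 if readable,
 * 0 if the delay elapsed (i.e. our "timer" fired), -1 on error. */
static int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int ready = poll(&pfd, 1, timeout_ms);
    if (ready < 0)
        return -1;                     /* error */
    if (ready == 0)
        return 0;                      /* delay elapsed: the "timer" fired */
    return (pfd.revents & POLLIN) ? 1 : -1;
}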
BTW, it seems that you are coding something network related. You practically need some API for that (e.g. Berkeley sockets), hence you'll probably use something similar to an OS.
Many event loops are single-threaded but provide some kind of timer.
Or perhaps (if you don't have any OS) you are coding some freestanding C for some small embedded hardware platform (e.g. Arduino-like). Then you have some ways to poll network inputs and set up timers.
Depending on the architecture of your system, this can be done in a more or less elegant way.
In a simple single-threaded program, just declare a table containing the starting timestamps. The timer_expired function then only checks the difference between the current timestamp and the saved one against the timeout value. Of course, you also need to implement another function that initializes the table entry for a particular timeout counter.
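A sketch of that table-based idea, matching the start_timer/timer_expired names from the question (it uses POSIX clock_gettime(), which is not pure standard C; the table size and the retry timeout are arbitrary values for the example):

#include <time.h>

#define MAX_TIMERS 64
#define RETRY_TIMEOUT_MS 500           /* arbitrary timeout for the example */

static long long start_ms[MAX_TIMERS]; /* 0 means "timer not running" */

static long long now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
}

void start_timer(int seqn)
{
    start_ms[seqn % MAX_TIMERS] = now_ms();
}

int timer_expired(int seqn)
{
    long long t = start_ms[seqn % MAX_TIMERS];
    return t != 0 && now_ms() - t >= RETRY_TIMEOUT_MS;
}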
Say I want to change the behavior of kill for educational reasons. If a user directly types it in the shell, then nothing will happen. If some other program/entity-who-is-not-the-user calls it, it performs normally. A wrapping if-statement is probably sufficient, but what do I put in that if?
Edit: I don't want to do this in the shell. I'm asking about kernel programming.
In line 2296 of the kernel source, kill is defined. I will wrap an if statement around the code inside it. In that statement, there should be a check to see whether the call came directly from the user or just from some process. That check is the part I don't know how to implement.
Regarding security
Goal:
Block the user from directly calling kill from any shell
Literally everything else is fine and will not be blocked
While other answers are technically true, I think they're being too strict about the question. What you want to do is not possible in a 100% reliable way, but you can get pretty close by making some reasonable assumptions.
Specifically if you define an interactive kill as:
called by process owned by a logged in user
called directly from/by a process named like a shell (it may be a new process, or it may be a built-in operation)
called by a process which is connected to a serial/pseudo-terminal (possibly also belonging to the logged in user)
then you can check for each of those properties when processing a syscall and make your choice that way.
There are ways this will not be reliable (sudo + expect + sh should work around most of these checks), but it may be enough to have fun with. How to implement those checks is a longer story and probably each point would deserve its own question. Check the documentation about users and pty devices - that should give you a good idea.
Edit: Actually, this may even be possible to implement as an LKM. SELinux can do similar kinds of checks.
It looks like you are quite confused and do not understand what exactly a system call is and how a Linux computer works. Everything is done inside some process through system calls.
there should be a check to see whether the one who called this was directly done by the user or just some process
The above sentence makes no sense. Everything is done by some process through some system call. The notion of a user exists only as an "attribute" of processes, see credentials(7) (so "directly done by the user" is vague). Read syscalls(2) and spend several days reading about Advanced Linux Programming, then ask a more focused question.
(I really believe you should not dare patching the kernel without knowing quite well what the ALP book above is explaining; then you would ask your question differently)
You should also spend several days or weeks reading about Operating Systems and Computer Architecture. You need to get a more precise idea of how a computer works, and that will take time (perhaps many years); any answer here cannot cover all of it.
When the user types kill, he probably uses the shell builtin (try type which kill and type kill) and the shell calls kill(2). When the user types /bin/kill, he is execve(2)-ing a program which will then call kill(2). And the command might not come from the terminal: e.g. with echo kill $$ | sh the command comes from a pipe, and with echo kill 1234 | at midnight the kill happens outside of any user interaction and without anyone interactively using the computer, the command being read from some file in /var/spool/cron/atjobs/, see atd(8). In all of these cases the kernel only sees a SYS_kill system call.
BTW, modifying the kernel's behavior on kill could affect a lot of system software, so be careful when doing that. Read also signal(7) (some signals are not coming from a kill(2)).
You might use isatty(STDIN_FILENO) (see isatty(3)) to detect whether a program is run from a terminal (no need to patch the kernel, you could just patch the shell), but I gave several cases above where it is not. You - and your user - could also write a desktop application (using GTK or Qt) which calls kill(2) and is started from the desktop (it probably won't have any terminal attached when running; read about X11).
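For completeness, the isatty(3) check itself is trivial (a small sketch; as said above, it only tells you whether stdin happens to be a terminal, nothing about who is typing):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* isatty() returns nonzero when the descriptor refers to a terminal. */
    if (isatty(STDIN_FILENO))
        printf("stdin is a terminal\n");
    else
        printf("stdin is not a terminal (pipe, file, ...)\n");
    return 0;
}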
See also the notion of session and setsid(2); recent systemd based Linuxes have a notion of multi-seat which I am not familiar with (I don't know what kernel stuff is related to it).
If you only want to change the behavior of interactive terminals running some (well identified) shells, you only need to change the shell - with chsh(1) - (e.g. patch it to remove its kill builtin, and perhaps to stop the shell doing an execve(2) of /bin/kill); no need to patch the kernel. But this won't prevent an advanced user from writing a small C program calling kill(2) (or even coding his own shell in C and using it), compiling his C source code, and running his freshly compiled ELF executable. See also restricted shell in bash.
If you just want to learn by doing the exercise of patching the kernel and changing its behavior for the kill(2) syscall, you need to define what process state you want to filter on. So think in terms of processes making the kill(2) syscall, not in terms of "the user" (processes have several user ids).
BTW, patching the kernel is very difficult (if you want that to be reliable and safe), since by definition it is affecting your entire Linux system. The rule of thumb is to avoid patching the kernel when possible .... In your case, it looks like patching the shell could be enough for your goals, so prefer patching the shell (or perhaps patching the libc which is practically used by all shells...) to patching the kernel. See also LD_PRELOAD tricks.
Perhaps you just want the uid 1234 (assuming 1234 is the uid of your user) to be denied by your patched kernel using the kill(2) syscall (so he will need to have a setuid executable to do that), but your question is not formulated this way. That is probably simple to achieve, perhaps by adding in kill_ok_by_cred (near line 692 on Linux 4.4 file kernel/signal.c) something as simple as
if (uid_eq(KUIDT_INIT(1234), tcred->uid))  /* uid_eq() compares kuid_t values, hence KUIDT_INIT */
        return 0;
But I might be completely wrong (I never patched the kernel, except for some drivers). Surely in a few hours Craig Ester would give a more authoritative answer.
You can use aliases to change the behavior of commands. Aliases are only applied in interactive shells; shell scripts ignore them. For example:
$ alias kill='echo hello'
$ kill
hello
If you want an alias to be available all the time, you could add it to ~/.bashrc (or whatever the equivalent file is if your shell isn't bash).
I am aware that one cannot listen for, detect, and perform some action upon encountering context switches on Windows machines via managed languages such as C#, Java, etc. However, I was wondering if there was a way of doing this using assembly (or some other language, perhaps C)? If so, could you provide a small code snippet that gives an idea of how to do this (as I am relatively new to kernel programming)?
What this code will essentially be designed to do is run in the background on a standard Windows UI and listen for when a particular process is either context switched into or out of the CPU. Upon detecting either of these actions, it will send a signal. To clarify, I am looking to detect only the context switches directly involving a specific process, not all context switches. What I ultimately would like to achieve is to notify another machine (via a signal sent over the internet) whenever a specific process begins making use of the CPU, as well as when it ceases doing so.
My first attempt at doing this involved simply calculating the CPU usage percentage of the specific process, but this ultimately proved to be too coarse-grained to catch the most minute computations. For example, I wrote a test program that simply performed the operation 2+2 and placed the answer inside an int. The CPU usage method did not pick up on this. Thus, I am looking for something lower level, hence this question. If there are potential alternatives, I would be more than happy to hear them.
There's Event Tracing for Windows (ETW), which you can configure to receive messages about a variety of events occurring in the system.
You should be able to receive messages about thread scheduling events. The CSwitch class of events is for that.
Sorry, I don't know any good ETW samples that you could easily reuse for your task. Read MSDN and look around.
Simon pointed out a good link explaining why ETW can be useful. Very enlightening: http://randomascii.wordpress.com/2012/05/11/the-lost-xperf-documentationcpu-scheduling/
Please see the edits below. In particular #3, ETW appears to be the way to go.
In theory you could install your own trap handler for the old int 2Eh and the new sysenter. However, in practice this isn't going to be as easy as it used to be, because of Patchguard (since Vista) and signing requirements. I'm not aware of any other generic means to detect context switches, meaning you'd have to roll your own. All context switches of the OS go through call gates (the aforementioned trap handlers), and ReactOS allows you to peek behind the scenes if you feel uncomfortable with debugging/disassembling.
However, in either case there shouldn't be a generic way to install something like this without kernel mode privileges (usually referred to as ring 0) - anything else would be a security flaw in Windows. I'm not aware of a Windows-supplied method to achieve what you want either.
The book "Undocumented Windows NT" has a pretty good chapter about the exact topic (although obviously targeted at the old int 2Eh method).
If you can live with hooking only certain functions, you may be able to get away with some filter driver(s) or user-mode API hooking. Depends on your exact requirements.
Update: reading your updated question, I think you need to read up on the internals, in particular on the concept of IRQLs (not to be confused with IRQs from DOS times) and the scheduler. The problem is that there can - and usually will - be literally hundreds of context switches every second. However, your watcher process (the one watching for context switches) will, like any user-mode process, be preemptible. This means that there is no way for you to achieve real-time signaling or anything close to it, which puts a big question mark on the method.
What is it actually that you want to achieve? The number of context switches doesn't really tell you anything. Every single SEH exception will cause a context switch. What is it that you are interested in? Perhaps performance counters cater to your needs better?
Update 2: the sheer number of context switches within a single second, even for a single thread, will be flabbergasting. So assuming you installed your own trap handler, you'd still end up (adversely) affecting all other threads on the system (after all, you'd catch every context switch, check whether it involves the process/threads you care about, and then either do your thing or pass it on).
If you could tell us what you ultimately want to achieve, not with the means already pre-defined, we may be able to suggest alternatives.
Update 3: so apparently I was wrong in one respect here. Windows comes with something on board that signals context switches. And ETW can be harnessed to tap into those. Thanks to Simon for pointing out.
Is there a C function that doesn't wait for input but if there is one, it detects it?
What I'm trying to do here is continue a loop endlessly until any key is pressed.
I'm a newbie, and all the input functions I've learned so far wait for the user to input something.
I hope I'm clear, although if I'm not I'm happy to post the code..
Windows' kbhit() does exactly this non-blocking keyboard char-ready check, and there's a kbhit() for Linux over here.
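A rough sketch of such a kbhit() on Linux (assuming POSIX termios and select(); the part people usually miss is switching the terminal to non-canonical mode so key presses are delivered without waiting for Enter):

#include <stdio.h>
#include <sys/select.h>
#include <termios.h>
#include <unistd.h>

/* Return nonzero if a key press is waiting on stdin, without blocking. */
static int kbhit(void)
{
    struct timeval tv = { 0, 0 };      /* do not wait at all */
    fd_set fds;
    FD_ZERO(&fds);
    FD_SET(STDIN_FILENO, &fds);
    return select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0;
}

int main(void)
{
    /* Switch the terminal to non-canonical mode so single key presses
     * are delivered immediately instead of after Enter. */
    struct termios oldt, newt;
    tcgetattr(STDIN_FILENO, &oldt);
    newt = oldt;
    newt.c_lflag &= ~(ICANON | ECHO);
    tcsetattr(STDIN_FILENO, TCSANOW, &newt);

    while (!kbhit()) {
        /* ... the endless loop body goes here ... */
    }
    getchar();                         /* consume the key press */

    tcsetattr(STDIN_FILENO, TCSANOW, &oldt);   /* restore the terminal */
    return 0;
}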
Since nobody's stated it clearly....
The important thing to note is that the standard library provided by C does not provide the capability you're looking for. Achieving it, then, requires the use of third party libraries and/or special knowledge about the operating system you're using.
Typically, you'll have some of those third-party libraries available. If you were using Visual Studio, for example, you would be able to use http://msdn.microsoft.com/en-us/library/58w7c94c(v=VS.100).aspx. I'm not sure what's available to you with your setup.
You should use select() or poll().
You might also want to look at signal() if all you need is a way to stop the loop and run your end-of-program function.
It depends what you exactly want to do, but in general:
A) You keep your program single-threaded and check input through a non-blocking input read.
B) You spawn a different thread that will handle the input and communicate the results back to the main thread.