Ncurses programs in pseudo-terminals - c

In my continuing attempt to understand how pseudo-terminals work, I have written a small program to try to run bash.
The problem is, my line-breaking seems to be off. (The shell prompt only appears AFTER I press enter.)
Furthermore, I still cannot properly use ncurses programs, like vi. Can anyone tell me how to set up the pseudo-terminal for this?
My badly written program can be found here; I encourage you to compile it. The operating system is GNU/Linux. Thanks.
EDIT: Compile like this: gcc program.c -lutil -o program
EDIT AGAIN: It looks like the issue with the weird spacing was due to using printf(); that still doesn't fix the issue with ncurses programs, though.

There are several issues in your program. Some are relatively easy to fix - others not so much:
1. forkpty() and its friends come from BSD and are not POSIX-compatible. They should be avoided in new programs. From the pty(7) manual page:

   Historically, two pseudoterminal APIs have evolved: BSD and System V. SUSv1 standardized a pseudoterminal API based on the System V API, and this API should be employed in all new programs that use pseudoterminals.

   You should be using posix_openpt() instead. This issue is probably not critical, but you should be aware of it.

2. You are mixing raw system calls (read(), write()) and file-stream functions (printf(), fgets()). This is a very good way to confuse yourself. In general you should choose one approach and stick with it. In this case, it's probably best to use the low-level system calls (read(), write()) to avoid any issues that might arise from the I/O buffers that the C library functions use.

3. You are assuming a line-based paradigm for your terminals by using printf() and fgets(). This is not always true, especially when dealing with interactive programs like vim.

4. You are assuming a C-style single-byte null-terminated string paradigm. Terminals normally deal with characters and bytes - not strings. And while most character-set encodings avoid using a zero byte, not all do so.

5. As a result of (2), (3) and (4) above, you are not using read() and write() correctly. You should be using their return values to determine how many bytes they processed, not string-based functions like strlen().

6. This is the issue that, in my opinion, will be most difficult to solve: you are implicitly assuming that:

   - The terminal (or its driver) is stateless. It is not. Period. There are at least two stateful controls that I suspect are the cause of ncurses-based programs not working correctly: the line mode and the local-echo control of the terminal. At least these have to match between the parent/master and the slave terminal in order to avoid various strange artifacts.

   - The control interface of a terminal can be passed through just by relaying bytes back and forth. It is not always so. Modern virtual terminals allow for a degree of out-of-band control via ioctl() calls, as described for Linux here.

The simplest way to deal with this issue is probably to set the parent terminal to raw mode and let the slave pseudo-terminal driver deal with the awkward details, as in the sketch below.
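A minimal sketch of that approach on GNU/Linux (error handling mostly omitted; a complete program would also propagate window-size changes to the slave with ioctl(TIOCSWINSZ) on SIGWINCH). Note that, unlike with forkpty(), no -lutil is needed:

#define _GNU_SOURCE
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <sys/select.h>

int main(void)
{
    int mfd = posix_openpt(O_RDWR | O_NOCTTY);   /* POSIX replacement for forkpty() */
    if (mfd < 0 || grantpt(mfd) < 0 || unlockpt(mfd) < 0)
        return 1;

    if (fork() == 0) {                 /* child: set up the slave side */
        setsid();                      /* new session; next tty opened becomes controlling */
        int sfd = open(ptsname(mfd), O_RDWR);
        close(mfd);
        dup2(sfd, 0); dup2(sfd, 1); dup2(sfd, 2);
        if (sfd > 2) close(sfd);
        execlp("bash", "bash", (char *) NULL);
        _exit(127);
    }

    struct termios orig, raw;          /* parent: go raw and let the slave's */
    tcgetattr(0, &orig);               /* line discipline do the cooking     */
    raw = orig;
    cfmakeraw(&raw);
    tcsetattr(0, TCSAFLUSH, &raw);

    for (;;) {                         /* relay bytes, honouring read()'s count */
        fd_set fds;
        char buf[4096];
        ssize_t n;

        FD_ZERO(&fds);
        FD_SET(0, &fds);
        FD_SET(mfd, &fds);
        if (select(mfd + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(0, &fds)) {
            if ((n = read(0, buf, sizeof buf)) <= 0) break;
            write(mfd, buf, n);        /* write exactly n bytes, not strlen() */
        }
        if (FD_ISSET(mfd, &fds)) {
            if ((n = read(mfd, buf, sizeof buf)) <= 0) break;
            write(1, buf, n);
        }
    }

    tcsetattr(0, TCSAFLUSH, &orig);    /* always restore the real terminal */
    return 0;
}

With the parent terminal in raw mode, keystrokes reach the slave's line discipline unmodified, so echo, line editing and the mode switches that ncurses programs perform all happen in exactly one place.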
You may want to have a look at this program which seems to work fine. It comes from the book The Linux Programming Interface and the full source code is here. Disclaimer: I have not read the book, nor am I promoting it - I just found the program using Google.

Related

Why does system() exist?

Many papers and such mention that calls to 'system()' are unsafe and unportable. I do not dispute their arguments.
I have noticed, though, that many Unix utilities have a C library equivalent. If not, the source is available for a wide variety of these tools.
Likewise, while many papers recommend against goto, there are those who can still make an argument for its use, and there are simple reasons why it's in C at all.
So, why do we need system()? How much existing code relies on it that can't easily be changed?
Sarcastic answer: Because if it didn't exist, people would ask why that functionality didn't exist...
Better answer:
Much of this system functionality is not part of the C standard, but is part of, say, the Linux spec, and Windows most likely has some equivalent. So if you're writing an app that will only be used in Linux environments, using these functions is not an issue and is actually useful. If you're writing an application that can run on both Linux and Windows (or others), these calls become problematic because they may not be portable between systems. The key (IMO) is simply that you are aware of the issues/concerns and program accordingly (e.g. use appropriate #ifdefs to protect the code, as in the sketch below).
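For example, a minimal sketch of that kind of #ifdef guarding (clear_screen() is just an illustrative name, and both branches assume the named command exists on the target system):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: clear the screen using whatever the platform offers. */
static void clear_screen(void)
{
#ifdef _WIN32
    system("cls");      /* Windows shell built-in */
#else
    system("clear");    /* POSIX: relies on the 'clear' utility being present */
#endif
}

int main(void)
{
    clear_screen();
    puts("screen cleared");
    return 0;
}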
The closest thing to an official "why" answer you're likely to find is the C89 Rationale. Section 4.10.4.5, The system function, reads:
The system function allows a program to suspend its execution temporarily in order to run another program to completion.
Information may be passed to the called program in three ways: through command-line argument strings, through the environment, and (most portably) through data files. Before calling the system function, the calling program should close all such data files.
Information may be returned from the called program in two ways: through the implementation-defined return value (in many implementations, the termination status code which is the argument to the exit function is returned by the implementation to the caller as the value returned by the system function), and (most portably) through data files.
If the environment is interactive, information may also be exchanged with users of interactive devices.
Some implementations offer built-in programs called "commands" (for example, date) which may provide useful information to an application program via the system function. The Standard does not attempt to characterize such commands, and their use is not portable.
On the other hand, the use of the system function is portable, provided the implementation supports the capability. The Standard permits the application to ascertain this by calling the system function with a null pointer argument. Whether more levels of nesting are supported can also be ascertained this way; assuming more than one such level is obviously dangerous.
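Concretely, that availability check is just a call with a null pointer argument:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* system(NULL) returns nonzero if a command processor is available. */
    if (system(NULL))
        puts("A command processor exists; system(\"...\") may be used.");
    else
        puts("No command processor available on this implementation.");
    return 0;
}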
Aside from that, I would say mainly for historical reasons. In the early days of Unix and C, system was a convenient library function that fulfilled a need several interactive programs had: as mentioned above, "suspend[ing] its execution temporarily in order to run another program". It's not well designed or suitable for any serious task (the POSIX requirements for it make it fundamentally non-thread-safe, it doesn't allow asynchronous events to be handled by the calling program while the other program is running, etc.), and its use is error-prone (safe construction of the command string is difficult) and non-portable (because the particular form of command strings is implementation-defined, though POSIX defines this for POSIX-conforming implementations).
If C were being designed today, it almost certainly would not include system, and would either leave this type of functionality entirely to the implementation and its library extensions, or would specify something more akin to posix_spawn and related interfaces.
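For comparison, a minimal sketch of the posix_spawn approach mentioned above (POSIX-only; it runs /bin/ls directly, with no shell involved):

#include <spawn.h>
#include <sys/wait.h>
#include <stdio.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "ls", "-l", NULL };
    int status;

    /* Spawn /bin/ls directly; no command string, so no quoting pitfalls. */
    if (posix_spawn(&pid, "/bin/ls", NULL, NULL, argv, environ) != 0) {
        perror("posix_spawn");
        return 1;
    }
    waitpid(pid, &status, 0);
    return 0;
}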
Many interactive applications offer a way for users to execute shell commands. For instance, in vi you can do:
:!ls
and it will execute the ls command. system() is a function they can use to do this, rather than having to write their own fork() and exec() code.
Also, fork() and exec() aren't portable between operating systems; using system() makes code that executes shell commands more portable.
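To make that concrete, here is roughly the fork()/exec() code that system() saves you from writing on a POSIX system (a simplified sketch; the real system() also ignores SIGINT/SIGQUIT in the parent and blocks SIGCHLD while waiting):

#include <sys/wait.h>
#include <unistd.h>

/* Roughly what system() does on POSIX systems (signal handling omitted). */
int my_system(const char *command)
{
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execl("/bin/sh", "sh", "-c", command, (char *) NULL);
        _exit(127);                     /* exec failed */
    }
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return status;
}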

Are functions such as printf() implemented differently for Linux and Windows

Something I still don't fully understand: standard C functions such as printf() and scanf(), for example, deal with sending data to standard output or getting data from standard input. Will the source code that implements these functions be different depending on whether we are using them on Windows or Linux?
I'm guessing the quick answer would be "yes", but do they really have to be different?
I'm probably wrong, but my guess is that the actual function code would be the same, but the lower-layer functions of the OS that eventually get called by these functions are different. So any compiler could compile these same C functions, but it is what gets linked afterwards (what these functions depend on to work at lower layers) that gives us the required behavior?
Will the source code that implements these functions be different depending on whether we are using them on Windows or Linux?
Probably. It may even be different on different Linuxes, and for different Windows programs. There are several distinct implementations of the C standard library available for Linux, and maybe even more than one for Windows. Distinct implementations will have different implementation code, otherwise lawyers get involved.
my guess is that the actual function code would be the same, but the lower-layer functions of the OS that eventually get called by these functions are different. So any compiler could compile these same C functions, but it is what gets linked afterwards (what these functions depend on to work at lower layers) that gives us the required behavior?
It is conceivable that standard library functions would be written in a way that abstracts the environment dependencies to some lower layer, so that the same source for each of those functions themselves can be used in multiple environments, with some kind of environment-specific compatibility layer underneath. Inasmuch as the GNU C library supports a wide variety of environments, it serves as an example of the general principle, though Windows is not among the environments it supports. Even then, however, the environment distinction would be effective even before the link stage. Different environments have a variety of binary formats.
In practice, however, you are very unlikely to see the situation you describe for Windows and Linux.
Yes, they have different implementations.
Moreover, you might be using multiple different implementations on the same OS. For example:

- MinGW ships with its own implementation of the standard library, which is different from the one used by MSVC.
- There are many different implementations of the C library even for Linux: glibc, musl, dietlibc and others.

Obviously, this means there is some code duplication in the community, but there are many good reasons for it:

- People have different views on how things should be implemented and tested. This alone is enough to "fork" a project.
- License: implementations put some restrictions on how they can be used and might require some actions from the end user (the GPL requires you to share your code in some cases). Not everyone can follow those requirements.
- People have very different needs. Some environments are multithreaded, some are not. printf may or may not need to use thread-synchronization mechanisms. Some people need locale support, some don't. All this can bloat the code in the end, and not everyone is willing to pay for things they do not use. Even strerror is vastly different on different OSes.
- The aforementioned synchronization mechanisms are usually OS-specific and work in particular ways. The same can be said about locale handling, signal handling and other things, including the actual data writing and reading.
- Some implementations add non-standard extensions that can make your life easier. Not all of these make sense on other OSes. For example, glibc adds the 'e' mode specifier to open a file with the O_CLOEXEC flag; this doesn't make sense on Windows (see the sketch below).
- Many complex things cannot be implemented in pure C and require compiler-specific extensions. This can tie an implementation to a limited number of compilers.

In the end, it is much simpler to have many C libraries than to try to create a one-size-fits-all implementation.
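As an illustration of the glibc-specific 'e' flag mentioned in the list above (a sketch; other C libraries may reject or ignore the extra mode character):

#include <stdio.h>

int main(void)
{
    /* "re" = open for reading with O_CLOEXEC -- a glibc extension. */
    FILE *f = fopen("/etc/hostname", "re");
    if (f) {
        char line[256];
        if (fgets(line, sizeof line, f))
            fputs(line, stdout);
        fclose(f);
    }
    return 0;
}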
As you say, the higher-level parts of the implementation of something like printf, such as the code used to format the string using the arguments, can be written in a cross-platform way and shared between Linux and Windows. I'm not sure whether any C library actually does that, though.
But to interact with the hardware or use other operating-system facilities (such as when printf writes to the console), the libc implementation has to use the OS's interface: the system calls. And these are very different between Windows and Unix-likes, and differ even among Unix-likes (POSIX specifies a lot of them, but there are OS-specific extensions). For example, here you can find system call tables for Linux and Windows.
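As a rough illustration of how different that lowest layer is, here is a sketch of the kind of back-end a printf implementation might bottom out in (raw_puts() is a made-up name, and no real libc is organized quite this simply):

#include <string.h>

#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif

/* Hypothetical lowest layer: hand the formatted bytes to the OS. */
static void raw_puts(const char *s)
{
#ifdef _WIN32
    DWORD written;
    WriteFile(GetStdHandle(STD_OUTPUT_HANDLE), s, (DWORD) strlen(s),
              &written, NULL);          /* Win32 API call */
#else
    write(STDOUT_FILENO, s, strlen(s)); /* write(2) system call */
#endif
}

int main(void)
{
    raw_puts("hello\n");
    return 0;
}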
There are two parts to functions like printf(). The first part parses the format string and assembles an array of characters ready for output. If this part is written in C, there's no reason preventing it from being common across all C libraries, and no reason preventing it from being different, so long as the standard definition of what printf() does is implemented. As it happens, different library developers have read the standard's definition of printf() and have come up with different ways of parsing and acting on the format string. Most of them have done so correctly.
The second part, the bit that outputs those characters to stdout, is where the differences come in. It depends on using the kernel system call interface; it's the kernel / OS that looks after input/output, and that is done in a specific way. The source code required to get the Linux kernel to output characters is very different to that required to get Windows to output characters.
On Linux, it's usual to use glibc; this does some elaborate things with printf(), accumulating the output characters in an in-memory buffer until a newline is output (when the stream is a terminal), and only then issuing the Linux system call that actually writes the characters to the screen. This means that printf() calls from separate threads are neatly separated, each ending up on its own line. But the same program source code, compiled against another C library for Linux, won't necessarily do the same thing, and printf() output from different threads may come out all jumbled up and unreadable.
There's also no reason why the library that contains printf() should be written in C. So long as the same function calling convention as used by the C compiler is honoured, you could write it in assembler (though that'd be slightly mad!). Or Ada (calling convention might be a bit tricky...).
Will the source code that implements these functions be different
Let us try another point-of-view: competition.
No. Competitors in industry are not required by the C spec to share source code to issue a compliant compiler - nor would various standard C library developers always want to.
C does not require "open source".

standard I/O library or Low-Level for unix / linux development

I'm brushing up on Unix calls, so this might seem a naive question (I'm on vacation and just bored). I know that there's standard I/O in C, but it always seems like the low-level calls (write, read, open) are what get used in practice on UNIX-like systems (I just checked a couple of open-source projects). Is standard I/O used much in practice? Are there cutoffs or specific reasons why the low-level API is used more? Or am I making a bad assumption from a few cherry-picked cases about low-level being more popular? I understand standard I/O is a C language element, but the two seem to achieve the same thing, and low-level appears to be used more.
The stdio(3) library doesn't cover all the abilities available on Linux. In particular, socket(2) and other low-level functionalities (e.g. polling with poll(2), etc.) are not provided by <stdio.h> functions. However, <stdio.h> functions usually give you buffering, which is practically very important for performance reasons: calling write(2) for every single byte would be very inefficient. Use fflush(3) to flush <stdio.h> buffers.
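To see the point about buffering, compare these two loops: the fputc() version is coalesced by stdio into a few large write(2) calls, while the second makes one system call per byte (a sketch; exact timings will vary):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Buffered: stdio coalesces these into a handful of write(2) calls. */
    for (int i = 0; i < 100000; i++)
        fputc('x', stdout);
    fflush(stdout);

    /* Unbuffered: one write(2) system call per byte -- much slower. */
    for (int i = 0; i < 100000; i++)
        write(STDOUT_FILENO, "x", 1);

    return 0;
}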
Read Advanced Linux Programming for more.
In practice, mixing <stdio.h> functions and low-level syscalls (like read(2), write(2), mmap(2), poll(2), fcntl(2), ...) on the same file is often (but not always) impractical. See also fileno(3). So people may choose to code at the syscall level.
However, when <stdio.h> functions are enough, it is convenient to use them.
Also, <stdio.h> is standardized by the C11 standard, but write etc. only by POSIX.
FWIW, I tend to use stdio in 3 main areas:
(1) Where it is easy and practical to take advantage of stdio's buffering.
(2) text files, where fgets and the like are more convenient to use than homegrown lower-level functions that do the same thing (see the sketch after this answer).
(3) output formatting. fprintf when it is practical; sprintf and write when it isn't. I rarely use input formatting like fscanf but that might have more to do with the kinds of applications I encounter and when I do I usually try to wangle a way to write it in C++. (Totally personal preference.)
Thing is, in the "everything is (kind of) like a file" POSIX world, you tend to be using file descriptors for a lot of different calls, so after a while stdio becomes slightly cumbersome unless it offers something compelling. Those things, for me, are listed above.
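As an example of point (2), line-by-line reading with fgets is hard to beat for convenience (a minimal sketch):

#include <stdio.h>

int main(void)
{
    char line[1024];

    /* One stdio call per line; the equivalent with read(2) needs a
       buffer, a scan for '\n', and leftover-byte bookkeeping. */
    while (fgets(line, sizeof line, stdin) != NULL)
        printf("got: %s", line);

    return 0;
}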

Can stdio be used while coding for a Kernel...?

I need to build an OS, a very small and basic one, with really minimal functionality, coded in C.
It will probably be a CUI OS that does some memory management and has at least a text editor and a calculator; it's just going to be an experiment in writing code that has full and direct control over your hardware.
Still, I'll be requiring an interface, which will need input/output functions like printf(&args) and scanf(&args). Now my basic question is: should I use existing headers or code them from scratch, and why so?
I'd be more than very thankful to you guys for any help.
First, you can't link against anything from libc ... you're going to have to code everything from scratch.
Now, having worked on a micro-kernel myself, I would not use the actual stdio headers that come with libc, since they are going to be cluttered with a lot of extra information that will either be irrelevant for your OS or create compiler errors due to missing definitions, etc. What I would do, though, is keep the function signatures of these standard functions the same ... so in the end you would have a file called stdio.h for your OS, but it would be a very stripped-down header with the bare minimum for your needs, containing only the standard I/O functions you need, with their correct standard signatures.
Keep in mind that on the back-end, i.e. in your stdio.c file, you're going to have to point these functions at a custom console driver or some other type of character driver for your display. Either that, or you could just use them as wrappers around some other kernel-level display-printing routine. You are also going to want to make sure that, even though you may use a #include <stdio.h> directive in your other OS code modules to access these printing functions, you do not link against libc. This can be done using gcc -ffreestanding.
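As a heavily simplified, hypothetical sketch of that kind of back-end, assuming a classic VGA text-mode console mapped at 0xB8000 (no scrolling or cursor handling; all names are illustrative):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical kernel console driver: VGA text mode, 80x25, grey on black. */
static volatile uint16_t *const vga = (uint16_t *) 0xB8000;
static size_t cursor;

static void kputchar(char c)
{
    if (c == '\n')
        cursor += 80 - (cursor % 80);          /* jump to the next row */
    else
        vga[cursor++] = (uint16_t) c | 0x0700; /* character + attribute byte */
}

/* Your stripped-down stdio.c would route its printf-style output here. */
void kputs(const char *s)
{
    while (*s)
        kputchar(*s++);
}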
Just retarget newlib.
printf, scanf, etc. rely on implementation-specific functions to read or print a single char. You can then make your stdin and stdout UART 1, for example.
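With newlib, that usually means supplying the _write stub from its syscall glue layer. A minimal sketch, where uart_putc() stands in for whatever your board's UART driver provides:

/* Newlib routes all stream output through _write(); send fds 1 and 2 to a UART.
   uart_putc() is a hypothetical driver function for your hardware. */
extern void uart_putc(char c);

int _write(int fd, char *buf, int len)
{
    if (fd == 1 || fd == 2) {           /* stdout and stderr */
        for (int i = 0; i < len; i++) {
            if (buf[i] == '\n')
                uart_putc('\r');        /* many terminals want CRLF */
            uart_putc(buf[i]);
        }
        return len;
    }
    return -1;                          /* no other fds supported */
}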
The kernel itself would not require the printf and scanf functions if you do not want to stay in kernel mode and instead work through the apps you have planned. But for basic printf and scanf features, you can write your own printf and scanf functions, which would provide basic support for printing and taking input. I do not have much experience with this, but you can try making a console buffer, where the keyboard driver puts the characters read in (after conversion from scan codes to ASCII), and then make printf and scanf work on it. I have one basic implementation where I wrote a gets instead of scanf and kept things simple. To read an integer, you can write an atoi function to convert the string to a number.
To port in other libraries, you need to implement the components those libraries depend on. You need to decide whether you can code that support into the kernel so that the libraries can be ported in. If that is more difficult, then coding some basic input/output functions yourself won't be bad at this stage.
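A minimal atoi of the kind meant above (no overflow or error handling):

/* Convert a decimal string to an int; minimal sketch, no overflow checks. */
int my_atoi(const char *s)
{
    int sign = 1, n = 0;

    while (*s == ' ')                /* skip leading spaces */
        s++;
    if (*s == '-' || *s == '+')
        sign = (*s++ == '-') ? -1 : 1;
    while (*s >= '0' && *s <= '9')
        n = n * 10 + (*s++ - '0');
    return sign * n;
}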

Unbuffered I/O in ANSI C

For the sake of education, and programming practice, I'd like to write a simple library that can handle raw keyboard input, and output to the terminal in 'real time'.
I'd like to stick with ANSI C as much as possible; I just have no idea where to start with something like this. I've done several Google searches, and 99% of the results use libraries or are for C++.
I'd really like to get it working in windows, then port it to OSX when I have the time.
Sticking with Standard C as much as possible is a good idea, but you are not going to get very far with your adopted task using just Standard C. The mechanisms to obtain characters from the terminal one at a time are inherently platform specific. For POSIX systems (MacOS X), look at the <termios.h> header. Older systems use a vast variety of headers and system calls to achieve similar effects. You'll have to decide whether you are going to do any special character handling, remembering that things like 'line kill' can appear at the end of the line and zap all the characters entered so far.
For Windows, you'll need to delve into the WIN32 API - there is going to be essentially no commonality in the code between Unix and Windows, at least where you put the 'terminal' into character-by-character mode. Once you've got a mechanism to read single characters, you can manage common code - probably.
Also, you'll need to worry about the differences between characters and the keys pressed. For example, to enter 'ï' on MacOS X, you type option-u and i. That's three key presses.
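For the POSIX side, a minimal sketch using <termios.h> to read keys one at a time (non-canonical mode, no echo; always restore the original settings):

#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios orig, raw;

    tcgetattr(STDIN_FILENO, &orig);
    raw = orig;
    raw.c_lflag &= ~(ICANON | ECHO);            /* char-at-a-time, no echo */
    raw.c_cc[VMIN] = 1;                         /* read() returns after 1 byte */
    raw.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);

    char c;
    while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q')
        printf("got 0x%02x\r\n", (unsigned char) c);

    tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);  /* restore the terminal */
    return 0;
}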
To set an open stream to be non-buffered using ANSI C, you can do this:
#include <stdio.h>

/* stream must be a FILE * (e.g. stdout), not a file descriptor */
if (setvbuf(stream, NULL, _IONBF, 0) == 0)
    printf("Set stream to unbuffered mode\n");
(Reference: C89 4.9.5.6)
However, after that you're on your own. :-)
This is not possible using only standard ISO C. However, you can try using the following:
#include <stdio.h>
void setbuf(FILE * restrict stream, char * restrict buf);
and related functions.
Your best bet though is to use the ncurses library.
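If you do go the ncurses route, the character-at-a-time part of the task collapses to a few calls (a minimal sketch; link with -lncurses):

#include <curses.h>

int main(void)
{
    initscr();                  /* start curses mode */
    cbreak();                   /* char-at-a-time input */
    noecho();                   /* don't echo keypresses */
    keypad(stdscr, TRUE);       /* decode arrow/function keys */

    int ch;
    while ((ch = getch()) != 'q') {
        printw("key: %d\n", ch);
        refresh();
    }

    endwin();                   /* restore the terminal */
    return 0;
}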
