Is glib usable in an unobtrusive way?

I was looking for a good general-purpose library for C on top of the standard C library, and have seen several suggestions to use glib. How 'obtrusive' is it in your code? To explain what I mean by obtrusiveness, the first thing I noticed in the reference manual is the basic types section, thinking to myself, "what, am I going to start using gint, gchar, and gprefixing geverything gin gmy gcode gnow?"
More generally, can you use it only locally without other functions or files in your code having to be aware of its use? Does it force certain assumptions on your code, or constraints on your compilation/linking process? Does it take up a lot of memory in runtime for global data structures? etc.

The most obtrusive thing about glib is that any program or library using it is non-robust against resource exhaustion. It unconditionally calls abort() when malloc fails, and there's nothing you can do to fix this, as the entire library is designed around the concept that its internal allocation function g_malloc "can't fail".
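To make the contrast concrete, here is a minimal sketch (compile with the usual glib-2.0 flags from pkg-config):

#include <glib.h>
#include <stdlib.h>

int main(void)
{
    /* g_malloc() never returns NULL: on allocation failure it
       aborts the whole process, so there is nothing to check. */
    gpointer p = g_malloc(64);
    g_free(p);

    /* Plain malloc() reports failure and lets the caller recover. */
    void *q = malloc(64);
    if (q == NULL)
        return 1;   /* handle resource exhaustion gracefully */
    free(q);
    return 0;
}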
As for the ugly "g" types, you definitely don't need any casts. The types are 100% equivalent to the standard types, and are basically just cruft from the early (mis)design of glib. Unfortunately the glib developers lack much understanding of C, as evidenced by this FAQ:
Why use g_print, g_malloc, g_strdup and fellow glib functions?
"Regarding g_malloc(), g_free() and siblings, these functions are much safer than their libc equivalents. For example, g_free() just returns if called with NULL.
(Source: https://developer.gnome.org/gtk-faq/stable/x908.html)
FYI, free(NULL) is perfectly valid C, and does the exact same thing: it just returns.

I have used GLib professionally for over 6 years, and have nothing but praise for it. It is very lightweight, with lots of great utilities like lists, hash tables, rand functions, I/O libraries, threads/mutexes/condition variables and even GObject. All done in a portable way. In fact, we have compiled the same GLib code on Windows, OS X, Linux, Solaris, iOS, Android and ARM Linux without any hiccups on the GLib side.
In terms of obtrusiveness, I have definitely "bought into the g", and there is no doubt in my mind that this has been extremely beneficial in producing stable, portable code at great speed. Maybe especially when it comes to writing advanced tests.
And if g_malloc doesn't suit your purpose, simply use malloc instead; that of course goes for the rest of the library as well.

Of course you can "forget about it elsewhere", unless of course those other places somehow interact with glib code; then there's a connection (and, arguably, you're not really "elsewhere").
You don't have to use the types that are just regular types with a prepended g (gchar, gint and so on); they're guaranteed to be the same as char, int and so on. You never need to cast to/from gint for instance.
I think the intention is that application code should never use gint, it's just included so that the glib code can be more consistent.
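For instance, this trivial sketch mixes the two families freely:

#include <glib.h>

int main(void)
{
    gint  n = 42;
    int  *p = &n;    /* fine: gint is a typedef for int, no cast needed */
    gchar c = 'x';
    char *s = &c;    /* likewise, gchar is just char */
    return (*p == 42 && *s == 'x') ? 0 : 1;
}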

Related

Why are there so many functions in the string.h library that are "not recommended for use"?

There is something I try to understand about C's origins: why are there functions that are not recommended for use in most SO questions? Like strtok or strncpy, they are simply not safe to work with. Everywhere I see recommendations to write my own implementation. Why wouldn't the standard replace strncpy, for example, with BSD's strlcpy, instead of being left with these "monsters"?
C is a product of the early 1970s, and it shows. Many of the iffier library functions were written when the C user community was very small and limited to academia, most of whom were experienced programmers.
By the time the first standard was released in 1989, those original library functions were already entrenched in 10 to 15 years' worth of legacy code (not the least of which was the Unix operating system and most of its tools). The committee in charge of standardization was loath to break the existing codebase, so those functions were incorporated into the standard pretty much as-is; all that really changed was adding prototype syntax to the declarations and changing char * to void * where necessary (malloc, memcpy, memset, etc.).
AFAIK, only one library function has actually been removed from the language since standardization - gets. The mayhem caused by that one library call is scarier than the prospect of breaking what is by now almost 40 years' worth of legacy code.
There is a LOT of legacy "C" and "C++" code out there. If they removed all the "unsafe" functions from the "C" runtime libraries, it would be prohibitive for many developers to upgrade their compilers because all the old code wouldn't build any more.
Sometimes they will give "deprecated" compiler messages (MSFT is fond of this) so you will find and change to using the new, safer functions.
New code should use the "safe" functions, of course, but many of us are stuck with old compilers and legacy code to maintain :)
They still exist because of their historical, ancestral relationship with the old systems and code that still use them, i.e. to support backward compatibility.
Writing your own implementation is suggested so that the programmer applies their own logic at their own risk, since no one knows the environment better than the programmer themselves; for example, strtok is not thread-safe.
It's all just dogma. Use the functions, just be aware that they're indifferent to your goals in that they might not work in all circumstances (e.g. strtok and multithreading) or they expect conditions to be caught before/after usage (e.g. strncpy and missing termination characters).
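The strncpy caveat, for instance, comes down to one line of discipline; a minimal sketch:

#include <string.h>

/* Copy src into a fixed-size buffer, guaranteeing termination:
   strncpy does NOT append '\0' when src fills the whole buffer. */
static void copy_bounded(char dst[16], const char *src)
{
    strncpy(dst, src, 15);
    dst[15] = '\0';   /* terminate explicitly, no matter what */
}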

Is it a good idea to compile a language to C?

All over the web, I am getting the feeling that writing a C backend for a compiler is not such a good idea anymore. GHC's C backend is not being actively developed anymore (this is my unsupported feeling). Compilers are targeting C-- or LLVM.
Normally, I would think that GCC is a good old mature compiler that performs well at optimizing code, and therefore compiling to C would use the maturity of GCC to yield better and faster code. Is this not true?
I realize that the question greatly depends on the nature of the language being compiled and on other factors such as getting more maintainable code. I am looking for a rather more general answer (w.r.t. the compiled language) that focuses solely on performance (disregarding code quality, etc.). I would also be really glad if the answer included an explanation of why GHC is drifting away from C and why LLVM performs better as a backend (see this) or any other examples of compilers doing the same that I am not aware of.
Let me list my two biggest problems with compiling to C. Whether this is a problem for your language depends on what kind of features you have.
Garbage collection: When you have garbage collection you may have to interrupt regular execution at just about any point in the program, and at this point you need to access all pointers that point into the heap. If you compile to C you have no idea where those pointers might be. C is responsible for local variables, arguments, etc. The pointers are probably on the stack (or maybe in other register windows on a SPARC), but there is no real access to the stack. And even if you scan the stack, which values are pointers? LLVM actually addresses this problem (though I don't know how well, since I've never used LLVM with GC).
Tail calls: Many languages assume that tail calls work (i.e., that they don't grow the stack); Scheme mandates it, Haskell assumes it. This is not the case with C. Under certain circumstances you can convince some C compilers to do tail calls. But you want tail calls to be reliable, e.g., when tail calling an unknown function. There are clumsy workarounds, like trampolining, but nothing quite satisfactory.
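For illustration, here is what a trampolined compilation scheme can look like (a sketch with made-up names, not output from any real compiler). Each "tail call" returns a description of the next step instead of calling it, so the driver loop runs in constant C stack space:

#include <stdio.h>
#include <stdbool.h>

struct cont;
typedef struct cont (*step_fn)(long n, bool *result);
struct cont { step_fn next; long n; };

static struct cont is_odd(long n, bool *result);

static struct cont is_even(long n, bool *result)
{
    if (n == 0) { *result = true;  return (struct cont){ NULL, 0 }; }
    return (struct cont){ is_odd, n - 1 };    /* "tail call" */
}

static struct cont is_odd(long n, bool *result)
{
    if (n == 0) { *result = false; return (struct cont){ NULL, 0 }; }
    return (struct cont){ is_even, n - 1 };   /* "tail call" */
}

int main(void)
{
    bool result = false;
    /* As direct mutual recursion this depth would overflow the C
       stack; the trampoline turns it into a flat loop. */
    struct cont c = { is_even, 1000000 };
    while (c.next)
        c = c.next(c.n, &result);
    printf("%s\n", result ? "even" : "odd");
    return 0;
}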
While I'm not a compiler expert, I believe that it boils down to the fact that you lose something in translation to C as opposed to translating to e.g. LLVM's intermediate language.
If you think about the process of compiling to C, you create a compiler that translates to C code, then the C compiler translates to an intermediate representation (the in-memory AST), then translates that to machine code. The creators of the C compiler have probably spent a lot of time optimizing certain human-made patterns in the language, but you're not likely to be able to create a fancy enough compiler from a source language to C to emulate the way humans write code. There is a loss of fidelity going to C - the C compiler doesn't have any knowledge about your original code's structure. To get those optimizations, you're essentially back-fitting your compiler to try to generate C code that the C compiler knows how to optimize when it's building its AST. Messy.
If, however, you translate directly to LLVM's intermediate language, that's like compiling your code to a machine-independent high-level bytecode, which is akin to the C compiler giving you access to specify exactly what its AST should contain. Essentially, you cut out the middleman that parses the C code and go directly to the high-level representation, which preserves more of the characteristics of your code by requiring less translation.
Also related to performance, LLVM can do some really tricky stuff for dynamic languages like generating binary code at runtime. This is the "cool" part of just-in-time compilation: it is writing binary code to be executed at runtime, instead of being stuck with what was created at compile time.
Part of the reason for GHC's moving away from the old C backend was that the code produced by GHC was not code gcc could optimise particularly well. So with GHC's native code generator getting better, there was less return for a lot of work. As of 6.12, the NCG's code was only slower than the C-compiled code in very few cases, so with the NCG getting even better in ghc-7, there was not sufficient incentive to keep the gcc backend alive. LLVM is a better target because it's more modular, and one can do many optimisations on its intermediate representation before passing the result to it.
On the other hand, last I looked, JHC still produced C and the final binary from that, typically (exclusively?) by gcc. And JHC's binaries tend to be quite fast.
So if you can produce code the C compiler handles well, that is still a good option, but it's probably not worth jumping through too many hoops to produce good C if you can more easily produce good executables via another route.
One point that hasn't been brought up yet is, how close is your language to C? If you're compiling a fairly low-level imperative language, C's semantics may map very closely to the language you're implementing. If that's the case, it's probably a win, because the code written in your language is likely to resemble the kind of code someone would write in C by hand. That was definitely not the case with Haskell's C backend, which is one reason why the C backend optimized so poorly.
Another point against using a C backend is that C's semantics are actually not as simple as they look. If your language diverges significantly from C, using a C backend means you're going to have to keep track of all those infuriating complexities, and possibly differences between C compilers as well. It may be easier to use LLVM, with its simpler semantics, or devise your own backend, than keep track of all that.
Aside from all the code generator quality reasons, there are also other problems:
1. The free C compilers (gcc, clang) are a bit Unix-centric.
2. Supporting more than one compiler (e.g. gcc on Unix and MSVC on Windows) requires duplication of effort.
3. Compilers might drag in runtime libraries (or even *nix emulation layers) on Windows that are painful to deal with. Having two different C runtimes (e.g. Linux libc and msvcrt) to base on complicates your own runtime and its maintenance.
4. You get a big externally versioned blob in your project, which means a major version transition (e.g. a change of mangling that hurts your runtime lib, or ABI changes like a change of alignment) might require quite some work. Note that this goes for the compiler AND externally versioned (parts of the) runtime library. And multiple compilers multiply this. It is not as bad for C as a backend, though, as in the case where you directly connect to (read: bet on) a backend, like being a gcc/llvm frontend.
5. In many languages that follow this path, you see C-isms trickle through into the main language. Of course this won't happen to you, but you will be tempted :-)
6. Language functionality that doesn't directly map to standard C (like nested procedures and other things that need stack fiddling) is difficult.
7. If something is wrong, users will be confronted with C-level compiler or linker errors that are outside their field of experience. Parsing them and making them your own is painful, especially with multiple compilers and versions.
Note that point 4 also means that you will have to invest time to just keep things working when the external projects evolve. That is time that generally doesn't really go into your project, and since the project is more dynamic, multiplatform releases will need a lot of extra release engineering to cater for change.
So in short, from what I've seen, while such a move allows a swift start (getting a reasonable code generator for free for many architectures), there are downsides. Most of them are related to loss of control and the poor Windows support of *nix-centric projects like gcc. (LLVM is too new to say much about long-term, but their rhetoric sounds a lot like gcc's did ten years ago.) If a project you are hugely dependent on keeps a certain course (like GCC moving to Win64 awfully slowly), then you are stuck with it.
First, decide if you want to have serious non-*nix (OS X being more unixy) support, or only a Linux compiler with a mingw stopgap for Windows. A lot of compilers need first-rate Windows support.
Second, how finished must the product become? What's the primary audience? Is it a tool for the open source developer who can handle a DIY toolchain, or do you want to target a beginner market (like many 3rd party products, e.g. RealBasic)?
Or do you really want to provide a well rounded product for professionals with deep integration and complete toolchains?
All three are valid directions for a compiler project. Ask yourself what your primary direction is, and don't assume that more options will be available in time. E.g. evaluate where projects are now that chose to be a GCC frontend in the early nineties.
Essentially, the Unix way is to go wide (maximize platforms).
The complete suites (like VS and Delphi, the latter of which recently also started to support OS X and has supported Linux in the past) go deep and try to maximize productivity. (They support especially the Windows platform nearly completely, with deep levels of integration.)
The 3rd party projects are less clear cut. They go more after self-employed programmers, and niche shops. They have less developer resources, but manage and focus them better.
As you mentioned, whether C is a good target language depends very much on your source language. So here are a few reasons why C has disadvantages compared to LLVM or a custom target language:
Garbage Collection: A language that wants to support efficient garbage collection needs to know extra information that interferes with C. If an allocation fails, the GC needs to find which values on the stack and in registers are pointers and which aren't. Since the register allocator is not under our control we need to use more expensive techniques such as writing all pointers to a separate stack. This is just one of many issues when trying to support modern GC on top of C. (Note that LLVM also still has some issues in that area, but I hear it's being worked on.)
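A sketch of that "separate stack" (often called a shadow stack); the names here are illustrative, not from any particular runtime:

#include <stddef.h>

#define SHADOW_MAX 4096
static void  *shadow_stack[SHADOW_MAX];
static size_t shadow_top;

/* Generated code brackets every live heap pointer with push/pop,
   paying one extra store per pointer in the mutator. */
#define GC_PUSH(p) (shadow_stack[shadow_top++] = (p))
#define GC_POP(n)  (shadow_top -= (n))

/* At a collection, the GC scans shadow_stack[0..shadow_top) and
   knows every entry is a root pointer -- no stack guessing needed. */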
Feature mapping & language-specific optimisations: Some languages rely on certain optimisations, e.g., Scheme relies on tail-call optimisation. Modern C compilers can do this but are not guaranteed to, which could cause problems if a program relies on it for correctness. Another feature that could be difficult to support on top of C is co-routines.
Most dynamically typed languages also cannot be optimised well by C compilers. For example, Cython compiles Python to C, but the generated C uses calls to many generic functions which are unlikely to be optimised well even by the latest GCC versions. Just-in-time compilation à la PyPy/LuaJIT/TraceMonkey/V8 is much more suited to giving good performance for dynamic languages (at the cost of much higher implementation effort).
Development Experience: Having an interpreter or JIT can also give you a much more convenient experience for developers -- generating C code, then compiling it and linking it, will certainly be slower and less convenient.
That said, I still think it's a reasonable choice to use C as a compilation target for prototyping new languages. Given that LLVM was explicitly designed as a compiler backend, I would only consider C if there are good reasons not to use LLVM. If the source-level language is very high-level, though, you most likely need an earlier higher-level optimisation pass, as LLVM is indeed very low-level (e.g., GHC performs most of its interesting optimisations before calling into LLVM). Oh, and if you're prototyping a language, using an interpreter is probably easiest -- just try to avoid features that rely too much on being implemented by an interpreter.
Personally I would compile to C. That way you have a universal intermediary language and don't need to be concerned about whether your compiler supports every platform out there. Using LLVM might get some performance gains (although I would argue the same could probably be achieved by tweaking your C code generation to be more optimizer-friendly), but it will lock you in to only supporting targets LLVM supports, and having to wait for LLVM to add a target when you want to support something new, old, different, or obscure.
As far as I know, C can't query or manipulate processor flags.
This answer is a rebuttal to some of the points made against C as a target language.
Tail call optimizations
Any function that can be tail call optimized is actually equivalent to an iteration (it's an iterative process, in SICP terminology). Additionally, many recursive functions can and should be made tail recursive, for performance reasons, by using accumulators etc.
Thus, in order for your language to guarantee tail call optimization, you would have to detect it and simply not map those functions to regular C functions - but instead create iterations from them.
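For example, a tail-recursive factorial in the source language could be emitted as a plain C loop (a sketch; the source syntax is made up):

/* Source (pseudo): fact(n, acc) = if n == 0 then acc else fact(n - 1, acc * n) */
long fact(long n, long acc)
{
    for (;;) {               /* the self tail call becomes a loop */
        if (n == 0)
            return acc;
        acc = acc * n;       /* the tail call turns into parameter updates */
        n = n - 1;
    }
}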
Garbage collection
It can actually be implemented in C. You can create a run-time system for your language which consists of some basic abstractions over the C memory model - using, for example, your own memory allocators, constructors, special pointers for objects in the source language, etc.
For example, instead of employing regular C pointers for the objects in the source language, a special structure could be created, over which a garbage collection algorithm could be implemented. The objects in your language (more accurately, references) could behave just like in Java, but in C they could be represented along with meta-information (which you wouldn't have if you were working just with pointers).
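Such a GC-aware reference might look something like this (a hypothetical layout, just to make the idea concrete):

/* Every source-language object carries a header with the
   metadata a simple mark-and-sweep collector needs. */
typedef struct gc_header {
    struct gc_header *next;     /* chains all allocations for sweeping */
    unsigned          type;     /* layout tag: which payload words are pointers */
    unsigned          marked;   /* mark bit set during tracing */
} gc_header;

/* A "reference" in the source language is a pointer to the header;
   the object's payload is laid out immediately after it. */
typedef gc_header *ref;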
Of course, such a system could have problems integrating with existing C tooling - depends on your implementation and trade-offs that you're willing to make.
Lacking operations
hippietrail noted that C lacks rotate operators (by which I assume he meant circular shift) that are supported by processors. If such operations are available in the instruction set, then they can be added using inline assembly.
The frontend would in this case have to detect the architecture it's compiling for and provide the proper snippets. Some kind of fallback in the form of a regular function should also be provided.
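In practice inline assembly is often unnecessary: the portable fallback below is a pattern that compilers such as GCC and Clang typically recognize and compile to a single rotate instruction (a sketch):

#include <stdint.h>

/* Portable 32-bit left rotate. Masking with & 31 avoids the
   undefined behaviour of shifting by 32 when n is 0 or 32. */
static inline uint32_t rotl32(uint32_t x, unsigned n)
{
    return (x << (n & 31)) | (x >> (-n & 31));
}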
This answer seems to be addressing some core issues seriously. I'd like to see some more substantiation on which problems exactly are caused by C's semantics.
There's a particular case to consider: writing a programming language with strong security* or reliability requirements.
For one, it would take you years to know a big enough subset of C well enough that you know all the C operations you choose to employ in your compilation are safe and don't invoke undefined behaviour. Secondly, you'd then have to find an implementation of C that you can trust (which would mean a tiny trusted code base, and probably won't be very efficient). Not to mention you'll need to find a trusted linker, an OS capable of executing compiled C code, and some basic libraries, all of which would need to be well-defined and trusted.
So in this case you might as well use assembly language or, if you care about machine independence, some intermediate representation.
*please note that "strong security" here is not related at all to what banks and IT businesses claim to have
Is it a good idea to compile a language to C?
No.
...which begs one obvious question: why do some still think compiling via C is a good idea?
Two big arguments in favour of misusing C in this fashion are that it's stable and standardised:
For GCC, it's C or bust (but there is work underway which may allow other options).
For LLVM, there's the routine breaking of backwards-compatibility in its IR and APIs - what would you prefer: spending time on improving your project or chasing after LLVM?
Providing little more than a promise of stability is somewhat ironic, considering the intended purpose of LLVM.
For these and other reasons, there are various half-done, toy-like, lab-experiment, single-site/use, and otherwise-ignominious via-C backends scattered throughout cyberspace - being abandoned, most have succumbed to bit-rot. But there are some projects which do manage to progress to the mainstream, and their success is then used by via-C supporters to further perpetuate this fantasy.
But if you're one of those supporters, feel free to make fantasy into reality - there's that work happening in GCC, or the resurrected LLVM backend for C. Just imagine it: two well-built, well-maintained via-C backends into which the sum of all prior knowledge can be directed.
They just need you.

C setjmp.h and ucontext.h, which is better?

Hi, I need to jump from one place to another...
But I would like to know which is better to use, setjmp or ucontext, things like:
Are setjmp and ucontext portable?
Is my code thread-safe when using these libraries?
Why use one instead of the other?
Which is faster and more secure?
...(Can someone please answer future questions that I forgot to put here?)
Please give a little more information than I'm asking for, like examples or some docs...
I searched the web, but I only found exception handling in C as an example of setjmp, and I found nothing about ucontext.h except that it is used for multitasking. What's the difference between it and pthreads?
Thanks a lot.
setjmp is portable (ISO C89 and C99) and ucontext (obsolescent in SUSv3 and removed from SUSv4/POSIX 2008) is not. However ucontext was considerably more powerful in specification. In practice, if you used nasty hacks with setjmp/longjmp and signal handlers and alternate signal handling stacks, you could make these just about as powerful as ucontext, but they were not "portable".
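For reference, the portable setjmp/longjmp idiom looks like this (a minimal sketch):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

static void deep_inner(int fail)
{
    if (fail)
        longjmp(on_error, 1);   /* unwind back to the setjmp call site */
    puts("no error");
}

int main(void)
{
    if (setjmp(on_error) != 0) {   /* nonzero: we arrived via longjmp */
        puts("recovered from error");
        return 1;
    }
    deep_inner(1);
    return 0;
}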
Neither should be used for multithreading. For that purpose, use POSIX threads (the pthread functions). I have several reasons for saying this:
If you're writing threaded code, you might as well make it actually run concurrently. We're hitting the speed limits of non-parallel computing and future machines will be more and more parallel, so take advantage of that.
ucontext was removed from the standards and might not be supported in future OSes (or even some present ones?)
Rolling your own threads cannot be made transparent to library code you might want to use. It might break library code that makes reasonable assumptions about concurrency, locking, etc. As long as your multithreading is cooperative rather than async-signal-based there are probably not too many issues like this, but once you've gotten this deep into nonportable hacks things can get very fragile.
...and probably some more reasons I can't think of right now. :-)
On the portability matter, setjmp() is portable to all hosted C implementations; the <ucontext.h> functions are part of the XSI extensions to POSIX - this makes setjmp() significantly more portable.
It is possible to use setjmp() in a thread-safe manner. It doesn't make much sense to use the ucontext functions in a threaded program - you would use multiple threads rather than multiple contexts.
Use setjmp() if you want to quickly return from a deeply-nested function call (this is why you find that most examples show its use for exception handling). Use the ucontext functions for implementing user-space threads or coroutines (or don't use them at all).
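A minimal sketch of the coroutine use case (using the obsolescent XSI interfaces, so treat it as illustrative; some systems require _XOPEN_SOURCE to be defined to expose them):

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static char co_stack[64 * 1024];

static void coroutine(void)
{
    puts("coroutine: first entry");
    swapcontext(&co_ctx, &main_ctx);   /* yield back to main */
    puts("coroutine: resumed");
}

int main(void)
{
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = co_stack;
    co_ctx.uc_stack.ss_size = sizeof co_stack;
    co_ctx.uc_link = &main_ctx;        /* where to go when coroutine returns */
    makecontext(&co_ctx, coroutine, 0);

    swapcontext(&main_ctx, &co_ctx);   /* run coroutine until it yields */
    puts("main: coroutine yielded");
    swapcontext(&main_ctx, &co_ctx);   /* resume it to completion */
    puts("main: coroutine finished");
    return 0;
}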
The "fast and secure" question makes no sense. The implementations are typically as fast as it is practical to make them, but they perform different functions so cannot be directly compared (the ucontext functions do more work, so will typically be slightly slower).
Note that the ucontext functions are listed as obsolescent in the two most recent editions of POSIX. The pthreads threading functions should generally be used instead.
setjmp/longjmp are only intended to restore a "calling" context, so you can use them only to do a "fast exit" from a chain of subroutines. Different uses may or may not work depending on the system, but in general these functions are not intended to do this kind of thing. So ucontext is better. Also have a look at "fibers" (native on Windows). Here's a link to an article that may be helpful:
How to implement a practical fiber scheduler?
Bye!

Is there something to replace the <ucontext.h> functions?

The user thread functions in <ucontext.h> are deprecated because they use a deprecated C feature (they use a function declaration with empty parentheses for an argument).
Is there a standard replacement for them? I don't feel full-fledged threads are good at implementing cooperative threading.
If you really want to do something like what the ucontext.h functions allow, I would keep using them. Anything else will be less portable. Marking them obsolescent in POSIX seems to have been a horrible mistake of pedantry by someone on the committee. POSIX itself requires function pointers and data pointers to be the same size and function pointers to be representable when cast to void *, and C itself requires a cast between function pointer types and back to be round-trip safe, so there are many ways this issue could have been solved.
There is one real problem, that converting the int argc, ... passed into makecontext into a form to pass to the function cannot be done without major assistance from the compiler unless the calling convention for variadic and non-variadic functions happens to be the same (and even then it's rather questionable whether it can be done robustly). This problem however could have been solved simply by deprecating the use of makecontext in any form other than makecontext(ucp, func, 1, (void *)arg);.
Perhaps a better question though is why you think ucontext.h functions are the best way to handle threading. If you do want to go with them, I might suggest writing a wrapper interface that you can implement either with ucontext.h or with pthreads, then comparing the performance and bloat. This will also have the advantage that, should future systems drop support for ucontext.h, you can simply switch to compiling with the pthread-based implementation and everything will simply work. (By then, the bloat might be less important, the benefit of multi-core/SMP will probably be huge, and hopefully pthread implementations will be less bloated.)
Edit (based on OP's request): To implement "cooperative threading" with pthreads, you need condition variables. Here's a decent pthreads tutorial with information on using them:
https://computing.llnl.gov/tutorials/pthreads/#ConditionVariables
Your cooperative multitasking primitive of "hand off execution to thread X" would go something like:
self->flag = 0;                           /* mark ourselves as not runnable */
other_thread->flag = 1;                   /* mark the target thread runnable */
pthread_mutex_lock(other_thread->mutex);
pthread_cond_signal(other_thread->cond);  /* wake the other thread */
pthread_mutex_unlock(other_thread->mutex);
pthread_mutex_lock(self->mutex);
while (!self->flag)                       /* loop guards against spurious wakeups */
    pthread_cond_wait(self->cond, self->mutex);  /* sleep until woken */
pthread_mutex_unlock(self->mutex);
Hope I got that all right; at least the general idea is correct. If anyone sees mistakes please comment so I can fix it. Half of the locking (other_thread's mutex) is probably entirely unnecessary with this sort of usage, so you could perhaps make the mutex a local variable in the task_switch function. All you'd really be doing is using pthread_cond_wait and pthread_cond_signal as "go to sleep" and "wake up other thread" primitives.
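For completeness, the snippet assumes each task carries some bookkeeping along these lines (hypothetical names, not from any library):

#include <pthread.h>

/* Per-task state used by the hand-off snippet above. */
struct task {
    pthread_mutex_t *mutex;   /* protects flag */
    pthread_cond_t  *cond;    /* signalled to wake this task */
    int              flag;    /* 1 = runnable, 0 = sleeping */
};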
For what it's worth, there's a Boost.Context library that was recently accepted and needs only to be merged into an official Boost release. Boost.Context addresses the same use cases as the POSIX ucontext family: low-overhead cooperative context switching. The author has taken pains with performance issues.
No, there is no standard replacement for them.
Your options are:
continue to use <ucontext.h> even though it contains obsolete C
switch to pthreads
write your own co-thread library
use an existing (and possibly not-so-portable) co-thread library such as http://swtch.com/libtask/ , though many such libraries are implemented on top of ucontext.h
The Open Group Base Specifications Issue 6 (IEEE Std 1003.1, 2004 Edition) still lists makecontext() and swapcontext() with the same deprecated syntax. I have not seen anything more recent.

Safer Alternatives to the C Standard Library

The C standard library is notoriously poor when it comes to I/O safety. Many functions are prone to buffer overflows (gets, scanf), or can clobber memory if not given proper arguments (scanf), and so on. Every once in a while, I come across an enterprising hacker who has written his own library that lacks these flaws.
What are the best of these libraries you have seen? Have you used them in production code, and if so, which held up as more than hobby projects?
I use the GLib library; it has many good standard and non-standard functions.
See https://developer.gnome.org/glib/stable/
and maybe you'll fall in love... :)
For example:
https://developer.gnome.org/glib/stable/glib-String-Utility-Functions.html#g-strdup-printf
explains that g_strdup_printf is:
Similar to the standard C sprintf() function but safer, since it calculates the maximum space required and allocates memory to hold the result.
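In use that looks like this (a minimal sketch):

#include <glib.h>

int main(void)
{
    /* Unlike sprintf into a fixed buffer, g_strdup_printf computes
       the space needed and allocates it, so it cannot overflow. */
    gchar *msg = g_strdup_printf("user %s has %d items", "alice", 3);
    g_print("%s\n", msg);
    g_free(msg);   /* the caller owns the returned string */
    return 0;
}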
This isn't really answering your question about the safest libraries to use, but most functions that are vulnerable to buffer overflows that you mentioned have safer versions which take the buffer length as an argument to prevent the security holes that are opened up when the standard methods are used.
Unless you have relaxed the level of warnings, you will usually get compiler warnings when you use the deprecated methods, suggesting you use the safer methods instead.
I believe the Apache Portable Runtime (apr) library is safer than the standard C library. I use it, well, as part of an apache module, but also for independent processes.
For Windows there is a 'safe' C/C++ library.
You're always at liberty to implement any library you like and to use it - the hard part is making sure it is available on the platforms you need your software to work on. You can also use wrappers around the standard functions where appropriate.
Whether it is really a good idea is somewhat debatable, but there is TR24731 published by the C standard committee - for a safer set of C functions. There's definitely some good stuff in there. See this question: Do you use the TR 24731 Safe Functions in your C code?, which includes links to the technical report.
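For a taste of the TR 24731-1 interfaces (adopted as optional Annex K in C11; availability varies a lot between libcs, so this sketch tests for support first):

#define __STDC_WANT_LIB_EXT1__ 1   /* request the bounds-checked interfaces */
#include <string.h>
#include <stdio.h>

int main(void)
{
#ifdef __STDC_LIB_EXT1__
    char dst[8];
    if (strcpy_s(dst, sizeof dst, "hello") == 0)
        puts(dst);                 /* copy succeeded and fit */
    else
        puts("copy rejected");     /* source too long: dst is not overrun */
#else
    puts("Annex K not provided by this implementation");
#endif
    return 0;
}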
Maybe the first question to ask is if you really need plain C? (Maybe a language like .NET or Java is an option - then e.g. buffer overflows are not really a problem anymore.)
Another option is maybe to write parts of your project in C++ if other higher level languages are not an option. You can then have a C interface which encapsulates the C++ code if you really need C.
Because if you add all the advanced functionality the C++ standard library has built in, your C code would only be marginally faster most of the time (and contain a lot more bugs than an existing and tested framework).
