Is it correct to use (parts of) GLib without calling g_main_loop_run? If so, how do I identify which parts of GLib I can use like this?
I'm mostly interested in (as referred to by https://developer.gnome.org/glib/2.34/index.html):
GLib Data Types;
GLib Utilities.
Common sense tells me that there should be nothing there that requires GMainLoop (except Timers, maybe?), but I'm a complete GLib newbie, and I somehow didn't find any explicit statement in the docs about when GMainLoop is required and when it is not.
From the "GLib Core Application Support" section I'd like to use Message Logging, but I'm not sure about its interaction with the main loop.
For those wondering about why, I use FUSE/osxfuse, which already has its main loop, and I'm not sure how easy it is to deconstruct it and integrate into GMainLoop.
Also, I welcome alternative C library suggestions. Looking through GLib docs I rather like it, but I feel uneasy about it trying to be a framework, rather than a set of libraries.
Very little of the GLib code requires the main loop; timers, for example, are implemented using the system's normal timestamp functions.
The code that does require the main loop will reference it, such as the IO Channels. Even then you can see that it's possible to use the IO Channels with or without the main loop; it's your choice.
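For example, here is a minimal sketch that uses a data type (GHashTable) and Message Logging without ever creating a GMainLoop; build it with pkg-config --cflags --libs glib-2.0:

#include <glib.h>

int main(void)
{
    /* Data types and message logging work without a GMainLoop. */
    GHashTable *table = g_hash_table_new(g_str_hash, g_str_equal);
    g_hash_table_insert(table, "answer", GINT_TO_POINTER(42));

    gpointer value = g_hash_table_lookup(table, "answer");
    g_message("answer = %d", GPOINTER_TO_INT(value));

    g_hash_table_destroy(table);
    return 0;
}

The pieces that do need a running main loop, such as g_timeout_add and g_idle_add, say so in their documentation, which is a reasonable rule of thumb for deciding what you can use.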
Related
I was looking for a good general-purpose library for C on top of the standard C library, and have seen several suggestions to use glib. How 'obtrusive' is it in your code? To explain what I mean by obtrusiveness, the first thing I noticed in the reference manual is the basic types section, thinking to myself, "what, am I going to start using gint, gchar, and gprefixing geverything gin gmy gcode gnow?"
More generally, can you use it only locally without other functions or files in your code having to be aware of its use? Does it force certain assumptions on your code, or constraints on your compilation/linking process? Does it take up a lot of memory in runtime for global data structures? etc.
The most obtrusive thing about glib is that any program or library using it is non-robust against resource exhaustion. It unconditionally calls abort when malloc fails, and there's nothing you can do to fix this, as the entire library is designed around the concept that its internal allocation function g_malloc "can't fail".
As for the ugly "g" types, you definitely don't need any casts. The types are 100% equivalent to the standard types, and are basically just cruft from the early (mis)design of glib. Unfortunately the glib developers lack much understanding of C, as evidenced by this FAQ:
Why use g_print, g_malloc, g_strdup and fellow glib functions?
"Regarding g_malloc(), g_free() and siblings, these functions are much safer than their libc equivalents. For example, g_free() just returns if called with NULL.
(Source: https://developer.gnome.org/gtk-faq/stable/x908.html)
FYI, free(NULL) is perfectly valid C, and does the exact same thing: it just returns.
I have used GLib professionally for over 6 years, and have nothing but praise for it. It is very light-weight, with lots of great utilities like lists, hashtables, rand-functions, io-libraries, threads/mutexes/conditionals and even GObject. All done in a portable way. In fact, we have compiled the same GLib-code on Windows, OSX, Linux, Solaris, iOS, Android and Arm-Linux without any hiccups on the GLib side.
In terms of obtrusiveness, I have definitely "bought into the g", and there is no doubt in my mind that this has been extremely beneficial in producing stable, portable code at great speed. Maybe especially when it comes to writing advanced tests.
And if g_malloc doesn't suit your purpose, simply use malloc instead; that of course goes for all of it.
Of course you can "forget about it elsewhere", unless those other places somehow interact with glib code; then there's a connection (and, arguably, you're not really "elsewhere").
You don't have to use the types that are just regular types with a prepended g (gchar, gint and so on); they're guaranteed to be the same as char, int and so on. You never need to cast to/from gint for instance.
I think the intention is that application code should never use gint, it's just included so that the glib code can be more consistent.
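For instance, a minimal sketch showing the g-types and the plain C types mixing with no casts:

#include <glib.h>

/* gint is just a typedef for int, so the two mix freely. */
static gint add(gint a, gint b)
{
    return a + b;
}

int main(void)
{
    int sum = add(2, 3);   /* plain ints in, plain int out, no casts anywhere */
    g_print("%d\n", sum);
    return 0;
}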
I have a large code base of quite old C code on an embedded system and unfortunately there are no automated test cases/suites. This makes restructuring and refactoring code a dangerous task.
Manually writing test cases is very time consuming, so I thought it should be possible to automate at least some part of this process, for instance by tracing all the function calls and recording the input and output values. I could then use these values in the test cases (this would not work for all functions, but at least for some). It would probably also be possible to create mock functions based on the gathered data.
Having such test cases would make refactoring a less dangerous activity.
Are there any solutions that already can do this? What would be the easiest way to get this to work if I had to code it myself?
I thought about using ctags to find the function definitions, and wrapping them in a function that records the parameter values. Another possibility would probably be a gcc compiler plugin.
There is a gcc option, -finstrument-functions, which you can use to define your own callbacks for each function's entry/exit.
Google it and you can find many good examples.
[Edit] With this gcc option's callbacks you can only track the function's entry/exit, not the params, but with some tricks you may also track the params (walk up from the current frame pointer to get the params on the stack).
Here is an article talking about the idea of the implementation:
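For instance, a minimal sketch of the two hooks gcc expects when you build with -finstrument-functions (the no_instrument_function attribute keeps the hooks themselves from being traced; where and how you log is up to you):

#include <stdio.h>

void __cyg_profile_func_enter(void *this_fn, void *call_site)
    __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *this_fn, void *call_site)
    __attribute__((no_instrument_function));

/* gcc inserts a call to this at the start of every instrumented function */
void __cyg_profile_func_enter(void *this_fn, void *call_site)
{
    fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

/* ...and a call to this just before every return */
void __cyg_profile_func_exit(void *this_fn, void *call_site)
{
    fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}

The raw addresses can be mapped back to function names afterwards with addr2line or dladdr.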
http://linuxgazette.net/151/melinte.html
Furthermore, depending on your embedded system, on Linux you can try something like ltrace to show the params (in the same way strace does). There are many tools that do function tracing in userspace or kernel space on Linux: ftrace/ust/ltrace/utrace/strace/systemtap. Anyway, if you do not add any debugging code yourself, it's not possible to display the params reliably. If you accept the effort of adding entry/exit debugging information, it's much easier.
Also, here is a similar thread talking about this problem:
Tool to trace local function calls in Linux
I am doing some research into platform independent code and found mention of the dlfcn API. It was the first time I came across it, so I did some further research. Now, hopefully my lack of experience/understanding of platform independent code, as well as of compiling/linking, isn't going to show in this post, but to me the dlfcn API just lets us do programmatically the same dynamic linking that the ld utility does. If I have misconceptions please correct me, as I would like to know. Regarding what I think I know about the ld utility and the dlfcn API, I have some questions.
What are the advantages of using either the ld utility vs. dlfcn API to dynamically link?
My first thought was that the dlfcn API seems like a waste of my time, since I need to request pointers to the functions vs. having ld examine a symbol table for undefined symbols and then linking them. Similarly, ld does everything for me, while I have to do everything by hand with the dlfcn API (i.e. open/load the library, get a function pointer, close the library, etc.). But on second glance I thought that there may be some advantages. One being that we can unload a library from memory after we are done using it.
In this way memory could be saved if we knew we didn't need the library the whole time. I am unsure whether there is any "memory/library" management for libraries dynamically linked by ld. Similarly, I am unsure in what scenarios/environments we would be interested in using the dlfcn API to save said memory, as it seems this wouldn't be a problem on modern systems. I presume one would be using the library on a system with very, very limited resources (maybe some embedded system?).
What other advantages or disadvantages may there be?
What "coding pattern" is used for platform independent code in regards to dynamic linking?
If I was making platform independent code that depended on system calls I could see myself achieving platform independent code by coding in one of three styles:
Logical branching directly in my libraries code via macros. Something like:
void myAwesomeFunction()
{
...
#if defined(_MSC_VER)
// Call some Windows system call
#elif defined(__GNUC__)
// Call some Unix system call
#endif
...
}
Create generic system call functions and use those in my library's code. Something like:
OS_Calls.h
void OS_openFile(string myFile)
{
...
#if defined(_MSC_VER)
// Call Windows system call to open file
#elif defined(__GNUC__)
// Call Unix system call to open file
#endif
...
}
MyAwesomeFunctions.cpp
#include "OS_Calls.h"
void myAwesomeFunction()
{
...
OS_openFile("my awesome file");
...
}
Similar to option one, but adds a layer of abstraction by using the dlfcn API:
MyLibraryLoader.h
void* GetLibraryFunction(void* lib, char* funcName)
{
...
return dlsym(lib, funcName);
}
MyAwesomeFunctions.cpp
#include "MyLibraryLoader.h"
void myAwesomeFunction()
{
// dlsym returns void*, so cast it to the right function-pointer type before calling
Result result = ((Result (*)(/* argument types */))GetLibraryFunction(someLib, someFunc))(arguments...);
}
Which ones are typically used and why? And if there are any others that aren't listed and are preferred to mine, please let me know.
Thanks for reading this post. I will keep it updated so that it may serve as a future informative reference.
dlfcn and ld do not solve the same problem; in fact, you can use both in your project.
The dlfcn API is meant to support plugin architectures, in which you define an interface which modules should implement. An application can then load different implementations of that interface, for various reasons (extensibility, customization, etc.).
ld, well, links the libraries your application requests, but it does that at build time, not at run time. It doesn't support plugin architectures in any way, since ld links the objects specified on the command line.
Of course you could use only the dlfcn API, but it is not meant to be used in that way and, of course, using it in that way would be a huge pain in your rectum.
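To make the plugin idea concrete, here is a minimal sketch (the library name and the plugin_init symbol are made up for the example; link with -ldl on glibc):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Load an implementation of an agreed-upon interface at run time. */
    void *handle = dlopen("./myplugin.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }

    /* Look up a function that the interface requires every plugin to export. */
    void (*plugin_init)(void) = (void (*)(void))dlsym(handle, "plugin_init");
    if (plugin_init)
        plugin_init();

    dlclose(handle);   /* unload it once it is no longer needed */
    return 0;
}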
For your second question, I think the best pattern is the second one.
Branching "directly in the code" can be confusing, because it's not immediately obvious what the two branches accomplish, something which is well-defined if you define a proper abstraction and you implement it using multiple branches for each supported architecture.
Using the dlfcn API is pretty pointless, because you don't have a uniform interface to call (that's exactly the argument that supports the second pattern), so it just adds bloats in your code.
HTH
I don't think dynamic linkage helps you much with platform independence.
Your second option seems like a reasonable way to achieve platform independence. Most of the code just calls your platform independent wrappers, while a small part of it is "dirty" with ifdefs.
I don't see how dynamic loading helps here.
Some pros and cons for dynamic loading:
1. Cons:
a. Not the "straightforward" way, requires more work.
b. Prevents standard tools (e.g. ldd) from analyzing dependencies (thus helping you understand what you need in order to run successfully).
2. Pros:
a. Allows loading only what you need (e.g. depending on command line arguments), or unloading what you don't. This can save memory.
b. Lets you generate library names in more complicated ways (e.g. read a list of plugins from a configuration file).
How does one call Go code in C from threads that weren't created by Go?
What do I assign to a C function pointer such that threads not created by Go can call that pointer and enter into Go code?
Update0
I don't want to use SWIG.
The callbacks will be coming from threads Go hasn't seen before. Neither cgo/life nor anything in pkg/runtime demonstrates this behaviour AFAICT.
You can do this, but the solution is relatively slow (about 22µs per call on my machine).
The answer is for the C code to use C thread primitives to communicate with another goroutine that will actually run the callback.
I have created a Go package that provides this functionality: rog-go.googlecode.com/hg/exp/callback.
There is an example package demonstrating its use here. The example demonstrates a call back to an arbitrary Go closure from a thread created outside of the Go runtime. Another example is here. This demonstrates a typical C callback interface and layers a Go callback on top of it.
To try out the first example:
goinstall rog-go.googlecode.com/hg/exp/example/looper
cd $GOROOT/src/pkg/rog-go.googlecode.com/hg/exp/example/looper
gotest
To try out the second example:
goinstall rog-go.googlecode.com/hg/exp/example/event
cd $GOROOT/src/pkg/rog-go.googlecode.com/hg/exp/example/event
gotest
Both examples assume that pthreads are available. Of course, this is just a stop-gap measure until cgo is fixed, but the technique for calling arbitrary Go closures in a C callback will be applicable even then.
Here is the documentation for the callback package:
PACKAGE
package callback
import "rog-go.googlecode.com/hg/exp/callback"
VARIABLES
var Func = callbackFunc
Func holds a pointer to the C callback function.
When called, it calls the provided function f in a Go context with the given argument.
It can be used by first converting it to a function pointer and then calling it from C.
Here is an example that sets up the callback function:
//static void (*callback)(void (*f)(void*), void *arg);
//void setCallback(void *c){
// callback = c;
//}
import "C"
import "rog-go.googlecode.com/hg/exp/callback"
func init() {
C.setCallback(callback.Func)
}
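On the C side, a thread the Go runtime never created can then simply invoke the registered pointer. This is only a sketch under the assumptions above; handle_event and start_worker are hypothetical names:

#include <pthread.h>

/* The callback pointer that setCallback stores (redeclared here for the sketch). */
static void (*callback)(void (*f)(void*), void *arg);

/* Hypothetical handler; callback() arranges for it to run in a Go context. */
extern void handle_event(void *arg);

static void *worker(void *arg)
{
    callback(handle_event, arg);  /* safe even though Go never saw this thread */
    return NULL;
}

void start_worker(void *arg)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, arg);
}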
I'll assume you mean from C code compiled with gcc?
IIRC, this either can't be done or can't easily be done using 6g+cgo and friends. Go uses a different calling convention (as well as the segmented stacks and such).
However, you can write C code for [685]c (or even [685]a) and call into Go easily using package·function() (you can even call methods, IIRC). See the source of the runtime package for examples.
Update:
Coming back to this question after the update, and giving it some more thought: this can't be done in a standard fashion using 6c or cgo. In particular, because the threads are not started by the Go runtime, the current implementation would fail. The scheduler would suddenly have a thread under its control that it does not know about; additionally, that thread would be missing some thread-local variables the Go runtime uses for managing stacks and some other things. Also, if the Go function returns a value (or several), the C code can't access it on the currently supported platforms, as Go returns values on the stack (you could access them with assembly, though).
With these things in mind, I do believe you could still do this using channels. It would require your C code to be a little too intimate with the inner workings of the Go runtime, but it would work for a given implementation. While using channels may not be the solution you're looking for, it could possibly fit more nicely with the concepts of Go than callbacks. If your C code reimplemented at least the sending methods in the channel implementation (that code is written for 6c, so it would most likely have to be adapted for gcc, and it calls the Go runtime, which we've determined can't be done from a non-Go thread), you should be able to lock the channel and push a value to it. The Go scheduler can continue to manage its own threads, but now it can receive data from other threads started in C.
Admittedly, it's a hack; I haven't looked closely enough, but it would probably take a few other hacks to get it working (I believe the channels themselves maintain a list of the goroutines that are waiting on them [EDIT: confirmed: runtime·ready(gp);], so you'd need something in your Go code to wake up the receiving channel, or to guarantee the Go code won't receive on the channel until you've already pushed a value). However, I can't see any reason this can't work, whereas there are definite reasons that running code generated by 6g on a thread created in C can't.
My original answer still holds though: barring an addition to the language or runtime, this can't yet be done the way you'd like (I'd love to be proven wrong here).
You can find a real-world application of rog's callback package in these bindings for the PortAudio audio I/O library: http://code.google.com/p/portaudio-go/. Might make it easier to understand.
(Thanks for implementing that, rog. It's just what I needed!)
I have a function which is called explicitly by 4 other functions in my code base. Then in turn each of these functions is called by at least 10 other functions throughout my code. I know that I could, by hand, trace one of these function calls to the main function of my program (which has 30 function calls) but it seems like this would be a better job for the computer. I just want to know which of the functions in main() is calling this buried function.
Does anyone know of any software that could help?
Also, using a debugger is out of the question. That would have been too easy. The software only runs on a hand held device.
doxygen, correctly configured, is able to output an HTML document with navigable caller list and called-by list for every function in your code. You can generate call graphs as well.
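Roughly these Doxyfile settings enable the caller/callee lists and graphs (HAVE_DOT needs Graphviz installed; check your Doxygen version's documentation for the exact option names):

EXTRACT_ALL            = YES
HAVE_DOT               = YES
CALL_GRAPH             = YES
CALLER_GRAPH           = YES
REFERENCED_BY_RELATION = YES
REFERENCES_RELATION    = YES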
Comment it out (or better, comment out its prototype) and try to compile your program. You should see where it is referenced.
If your platform has an API to capture backtraces, I would just instrument the function to use it and log the backtraces to a file for later analysis. There's no guarantee that this will find all callers (or callers-of-...-of-callers), but if you exercise all of the program's features while logging like this, you should find "most" of them. For relatively simple programs, it is possible to find all callers this way.
Alternatively, many sampling tools can get you this information.
However, I have a suspicion that you may be on a platform that doesn't have a lot of these features, so a static source-analysis tool (like mouviciel suggested) is likely your best option. Assuming that you can make it work for you, this has the added benefit that it should find all callers, not just most of them.
I think http://cscope.sourceforge.net/ can also be useful.
I second mouviciel's suggestion of using doxygen for getting this info. The downside is that doxygen is working on the source code. You can only see what functions CAN POTENTIALLY call your function, not the ones that are ACTUALLY CALLING your function. If you are using Linux and you can change the source code of the function in question, you can obtain this info using the backtrace() and the backtrace_symbols() functions.
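For example, a minimal sketch of what that instrumentation could look like (glibc-specific; the log path is made up, and you typically need to link with -rdynamic so symbol names resolve):

#include <execinfo.h>
#include <fcntl.h>
#include <unistd.h>

/* Call this from the buried function to append its current call chain to a log. */
static void log_callers(void)
{
    void *frames[32];
    int n = backtrace(frames, 32);

    int fd = open("/tmp/callers.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd >= 0) {
        backtrace_symbols_fd(frames, n, fd);  /* writes one symbolised line per frame */
        write(fd, "----\n", 5);
        close(fd);
    }
}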