Where can I find documentation on uv_poll_init? - libuv

I'm looking at a libuv example at https://github.com/benfleis/samples/blob/master/libuv/stdio/stdio_poll.c and trying to understand it.
I mostly understand it, but I'm having some trouble with the uv_poll_init calls at the bottom, and I can't find any documentation.
Can someone point me to some documentation on it?
Thanks!

The official documentation is in the form of comments in the include/uv.h header file, which provides the following documentation for uv_poll_init():
Initialize the poll watcher using a file descriptor.
However, better documentation that covers the concept of watchers can be found here. In short:
uv_poll_init(loop, &stdin_watcher, STDIN_FILENO);
Initializes stdin_watcher to observe STDIN_FILENO. When the watcher is started, all of its callbacks will be invoked within the context of loop.
Here is a basic pseudo-flow of the program:
stdout_cb:
write whatever is in log_buf to stdout
stop listening for when I can write to stdout
log:
write message to log_buf
have stdout_watcher listen for when its file becomes writeable
when it becomes writable, stdout_cb will be called
stdin_cb:
read from stdin
call log
set_non_blocking:
set file descriptor as non-blocking
main:
set stdin/out to nonblocking
get handle to default event loop
initialize stdin_watcher, it will listen to stdin and its callback will
run within the default loop
initialize stdout_watcher, it will listen to stdout and its callback will
run within the default loop
have stdin_watcher listen for when its file becomes readable
when it becomes readable, stdin_cb will be called
run the event loop until no more work exists
in this case, when both watchers are not running (i.e. stopped)
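For reference, here is a minimal, self-contained C sketch of the same poll-watcher pattern. It is deliberately simplified relative to the linked sample (it echoes stdin straight to stdout, and skips the log buffer, the stdout watcher and the non-blocking setup), so treat it as an illustration rather than a drop-in replacement:
#include <stdio.h>
#include <unistd.h>
#include <uv.h>

static uv_poll_t stdin_watcher;

/* Invoked by the loop whenever STDIN_FILENO becomes readable. */
static void stdin_cb(uv_poll_t *handle, int status, int events)
{
    char buf[256];
    ssize_t n;

    if (status < 0)
        return;

    n = read(STDIN_FILENO, buf, sizeof(buf));
    if (n <= 0) {
        uv_poll_stop(handle);  /* no more input: uv_run() can return */
        return;
    }
    fwrite(buf, 1, (size_t)n, stdout);
}

int main(void)
{
    uv_loop_t *loop = uv_default_loop();

    /* Watch STDIN_FILENO; the callback runs in the context of this loop. */
    uv_poll_init(loop, &stdin_watcher, STDIN_FILENO);
    uv_poll_start(&stdin_watcher, UV_READABLE, stdin_cb);

    /* Runs until no active watchers remain (here: until stdin is stopped). */
    return uv_run(loop, UV_RUN_DEFAULT);
}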

The latest and greatest documentation: http://docs.libuv.org/en/latest/

Related

How to initialize stdout/stderr in a subsystem=windows program WITHOUT calling AllocConsole()?

So when trying to use the stdin/stdout/stderr streams in a Windows GUI app, one typically has to call AllocConsole (or AttachConsole) in order to initialize those streams for use. There are lots of posts on here on what you need to do AFTER calling AllocConsole (i.e. use freopen_s on the respective streams, etc).
I have a program where I want to redirect stdout and stderr to an anonymous pipe. I have a working example where I call:
AllocConsole();
FILE* fout;
FILE* ferr;
freopen_s(&fout, "CONOUT$", "r+", stdout);
freopen_s(&ferr, "CONOUT$", "r+", stderr);
HANDLE hreadout;
HANDLE hwriteout;
HANDLE hreaderr;
HANDLE hwriteerr;
SECURITY_ATTRIBUTES sao = { sizeof(sao),NULL,TRUE };
SECURITY_ATTRIBUTES sae = { sizeof(sae),NULL,TRUE };
CreatePipe(&hreadout, &hwriteout, &sao, 0);
CreatePipe(&hreaderr, &hwriteerr, &sae, 0);
SetStdHandle(STD_OUTPUT_HANDLE, hwriteout);
SetStdHandle(STD_ERROR_HANDLE, hwriteerr);
This snippet successfully sets stdout and stderr to the write ends of the anonymous pipes and I can capture the data.
However, calling AllocConsole will spawn a conhost.exe - this is the actual black window that pops up on the screen. I don't have a use for this and, most importantly, I would like to avoid the creation of a child conhost.exe process under my program.
So the question is: how can I fool Windows into thinking it has a console attached, or manually set up the initial stdout and stderr file streams, so that I can then redirect them as I have done already? I have looked at the AllocConsole call in a debugger, as well as GetStdHandle and SetStdHandle, to try and get a sense of what is going on, but my RE skills are lacking.
Without AllocConsole, the freopen_s calls fail with error 6, Invalid Handle. GetStdHandle also returns a NULL handle. Calling SetStdHandle succeeds (based on its return code and GetLastError), but this doesn't appear to actually get things set up where I need them, as I don't receive any output in my pipe.
Any ideas?
Use the SetStdHandle function to assign your pipe HANDLE values to STD_OUTPUT_HANDLE and STD_ERROR_HANDLE.
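For what it's worth, here is a sketch of how SetStdHandle is often combined with the CRT's _open_osfhandle/_dup2 so that printf-style output also reaches the pipe without ever calling AllocConsole. This goes beyond the answer above, so treat it as an untested assumption rather than a verified recipe:
#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <stdio.h>

/* Sketch (assumption): redirect stdout into an anonymous pipe in a
   GUI-subsystem program without AllocConsole(). Returns the read end
   of the pipe, or NULL on failure. */
static HANDLE RedirectStdoutToPipe(void)
{
    HANDLE hRead = NULL, hWrite = NULL;
    SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
    FILE *fp = NULL;
    int fd;

    if (!CreatePipe(&hRead, &hWrite, &sa, 0))
        return NULL;

    /* Win32 layer: point STD_OUTPUT_HANDLE at the pipe's write end. */
    SetStdHandle(STD_OUTPUT_HANDLE, hWrite);

    /* CRT layer: give stdout a valid descriptor first (a GUI app starts
       without one), then re-point that descriptor at the pipe. */
    freopen_s(&fp, "NUL", "w", stdout);
    fd = _open_osfhandle((intptr_t)hWrite, _O_WRONLY | _O_BINARY);
    if (fd != -1)
        _dup2(fd, _fileno(stdout));
    setvbuf(stdout, NULL, _IONBF, 0);

    return hRead;
}
The same steps, with STD_ERROR_HANDLE and stderr, would apply to the error stream.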

How does the Linux Kernel know which file descriptor to write input events to?

I would like to know the mechanism in which the Linux Kernel knows which file descriptor (e.g. /dev/input/eventX) to write the input to. For example, I know that when the user clicks the mouse, an interrupt occurs, which gets handled by the driver and propagated to the Linux input core via input_event (drivers/input/input.c), which eventually gets written to the appropriate file in /dev/input/. Specifically, I want to know which source files I need to go through to see how the kernel knows which file to write to based on the information given about the input event. My goal is to see if I can determine the file descriptors corresponding to specific input event codes before the kernel writes them to the /dev/input/eventX character files.
You may go through two files:
drivers/input/input.c
drivers/input/evdev.c
In evdev.c, evdev_init() calls input_register_handler() to add the evdev handler to input_handler_list.
Then, in an input device driver, after initializing its input_dev, it calls:
input_register_device(input_dev)
-> get device kobj path, like /devices/soc/78ba000.i2c/i2c-6/6-0038/input/input2
-> input_attach_handler()
-> handler->connect(handler, dev, id);
-> evdev_connect()
In evdev_connect(), it does the following:
1. dynamically allocate a minor for a new evdev.
2. dev_set_name(&evdev->dev, "event%d", dev_no);
3. call input_register_handle() to connect input_dev and evdev->handle.
4. create a cdev, and call device_add().
After this, you will find the input node /dev/input/eventX, where X is the value of dev_no.
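Once that node exists, userspace simply reads the queued events from it. A minimal reader looks like this (the event2 path is only an example; substitute the node created by evdev_connect()):
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    int fd = open("/dev/input/event2", O_RDONLY);  /* example node */
    struct input_event ev;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        /* type/code/value are the same fields input_event() passed down. */
        printf("type=%u code=%u value=%d\n",
               (unsigned)ev.type, (unsigned)ev.code, (int)ev.value);
    }
    close(fd);
    return 0;
}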

How can a Ruby C extension store a proc for later execution?

Goal: allow a C extension to receive a block/proc for delayed execution while retaining the current execution context.
I have a method in C (exposed to Ruby) that accepts a callback (via a VALUE hash argument) or a block.
// For brevity, let's assume m_CBYO is set up to expose a CBYO module to Ruby.
extern VALUE m_CBYO;

VALUE CBYO_add_callback(VALUE self, VALUE callback)
{
    if (rb_block_given_p()) {
        callback = rb_block_proc();
    }
    if (NIL_P(callback)) {
        rb_raise(rb_eArgError, "either a block or callback proc is required");
    }
    // method is called here to add the callback proc to rb_callbacks
    return callback;
}

rb_define_module_function(m_CBYO, "add_callback", CBYO_add_callback, 1);
I have a struct I'm using to store these with some extra data:
struct rb_callback
{
    VALUE rb_cb;
    unsigned long long lastcall;
    struct rb_callback *next;
};

static struct rb_callback *rb_callbacks = NULL;
When it comes time (triggered by an epoll), I iterate over the callbacks and execute each callback:
rb_funcall(cb->rb_cb, rb_intern("call"), 0);
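(Presumably this happens in a loop over the linked list shown above; a reconstructed sketch, not the actual project code, would look roughly like this:)
/* Sketch: walk the list built by add_callback and invoke each stored proc. */
struct rb_callback *cb;
for (cb = rb_callbacks; cb != NULL; cb = cb->next) {
    rb_funcall(cb->rb_cb, rb_intern("call"), 0);
}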
When this happens, I see that it successfully executes the Ruby code in the callback; however, it is escaping the current execution context.
Example:
# From Ruby, with the extension above loaded
CBYO.add_callback do
  puts "Hey now."
end

loop do
  puts "Waiting for signal..."
  sleep 1
end
When a signal is received (via epoll) I will see the following:
$> Waiting for signal...
$> Waiting for signal...
$> Hey now.
$> // process hangs
$> // Another signal occurs
$> [BUG] vm_call_cfunc - cfp consistency error
Sometimes, I can get more than one signal to process before the bug surfaces again.
I found the answer while investigating a similar issue.
As it turns out, I too was trying to signal callbacks from native threads (created with pthread_create), which is not supported by MRI.
TL;DR: the Ruby VM is not currently (at the time of writing) thread-safe. Check out this nice write-up on Ruby threading for a better overall understanding of how to work within these confines.
You can use Ruby's native_thread_create(rb_thread_t *th), which uses pthread_create behind the scenes. There are some drawbacks that you can read about in the documentation above the method definition. You can then run the callback with Ruby's rb_thread_call_with_gvl method. Also, I haven't done it here, but it might be a good idea to create a wrapper method so you can use rb_protect to handle exceptions the callback may raise (otherwise they will be swallowed by the VM).
// Wrapper with the void* signature that rb_thread_call_with_gvl expects.
static void *execute_callback(void *callback)
{
    return (void *)rb_funcall((VALUE)callback, rb_intern("call"), 0);
}

// execute the callback when the thread receives the signal
rb_thread_call_with_gvl(execute_callback, (void *)data->callback);
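Building on the rb_protect suggestion above, here is a sketch (not from the original answer) of a protected variant of that wrapper:
/* Sketch: catch anything the proc raises instead of letting it unwind
   through the C frames. */
static VALUE call_proc(VALUE callback)
{
    return rb_funcall(callback, rb_intern("call"), 0);
}

static void *execute_callback_protected(void *callback)
{
    int state = 0;
    VALUE result = rb_protect(call_proc, (VALUE)callback, &state);
    if (state) {
        /* The proc raised; inspect rb_errinfo() and clear it if appropriate. */
        rb_set_errinfo(Qnil);
    }
    return (void *)result;
}

/* As before, run it under the GVL from the native thread: */
rb_thread_call_with_gvl(execute_callback_protected, (void *)data->callback);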

Close device/socket in VxWorks

Is there a way to close the device/socket in VxWorks programmatically?
Say I have the devices /tyco/0, /tyco/1 and /tyco/2, and I want to close/shut down /tyco/1 and /tyco/2.
I would like to do something like remove("/tyco/1") - something that would prevent even a later open("/tyco/1") call, from my code or from an outside source, from opening the socket.
All devices available to VxWorks are part of the device list. The device list is accessible using the iosLib.
I've used the following code a lot to remove devices to generate errors in order to test my programs:
#include <iosLib.h>

/* Look up the device header in the I/O system's device list and remove it. */
DEV_HDR *pDevice;

pDevice = iosDevFind("/xyz", NULL);
if (pDevice != NULL)
{
    iosDevDelete(pDevice);
}
This works for all devices listed by the devs command, which in your case will also include "/tyco". I doubt that you can inhibit open calls to "/tyco/1" and "/tyco/2" while still allowing calls to "/tyco/0" with this method, since it operates on whole devices.
If "/tyco/0" is your serial interface to the VxWorks shell, the method above will still work for you, because removing a device from the device list causes all subsequent open calls on that device to fail but does not close already-open descriptors...

How to create multiple network namespace from a single process instance

I am using the following C function to create multiple network namespaces from a single process instance:
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

void create_namespace(const char *ns_name)
{
    char ns_path[100];
    snprintf(ns_path, 100, "%s/%s", "/var/run/netns", ns_name);
    close(open(ns_path, O_RDONLY|O_CREAT|O_EXCL, 0)); /* create the mount point */
    unshare(CLONE_NEWNET);                            /* move into a new netns */
    mount("/proc/self/ns/net", ns_path, "none", MS_BIND, NULL);
}
After my process creates all the namespaces and I add a tap interface to any one of the network namespaces (with the ip link set tap1 netns ns1 command), I actually see this interface in all of the namespaces (presumably, this is really a single namespace that goes under different names).
But if I create multiple namespaces using multiple processes, then everything works just fine.
What could be wrong here? Do I have to pass any additional flags to the unshare() to get this working from a single process instance? Is there a limitation that a single process instance can't create multiple network namespaces? Or is there a problem with mount() call, because /proc/self/ns/net is actually mounted multiple times?
Update:
It seems that the unshare() function creates multiple network namespaces correctly, but all the mount points in /var/run/netns/ actually refer to the first network namespace that was mounted in that directory.
Update2:
It seems that the best approach is to fork() another process and execute the create_namespace() function from there. Anyway, I would be glad to hear a better solution that does not involve a fork() call, or at least to get confirmation that it is impossible to create and manage multiple network namespaces from a single process.
Update3:
I am able to create multiple namespaces with unshare() by using the following code:
int main() {
    create_namespace("a");
    system("ip tuntap add mode tap tapa");
    system("ifconfig -a"); // shows lo and the tapa interface
    create_namespace("b");
    system("ip tuntap add mode tap tapb");
    system("ifconfig -a"); // shows lo and tapb, but not tapa, so this is a second namespace
    return 0;
}
But after the process terminates and I execute ip netns exec a ifconfig -a and ip netns exec b ifconfig -a, it seems that both commands are executed in namespace a. So the actual problem is storing the references to the namespaces (or calling mount() the right way, but I am not sure whether this is possible).
Network namespaces are, by design, created with a call to clone, and can be modified afterwards with unshare. Note that even if you create a new network namespace with unshare, you are really just modifying the network stack of your running process. unshare cannot modify the network stack of other processes, so you won't be able to create another namespace with unshare alone.
In order to work, a new network namespace needs a new network stack, and so it needs a new process. That's all.
The good news is that it can be made very lightweight with clone; see:
Clone() differs from the traditional fork() system call in UNIX, in
that it allows the parent and child processes to selectively share or
duplicate resources.
You are able to diverge only on the network stack (and keep sharing the memory space, the table of file descriptors and the table of signal handlers). Your new network process can be made more like a thread than a real fork.
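A minimal sketch of that lightweight approach, in which the child gets its own network stack (error handling omitted; CLONE_VM and friends could be added to share even more with the parent):
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/wait.h>

static int child_fn(void *arg)
{
    /* This child runs in its own, freshly created network namespace. */
    system("ip link show");   /* initially shows only lo */
    return 0;
}

int main(void)
{
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);

    /* CLONE_NEWNET gives the child a new network stack of its own. */
    pid_t pid = clone(child_fn, stack + stack_size, CLONE_NEWNET | SIGCHLD, NULL);

    waitpid(pid, NULL, 0);
    free(stack);
    return 0;
}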
You can manipulate them with C code or with Linux Kernel and/or LXC tools.
For instance, to add a device to a new network namespace, it's as simple as:
echo $PID > /sys/class/net/ethX/new_ns_pid
See this page for more info about CLI available.
On the C side, one can take a look at the lxc-unshare implementation. Despite its name, it uses clone, as you can see (lxc_clone is here). One can also look at the LTP implementation, where the author has chosen to use fork directly.
EDIT: There is a trick that you can use to make them persistent, but you will still need to fork, even temporarily.
Take a look at this code from iproute2 (I have removed error checking for clarity):
snprintf(netns_path, sizeof(netns_path), "%s/%s", NETNS_RUN_DIR, name);
/* Create the base netns directory if it doesn't exist */
mkdir(NETNS_RUN_DIR, S_IRWXU|S_IRGRP|S_IXGRP|S_IROTH|S_IXOTH);
/* Create the filesystem state */
fd = open(netns_path, O_RDONLY|O_CREAT|O_EXCL, 0);
[...]
close(fd);
unshare(CLONE_NEWNET);
/* Bind the netns last so I can watch for it */
mount("/proc/self/ns/net", netns_path, "none", MS_BIND, NULL)
If you execute this code in a forked process, you'll be able to create new network namespaces at will. In order to delete one, you can simply unmount and remove the bind mount:
umount2(netns_path, MNT_DETACH);
if (unlink(netns_path) < 0) [...]
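Combining that with the fork mentioned earlier, a sketch of the fork-then-persist pattern (names and paths are illustrative, and error checking is again omitted):
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a short-lived child that unshares a new network namespace and
   bind-mounts it onto /var/run/netns/<name>; the bind mount keeps the
   namespace alive after the child exits. */
static void create_persistent_netns(const char *name)
{
    char path[256];
    pid_t pid;

    snprintf(path, sizeof(path), "/var/run/netns/%s", name);
    pid = fork();
    if (pid == 0) {
        close(open(path, O_RDONLY | O_CREAT | O_EXCL, 0));
        unshare(CLONE_NEWNET);
        mount("/proc/self/ns/net", path, "none", MS_BIND, NULL);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
}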
EDIT2: Another (dirty) trick would be simply to execute the "ip netns add .." CLI command with system().
You only have to bind mount /proc/*/ns/* if you need to access these namespaces from another process, or need to get a handle so you can switch back and forth between them. It is not needed in order to use multiple namespaces from a single process.
unshare does create a new namespace.
clone and fork by default do not create any new namespaces.
There is one "current" namespace of each kind assigned to a process. It can be changed by unshare or setns. The set of namespaces is (by default) inherited by child processes.
Whenever you open /proc/N/ns/net, the kernel creates an inode for that file, and all subsequent open()s return a file bound to the same namespace (the details are lost in the depths of the kernel dentry cache). Also, each process has only one /proc/self/ns/net file entry, and a bind mount does not create new instances of this proc file. Opening those mounted files is exactly the same as opening /proc/self/ns/net directly (which will keep pointing to the namespace it pointed to when you first opened it).
It seems that "/proc/*/ns" is half-baked like this.
So, if you only need 2 namespaces, you can:
open /proc/1/ns/net
unshare
open /proc/self/ns/net
and switch between the two.
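A sketch of that two-namespace dance (assuming sufficient privileges and a kernel with setns() support):
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <unistd.h>

int main(void)
{
    /* Handle to the original namespace, reached via init's proc entry. */
    int orig = open("/proc/1/ns/net", O_RDONLY);

    /* Create a second namespace and grab a handle to it as well. */
    unshare(CLONE_NEWNET);
    int second = open("/proc/self/ns/net", O_RDONLY);

    /* Now switch back and forth at will. */
    setns(orig, CLONE_NEWNET);    /* back in the original namespace */
    setns(second, CLONE_NEWNET);  /* in the new namespace again */

    close(orig);
    close(second);
    return 0;
}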
For more than 2 you might have to clone(). There seems to be no way to create more than one /proc/N/ns/net file per process.
However, if you do not need to switch between namespaces at runtime, or to share them with other processes, you can use many namespaces like this:
open sockets and run processes for main namespace.
unshare
open sockets and run processes for 2nd namespace (netlink, tcp, etc)
unshare
...
unshare
open sockets and run processes for Nth namespace (netlink, tcp, etc)
Open sockets keep a reference to their network namespace, so the namespace will not be collected until the sockets are closed.
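Roughly, that pattern looks like the following sketch (the UDP sockets are placeholders for whatever netlink/TCP sockets you actually need per namespace):
#define _GNU_SOURCE
#include <sched.h>
#include <sys/socket.h>
#include <unistd.h>

#define NUM_NS 3

int main(void)
{
    int ns_sock[NUM_NS];
    int i;

    for (i = 0; i < NUM_NS; i++) {
        /* Each socket is created in, and pins, the namespace that is
           current at creation time, even after the next unshare(). */
        ns_sock[i] = socket(AF_INET, SOCK_DGRAM, 0);
        if (i + 1 < NUM_NS)
            unshare(CLONE_NEWNET);  /* move on to the next namespace */
    }

    /* ... use ns_sock[i] to talk inside namespace i ... */

    for (i = 0; i < NUM_NS; i++)
        close(ns_sock[i]);
    return 0;
}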
You can also use netlink to move interfaces between namespaces, by sending a netlink command in the source namespace and specifying the destination namespace either by PID or by namespace FD (the latter of which you don't have).
You need to switch the process's namespace before accessing /proc entries that depend on that namespace. Once a proc file is open, it keeps a reference to the namespace.
