Is DBus only meant for IPC - dbus

I am investigating DBus message queues. Just a few queries:
1) Does DBus guarantee that messages are ordered (FIFO)?
2) Can DBus be used for a distributed system (processes living on different systems)?
Thanks...

1) Yes, message order is preserved, though messages from different senders may race. There are some D-Bus libraries that reorder messages in order to implement synchronous method calls.
2) No / sort of. D-Bus is most commonly implemented on top of UNIX domain sockets, which are by definition limited to one host. The D-Bus specification does allow D-Bus to run on top of TCP, so one node can act as a messaging hub. So, not really distributed.
Also, many D-Bus libraries don't allow contacting arbitrary buses (if the "connect" call only supports "system" and "session", those refer to the two default UNIX domain sockets on Linux systems).
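For what it's worth, here is a rough sketch of what contacting a non-default (TCP) bus looks like with the low-level libdbus C API. The tcp: address below is only a placeholder for wherever your own dbus-daemon has been configured to listen:

    #include <stdio.h>
    #include <dbus/dbus.h>

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        /* Placeholder address: a dbus-daemon configured to listen on TCP. */
        DBusConnection *conn =
            dbus_connection_open("tcp:host=192.168.1.10,port=55556", &err);
        if (conn == NULL) {
            fprintf(stderr, "connect failed: %s\n", err.message);
            dbus_error_free(&err);
            return 1;
        }

        /* Register with the bus daemon so we get a unique name and routing. */
        if (!dbus_bus_register(conn, &err)) {
            fprintf(stderr, "register failed: %s\n", err.message);
            dbus_error_free(&err);
            return 1;
        }

        printf("unique name: %s\n", dbus_bus_get_unique_name(conn));
        dbus_connection_unref(conn);
        return 0;
    }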

1) No; the sender sets an (arbitrary) message serial number, and it is the other side's responsibility to set the same number in the reply-serial field.
2) Yes, as long as your bus daemon is able to work out a node's location from the service name and route messages accordingly. Not sure about dbus-daemon's capabilities there.
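To illustrate the serial/reply-serial pairing with libdbus: creating a method return from an incoming call copies the caller's serial into the reply-serial field, so request/reply matching is per message rather than per queue position. A minimal sketch, assuming conn and call were obtained elsewhere:

    #include <stdio.h>
    #include <dbus/dbus.h>

    /* Minimal sketch: 'conn' is an established DBusConnection and 'call' is
     * an incoming method-call message obtained elsewhere. */
    void reply_to_call(DBusConnection *conn, DBusMessage *call)
    {
        /* The reply carries the caller's serial in its reply-serial field. */
        DBusMessage *reply = dbus_message_new_method_return(call);

        dbus_uint32_t reply_serial = 0;
        dbus_connection_send(conn, reply, &reply_serial); /* serial of the reply itself */

        printf("in-reply-to=%u, own-serial=%u\n",
               dbus_message_get_reply_serial(reply),
               dbus_message_get_serial(reply));

        dbus_message_unref(reply);
    }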

Related

In the following scenario am I the server or the client?

So I have a PC connected to a micro-controller via a serial cable and an Ethernet cable. Initially the PC sends a byte across the serial cable to the micro-controller. This results in the micro-controller sending back a UDP datagram via the Ethernet cable.
I want to know whether the code running on my PC should be a server or a client?
Per Wikipedia, Client/Server:
The server component provides a function or service to one or many
clients, which initiate requests for such services
And Master/Slave:
Master/slave is a model of asymmetric communication or control where
one device or process controls one or more other devices or processes
and serves as their communication hub
The above scenario looks like Master/Slave. In the initial, 'idle' case, there is no "SERVER" waiting ("listening") for requests. Only when the PC activates the micro-controller do they start communicating (via UDP).
You could use either term depending on what you were talking about. As other people have noted, client and server are terms used to describe how distinct parties are involved in a service. The terms can be useful in some situations (e.g. a web server and the browser as its client), but in other situations they are less useful (e.g. peer-to-peer protocols).
Presumably you're on stackoverflow because you're dealing with code.
In this case it's useful to be more precise, and I'd suggest using terms that match whatever primitives are exposed by your language. Most will use/expose POSIX sockets as their standard API, and hence you'd want to talk about/use connect or accept (potentially after binding first). Note that these calls work across TCP and UDP (except accept, which is TCP-only), but the semantics of send and recv on the resulting connected sockets will obviously be different.
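For example, the PC side of the UDP half in the scenario above just binds a socket and waits for the micro-controller's datagram; there is no accept() involved. A minimal sketch (the port number is a placeholder, and the serial-port trigger is left out):

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in local;
        memset(&local, 0, sizeof local);
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(5000);          /* placeholder port */
        bind(s, (struct sockaddr *)&local, sizeof local);

        /* ... write the trigger byte to the serial port here ... */

        char buf[1500];
        struct sockaddr_in peer;
        socklen_t len = sizeof peer;
        ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                             (struct sockaddr *)&peer, &len);
        printf("got %zd bytes from the micro-controller\n", n);

        close(s);
        return 0;
    }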

Is client-client communication possible in a fork-based server?

I am implementing a very basic C server that allows clients to chat. Right now I am using fork(), but I am having trouble getting clients to see each other's messages.
It also seems that all clients get the same file descriptor from accept(). Basically, I have a while loop where I use select() to test whether someone wants to connect, accept() their connection, and fork(). After that I read input and try to pass it on to all users (whom I am keeping in a list). I can copy/paste my code if necessary.
So, is it possible to have the clients communicate across separate processes, or do I have to use pthreads?
Inter-process communication (IPC) in general doesn't care about client vs. server (except at the connect phase). A given process can have both a client and a server role (on different sockets), and would use poll(2) or the older select(2) on several sockets in some event loop.
Notice that processes each have their own virtual address space, while threads share the same virtual address space (that of their containing process). Read a pthread tutorial and a book on POSIX programming (perhaps the old ALP). Be aware that a lot of information about processes can be queried on Linux through /proc/ (see proc(5) for more). In particular, the virtual address space of the process with pid 1234 can be obtained through /proc/1234/maps, and its open file descriptors through /proc/1234/fd/ and /proc/1234/fdinfo/, etc.
However, it is simpler to code a single common server that keeps the shared state and dispatches messages to the clients.
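A minimal sketch of that single-process approach, using poll(2) to watch the listening socket and every connected client in one loop (no fork(), error handling omitted, port number is a placeholder):

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <poll.h>
    #include <unistd.h>

    #define MAX_CLIENTS 64

    int main(void)
    {
        int lsock = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5555);             /* placeholder port */
        bind(lsock, (struct sockaddr *)&addr, sizeof addr);
        listen(lsock, 8);

        struct pollfd fds[MAX_CLIENTS + 1];
        int nfds = 1;
        fds[0].fd = lsock;
        fds[0].events = POLLIN;

        for (;;) {
            poll(fds, nfds, -1);

            /* New connection: keep the client's fd in the same array. */
            if ((fds[0].revents & POLLIN) && nfds < MAX_CLIENTS + 1) {
                int c = accept(lsock, NULL, NULL);
                fds[nfds].fd = c;
                fds[nfds].events = POLLIN;
                fds[nfds].revents = 0;
                nfds++;
            }

            /* Read from any ready client and forward to every *other* client. */
            for (int i = 1; i < nfds; i++) {
                if (!(fds[i].revents & POLLIN))
                    continue;
                char buf[512];
                ssize_t n = read(fds[i].fd, buf, sizeof buf);
                if (n <= 0) {                    /* client went away */
                    close(fds[i].fd);
                    fds[i] = fds[--nfds];        /* compact the array */
                    i--;
                    continue;
                }
                for (int j = 1; j < nfds; j++)
                    if (j != i)
                        write(fds[j].fd, buf, (size_t)n);
            }
        }
    }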
You could design a protocol where the clients have some way to initiate that IPC. For example, if all the processes are on the same machine, you could have a protocol that transmits a file path used as a unix(7) socket address, or as a fifo(7), and each "client" process would later initiate (with some connect) a direct communication with another "client". It might be unwise to do so, though.
Look also into libraries like 0mq. They are often free software, so you can study their source code.

How bind works internally in kernel space?

Can anyone help me trace the bind() system call in socket programming? I would like to know what happens in kernel space when bind() is called: which structures it updates and which functions are invoked at the lower levels.
The bind(2) system call just configures the local side's address parameters that a socket will use once you have connected (or called sendto(2)). If you don't use it, the kernel selects defaults for you, depending on the underlying protocol.
The exact procedure bind(2) follows depends on the protocol family you are working with, as bind behaves differently depending on whether you are using PF_UNIX, PF_INET, PF_PACKET, PF_XNS, etc.
For example, with Unix sockets, your socket gets associated with an inode in the filesystem (an inode that supports unix sockets, of course), so clients have a path to connect to (in Unix sockets, addresses are paths in the filesystem). With TCP/IP sockets, you can fix the local IP address or the local port your socket will listen on (to accept connections), or you can force an IP address and/or port to connect from, to a server.
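From user space, those two cases look something like the sketch below (the path and port are just examples); what the kernel then does with each address is the per-family work described above:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* AF_UNIX: the address is a filesystem path; bind() creates the socket inode. */
    int bind_unix_example(int s)
    {
        struct sockaddr_un sun;
        memset(&sun, 0, sizeof sun);
        sun.sun_family = AF_UNIX;
        strncpy(sun.sun_path, "/tmp/example.sock", sizeof sun.sun_path - 1);
        return bind(s, (struct sockaddr *)&sun, sizeof sun);
    }

    /* AF_INET: the address is a local IP and port; the kernel records them in
     * the socket so later listen()/connect()/sendto() use that local endpoint. */
    int bind_inet_example(int s)
    {
        struct sockaddr_in sin;
        memset(&sin, 0, sizeof sin);
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(8080);              /* example port */
        return bind(s, (struct sockaddr *)&sin, sizeof sin);
    }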
For a deeper understanding of networking socket internals, I recommend reading the excellent book by W. R. Stevens, "TCP/IP Illustrated, Vol. 2: The Implementation," describing the implementation of BSD sockets in NET2. It's old, but still the best explanation ever made. For a good introduction to using the BSD socket system calls, there's also an excellent book (for a long time it was indeed also the best system call reference for BSD unix) by W. R. Stevens: "UNIX Network Programming, Vol. 1 (2nd Ed.): The Sockets API." Both are jewels everyone should have available at work.

Is a bus always necessary for dbus

I am trying to use the low-level C API of D-Bus to implement a server and client over sockets. My question is: is it necessary to always use a bus for D-Bus communication? And does a bus just mean an extra instance of dbus-daemon?
Yes, you need a bus for D-Bus communication. The bus is a communication channel, nothing more. More buses do not mean more instances of the D-Bus daemon; they only mean more communication channels.
In a system, you usually have one D-Bus daemon with one or more buses. Each bus is used for some class of messages (defined in your application).
Two applications can communicate via D-Bus, bypassing the daemon, by specifying the name of the peer to which you want to send the signal/method call (the D-Bus standard allows it). However, I don't think there is a D-Bus binding that offers this feature. But if you want to use the raw C API of D-Bus, you can implement it yourself. You can check this discussion for more information on the topic.
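If you go the raw-libdbus route, the pieces you would build on are roughly dbus_server_listen() on one side and dbus_connection_open_private() on the other. A rough sketch, where the socket path is just an example and the main loop/dispatching is left out:

    #include <stdio.h>
    #include <dbus/dbus.h>

    /* Called by libdbus for every peer that connects to our private server. */
    static void on_new_connection(DBusServer *server, DBusConnection *conn,
                                  void *data)
    {
        (void)server; (void)data;
        dbus_connection_ref(conn);   /* keep the connection; add handlers here */
        printf("peer connected\n");
    }

    int main(void)
    {
        DBusError err;
        dbus_error_init(&err);

        /* "Server" side: listen on a private address, no dbus-daemon involved. */
        DBusServer *server =
            dbus_server_listen("unix:path=/tmp/my-private-bus", &err);
        if (server == NULL) {
            fprintf(stderr, "listen failed: %s\n", err.message);
            return 1;
        }
        dbus_server_set_new_connection_function(server, on_new_connection,
                                                NULL, NULL);

        /* A "client" would connect directly with:
         *   dbus_connection_open_private("unix:path=/tmp/my-private-bus", &err);
         * and the two peers then exchange messages without any bus names.
         * Real code would now run a main loop dispatching the server and
         * every accepted connection. */
        return 0;
    }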
Not sure about the C API, but you can have a client and server connecting directly using my node.js D-Bus implementation. Here is an example.

Is sending data via UDP sockets on the same machine reliable?

If I use UDP sockets for interprocess communication, can I expect that all sent data is received by the other process, in the same order?
I know this is not true for UDP in general.
No. I have been bitten by this before. You may wonder how it can possibly fail, but you'll run into issues with buffers of pending packets filling up, and consequently packets will be dropped. How the network subsystem drops packets is implementation-dependent and not specified anywhere.
In short, no. You shouldn't be making any assumptions about the order of data received on a UDP socket, even over localhost. It might work, it might not, and it's not guaranteed to.
No, there is no such guarantee, even with local sockets. If you want an IPC mechanism that guarantees in-order delivery, you might look into pipes, e.g. pipe(2), or popen() where a full-duplex mode is supported (this opens a pipe to the child process that either side can read or write). A pipe guarantees in-order delivery and can be used with synchronous or asynchronous I/O (select() or poll()), depending on how you want to build the application.
On unix there are other options such as unix domain sockets or System V message queues (some of which may be faster), but reading/writing from a pipe is dead simple and works. As a bonus, it's easy to test your server process because it is just reading and writing from stdio.
On windows you could look into Named Pipes, which work somewhat differently from their unix namesake but are used for precisely this sort of interprocess communication.
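If you do stay with datagrams for IPC, a Unix domain datagram socket is a common compromise: it keeps message boundaries like UDP, but delivery is in order, and (on Linux at least) the sender blocks instead of the kernel silently dropping when the receiver's buffer is full. A small sketch using socketpair(2):

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        /* AF_UNIX datagram pair: message boundaries like UDP, but ordered
         * and flow-controlled within the one host. */
        socketpair(AF_UNIX, SOCK_DGRAM, 0, sv);

        if (fork() == 0) {            /* child plays the "other process" */
            char buf[64];
            ssize_t n = recv(sv[1], buf, sizeof buf, 0);
            printf("child got %zd bytes: %.*s\n", n, (int)n, buf);
            _exit(0);
        }

        const char msg[] = "hello";
        send(sv[0], msg, sizeof msg, 0);
        wait(NULL);
        return 0;
    }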
Loopback UDP is incredibly unreliable on many platforms; you can easily see 50%+ data loss. Various excuses have been given, to the effect that there are far better transport mechanisms to use.
There are many middleware stacks available these days to make IPC easier to use and cross platform. Have a look at something like ZeroMQ or 29 West's LBM which use the same API for intra-process, inter-process (IPC), and network communications.
The socket interface will probably not flow-control the originator of the data, so you will probably see reliable transmission if you have higher-level flow control, but there is always the possibility that a memory crunch could still cause a dropped datagram.
Without flow control limiting kernel memory allocation for datagrams, I imagine it will be just as unreliable as UDP over the network.
