I am currently working with the DMXSerial library for Arduino.
Depending on how it is initialised, this library can be used either as a transmitter or as a receiver.
A transmitter is initialised as follows:
DMXSerial.init(DMXController);
Whereas a receiver is initialised as follows:
DMXSerial.init(DMXReceiver);
I now want to create an implementation that both receives and controls.
Does anybody have an idea how to do this without missing important interrupts or violating timing constraints?
That library doesn't look like it will easily do bidirectional operation. But since DMX512 is a simple serial protocol, there's nothing stopping you from writing your own routines that manipulate the UART directly, and the library itself is a great guide for that (see the sketch at the end of this answer).
Now, having said that: what kind of situation do you have where you want a device to both control and receive? The DMX512 protocol is explicitly unidirectional, and at the physical layer it's a daisy-chain network, which prevents multiple masters on the bus (and inherently makes the bus one-way). If you are a slave and you drive the bus, you risk clobbering incoming packets from the master. If you are clever about it and queue the incoming packets, you could perhaps safely retransmit both the incoming data and your own, but be aware that this is decidedly nonstandard (and almost certainly standards-violating) behavior.
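For reference, here is roughly what the receive half looks like if you roll your own on a classic ATmega328-based board. This is only a minimal sketch of the framing-error trick such libraries use to spot the break at the start of each DMX packet; the register names are ATmega328P-specific, F_CPU comes from your build settings, and dmxBuffer/dmxPos are my own illustrative names, not the library's API:

    #include <avr/io.h>
    #include <avr/interrupt.h>

    #define DMX_CHANNELS 512
    volatile uint8_t dmxBuffer[DMX_CHANNELS];
    volatile int16_t dmxPos = -1;              /* -1 = waiting for a break */

    void dmx_rx_init(void)
    {
        /* DMX512 is 250 kbaud, 8 data bits, no parity, 2 stop bits */
        UBRR0  = (F_CPU / (16UL * 250000UL)) - 1;
        UCSR0C = (1 << UCSZ01) | (1 << UCSZ00) | (1 << USBS0);
        UCSR0B = (1 << RXEN0) | (1 << RXCIE0); /* enable RX + RX interrupt */
        sei();
    }

    ISR(USART_RX_vect)
    {
        uint8_t status = UCSR0A;               /* must be read before UDR0 */
        uint8_t data   = UDR0;

        if (status & (1 << FE0)) {             /* framing error = DMX break */
            dmxPos = 0;                        /* next byte is the start code */
        } else if (dmxPos == 0) {
            dmxPos = (data == 0) ? 1 : -1;     /* only start code 0 is dimmer data */
        } else if (dmxPos >= 1 && dmxPos <= DMX_CHANNELS) {
            dmxBuffer[dmxPos - 1] = data;      /* channels 1..512 */
            dmxPos++;
        }
    }

The transmit side would disable the RX interrupt, generate its own break (e.g. by temporarily sending at a lower baud rate), and clock the slots out of the same UART; servicing both directions without ever missing a break is exactly the timing problem you're up against.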
Related
I'm trying to craft a custom packet in C using the TCP/IP protocol. By custom I mean being able to change any value in the packet, e.g. the MAC address, the IP address and so on.
I tried searching around, but I can't find anything that actually guides me or gives example source code.
How can I create a custom packet, and where should I look for guidance?
A relatively easy and portable tool for this is libpcap. It's better known for receiving raw packets (and it's worth playing with that side first, so you can compare received packets with your hand-crafted ones), but the little-known pcap_sendpacket() will actually send a raw packet.
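A minimal sketch of the sending side ("eth0" and the broadcast frame are placeholders; you'll need capture privileges, and you link with -lpcap):

    #include <pcap.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* "eth0" is a placeholder; pick your own interface */
        pcap_t *p = pcap_open_live("eth0", 65536, 0, 1000, errbuf);
        if (!p) { fprintf(stderr, "pcap_open_live: %s\n", errbuf); return 1; }

        unsigned char frame[64];
        memset(frame, 0, sizeof frame);
        memset(frame, 0xff, 6);                /* dst MAC: broadcast */
        /* bytes 6-11: src MAC, bytes 12-13: EtherType, 14+: payload */
        frame[12] = 0x08; frame[13] = 0x00;    /* EtherType: IPv4 */

        if (pcap_sendpacket(p, frame, sizeof frame) != 0)
            fprintf(stderr, "pcap_sendpacket: %s\n", pcap_geterr(p));

        pcap_close(p);
        return 0;
    }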
If you want to do it from scratch yourself, open a socket with AF_PACKET and SOCK_RAW (that's for Linux; other OSes may vary) - for example see http://austinmarton.wordpress.com/2011/09/14/sending-raw-ethernet-packets-from-a-specific-interface-in-c-on-linux/ and the full code at https://gist.github.com/austinmarton/1922600. Note that you need to be root (or, more accurately, to have the appropriate capability) to do this.
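The core of that approach looks something like the following (a compressed sketch of what the linked code does; the interface name and addresses are placeholders):

    #include <arpa/inet.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>   /* ETH_P_ALL, ETH_ALEN */
    #include <net/if.h>           /* if_nametoindex */
    #include <string.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket"); return 1; }  /* needs CAP_NET_RAW */

        int ifindex = if_nametoindex("eth0");        /* placeholder iface */
        if (!ifindex) { perror("if_nametoindex"); return 1; }

        unsigned char frame[64] = {0};
        memset(frame, 0xff, 6);                 /* dst MAC: broadcast */
        /* bytes 6-11: your src MAC; bytes 12-13: EtherType */
        frame[12] = 0x08; frame[13] = 0x00;     /* IPv4 */

        struct sockaddr_ll addr = {0};
        addr.sll_family  = AF_PACKET;
        addr.sll_ifindex = ifindex;
        addr.sll_halen   = ETH_ALEN;
        memset(addr.sll_addr, 0xff, ETH_ALEN);

        if (sendto(fd, frame, sizeof frame, 0,
                   (struct sockaddr *)&addr, sizeof addr) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }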
Also note that if you are trying to send raw TCP/UDP packets, one problem you will have is stopping the network stack from automatically processing the reply (either by treating it as addressed to an existing IP address or by attempting to forward it).
Doing this sort of thing is not as simple as you might think. Controlling the data above the IP layer is relatively easy using normal socket APIs, but controlling data below it is more involved. Most operating systems make changing lower-level protocol information difficult, since the kernel itself manages network connections and doesn't want you messing things up. Beyond that, there are platform differences, network controls, etc. that can play havoc with you.
You should look into some of the libraries that are out there to do this. Some examples:
libnet - http://libnet.sourceforge.net/
libdnet - http://libdnet.sourceforge.net/
If your goal is to spoof packets, you should read up on network-based spoofing mitigation techniques too (for example egress filtering to prevent spoofed packets from exiting a network).
Per http://www.solacesystems.com/blog/kernel-bypass-revving-up-linux-networking:
[...]a network driver called OpenOnload that use “kernel bypass” techniques to run the application and network driver together in user space and, well, bypass the kernel. This allows the application side of the connection to process many more messages per second with lower and more consistent latency.
[...]
If you’re a developer or architect who has fought with context switching for years kernel bypass may feel like cheating, but fortunately it’s completely within the rules.
What functions are needed to do this kind of kernel bypass?
A TCP offload engine will "just work", with no special application programming needed. It doesn't bypass the whole kernel; it just moves part of the TCP/IP stack from the kernel to the network card, so the driver is slightly higher level. The kernel API is the same.
A TCP offload engine is supported by most modern gigabit interfaces.
Alternatively, if you mean "running code on a SolarFlare network adapter's embedded processor/FPGA 'Application Onload Engine'", then... that's card-specific. You're basically writing code for an embedded system, so you need to say which kind of card you're using.
Okay, so the question is not straightforward to answer without knowing how the kernel handles the network stack.
In general the network stack is made up of many layers, the lowest one being the actual hardware. That hardware is typically supported by drivers (one per network interface), and the NICs usually provide very simple interfaces: think receive and send raw data.
On top of this physical connection, with its ability to receive and send data, sits a whole set of protocols, which are layered as well. Near the bottom is the IP protocol, which basically lets you specify the receiver of your information, while at the top you'll find TCP, which supports reliable connections.
So in order to answer your question, you must first figure out which part of the network stack you need to replace, and what it has to do. From my understanding of your question, it seems you want to keep the original network stack and only sometimes use your own; in that case you could implement the strategy pattern and make it possible to state which packets should be handled by which top level of the network stack.
Depending on how the network stack is implemented in Linux, you may or may not be able to achieve this without kernel changes. In a microkernel architecture, where each part of the network stack is implemented as its own service, this would be trivial: you would simply pipe the lower parts of the network stack into your strategy, and have it pipe the input to the required top-level network layers.
Do you perhaps want to send and receive raw IP packets?
Basically you will need to fill in the headers and the data of an IP packet yourself.
There are some examples here on how to send raw Ethernet packets: http://austinmarton.wordpress.com/2011/09/14/sending-raw-ethernet-packets-from-a-specific-interface-in-c-on-linux/
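For raw IP (one layer up from those Ethernet examples), the usual trick on Linux is a SOCK_RAW socket with the IP_HDRINCL option, which tells the kernel you will supply the IP header yourself. A sketch with placeholder addresses (it needs root/CAP_NET_RAW, and the payload is left empty for brevity; a real datagram would carry a proper UDP or TCP header too):

    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW);
        if (fd < 0) { perror("socket"); return 1; }

        int on = 1;
        setsockopt(fd, IPPROTO_IP, IP_HDRINCL, &on, sizeof on);

        unsigned char pkt[sizeof(struct iphdr) + 4] = {0};
        struct iphdr *ip = (struct iphdr *)pkt;
        ip->version  = 4;
        ip->ihl      = 5;
        ip->ttl      = 64;
        ip->protocol = IPPROTO_UDP;
        ip->tot_len  = htons(sizeof pkt);
        ip->saddr    = inet_addr("192.0.2.1");   /* any source you like */
        ip->daddr    = inet_addr("192.0.2.2");   /* placeholder destination */
        /* leave ip->check at 0: Linux fills in the header checksum */

        struct sockaddr_in dst = {0};
        dst.sin_family = AF_INET;
        dst.sin_addr.s_addr = ip->daddr;

        if (sendto(fd, pkt, sizeof pkt, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }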
To handle TCP/IP on your own, I think you might need to disable the TCP driver in a custom kernel and then write your own user-space server that reads raw IP.
It's probably not that efficient, though...
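Reading is the easier half: a SOCK_RAW socket bound to a protocol hands you complete IP datagrams, headers included. A minimal sketch (note that without the custom kernel mentioned above, the normal TCP stack still processes these packets too; you only get a copy):

    #include <arpa/inet.h>
    #include <netinet/ip.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Receives a copy of every incoming TCP segment, IP header
           included. Needs root/CAP_NET_RAW. */
        int fd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
        if (fd < 0) { perror("socket"); return 1; }

        unsigned char buf[65536];
        for (;;) {
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
            if (n < (ssize_t)sizeof(struct iphdr)) continue;

            struct iphdr *ip = (struct iphdr *)buf;
            struct in_addr src = { .s_addr = ip->saddr };
            printf("%zd bytes from %s, proto %d\n",
                   n, inet_ntoa(src), ip->protocol);
        }
    }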
Assuming I would like to avoid the overhead of the Linux kernel in handling incoming packets, and instead would like to grab packets directly from user space: I have googled around a bit, and it seems that all that needs to happen is to use raw sockets with some socket options. Is this the case, or is it more involved than that? And if so, what can I google for or reference in order to implement something like this?
There are many techniques for networking with kernel bypass.
First, if you are sending messages to another process on the same machine, you can do so through a shared memory region with no jumps into the kernel.
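A minimal sketch of that shared-memory setup, using POSIX shm_open() and mmap() (the segment name "/demo_ring" and the fixed size are placeholders, and a real design also needs synchronisation between the two processes, e.g. a ring buffer with atomic indices or a semaphore):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SHM_NAME "/demo_ring"   /* placeholder name */
    #define SHM_SIZE 4096

    int main(void)
    {
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

        char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) { perror("mmap"); return 1; }

        /* Every process that maps SHM_NAME now sees the same bytes;
           system calls are needed only for setup, not per message. */
        strcpy(region, "hello from the producer");

        munmap(region, SHM_SIZE);
        close(fd);
        return 0;
    }

The consumer does the same shm_open()/mmap() with the same name and just reads; link with -lrt on older glibc.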
Passing packets over a network without involving the kernel gets more interesting, and involves specialized hardware that gets direct access to user memory. This idea is called RDMA (Remote Direct Memory Access).
Here's one way it can work (this is what InfiniBand hardware does). The application registers a memory buffer with the RDMA hardware. This buffer is pinned in physical memory, since swapping it out would obviously be bad (the hardware would keep writing to the old physical region). A control region is also mapped into user-space memory. When the application is ready to use the buffer to send or receive a message, it writes a command to the control region. The hardware takes the data from a registered buffer on one end and places it into a registered buffer at the other end.
Clearly, this is too low level, so there are abstractions that make programming RDMA hardware easier. OFED verbs are one such abstraction.
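To give a flavour of the verbs API, here is just the buffer-registration step described above. This compiles against libibverbs (link with -libverbs) and only runs on a machine with RDMA hardware; everything past registration (queue pairs, connection setup, posting work requests) is omitted:

    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) { fprintf(stderr, "no RDMA devices\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* The "register a memory buffer" step: the pages are pinned and
           the adapter gets keys for direct access to them. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

The rkey printed here is what a remote peer would present to read or write this buffer directly.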
The InfiniBand software stack has one extra interesting bit: the Sockets Direct Protocol (SDP) that is used for compatibility with existing applications. It works by inserting an LD_PRELOAD shim that translates standard socket API calls into IB verbs.
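The LD_PRELOAD trick itself is generic and easy to demonstrate. This is not SDP, just the interposition mechanism it relies on: a shared object that overrides socket() and could, as SDP does, substitute a different address family before calling the real function:

    /* shim.c: build with  gcc -shared -fPIC -o shim.so shim.c -ldl
       then run a program as  LD_PRELOAD=./shim.so some_program      */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int socket(int domain, int type, int protocol)
    {
        static int (*real_socket)(int, int, int);
        if (!real_socket)
            real_socket = (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");

        /* An SDP-style shim would rewrite `domain` here to divert TCP
           sockets onto its own transport; this one just logs the call. */
        fprintf(stderr, "socket(%d, %d, %d) intercepted\n",
                domain, type, protocol);
        return real_socket(domain, type, protocol);
    }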
InfiniBand is just what I'm most familiar with. RoCE/iWARP hardware is very similar from the programmer's perspective, but uses a different transport than InfiniBand (TCP via an offload engine for iWARP, Ethernet for RoCE). There are/were also other approaches to RDMA (Quadrics, for example).
Say there are two programs running on a computer (for the sake of simplicity, the only user programs running on Linux), one of which calls recv(), and one of which uses pcap to detect incoming packets. A packet arrives and is detected both by the program using pcap and by the program using recv(). But is there any case (for instance recv() returning between calls to pcap_next()) in which one of these two will not get the packet?
I really don't understand how the buffering system works here, so the more detailed the explanation the better: is there any conceivable case in which one of these programs would see a packet that the other does not? And if so, what is it and how can I prevent it?
AFAIK, there do exist cases where one would receive the data and the other wouldn't (both ways). It's possible that I've gotten some of the details wrong here, but I'm sure someone will correct me.
Pcap uses different mechanisms to sniff on interfaces, but here's how the general case works:
A network card receives a packet (the driver is notified via an interrupt)
The kernel places that packet into appropriate listening queues: e.g.,
The TCP stack.
A bridge driver, if the interface is bridged.
The interface that PCAP uses (a raw socket connection).
Those buffers are flushed independently of each other:
As TCP streams are assembled and data delivered to processes.
As the bridge sends the packet to the appropriate connected interfaces.
As PCAP reads received packets.
I would guess that there is no hard way to guarantee that both programs receive every packet. That would require blocking on a buffer when it's full, and that could lead to starvation, deadlock, all kinds of problems. It may be possible with interconnects other than Ethernet, but the general philosophy there is best-effort.
Unless the system is under heavy load, however, I would say the loss rate would be quite low and most packets would be received by both. You can decrease the risk of loss by increasing the buffer sizes. A quick google search turned this up, but I'm sure there are a million more ways to do it.
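With libpcap specifically, the kernel capture buffer is one of those knobs: use the pcap_create()/pcap_activate() sequence instead of pcap_open_live(), which lets you call pcap_set_buffer_size() in between. A sketch (the interface name and the 16 MB figure are placeholders):

    #include <pcap.h>
    #include <stdio.h>

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_create("eth0", errbuf);      /* placeholder iface */
        if (!p) { fprintf(stderr, "%s\n", errbuf); return 1; }

        pcap_set_snaplen(p, 65535);
        pcap_set_promisc(p, 1);
        pcap_set_timeout(p, 1000);                    /* milliseconds */
        pcap_set_buffer_size(p, 16 * 1024 * 1024);    /* 16 MB kernel buffer */

        if (pcap_activate(p) != 0) {
            fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
            return 1;
        }

        struct pcap_pkthdr *hdr;
        const unsigned char *data;
        while (pcap_next_ex(p, &hdr, &data) >= 0)     /* 0 = timeout, keep going */
            printf("captured %u bytes\n", hdr->caplen);

        pcap_close(p);
        return 0;
    }

A bigger buffer only raises the load at which drops start; it doesn't turn best-effort capture into a guarantee.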
If you need hard guarantees, I think a more powerful model of the network is needed. I've heard great things about Netgraph for these kinds of tasks. You could also just install a physical box that inspects packets (the hardest guarantee you can get).
If I use UDP sockets for interprocess communication, can I expect that all sent data is received by the other process, in the same order?
I know this is not true for UDP in general.
No. I have been bitten by this before. You may wonder how it can possibly fail, but you'll run into buffers of pending packets filling up, and consequently packets will be dropped. Exactly how the network subsystem drops packets is implementation-dependent and not specified anywhere.
In short, no. You shouldn't be making any assumptions about the order of data received on a UDP socket, even over localhost. It might work, it might not, and it's not guaranteed to.
No, there is no such guarantee, even with local sockets. If you want an IPC mechanism that guarantees in-order delivery, you might look into pipes, e.g. via popen(), which opens a pipe to a child process that you can read from or write to (standard popen() is one-directional; use a pair of pipes or socketpair() if you need full duplex). Pipes guarantee in-order delivery and can be used with synchronous or asynchronous I/O (select() or poll()), depending on how you want to build the application.
On Unix there are other options such as Unix domain sockets or System V message queues (some of which may be faster), but reading and writing a pipe is dead simple and it works. As a bonus, it's easy to test your server process, because it is just reading and writing from stdio.
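A minimal sketch of the pipe approach (a bare pipe() between parent and child; popen() wraps the same mechanism around a command line):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                     /* child: the reader */
            close(fds[1]);
            char buf[64];
            ssize_t n;
            /* Bytes arrive exactly in the order they were written; the
               kernel blocks the writer when the pipe fills up, so nothing
               is silently dropped (unlike UDP). */
            while ((n = read(fds[0], buf, sizeof buf)) > 0)
                fwrite(buf, 1, n, stdout);
            return 0;
        }

        close(fds[0]);                         /* parent: the writer */
        for (int i = 0; i < 5; i++) {
            char msg[32];
            int len = snprintf(msg, sizeof msg, "message %d\n", i);
            write(fds[1], msg, len);
        }
        close(fds[1]);                         /* EOF for the reader */
        return 0;
    }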
On Windows you could look into Named Pipes, which work somewhat differently from their Unix namesakes but are used for precisely this sort of interprocess communication.
Loopback UDP is incredibly unreliable on many platforms, you can easily see 50%+ data loss. Various excuses have been given to the effect that there are far better transport mechanisms to use.
There are many middleware stacks available these days to make IPC easier to use and cross platform. Have a look at something like ZeroMQ or 29 West's LBM which use the same API for intra-process, inter-process (IPC), and network communications.
The socket interface will probably not apply flow control to the originator of the data, so you will probably see reliable transmission if you have higher-level flow control, but there is always the possibility that a memory crunch could still cause a dropped datagram.
Without flow control limiting kernel memory allocation for datagrams, I imagine it will be just as unreliable as network UDP.
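If you do stay with loopback UDP, the knob that most directly addresses that memory crunch is the socket receive buffer. A sketch (the 4 MB figure is arbitrary, and the kernel caps what it grants; on Linux the ceiling is the net.core.rmem_max sysctl):

    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        int size = 4 * 1024 * 1024;             /* request a 4 MB buffer */
        setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, sizeof size);

        socklen_t len = sizeof size;
        getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len);
        printf("SO_RCVBUF actually granted: %d bytes\n", size);

        /* A bigger buffer only delays the problem: if the reader falls
           behind for long enough, datagrams are still dropped silently. */
        return 0;
    }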