I need to define a communication protocol for talking to a Linux device driver. Protocol Buffers look very nice, and there is an active C port.
Is it possible to use protobufs in a Linux device driver?
Obviously the vanilla C code will not work as-is, since it makes malloc() calls and so on. Is there a Protocol Buffers implementation that targets the kernel?
If there is no drop-in solution, how much effort is it to port a C library for use in the kernel?
Bonus question: are the answers significantly different when writing Windows drivers?
In theory, you could do this - but there really isn't any point in doing so. Protocol Buffers was created to ease the task of transferring data between different machines and languages that use different representations for binary data - but the interface between a kernel driver and userspace is on the same machine (and typically the same language - a C language library is usually used on the userspace side, even when writing application code in another language).
This means that the different representation issue doesn't arise - you can simply define structs in header files and pass those across the kernel/userspace boundary.
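For example, a minimal sketch of that shared-header approach (the struct fields, device semantics, and ioctl number here are made up for illustration, not from any real driver):

    /* foo_ioctl.h -- included by both the driver and the userspace tool. */
    #include <linux/types.h>
    #include <linux/ioctl.h>

    struct foo_stats {
        __u32 rx_packets;   /* fixed-width types keep the layout identical on both sides */
        __u32 tx_packets;
    };

    #define FOO_GET_STATS _IOR('f', 1, struct foo_stats)

Userspace then reads the same struct with a plain ioctl(fd, FOO_GET_STATS, &stats) call; no serialization layer is needed.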
Related
I'm trying to implement an echo TCP server as a loadable kernel module.
Should I use sock_create, or sock_create_kern?
Should I use accept, or kernel_accept?
I mean it does make sense that I should use kernel_accept for example; but I don't know why. Can't I use normal sockets in the kernel?
The problem is that you are trying to shoehorn a user-space application into the kernel.
Sockets (and files and so on) are things the kernel provides to userspace applications via the kernel-userspace API/ABI. Some, but not all, also have an in-kernel callable counterpart, for cases when another kernel thingy wishes to use something normally provided to userspace.
Let's look at the Linux kernel implementation of the socket() or accept() syscalls, in net/socket.c in the kernel sources; look for SYSCALL_DEFINE3(socket, ...), SYSCALL_DEFINE3(accept, ...), SYSCALL_DEFINE4(recv, ...), and so on.
(I recommend you use e.g. Elixir Cross Referencer to find specific identifiers in the Linux kernel sources, then look up the actual code in one of the official kernel Git trees online; that's what I do, anyway.)
Note how pointer arguments have a __user qualifier: this means the data pointed to must reside in user space, and that the functions will eventually use copy_from_user()/copy_to_user() to retrieve or set the data. Furthermore, the operations access the file descriptor table, which is part of the process context: something that normally only exists for userspace processes.
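As a rough illustration of what those entry points do with a __user pointer (the function and buffer here are invented for the example):

    #include <linux/uaccess.h>

    /* Hypothetical kernel-side handler: data must be copied across the
     * boundary, never dereferenced directly through the __user pointer. */
    static long example_write(const char __user *ubuf, size_t len)
    {
        char kbuf[64];

        if (len > sizeof kbuf)
            return -EINVAL;
        if (copy_from_user(kbuf, ubuf, len))
            return -EFAULT;   /* the pointer did not refer to valid user memory */
        /* ... operate on kbuf, which is now in kernel space ... */
        return len;
    }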
Essentially, this means your kernel module must create a userspace "process" (enough of one to satisfy the requirements of crossing the userspace-kernel boundary when using kernel interfaces) to "hold" the memory and file descriptors, at minimum. It is a lot of work, and in the end, it won't be any more performant than a userspace application would be. (Linux kernel developers have worked on this for literally decades. There are some proprietary operating systems where doing stuff in "kernel space" may be faster, but that is not so in Linux. The cost of doing things in userspace is some context switches, and possibly some memory copies (for the transferred data).)
In particular, the TCP/IP and UDP/IP interfaces (see e.g. net/ipv4/udp.c for UDP/IPv4) do not seem to have any interface for kernel-side buffers (other than directly accessing the rx/tx socket buffers, which are in kernel memory).
You have probably heard of TUX web server, a subsystem patch to the Linux kernel by Ingo Molnár. Even that is not a "kernel module server", but more like a subsystem that an userspace process can use to implement a server that runs mostly in kernel space.
The idea of a kernel module that provides a TCP/IP and/or UDP/IP server, is simply like trying to use a hammer to drive in screws. It will work, after a fashion, but the results won't be pretty.
However, for the particular case of an echo server, it just might be possible to bolt it on top of IPv4 (see net/ipv4/) and/or IPv6 (see net/ipv6/) similar to ICMP packets (net/ipv4/icmp.c, net/ipv6/icmp.c). I would consider this route if and only if you intend to specialize in kernel-side networking stuff, as otherwise everything you'd learn doing this is very specialized and not that useful in practice.
If you need to implement something kernel-side for an exercise or something, I'd recommend steering away from "application"-type ideas (services or similar).
Instead, I would warmly recommend developing a character device driver, possibly implementing some kind of inter-process communications layer, preferably bus-style (i.e., one sender, any number of recipients). Something like that has a number of actual real-world use cases (both hardware drivers, as well as stranger things like kdbus-type stuff), so anything you'd learn doing that would be real-world applicable.
(In fact, an echo character device -- which simply outputs whatever is written to it -- is an excellent first target. Although LDD3 is for Linux kernel 2.6.10, it should be an excellent read for anyone diving into Linux kernel development. If you use a more recent kernel, just remember that the example code might not compile as-is, and you might have to do some research wrt. Linux kernel Git repos and/or a kernel source cross referencer like Elixir above.)
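To give a feel for the scale of it, here is a minimal sketch of such an echo character device as a misc device (the device name and buffer size are arbitrary; locking and edge cases are omitted, and recent-kernel helpers like module_misc_device() are assumed):

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/uaccess.h>

    static char echo_buf[4096];
    static size_t echo_len;

    static ssize_t echo_read(struct file *f, char __user *buf,
                             size_t count, loff_t *ppos)
    {
        /* Hand back whatever was last written. */
        return simple_read_from_buffer(buf, count, ppos, echo_buf, echo_len);
    }

    static ssize_t echo_write(struct file *f, const char __user *buf,
                              size_t count, loff_t *ppos)
    {
        ssize_t ret = simple_write_to_buffer(echo_buf, sizeof echo_buf,
                                             ppos, buf, count);
        if (ret > 0)
            echo_len = *ppos;
        return ret;
    }

    static const struct file_operations echo_fops = {
        .owner = THIS_MODULE,
        .read  = echo_read,
        .write = echo_write,
    };

    static struct miscdevice echo_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "echo0",
        .fops  = &echo_fops,
    };

    module_misc_device(echo_dev);
    MODULE_LICENSE("GPL");

After insmod, something like echo hello > /dev/echo0 followed by cat /dev/echo0 exercises both paths.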
In short, sockets are just a mechanism that enables two processes to talk, locally or remotely.
If you want to send data from the kernel to userspace, you have to use kernel sockets: sock_create_kern() and its family of functions.
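A minimal sketch of those kernel-side calls, assuming a recent kernel (older trees differ; sock_create_kern() once lacked the struct net argument) and an arbitrarily chosen port:

    #include <linux/net.h>
    #include <linux/in.h>
    #include <net/net_namespace.h>

    /* Create a listening TCP socket entirely in kernel space. */
    static int start_listener(struct socket **srv)
    {
        struct sockaddr_in addr = {
            .sin_family      = AF_INET,
            .sin_port        = htons(7777),        /* example port */
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };
        int err;

        err = sock_create_kern(&init_net, AF_INET, SOCK_STREAM,
                               IPPROTO_TCP, srv);
        if (err)
            return err;
        err = kernel_bind(*srv, (struct sockaddr *)&addr, sizeof(addr));
        if (!err)
            err = kernel_listen(*srv, 5);
        if (err)
            sock_release(*srv);
        return err;
    }

Connections would then be taken with kernel_accept() and serviced with kernel_recvmsg()/kernel_sendmsg().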
What would be the benefit of a TCP echo server as a kernel module?
It makes sense only if your TCP server provides data which is otherwise not accessible from userspace, e.g. reading some post-mortem NVRAM that you can't read normally and sending it to rsyslog via a socket.
I need to make LabVIEW communicate with a C/C++ application. Both applications run on the same machine. What is the IPC mechanism with the lowest overhead and highest speed available in LabVIEW?
TCP, UDP, ActiveX, DDE, file transactions, or perhaps just directly calling a DLL are the solutions that come to mind.
First, I'd just call a DLL if you can manage with that. Assuming you're tied into using two separate applications, then:
I'd use TCP or UDP. File transactions are clunky but easy to implement; DDE is older but might be viable (I'd recommend against it).
Basic TCP/IP in Labview
TCP/IP and UDP in Labview
Calling a dll from Labview
Have you investigated straight up TCP or UDP?
It'll make things easy if you ever need to separate the applications onto different machines later on down the road. Implementation is pretty straightforward too, although it may not give the fastest throughput.
What speeds are we talking about here?
NI has provided a thorough document explaining that: Using External Code in LabVIEW [pdf]. In brief, you can use:
Shared Libraries (on Windows they are called DLLs). According to the above document, "any language can be used to write DLLs as long as the DLLs can be called using one of the calling conventions LabVIEW supports, either stdcall or C." (See the sketch below.)
Code Interface Node (CIN), which is a block diagram node that links C/C++ source code to LabVIEW.
.NET technology.
Note that "Shared Libraries" and "Code Interface Node" are supported on Windows, Max OS X, Linux and Solaris.
I was asked to develop an algorithm for a network application in C. The project will be developed on Linux for the PC and then transferred to a more portable platform, something built around a microcontroller. There are many microcontroller companies out there that provide very nice, large TCP/IP libraries. The software will keep statistics on network performance.
The whole idea of cross-platform (uC - PC) code seems like rubbish to me, because eventually the code will have to be written in a more platform-specific way for the microcontroller; but I am no expert, so I can't really judge.
Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "wrapper library" and "Matlab"... Any ideas?
Thanks!
I do agree with you to some extent: you want the target system and the system on which you develop in the interim to be as close as possible (it is better if they match exactly). Nevertheless, the idea of cross-platform development is to let you start firmware development while the hardware is still being designed. Instead of doing it on Linux, what I would do is use an embedded OS simulator. Here are the steps:
- Step 1: Identify the OS for the embedded system; make sure that OS has a simulator that runs on a PC (Windows or Linux). Typical embedded OSes with simulators include VxWorks, μC/OS-II, QNX, and uClinux. Agreeing on the OS means that the hardware design team knows the OS is the right match for the hardware being designed, and that there is consensus that the hardware + OS + application combination will meet the requirements of the system being developed.
- Step 2: Use this simulator to develop the application until the hardware that is being designed is brought up.
- Step 3: Once the first version of the hardware is ready and has been powered up, you can run your application with minimal changes - most likely no changes to the code, but changes to the linker/libraries being used are likely.
The idea of cross-platform development, if done correctly, has immense advantages: it helps keep your project's development activities from being serialized.
Given that you mention it is a TCP/IP application, check for Berkeley sockets support and use it. Usually this API is unaffected by using a simulator, and in the extreme case that you have to change the OS for whatever reason, a Berkeley-sockets-based application is likely to be more portable.
Just assume you can use the standard BSD socket library (system calls are socket(), bind(), accept(), connect(), recv(), send(), with various options). Any OS with a TCP/IP stack will support this standard API.
There may be some caveats that you will run into if your embedded system uses a run-to-completion TCP/IP stack like uIP, but those will be easily solvable.
Also, stick to standard C file I/O (fopen(), fread(), fwrite(), printf(), etc.). But keep in mind your target may not have a filesystem.
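For instance, a minimal TCP client written against only the BSD calls listed above (the address and port are placeholders) should build essentially unchanged anywhere Berkeley sockets are supported:

    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        char buf[64];
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7);                      /* classic echo port */
        addr.sin_addr.s_addr = inet_addr("192.0.2.1"); /* placeholder address */

        if (fd >= 0 && connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            send(fd, "ping", 4, 0);
            recv(fd, buf, sizeof(buf), 0);
        }
        if (fd >= 0)
            close(fd);
        return 0;
    }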
If using a simulator was not an option I would try to wrap the Linux functions up in interfaces that match those of the embedded system, if possible. That way any extra bulk in the system will be on the Linux development system (which is not resource constrained). Various embedded OSes and TCP/IP stacks can have vastly different architectures, so how easy this is can range from nearly impossible to no work at all.
If it turns out that writing wrapper libraries to make Linux look like the embedded system is too difficult then I suggest at least trying to keep the embedded OS in mind while writing the Linux version so that you can try to at least write some functions so that they work on both systems.
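As a sketch of the wrapper idea: suppose the embedded stack exposes a hypothetical net_send(conn, buf, len) call (the names here are invented, not from any real stack). The Linux development build can implement the same interface on top of BSD sockets, so application code compiles unchanged on both systems:

    #include <sys/socket.h>

    /* Hypothetical connection handle matching the embedded API's shape;
     * on Linux it just wraps a socket descriptor. */
    typedef struct {
        int fd;
    } net_conn_t;

    int net_send(net_conn_t *conn, const void *buf, int len)
    {
        return (int)send(conn->fd, buf, (size_t)len, 0);
    }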
If it doesn't take too long, writing a Linux version of at least part of the code may help you shake out a few flaws in the overall design, at the very least. At most, it will let you test changes to the system more quickly, since loading code onto an embedded device often takes longer than you would like. It may also be easier to debug on your development machine.
Some embedded OSes will run on x86, and it would not surprise me if some of them have drivers that allow them to be run in virtual machines, so this may be an option as well.
Another thing to consider is the endianness and the word size of the development machine versus the embedded system. If these differ, you need to keep it in mind as you code. In my opinion, getting this sort of thing right when you originally write the code is easier than going back and trying to fix it afterwards.
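A common way to sidestep both issues from day one is to use fixed-width types and an explicit byte order on the wire, for example (a sketch; put_u32 is just an illustrative helper name):

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl(); an embedded stack usually provides its own */

    /* Serialize a 32-bit value in network byte order so a little-endian
     * PC and a big-endian microcontroller read the same bytes. */
    void put_u32(uint8_t *buf, uint32_t value)
    {
        uint32_t be = htonl(value);
        memcpy(buf, &be, sizeof(be));
    }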
I want to send data or packets to a particular IP address using only the ANSI C standard, so that my code will be platform independent. How is this possible on Windows without using Windows libraries like Winsock etc.? Kindly give me some guidelines or hints.
I don't think it's possible to write platform-independent socket code, because although ANSI C is a standard, well-defined language, network communications are invariably a feature provided by the operating system and will vary from OS to OS. This means that your code will have differences between platforms. The best you can do is mitigate these differences by constructing a clever API/library to limit the code you need to rewrite when porting.
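A sketch of what such a mitigation layer can look like: confine the per-OS differences to one header, so the rest of the code is written against a single set of names (sock_t and sock_close are invented for the example):

    /* portable_sock.h -- the only file that knows which OS it is on. */
    #ifdef _WIN32
    #  include <winsock2.h>            /* also requires WSAStartup() at program start */
       typedef SOCKET sock_t;
    #  define sock_close(s) closesocket(s)
    #else
    #  include <sys/socket.h>
    #  include <unistd.h>
       typedef int sock_t;
    #  define sock_close(s) close(s)
    #endif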
I'm writing a small kernel for my programs in C.
This is not (at the moment) an OS kernel; it's merely a way for me to keep track of input and output in programs without relying on an external source (i.e. stdio.h). You might ask why I'd ever want to do this: it's just so I know how this works, and so that I have more and more (the end goal is total) control over program flow.
I was wondering if anyone knows some tutorials on input and output in C (with inline asm?) without relying on any other code.
There is a lot of room between the bare metal and stdio. You have said you aren't writing an OS kernel, but not whether or not you are running under an OS.
Running directly on hardware without an OS, you will still want to encapsulate all of your I/O operations in a module, even if you don't formally define a device driver interface and framework for all of your I/O modules to follow. This is hugely architecture dependent, and makes you responsible for knowing all of the details of interaction with every I/O device you might ever use. For some devices, this can quickly become a huge development effort. That isn't a problem for embedded systems, but running on commercial hardware this way is neither easy nor recommended.
Running within an OS, you probably don't get (and shouldn't want to get) access to the actual hardware registers and interrupts. If you are developing a custom I/O device, the best practice is to make it conform to existing standards so that you need as little low level custom software for it as possible. This is why you see a lot of custom user interface gadgets connecting via USB and identifying themselves as HIDs (Human Interface Devices). As a HID, the existing USB drivers take care of the physical layer, and the OS-supplied HID driver takes care of the logical interface, providing a very simple high level access API to the application.
One of the operating system's key roles is to provide a consistent I/O API across all devices. Generally, that takes the form of open(), close(), read(), write(), and ioctl() functions (the names vary, but some form of at least the first four will always exist). The OS layer is quite raw, however. Typically, an OS call is forwarded without much processing to a device driver, which then forwards the data on to the device. Usually, the OS low level calls block the caller until they complete, and often they have restrictions on the sizes of the buffers that make sense. For instance, raw access to a disk device is usually required to be for an integral number of disk blocks at a time.
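For example, on a Unix-like system a whole-block read from a raw disk device looks like this (the device path is illustrative, and 512 bytes is the traditional sector size; O_DIRECT access would additionally require an aligned buffer):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char sector[512];                    /* one whole disk block */
        int fd = open("/dev/sda", O_RDONLY); /* illustrative device path */

        if (fd >= 0) {
            read(fd, sector, sizeof(sector)); /* blocks until the data arrives */
            close(fd);
        }
        return 0;
    }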
And don't forget about things like file systems and network protocols... all of which are made much more reliable and compatible by encapsulation within an operating system.
Even if it is acceptable to call read() and write() for single characters, that is usually not the best performance possible. Operating system calls are relatively expensive, and if you can read multiple characters in a single call, your performance can go way up.
That is the origin of the stdio library for C, and various other buffering libraries in other environments. The stdio library provides a buffering layer that isolates the C code from the block size of the underlying hardware. Even on an entirely home-grown operating system where you have full control over all the devices, something like C stdio will still be valuable.
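The core of that buffering layer fits in a few lines; here is a sketch of a getchar()-like function built directly on the Unix read() call (names are illustrative, and real stdio adds error handling, ungetc() support, and so on):

    #include <unistd.h>

    static char    buf[4096];   /* one syscall fills this buffer... */
    static ssize_t len = 0;
    static ssize_t pos = 0;

    int my_getchar(void)
    {
        if (pos >= len) {                    /* buffer exhausted: refill */
            len = read(0, buf, sizeof(buf)); /* fd 0 is standard input */
            pos = 0;
            if (len <= 0)
                return -1;                   /* EOF or error */
        }
        return (unsigned char)buf[pos++];    /* ...then many calls are served from memory */
    }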
Writing your own stdio replacement is a highly valuable exercise, even if you don't use it in production code, and is one I would recommend to anyone wanting to learn about what really goes on between printf() and scanf() and the terminal or files.
One valuable resource is the book The Standard C Library by P.J. Plauger. In it, the author presents an implementation of the complete C runtime library specified in the ANSI standard. His discussion of the specific implementation choices he made is valuable and apropos to the context of this question, and the discussions of why some of the standard library features were specified is interesting as well.
This sort of thing is very architecture specific. To put it simply, your I/O devices will raise hardware interrupts to the CPU. The CPU will call the code associated with the interrupt which will deal with it appropriately; for an input device it will fetch the data that is available from the device, for an output device the interrupt usually means that the device is ready to send the next piece.
The old 8088/8086 CPU architecture is a nice simple place to start to get your head around this. Typically, the BIOS would be where the hardware interrupts would have been handled, but it was always possible to write your own. ;)
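If you want to try the bare-metal flavor of this on x86 today, polled serial output is the classic first step. A sketch using GCC inline assembly (0x3F8 is the conventional COM1 base port on PC hardware; this assumes no OS is in the way):

    /* Port I/O primitives via inline asm (GCC/Clang syntax). */
    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ __volatile__ ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline unsigned char inb(unsigned short port)
    {
        unsigned char v;
        __asm__ __volatile__ ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    /* Busy-wait until the UART transmit buffer is empty, then send one byte. */
    void serial_putc(char c)
    {
        while (!(inb(0x3F8 + 5) & 0x20))  /* line status register, THR-empty bit */
            ;
        outb(0x3F8, (unsigned char)c);
    }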