Difference between Serial Port and Named Pipe - c

Is there a difference between a Serial Port stream and a Named Pipe (FIFO)? Especially in regards to Linux?
My understanding is that both:
Are full duplex
Can be read/written by processes that are unrelated (as opposed to how regular pipes work)
The only differences I can think of are:
Serial ports have file descriptors backed by actual hardware (which the hardware is reading/writing), whereas a named pipe is just a 'file' created by the kernel to hold a data stream that 2 (or more?) processes can then connect to and read/write.
Any other differences if any?
Also, if I create one named pipe in process P1 (and another of my processes, P2, connects to it), can P1 use that one file descriptor to both write to and read from the named pipe? Can P2 do the same (both read and write)? Or do I need to create 2 named pipes if I want P1 to be able to both write to and read from P2? The practical use is that P1 will write commands to P2 and also read the results of those commands back from P2 as well.

Serial ports are for distinct machines to communicate with each other, not for IPC within the same machine. You can configure serial hardware for loopback, but the highest data rates supported by serial port hardware do not come anywhere close to the speed of any modern interconnect -- not USB or eSATA (for other interfaces with "serial" in their names) nor network interconnects such as ethernet (even wireless). Serial port speed is not even in the same solar system as a FIFO's.
As far as other characteristics go,
serial ports will be presented to the system as device files, and FIFOs also will be presented as files
as such, each can be opened concurrently by multiple unrelated processes, for both reading and writing
however, you need special privileges to create a serial port special file, plus actual hardware behind it for it to be useful, whereas anyone can make a FIFO
communication through serial ports is bi-directional; it can be full-duplex, but half-duplex modes are available as well.
FIFOs are unidirectional, but you can use them in pairs if necessary. In principle, one process could both write to and read from a FIFO, but it would need to be very careful if it wanted to avoid consuming its own messages and to avoid deadlocking.
Bottom line: for bidirectional IPC within one machine, FIFOs are far superior to serial ports. You should also consider a socket interface.
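To make the pair-of-FIFOs idea concrete (including the asker's P1/P2 command/response scenario), here is a minimal sketch of P1's side in C. The paths /tmp/p1_to_p2 and /tmp/p2_to_p1 and the "STATUS" command are made up for the example; P2 would do the mirror-image opens in the order noted in the comment.

/* P1 side of a two-FIFO bidirectional channel (sketch).
 * P2 does the mirror image: open p1_to_p2 for reading first,
 * then p2_to_p1 for writing, so neither side deadlocks in open(). */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *cmd_path  = "/tmp/p1_to_p2";   /* P1 writes commands here  */
    const char *resp_path = "/tmp/p2_to_p1";   /* P1 reads responses here  */

    /* Create both FIFOs; it is fine if they already exist. */
    if (mkfifo(cmd_path, 0600) == -1 && errno != EEXIST) { perror("mkfifo"); return 1; }
    if (mkfifo(resp_path, 0600) == -1 && errno != EEXIST) { perror("mkfifo"); return 1; }

    /* open() on a FIFO blocks until the other end is opened, so both
     * processes must open the two FIFOs in a compatible order. */
    int cmd_fd  = open(cmd_path, O_WRONLY);
    int resp_fd = open(resp_path, O_RDONLY);
    if (cmd_fd == -1 || resp_fd == -1) { perror("open"); return 1; }

    const char *cmd = "STATUS\n";               /* send a command ...       */
    if (write(cmd_fd, cmd, strlen(cmd)) == -1) { perror("write"); return 1; }

    char buf[256];                              /* ... and read the result  */
    ssize_t n = read(resp_fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("P2 replied: %s", buf);
    }

    close(cmd_fd);
    close(resp_fd);
    return 0;
}

The open ordering matters because opening a FIFO normally blocks until the other end is opened, so both processes must agree on which FIFO they open first.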

Related

creating a serial loopback under VxWorks

I'm fairly new to the VxWorks OS, so I wouldn't mind explanations wherever my understanding of how things work under the hood differs from more traditional OSes like Linux and the like. With that out of the way, let me begin my actual question.
I am trying to create a loop-back test for testing changes I made to the serial UART driver on the board. Since I do not want to use a cross cable to actually short two UART ports externally, I've connected both of those ports to my dev machine. One configured as an output port from the dev machine's perspective (and consequently as an Input port on the board) and the other an input port (an output port on the board). I am actually doing the loopback using a shared memory buffer which I am protecting using a mutex. So there are two threads on the board, one of which reads from the input port, copies data to the shared memory and the other reads from the memory and sends it over the output port.
And I am using regular open, read and write calls in my VxWorks application (by the way, I think it counts as application code since I call the functions from usrAppInit.c, notwithstanding the fact that I can even call driver routines from here! Is that because of a flat memory model vis-a-vis Linux? Anyhow).
Now, these ports on VxWorks have been opened in non-blocking mode, and here's the code snippet that configures one of the ports:
if ((fdIn = open(portstrIn, O_RDONLY | O_NOCTTY, 0)) == ERROR) {
    return 1;
}

/* put the port into raw mode */
if ((status = ioctl(fdIn, FIOSETOPTIONS, OPT_RAW)) == ERROR) {
    return 1;
}

/* set the baud rate to 115200 */
if ((status = ioctl(fdIn, FIOBAUDRATE, 115200)) == ERROR) {
    return 1;
}

/* set the HW options (CS8 = 8 data bits) */
if ((status = ioctl(fdIn, SIO_HW_OPTS_SET, (CS8 | 0 | 0))) == ERROR) {
    return 1;
}
And similarly the output port is also configured. These two are part of two separate tasks spawned using taskSpawn, both with the same priority of 100. However, what annoys me is that when I write to the in port from my dev machine (using a Python script), the read call on the board gets sort of staggered (I wonder if that's the right way to refer to it). It is most likely due to the limited hardware buffer space in the UART input buffer (or some such). This is usually not much of a problem if that is all I am doing.
However, as explained earlier, I am trying to copy the whole received character stream into a common memory buffer (guarded by a mutex, of course), which is then read by another task and re-transmitted over another serial port (sort of an in-memory loopback, if you will).
Because of that aforementioned staggering of the read calls, I thought of holding the mutex as long as there are characters to be read from the in port and, once there are no more characters to read, releasing the mutex and, since this is VxWorks, doing an explicit taskDelay(0) to schedule the next ready task (my other task). However, since this is a blocking read, I am (as expected) stuck in the read call, so my other task never gets a chance to execute.
I did think of checking whether the buffer was full and then doing the explicit task switch; however, if any of you have a better idea, I'm all ears.
Also, just to see how this staggered read behaves from the kernel's perspective, I timed it with a time(NULL) call just before and right after the read. Surprisingly, the very first chunk shows a number, and every chunk after that (if it's part of the same data block coming from the outside) shows 0. Could anyone explain that as well?
Keen to hear your thoughts.
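(For reference, a minimal sketch of the "check before you read" idea described above, assuming the VxWorks serial driver supports the standard FIONREAD ioctl; sharedBufPut() and rxMutex are placeholder names for the asker's shared-buffer helper and mutex semaphore.)

#include <vxWorks.h>
#include <ioLib.h>
#include <semLib.h>
#include <taskLib.h>

/* Hypothetical helper that appends data to the shared memory buffer. */
extern void sharedBufPut(const char *data, int len);

/* Reader task body (sketch): ask the driver how many characters are
 * pending with FIONREAD, drain them into the shared buffer under the
 * mutex, and yield with taskDelay(0) when nothing is queued, so the
 * task never blocks inside read() while holding the mutex. */
int rxTask(int fdIn, SEM_ID rxMutex)
{
    char tmp[64];
    int  bytesAvail;

    for (;;)
    {
        if (ioctl(fdIn, FIONREAD, (int)&bytesAvail) == ERROR)
            return ERROR;

        if (bytesAvail > 0)
        {
            int n = read(fdIn, tmp, sizeof(tmp));
            if (n > 0)
            {
                semTake(rxMutex, WAIT_FOREVER);
                sharedBufPut(tmp, n);           /* copy into shared memory */
                semGive(rxMutex);
            }
        }
        else
        {
            taskDelay(0);       /* nothing pending: let the sender task run */
        }
    }
}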
I don't have 50 rep points for commenting, but without a loopback cable attached, the only way you can test serial loopback behavior is to switch the UART into loopback mode. This often means making changes to the driver for the specific hardware part.

How to open a serial port with read/write lines reversed?

I know how to open a serial port using 'open' function:
open("/dev/portname", flags)
But I want two programs to open this port with the read/write lines reversed relative to each other. For example, when program 2 writes something to the port, program 1 can read it.
If you're using a Unix-like operating system, and if you don't need full serial port semantics, named pipes can be quite useful for doing this sort of thing.
If you need more control, you could perhaps use a pair of pseudoterminals, with a third program running in the background shuttling characters between the master ends.
And do see the related question "Virtual Serial Port for Linux" that the StackOverflow machinery already found for you.
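A rough sketch of the pseudoterminal variant, assuming a POSIX system: the background program opens two PTY masters, prints the slave names (which the two other programs then open as their "serial ports"), and shuttles bytes between the masters with select().

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <unistd.h>

/* Open one PTY master and report its slave name. */
static int open_master(void)
{
    int m = posix_openpt(O_RDWR | O_NOCTTY);
    if (m == -1 || grantpt(m) == -1 || unlockpt(m) == -1) {
        perror("pty");
        exit(1);
    }
    printf("slave: %s\n", ptsname(m));
    return m;
}

/* Copy whatever arrived on 'from' over to 'to'. */
static void shuttle(int from, int to)
{
    char buf[512];
    ssize_t n = read(from, buf, sizeof buf);
    if (n > 0 && write(to, buf, (size_t)n) == -1)
        perror("write");
}

int main(void)
{
    int a = open_master();
    int b = open_master();

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(a, &rfds);
        FD_SET(b, &rfds);

        if (select((a > b ? a : b) + 1, &rfds, NULL, NULL, NULL) == -1)
            break;

        if (FD_ISSET(a, &rfds)) shuttle(a, b);
        if (FD_ISSET(b, &rfds)) shuttle(b, a);
    }
    return 0;
}

Each program then opens the slave device printed for it; writes on one slave come out as reads on the other, which is the reversed-lines behavior asked for (minus real serial-port semantics such as baud rates and modem control lines).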
You cannot typically do that in software.
Such things are normally done by hardware, and that is what cross-over cables and "null-modem" cables are good for.

Tell Slave Port Name of Pseudo Terminal

I am coding a Linux process that will read input from a serial stream (a GPS module) and perform some actions based on this input.
While developing the program I intend to use a pseudo terminal (BSD API) so I can send 'dummy' GPS ASCII data to my process and test it. So my master will be my 'GPS device' and my slave will be my actual Linux process that handles the GPS data.
I don't want to fork my process but rather have 2 different programs (the master and the slave). This way I can separate the code nicely. How can I tell my slave what port name to connect to? I.e., /dev/ttp0 or the like?
Maybe I am using pseudo terminals wrong and should fork them?
Ways to pass info (the port name) between processes:
1) use msgsnd()
2) use a pipe()
3) use an mmap area
There are several other methods. I prefer msgsnd.
4) Link to it with a soft link with a fixed name.
For example: /tmp/gpsdevice -> /dev/pts/2. This is trivial to do in the master with symlink().
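A sketch of option 4: the master creates the pseudo terminal, then publishes the slave name at the fixed path /tmp/gpsdevice so the slave program can simply open that. (The asker mentioned the BSD API; the portable posix_openpt()/ptsname() calls are used here, but the same idea works with openpty().)

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master == -1 || grantpt(master) == -1 || unlockpt(master) == -1) {
        perror("pty");
        return 1;
    }

    /* Publish the slave name under a fixed, well-known path. */
    unlink("/tmp/gpsdevice");                    /* drop any stale link */
    if (symlink(ptsname(master), "/tmp/gpsdevice") == -1) {
        perror("symlink");
        return 1;
    }

    /* ... write dummy NMEA sentences to 'master' here ... */
    return 0;
}

The slave process just does open("/tmp/gpsdevice", O_RDONLY) and never needs to know the real /dev/pts name.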

Linux C Programming: Concurrent reads/writes to same file descriptor

I am writing a program that interfaces with a particular serial device. The serial device has two channels, and a hardware rx and tx buffer for each channel. Basically, at any given time, you can read/write to either channel on the device.
I am trying to read data from a channel, validate it (and perhaps use some of the data), and then transmit it. Reads are accomplished with ioctl calls to the device, while writes are accomplished with a call to the write() system call.
The main issue I have is with data throughput. I'd like to have an individual thread handle reading and writing for each channel (i.e., a read thread and a write thread for each of the two channels). However, I have hit a snag. Everything on the device, from Linux's perspective, is accessed via one single device, and I'm not sure that Linux knows that the device has multiple channels.
As a result, I currently open a single file descriptor to the device and perform my reads and writes serially. I'd like to go to the threaded approach, but I'm wondering if concurrent ioctl() and write() calls would cause issues. I understand that read() and write() are not thread safe, but I was wondering if there's any way around that (perhaps calling open() twice, once with read privileges, once with write privileges).
Thanks for your help. Also, I want to avoid having to write my own driver, but that may be an inevitable conclusion...
Also, as a side note, I'm particularly concerned that the device has extremely small hardware buffers. Is there any way to determine how much space the OS uses for a software buffer for the data? That is, can I determine whether or not the OS has its own buffer that it uses to prevent overflow of the hardware buffer? The device in question is an I2C UART bridge.
You can use a semaphore to enforce mutual exclusion between the read and write threads:
sem_t sync_rw;

/* init semaphore: shared between threads (pshared = 0), initial value 1 */
err = sem_init(&sync_rw, 0, 1);
if (err != 0)
{
    perror("cannot init semaphore");
    return -1;
}
In the writer thread's function you do this:
sem_wait(&sync_rw);
write(...)
sem_post(&sync_rw);
Same for the reader thread:
sem_wait(&sync_rw);
ioctl(...)
sem_post(&sync_rw);
Finally:
sem_destroy(&sync_rw);
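Putting those pieces together, a minimal compilable sketch with one reader thread and one writer thread sharing a single descriptor. "/dev/ttyUSB0" is only a placeholder for the real device node, and the device-specific read ioctl is shown as a comment with a plain read() standing in for it.

#include <fcntl.h>
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static sem_t sync_rw;
static int dev_fd;

static void *writer_thread(void *arg)
{
    (void)arg;
    const char *msg = "hello";
    for (int i = 0; i < 10; i++) {
        sem_wait(&sync_rw);                 /* one device call at a time */
        write(dev_fd, msg, strlen(msg));
        sem_post(&sync_rw);
    }
    return NULL;
}

static void *reader_thread(void *arg)
{
    (void)arg;
    char buf[64];
    for (int i = 0; i < 10; i++) {
        sem_wait(&sync_rw);
        /* ioctl(dev_fd, READ_CHANNEL_IOCTL, buf);  <- device-specific read */
        read(dev_fd, buf, sizeof buf);      /* plain read() as a stand-in   */
        sem_post(&sync_rw);
    }
    return NULL;
}

int main(void)
{
    dev_fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* placeholder path */
    if (dev_fd == -1) { perror("open"); return 1; }

    if (sem_init(&sync_rw, 0, 1) != 0) { perror("sem_init"); return 1; }

    pthread_t rt, wt;
    pthread_create(&rt, NULL, reader_thread, NULL);
    pthread_create(&wt, NULL, writer_thread, NULL);
    pthread_join(rt, NULL);
    pthread_join(wt, NULL);

    sem_destroy(&sync_rw);
    close(dev_fd);
    return 0;
}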

Can I use select to send data on multiple interfaces as fast as each interface can process?

I am an experienced network programmer and am faced with a situation where I need some advice.
I am required to distribute some data on several outgoing interfaces (via different TCP socket connections, each corresponding to one interface). However, the important part is that I should be able to send more/most of the data on the interface with better bandwidth, i.e. the one that can send faster.
The idea I had was to use the select API (on both Unix and Windows) for this purpose. I have used select, poll and even epoll in the past, but it was always for READING from multiple sockets whenever data is available.
Here I intend to write successive packets to several interfaces in sequence, then monitor each of them for write readiness (via select), and whichever becomes available (meaning it was able to send the packet first), I would keep sending more packets via that descriptor.
Will I be able to achieve my intention here? I.e., if I have an interface with a 10 Mbps link vs. another one with 1 Mbps, I hope to be able to get most of the packets out via the faster interface.
Update 1: I was wondering what select's behavior would be in this case. When you call select on read descriptors, the descriptors on which data is available are returned. However, in my scenario, when we are writing to the descriptors and waiting for select to return the one that finished writing first, does select ensure it returns only when the entire packet is written? Say I tried writing 1200 bytes in one go. Will it only return when the entire 1200 are written or there is a permanent error? I am not sure how select would behave and failed to find any documentation describing that.
I'd adapt the producer/consumer pattern. In this case, one producer and several consumers.
Let the main thread handle your source (be the producer) and spawn off one thread for each connection (the consumers).
The threads, in parallel, each pull a chunk of the source and send it over their connection, one chunk at a time.
The thread holding the fastest connection is expected to send the most chunks in this setup.
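A rough sketch of that arrangement, assuming POSIX threads and already-connected sockets (connection setup is omitted, and the one-slot hand-off queue is deliberately simplistic):

#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define CHUNK 4096

/* One-slot hand-off "queue" shared by the producer and all consumers. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static char   slot[CHUNK];
static size_t slot_len = 0;         /* 0 => slot is free            */
static int    finished = 0;         /* producer sets this when done */

/* Consumer: one thread per connected socket.  Whichever thread comes
 * back from send() first grabs the next chunk, so the fastest link
 * naturally ends up carrying the most data. */
static void *consumer(void *arg)
{
    int fd = *(int *)arg;
    char local[CHUNK];
    size_t len;

    for (;;) {
        pthread_mutex_lock(&lock);
        while (slot_len == 0 && !finished)
            pthread_cond_wait(&cond, &lock);
        if (slot_len == 0 && finished) {            /* no more data */
            pthread_mutex_unlock(&lock);
            break;
        }
        len = slot_len;
        memcpy(local, slot, len);
        slot_len = 0;                               /* free the slot     */
        pthread_cond_broadcast(&cond);              /* wake the producer */
        pthread_mutex_unlock(&lock);

        send(fd, local, len, 0);                    /* blocking send */
    }
    return NULL;
}

/* Producer: read the source and hand chunks off one at a time. */
static void produce(int src_fd)
{
    char buf[CHUNK];
    ssize_t n;

    while ((n = read(src_fd, buf, sizeof buf)) > 0) {
        pthread_mutex_lock(&lock);
        while (slot_len != 0)                       /* wait for a free slot */
            pthread_cond_wait(&cond, &lock);
        memcpy(slot, buf, (size_t)n);
        slot_len = (size_t)n;
        pthread_cond_broadcast(&cond);              /* wake a consumer */
        pthread_mutex_unlock(&lock);
    }

    pthread_mutex_lock(&lock);
    finished = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    /* conn_fd[] stands in for the already-connected sockets, one per
     * interface, and src_fd for the data source (stdin here). */
    int conn_fd[2] = { -1, -1 };
    int src_fd = 0;
    pthread_t tid[2];

    for (int i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, consumer, &conn_fd[i]);

    produce(src_fd);

    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

A real implementation would use a deeper bounded queue so the producer is not stalled by the slowest moment of the fastest link, but the scheduling effect is the same: faster connections come back for chunks more often.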
Using poll/epoll/select for writing is rather tricky. The reason is that sockets are mostly ready for writing unless their socket send buffer is full. So, polling for 'writable' is apt to just spin without ever waiting.
You need to proceed as follows:
1) When you have something to write to a socket, write it, in a loop that terminates when all the data has been written or write() returns -1 with errno == EAGAIN/EWOULDBLOCK.
2) At that point you have a full socket send buffer. So, you need to register this socket with the selector/poll/epoll for writability.
3) When you have nothing else to do, select/poll/epoll and repeat the writes that caused the associated sockets to be polled for writability.
4) Do those writes the same way as at (1) but this time, if the write completes, deregister the socket for writability.
In other words you must only select/poll for writeability if you already know the socket's send buffer is full, and you must stop doing so immediately you know it isn't.
How you fit all this into your application is another question.
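To make the procedure concrete, a minimal sketch using Linux epoll and a non-blocking socket (error handling trimmed; the fd is assumed to have been added to the epoll instance already, and a real application would OR EPOLLIN into the event mask if it also reads):

#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Step 1: try to push 'len' bytes out of a non-blocking socket.  Returns
 * the number of bytes actually written; if that is less than 'len', the
 * socket send buffer is full and the caller should stash the remainder
 * and register interest in writability (step 2). */
static ssize_t write_some(int fd, const char *buf, size_t len)
{
    size_t off = 0;
    while (off < len) {
        ssize_t n = write(fd, buf + off, len - off);
        if (n > 0) {
            off += (size_t)n;
        } else if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            break;                          /* send buffer full: stop here */
        } else {
            return -1;                      /* real error */
        }
    }
    return (ssize_t)off;
}

/* Steps 2 and 4: turn EPOLLOUT interest on when a write came up short,
 * and back off as soon as the pending data has been flushed. */
static void want_writable(int epfd, int fd, int on)
{
    struct epoll_event ev;
    ev.events  = on ? EPOLLOUT : 0;         /* OR in EPOLLIN etc. as needed */
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}

The event loop (step 3) then calls epoll_wait(), and for every fd reported writable it retries write_some() on the stashed remainder, calling want_writable(epfd, fd, 0) once everything has gone out.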
