ANSWER
https://stackoverflow.com/a/12507520/962890
It was so trivial... args! But lots of good information received. Thanks to everyone.
EDIT
link to github: https://github.com/MarkusPfundstein/stream_lame_testing
ORIGINAL POST
I have some questions regarding IPC through pipes. My goal is to receive MP3 data over a TCP/IP stream, pipe it through LAME to decode it to WAV, do some math, and store it on disk (as a WAV). I am using non-blocking I/O for the whole thing.
What irritates me a bit is that the TCP/IP read is much faster than the pipeline through LAME. When I send a ~3 MB MP3, the file gets read on the client side in a couple of seconds. In the beginning I can also write to the stdin of the LAME process, then it stops accepting writes; it reads the rest of the MP3, and once that's finished I can write to LAME again. 4096 bytes take approx. 1 second (to write to and read back from LAME). This is pretty slow, because I want to decode my WAV at a minimum of 128 kbps.
The OS is a Debian 2.6 kernel on this micro computer:
https://www.olimex.com/dev/imx233-olinuxino-maxi.html
65 MB RAM
400 MHz
ulimit -n | grep pipe returns 512 x 8, meaning 4096, which is OK. It's a 32-bit system.
The weird thing is that
my_process | lame --decode --mp3input - output.wav
goes very fast.
Here is my fork_lame code (which essentially connects stdout of my process to stdin of LAME and vice versa):
static char * const k_lame_args[] = {
    "--decode",
    "--mp3input",
    "-",
    "-",
    NULL
};

static int
fork_lame()
{
    int outfd[2];
    int infd[2];
    int npid;

    pipe(outfd); /* Where the parent is going to write to */
    pipe(infd);  /* From where parent is going to read */

    npid = fork();
    if (npid == 0) {
        close(STDOUT_FILENO);
        close(STDIN_FILENO);

        dup2(outfd[0], STDIN_FILENO);
        dup2(infd[1], STDOUT_FILENO);

        close(outfd[0]); /* Not required for the child */
        close(outfd[1]);
        close(infd[0]);
        close(infd[1]);

        if (execv("/usr/local/bin/lame", k_lame_args) == -1) {
            perror("execv");
            return 1;
        }
    } else {
        s_lame_pid = npid;

        close(outfd[0]); /* These are being used by the child */
        close(infd[1]);

        s_lame_fds[WRITE] = outfd[1];
        s_lame_fds[READ] = infd[0];
    }

    return 0;
}
These are the read and write functions. Please note that in write_lame_in, when I write to stderr instead of s_lame_fds[WRITE], the output appears nearly immediately, so it's definitely the pipe through LAME. But why?
static int
read_lame_out()
{
    char buffer[READ_SIZE];
    memset(buffer, 0, sizeof(buffer));
    int i;
    int br = read(s_lame_fds[READ], buffer, sizeof(buffer) - 1);
    fprintf(stderr, "read %d bytes from lame out\n", br);
    return br;
}
static int
write_lame_in()
{
    int bytes_written;
    //bytes_written = write(2, s_data_buf, s_data_len);
    bytes_written = write(s_lame_fds[WRITE], s_data_buf, s_data_len);
    if (bytes_written > 0) {
        //fprintf(stderr, "%d bytes written\n", bytes_written);
        s_data_len -= bytes_written;
        fprintf(stderr, "data_len write: %d\n", s_data_len);
        memmove(s_data_buf, s_data_buf + bytes_written, s_data_len);
        if (s_data_len == 0) {
            fprintf(stderr, "finished\n");
        }
    }
    return bytes_written;
}
static int
read_tcp_socket(struct connection_s *connection)
{
    char buffer[READ_SIZE];
    int bytes_read;

    bytes_read = connection_read(connection, buffer, sizeof(buffer) - 1);
    if (bytes_read > 0) {
        //fprintf(stderr, "read %d bytes\n", bytes_read);
        if (s_data_len + bytes_read > sizeof(s_data_buf)) {
            fprintf(stderr, "BUFFER OVERFLOW\n");
            return -1;
        } else {
            memcpy(s_data_buf + s_data_len,
                   buffer,
                   bytes_read);
            s_data_len += bytes_read;
        }
        fprintf(stderr, "data_len: %d\n", s_data_len);
    }
    return bytes_read;
}
The select stuff is pretty basic select logic. All descriptors are non-blocking, of course.
Anyone any idea? I'd really appreciate any help ;-)
Oops! Did you check your LAME output?
Looking at your code, in particular
static char * const k_lame_args[] = {
    "--decode",
    "--mp3input",
    "-",
    "-",
    NULL
};
and
if (execv("/usr/local/bin/lame", k_lame_args) == -1) {
means you are accidentally dropping the --decode flag, as it will become argv[0] for LAME instead of the first argument (argv[1]). You should use
static char * const k_lame_args[] = {
    /* argv[0] */ "lame",
    /* argv[1] */ "--decode",
    /* argv[2] */ "--mp3input",
    /* argv[3] */ "-",
    /* argv[4] */ "-",
    NULL
};
instead.
I think you are seeing a slowdown because you're accidentally recompressing the MP3 audio. (I noticed this just a minute ago, so haven't checked if LAME does that if you omit the --decode flag, but I believe it does.)
It is possible there is some sort of a blocking issue wrt. nonblocking pipes (not really being nonblocking), causing your end to block until LAME consumes the data.
Could you try an alternative approach? Use normal, blocking pipes, and a separate thread (using pthreads), which has the singular purpose of writing data from a circular buffer to LAME. Your main thread then keeps filling the circular buffer from your TCP/IP connection, and can easily also track and report buffer levels -- very useful during development and debugging. I've had much better success with blocking pipes and threads than nonblocking pipes, in general.
In Linux, threads really do not have that much overhead, so you should be comfortable using them even on embedded architectures. The only trick you must master is specifying a sensible stack size for the worker thread -- in this case 16384 bytes is quite likely enough -- because only the initial stack given to the process will grow automatically, and thread stacks are fixed and by default quite large.
Do you need example code?
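In case it helps, here is a rough sketch of what I mean. All names (ring, ring_put, lame_writer) and sizes are made up for illustration, error handling and shutdown are omitted, and it is untested; it only shows the shape of the blocking-pipe-plus-writer-thread idea:

#include <pthread.h>
#include <unistd.h>

#define RING_SIZE 65536

static struct {
    char            data[RING_SIZE];
    size_t          head, tail, fill;   /* fill = number of bytes queued */
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
} ring = { .lock = PTHREAD_MUTEX_INITIALIZER,
           .not_empty = PTHREAD_COND_INITIALIZER };

/* Called from the main thread with data just read from the TCP socket.
   Bytes that do not fit are simply dropped in this sketch. */
static void ring_put(const char *buf, size_t len)
{
    pthread_mutex_lock(&ring.lock);
    for (size_t i = 0; i < len && ring.fill < RING_SIZE; i++) {
        ring.data[ring.head] = buf[i];
        ring.head = (ring.head + 1) % RING_SIZE;
        ring.fill++;
    }
    pthread_cond_signal(&ring.not_empty);
    pthread_mutex_unlock(&ring.lock);
}

/* Worker thread: drains the ring into LAME's stdin with blocking writes. */
static void *lame_writer(void *arg)
{
    int fd = *(int *)arg;            /* blocking pipe to LAME's stdin */
    char chunk[4096];

    for (;;) {
        pthread_mutex_lock(&ring.lock);
        while (ring.fill == 0)
            pthread_cond_wait(&ring.not_empty, &ring.lock);

        size_t n = 0;
        while (n < sizeof(chunk) && ring.fill > 0) {
            chunk[n++] = ring.data[ring.tail];
            ring.tail = (ring.tail + 1) % RING_SIZE;
            ring.fill--;
        }
        pthread_mutex_unlock(&ring.lock);

        write(fd, chunk, n);         /* blocks until LAME has taken it */
    }
    return NULL;
}

The main thread would then call ring_put from read_tcp_socket instead of appending to s_data_buf, and the worker would be started with pthread_attr_setstacksize(&attr, 16384) before pthread_create(&tid, &attr, lame_writer, &s_lame_fds[WRITE]), so the fixed thread stack stays small.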
Edited to add:
Your program receives data from the TCP/IP connection at a probably steady rate. However, LAME consumes the data in largeish chunks. In other words, the situation is like a car being towed, with the towing car jerking forward and stopping, and the towed car jerking into it every time: both your process and LAME spend most of their time waiting for the other to receive/send more data.
First, those two close calls are not required (in fact, you shouldn't do that), because the two dup2 which follow will do it automatically:
close(STDOUT_FILENO);
close(STDIN_FILENO);
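So the child-side setup from the question could be reduced to just this (a sketch based on the fork_lame code above; npid, outfd, infd and k_lame_args are the question's variables, and _exit is used instead of return so the child never falls back into the parent's code):

if (npid == 0) {
    dup2(outfd[0], STDIN_FILENO);   /* read end of parent->child pipe becomes stdin */
    dup2(infd[1], STDOUT_FILENO);   /* write end of child->parent pipe becomes stdout */

    close(outfd[0]);                /* the originals are no longer needed */
    close(outfd[1]);
    close(infd[0]);
    close(infd[1]);

    execv("/usr/local/bin/lame", k_lame_args);
    perror("execv");                /* only reached if execv failed */
    _exit(1);
}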
Related
I have a problem using pipes under Linux. I would like to fill a pipe so that further write calls block. Another process should then be able to read some characters from the pipe, which should allow the writing process to write again.
The example code:
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <stdlib.h>

int main()
{
    int pipefd[2];
    int size = 65535;
    int total = 0;

    // Create the pipe
    if(pipe(pipefd) == -1)
    {
        perror("pipe()");
        exit(EXIT_FAILURE);
    }

    // Fill in (almost full = 65535 (full - 1 byte))
    while(total < size)
    {
        write(pipefd[1], &total, 1);
        total++;
    }

    // Fork
    switch(fork())
    {
    case -1:
        perror("fork()");
        exit(EXIT_FAILURE);

    case 0:
        // Close unused read side
        close(pipefd[0]);
        while(1)
        {
            // Write only one byte, value not important (here -> total)
            int ret = write(pipefd[1], &total, 1);
            printf("Write %d bytes\n", ret);
        }

    default:
        // Close unused write side
        close(pipefd[1]);
        while(1)
        {
            int nbread;
            scanf("%4i", &nbread);
            char buf[65535];
            // Read number of bytes asked for
            int ret = read(pipefd[0], buf, nbread);
            printf("Read %d bytes\n", nbread);
        }
    }
    return 0;
}
I don't understand the behavior below. The process writes one last time because I didn't fill the pipe completely, which is normal. But afterwards the write blocks (pipe full), and any read should unblock the waiting write call.
test@pc:~$ ./pipe
Write 1 bytes
4095
Read 4095 bytes
1
Read 1 bytes
Write 1 bytes
Write 1 bytes
Write 1 bytes
Write 1 bytes
Write 1 bytes
Write 1 bytes
...
Instead, the write call is unblocked only after having read 4096 bytes... WHY????
Normally, after a successful read of X bytes, there should be X bytes of space available in the pipe, and so the write should be able to write up to X bytes, no?
How can I have the behavior "read 1 byte, write 1 byte, etc" instead of "read 1 byte, read 1, read 10, read 2000, ...(until 4096 byte read), write 4096" ?
Why it doesn't work the way you think
So basically, what I understand is that your pipe is associated with some kind of linked list of kernel buffers. Processes waiting to write to your pipe are woken up only when one of these buffers is emptied. It happens that in your case these buffers are 4K in size.
See: http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/fs/pipe.c?id=HEAD
Specifically line 281, where the test on the buffer size is done, and line 287, where the decision to wake up other processes is made.
The size of the pipe buffer is indeed dependent on the memory page size; see man fcntl:
F_SETPIPE_SZ (int; since Linux 2.6.35)
Change the capacity of the pipe referred to by fd to be at least arg bytes.
An unprivileged process can adjust the pipe capacity to any value between
the system page size and the limit defined in /proc/sys/fs/pipe-max-size
(see proc(5)). Attempts to set the pipe capacity below the page size are
silently rounded up to the page size. Attempts by an unprivileged process to
set the pipe capacity above the limit in /proc/sys/fs/pipe-max-size yield
the error EPERM; a privileged process (CAP_SYS_RESOURCE) can override the
limit. When allocating the buffer for the pipe, the kernel may use a
capacity larger than arg, if that is convenient for the implementation. The
F_GETPIPE_SZ operation returns the actual size used. Attempting to set the
pipe capacity smaller than the amount of buffer space currently used to
store data produces the error EBUSY.
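For illustration, querying and changing the pipe capacity looks roughly like this (a small sketch assuming a Linux >= 2.6.35 kernel; error handling kept minimal):

#define _GNU_SOURCE          /* F_SETPIPE_SZ / F_GETPIPE_SZ are Linux-specific */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe()");
        return 1;
    }

    int cap = fcntl(pipefd[1], F_GETPIPE_SZ);
    printf("default capacity: %d bytes\n", cap);        /* typically 65536 */

    if (fcntl(pipefd[1], F_SETPIPE_SZ, 1048576) == -1)  /* ask for 1 MiB */
        perror("F_SETPIPE_SZ");

    printf("new capacity: %d bytes\n", fcntl(pipefd[1], F_GETPIPE_SZ));
    return 0;
}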
How to make it work
The pattern you are trying to achieve is classical, but it is usually used the other way around. People start with an empty pipe. The process waiting for an event reads from the empty pipe (and blocks). The process wanting to signal the event writes a single byte to the pipe.
I think I have seen that in Boost.Asio, but I'm too lazy to find the correct reference.
A pipe uses 4 kB pages for its buffer, and a write blocks until there is an empty page to write into, then does not block again until the buffer is full. This is well described in fjardon's answer. If you would like to use the pipe for signalling, you are looking for the opposite scenario.
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <stdlib.h>

int main()
{
    int pipefd[2];

    // Create the pipe
    if(pipe(pipefd) == -1)
    {
        perror("pipe()");
        exit(EXIT_FAILURE);
    }

    // Fork
    switch(fork())
    {
    case -1:
        perror("fork()");
        exit(EXIT_FAILURE);

    case 0:
        // Close unused write side
        close(pipefd[1]);
        while(1)
        {
            char c;
            // Read only one byte
            int ret = read(pipefd[0], &c, 1);
            printf("Woke up (read %d byte)\n", ret);
            fflush(stdout);
        }

    default:
        // Close unused read side
        close(pipefd[0]);
        size_t len = 0;
        char *str = NULL;
        while(1)
        {
            int nbread;
            char buf[65535];
            while (getline(&str, &len, stdin)) {
                if (sscanf(str, "%i", &nbread)) break;
            }
            // Write the number of bytes asked for
            int ret = write(pipefd[1], buf, nbread);
            printf("Written %d bytes\n", ret);
            fflush(stdout);
        }
    }
    return 0;
}
There are many files generated on a network shared file system (NFS).
There is a similar question without proper solution: inotify with NFS.
I use select() to test whether a file has new data that can be read.
(In fact, some of the descriptors come from sockets; this is just simplified here.)
But I found that even when a file is at end of file, select() still reports it as ready to read.
Could you suggest a better way to write this code?
fd_set rfds;
struct timeval tv;
int retval;
int i, n, f1, f2, maxfd;
char buf[512];

f1 = fileno(fopen("f1", "rb"));
f2 = fileno(fopen("f2", "rb"));
maxfd = (f1 > f2) ? f1 : f2;

for (i = 0; i < 3; i++) {
    FD_ZERO(&rfds);
    FD_SET(f1, &rfds);
    FD_SET(f2, &rfds);
    tv.tv_sec = 5;
    tv.tv_usec = 0;

    retval = select(maxfd + 1, &rfds, NULL, NULL, &tv);
    if (retval == -1)
        perror("select()");
    else if (retval) {
        printf("Data is available now.\n");
        if (FD_ISSET(f1, &rfds)) {
            n = read(f1, buf, sizeof(buf));
            printf("f1 is ready:%d read %d bytes\n", i, n);
        }
        if (FD_ISSET(f2, &rfds)) {
            n = read(f2, buf, sizeof(buf));
            printf("f2 is ready:%d read %d bytes\n", i, n);
        }
    } else
        printf("No data within five seconds.\n");
}
The output will look like the following if f1 and f2 each contain 3 bytes.
Data is available now.
f1 is ready:0 read 3 bytes
f2 is ready:0 read 3 bytes
Data is available now.
f1 is ready:1 read 0 bytes <- I wish won't enter here
f2 is ready:1 read 0 bytes <- I wish won't enter here
Data is available now.
f1 is ready:2 read 0 bytes <- I wish won't enter here
f2 is ready:2 read 0 bytes <- I wish won't enter here
NFS doesn't have a way to notify clients when files change, so you're unfortunately out of luck. You'll need to poll.
In Unix, regular files are always considered "fast devices", so they cannot be polled. That is, as you have found out, they always return "ready" if you try to select() or poll() on them. IIRC the Linux-specific epoll returns an error outright if you try to poll on a regular fd.
If you want to integrate something like this into your event loop, you'll have to apply some duct tape. E.g. have a separate thread which at suitable intervals tries to read()/fstat()/stat() the file/fd; if it detects that new data is available, it sends a message through a pipe. In the main event loop you can then poll the pipe.
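A rough sketch of that duct tape (the names notify_pipe and nfs_poller are made up, the poll interval is arbitrary, and error handling is omitted):

#include <pthread.h>
#include <sys/stat.h>
#include <unistd.h>

static int notify_pipe[2];          /* the main loop select()s on notify_pipe[0] */

static void *nfs_poller(void *arg)
{
    const char *path = arg;
    off_t last_size = 0;
    struct stat st;

    for (;;) {
        if (stat(path, &st) == 0 && st.st_size > last_size) {
            last_size = st.st_size;
            char c = 1;
            write(notify_pipe[1], &c, 1);   /* wake up the event loop */
        }
        sleep(1);                           /* poll interval */
    }
    return NULL;
}

Usage would be along the lines of pipe(notify_pipe); pthread_create(&tid, NULL, nfs_poller, "f1"); and then adding notify_pipe[0] to the select() set instead of the file descriptor itself.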
I'm trying to pass structs between processes using named pipes. I got stuck trying to open the pipe in non-blocking mode. Here's my code for writing to the FIFO:
void writeUpdate() {
    // Create fifo for writing updates:
    strcpy(fifo_write, routing_table->routerName);

    // Check if fifo exists:
    if(access(fifo_write, F_OK) == -1 )
        fd_write = mkfifo(fifo_write, 0777);
    else if(access(fifo_write, F_OK) == 0) {
        printf("writeUpdate: FIFO %s already exists\n", fifo_write);
        //fd_write = open(fifo_write, O_WRONLY|O_NONBLOCK);
    }

    fd_write = open(fifo_write, O_WRONLY|O_NONBLOCK);
    if(fd_write < 0)
        perror("Create fifo error");
    else {
        int num_bytes = write(fd_write, routing_table, sizeof(routing_table));
        if(num_bytes == 0)
            printf("Nothing was written to FIFO %s\n", fifo_write);
        printf("Wrote %d bytes. Sizeof struct: %d\n", num_bytes, sizeof(routing_table)+1);
    }
    close(fd_write);
}
routing_table is a pointer to my struct; it's allocated, so there's no problem with the name of the FIFO or anything like that.
If I open the FIFO without the O_NONBLOCK option, it writes something the first time, but then it blocks, because I'm having trouble reading the struct too. And after the first time, the initial FIFO is created, but other FIFOs appear, named '.' and '..'.
With the O_NONBLOCK option set, it creates the FIFO but always throws an error: 'No such device or address'. Any idea why this happens? Thanks.
EDIT: OK, so I'm clear now about opening the FIFO, but I have another problem; in fact, reading/writing the struct to the FIFO was my issue to start with. My code to read the struct:
void readUpdate() {
    struct rttable *updateData;
    allocate();

    strcpy(fifo_read, routing_table->table[0].router);

    // Check if fifo exists:
    if(access(fifo_read, F_OK) == -1 )
        fd_read = mkfifo(fifo_read, 777);
    else if(access(fifo_read, F_OK) == 0) {
        printf("ReadUpdate: FIFO %s already exists\n Reading from %s\n", fifo_read, fifo_read);
    }

    fd_read = open(fifo_read, O_RDONLY|O_NONBLOCK);
    int num_bytes = read(fd_read, updateData, sizeof(updateData));
    close(fd_read);

    if(num_bytes > 0) {
        if(updateData == NULL)
            printf("Read data is null: yes");
        else
            printf("Read from fifo: %s %d\n", updateData->routerName, num_bytes);

        int result = unlink(fifo_read);
        if(result < 0)
            perror("Unlink fifo error\n");
        else {
            printf("Unlinking successful for fifo %s\n", fifo_read);
            printf("Updating table..\n");
            //update(updateData);
            print_table_update(updateData);
        }
    } else
        printf("Nothing was read from FIFO %s\n", fifo_read);
}
It opens the FIFO and tries to read, but it seems like nothing is in the FIFO, although in writeUpdate it says it wrote 4 bytes the first time (which seems wrong too). At reading, the first time around it prints 'a' and then num_bytes is always <= 0.
I've looked around and only found this example with a simple write/read; is there something more needed when writing a struct?
My struct looks like this:
typedef struct distance_table {
    char dest[20];   // destination network
    char router[20]; // via router..
    int distance;
} distance_table;

typedef struct rttable {
    char routerName[10];
    char networkName[20];
    struct distance_table table[50];
    int nrRouters;
} rttable;

struct rttable *routing_table;
"No such device or address" is the ENXIO error message. If you look at the open man page, you'll see that this error is reported in particular if:
O_NONBLOCK | O_WRONLY is set, the named file is a FIFO and no process
has the file open for reading. (...)
which is exactly your situation. So the behavior you are seeing is normal: you can't write (without blocking) to a pipe that has no readers. The kernel won't buffer your messages if nothing is connected to the pipe for reading.
So make sure you start the "consumer(s)" before your "producer", or remove the non-blocking option on the producer.
BTW: using access is, in most circumstances, opening yourself up to time-of-check-to-time-of-use issues. Don't use it. Just try the mkfifo: if it works, you're good. If it fails with EEXIST, you're good too. If it fails otherwise, clean up and bail out.
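In code, that advice boils down to something like this sketch (the helper name open_fifo_for_write is made up; fifo_write and the blocking behavior are as discussed above):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

/* Create (if needed) and open a FIFO for writing, without the access() race. */
static int open_fifo_for_write(const char *path)
{
    if (mkfifo(path, 0777) == -1 && errno != EEXIST) {
        perror("mkfifo");            /* real failure: clean up and bail out */
        return -1;
    }
    /* Without O_NONBLOCK this blocks until a reader has opened the FIFO,
       which avoids the ENXIO described above. */
    return open(path, O_WRONLY);
}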
For the second part of your question, it really depends completely on how exactly the data you are trying to send is structured. Serializing a random struct in C is not easy at all, especially if it contains variable data (like char *s for example).
If your struct contains only primitive types (and no pointers), and both sides are on the same machine (and compiled with the same compiler), then a raw write on one side and read on the other of the whole struct should work.
You can look at C - Serialization techniques for more complex data types for example.
Concerning your specific example: you're getting mixed up between pointers to your structs and plain structs.
On the write side you have:
int num_bytes = write(fd_write, routing_table, sizeof(routing_table));
This is incorrect since routing_table is a pointer. You need:
int num_bytes = write(fd_write, routing_table, sizeof(*routing_table));
// or sizeof(struct rttable)
Same thing on the read side. On the receiving side you're also not allocating updateData as far as I can tell. You need to do that too (with malloc, and remember to free it).
struct rttable *updateData = malloc(sizeof(struct rttable));
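Putting those corrections together, the read side of readUpdate would look roughly like this (a sketch only; fd_read, fifo_read and the struct are as in the question, and error handling stays minimal):

struct rttable *updateData = malloc(sizeof(struct rttable));
if (updateData == NULL) {
    perror("malloc");
    return;
}

int num_bytes = read(fd_read, updateData, sizeof(struct rttable));
if (num_bytes == sizeof(struct rttable)) {
    printf("Read from fifo: %s (%d bytes)\n", updateData->routerName, num_bytes);
    /* ... use the received table ... */
} else {
    printf("Short or failed read from FIFO %s (%d bytes)\n", fifo_read, num_bytes);
}

free(updateData);

Note that with a non-blocking descriptor a single read() may still return fewer bytes than the full struct, so a production version would loop until sizeof(struct rttable) bytes have arrived.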
I am creating a serial port application in which I am creating two threads: a WRITER THREAD which will write data to the serial port, and a READER THREAD which will read data from the serial port. I know how to open, configure, read and write data on the serial port, but not how to do it using threads.
I am using Linux (Ubuntu) and trying to open the ttyS0 port, programming in C.
The way I have done this in the past is to set up the port for asynchronous I/O using a VMIN of 0 and a VTIME of, say, 5 deciseconds. The purpose of this was to allow the thread to notice when it was time for the application to shut down: it could try to read, time out, check a quit flag, and then try to read some more.
Here is an example read function:
ssize_t myread(char *buf, size_t len) {
    size_t total = 0;

    while (len > 0) {
        ssize_t bytes = read(fd, buf, len);
        if (bytes == -1) {
            if (errno != EAGAIN && errno != EINTR) {
                // A real error, not something that trying again will fix
                if (total > 0) {
                    return total;
                }
                else {
                    return -1;
                }
            }
        }
        else if (bytes == 0) {
            // EOF
            return total;
        }
        else {
            total += bytes;
            buf += bytes;
            len -= bytes;
        }
    }
    return total;
}
The write function would look as you would expect.
In your setup function, make sure to set:
struct termios tios;
...
tios.c_lflag &= ~ICANON;
tios.c_cc[VMIN] = 0;
tios.c_cc[VTIME] = 5; // You may want to tweak this; 5 = 1/2 second, 10 = 1 second, ...
...
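For context, a fuller sketch of opening and configuring the port in this style might look like the following (the helper name, device path and baud rate are assumptions; cfmakeraw is used for raw mode, and error handling is minimal):

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int open_serial(const char *dev)      /* e.g. "/dev/ttyS0" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return -1;
    }

    struct termios tios;
    tcgetattr(fd, &tios);
    cfmakeraw(&tios);                 /* raw mode: clears ICANON, ECHO, etc. */
    cfsetispeed(&tios, B115200);      /* assumed baud rate */
    cfsetospeed(&tios, B115200);
    tios.c_cc[VMIN]  = 0;             /* return even with no data ... */
    tios.c_cc[VTIME] = 5;             /* ... after at most 1/2 second */
    tcsetattr(fd, TCSANOW, &tios);

    return fd;
}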
Using a serial port from two threads is simple, as long as one thread only reads and the other thread only writes.
You should use one file descriptor for the serial port.
Open and initialize it in one thread by using the normal open, tcsetattr, etc. functions.
Then deliver the file descriptor to the other thread(s).
Now the reader thread can use read() function, and the writer can use write() function without any extra synchronization. You can also use select() in both threads.
Closing the file descriptor needs attention; you should do it in one thread only, to avoid problems.
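A minimal sketch of that arrangement (thread function names and the data handling are made up for illustration; the port is assumed to be opened and configured beforehand, and error handling and shutdown are omitted):

#include <pthread.h>
#include <string.h>
#include <unistd.h>

static int serial_fd;                     /* opened and configured before the threads start */

static void *reader_thread(void *arg)
{
    char buf[256];
    for (;;) {
        ssize_t n = read(serial_fd, buf, sizeof(buf));   /* times out via VMIN/VTIME */
        if (n > 0) {
            /* hand the data to the rest of the application */
        }
        /* n == 0 is just a timeout here; a real program would check a quit flag */
    }
    return NULL;
}

static void *writer_thread(void *arg)
{
    const char *msg = "hello\r\n";
    for (;;) {
        write(serial_fd, msg, strlen(msg));
        sleep(1);
    }
    return NULL;
}

int main(void)
{
    serial_fd = open_serial("/dev/ttyS0");   /* assumed helper that opens and configures the port */

    pthread_t rt, wt;
    pthread_create(&rt, NULL, reader_thread, NULL);
    pthread_create(&wt, NULL, writer_thread, NULL);
    pthread_join(rt, NULL);
    pthread_join(wt, NULL);
    return 0;
}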
I've been writing a little program for fun that transfers files over TCP in C on Linux. The program reads a file from a socket and writes it to file (or vice versa). I originally used read/write and the program worked correctly, but then I learned about splice and wanted to give it a try.
The code I wrote with splice works perfectly when reading from stdin (a redirected file) and writing to the TCP socket, but fails immediately, with splice setting errno to EINVAL, when reading from the socket and writing to stdout. The man page states that EINVAL is set when neither descriptor is a pipe (not the case here), an offset is passed for a stream that can't seek (no offsets are passed), or the filesystem doesn't support splicing, which leads me to my question: does this mean that TCP can splice from a pipe, but not to one?
I'm including the code below (minus error handling code) in the hopes that I've just done something wrong. It's based heavily on the Wikipedia example for splice.
static void splice_all(int from, int to, long long bytes)
{
    long long bytes_remaining;
    long result;

    bytes_remaining = bytes;
    while (bytes_remaining > 0) {
        result = splice(
            from, NULL,
            to, NULL,
            bytes_remaining,
            SPLICE_F_MOVE | SPLICE_F_MORE
        );
        if (result == -1)
            die("splice_all: splice");
        bytes_remaining -= result;
    }
}

static void transfer(int from, int to, long long bytes)
{
    int result;
    int pipes[2];

    result = pipe(pipes);
    if (result == -1)
        die("transfer: pipe");

    splice_all(from, pipes[1], bytes);
    splice_all(pipes[0], to, bytes);

    close(from);
    close(pipes[1]);
    close(pipes[0]);
    close(to);
}
On a side note, I think that the above will block in the first splice_all when the file is large enough, due to the pipe filling up(?), so I also have a version of the code that forks to read from and write to the pipe at the same time, but it has the same error as this version and is harder to read.
EDIT: My kernel version is 2.6.22.18-co-0.7.3 (running coLinux on XP.)
What kernel version is this? Linux has had support for splicing from a TCP socket since 2.6.25 (commit 9c55e01c0), so if you're using an earlier version, you're out of luck.
You need to splice_all from pipes[0] to to every time you do a single splice from from to pipes[1] (the inner splice_all moves the number of bytes just read by that single splice). The reason: the pipe represents a finite kernel memory buffer, so if bytes is more than that, you'll block forever in your splice_all(from, pipes[1], bytes).
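A sketch of what that interleaving could look like (based on the transfer/splice_all code above; die and the flags are as in the question, and this is untested):

static void transfer(int from, int to, long long bytes)
{
    int pipes[2];
    long long remaining = bytes;

    if (pipe(pipes) == -1)
        die("transfer: pipe");

    while (remaining > 0) {
        /* Move one chunk from the source into the pipe... */
        long n = splice(from, NULL, pipes[1], NULL,
                        remaining, SPLICE_F_MOVE | SPLICE_F_MORE);
        if (n == -1)
            die("transfer: splice in");
        if (n == 0)
            break;                    /* premature EOF on the source */

        /* ...and immediately drain exactly that many bytes to the sink,
           so the pipe never fills up and blocks us. */
        splice_all(pipes[0], to, n);
        remaining -= n;
    }

    close(pipes[0]);
    close(pipes[1]);
    close(from);
    close(to);
}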