Semantics of ALSA PCM calls - c

Hi, I am writing a program that has to capture from three input devices at the same time (in this case, three identical USB webcams).
First of all, ALSA is not based on the familiar UNIX paradigm "everything is a file", so I cannot use the regular poll(3) call; knowing that the data stream should be steady across all devices, for now I do something like:
while (!stop)
    for (i = 0; i < input_device_count; i++)
    {
        snd_pcm_readi(handle[i], buffer, frames);
        write(fd_out[i], buffer, size);
    }
This code iterates over each device, reads from it, and writes the result to a file that was opened previously. It works, but I suspect there is a better way to do this, perhaps using mmap so that I do not have to copy data from kernel space to user space and back to kernel space.
For a moment, let's assume the three inputs stay in sync; the above code still does not guarantee that I will begin to record from the three devices at the same time. Is there a way to guarantee that? In fact, what are the semantics of calls like snd_pcm_prepare() and snd_pcm_start()? Right now I am not using those; I just go straight to snd_pcm_readi().
I have tried to search for code examples but I haven't found anything that has to do with multiple captures at the same time. Any hint would be appreciated!

ALSA is based on the familiar UNIX paradigm "everything is a file"; so to handle multiple devices, you should use poll(3).
There are ALSA plugins that are implemented on top of multiple files, so there might be more than one handle per PCM device.
Call snd_pcm_poll_descriptors_count for each device to know how many pollfd structures you need, then call snd_pcm_poll_descriptors to get the file handles and the respective event bits.
In your loop, after calling poll, you must not read the values in the pollfd structures directly but call snd_pcm_poll_descriptors_revents to translate them back.
To ensure that multiple devices start at the same time, call snd_pcm_link.
However, this does not guarantee that the devices will run at the same speed.
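To make the sequence concrete, here is a rough, untested sketch of how these calls could fit together (error handling and the write-to-file step are omitted; handle, buffer and frames are assumed to exist as in the question):

/* Link the devices, collect their poll descriptors, wait with poll(),
 * and translate the results with snd_pcm_poll_descriptors_revents(). */
#include <alsa/asoundlib.h>
#include <poll.h>
#include <stdlib.h>

void capture(snd_pcm_t **handle, int ndev, void *buffer, snd_pcm_uframes_t frames)
{
    int counts[ndev], total = 0, offset, i;

    /* Link every device to the first so they start and stop together. */
    for (i = 1; i < ndev; i++)
        snd_pcm_link(handle[0], handle[i]);

    for (i = 0; i < ndev; i++) {
        counts[i] = snd_pcm_poll_descriptors_count(handle[i]);
        total += counts[i];
    }

    struct pollfd *pfds = calloc(total, sizeof *pfds);
    for (i = 0, offset = 0; i < ndev; i++) {
        snd_pcm_poll_descriptors(handle[i], pfds + offset, counts[i]);
        offset += counts[i];
    }

    for (i = 0; i < ndev; i++)
        snd_pcm_prepare(handle[i]);
    snd_pcm_start(handle[0]);        /* the start propagates to the linked group */

    for (;;) {
        poll(pfds, total, -1);
        for (i = 0, offset = 0; i < ndev; i++) {
            unsigned short revents;
            snd_pcm_poll_descriptors_revents(handle[i], pfds + offset,
                                             counts[i], &revents);
            if (revents & POLLIN)
                snd_pcm_readi(handle[i], buffer, frames);
            offset += counts[i];
        }
    }
}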

Related

Client side program won't move ahead

I'm writing a program where a client tries to download a file from the server.
The following is the code for the client side.
FILE *file_to_download;
file_to_download = fopen("CLIENT_file_downloaded", "ab");
long int total_bytes = 0;
int bytes_received = 0;
char received_buffer[2048];
printf(">>> Downloading file...\n");
while ((bytes_received = read(connector, received_buffer, sizeof received_buffer)) > 0) { // keep receiving until data stops sending
    printf("=");
    total_bytes += bytes_received;
    fwrite(received_buffer, 1, bytes_received, file_to_download);
}
if (bytes_received < 0) {
    printf("\n>>> Read error\n");
    exit(1);
}
printf("\n>>> File downloaded (Bytes received: %ld)\n\n", total_bytes);
This works perfectly when I close the socket connection immediately afterwards. However, if I leave it open for some other functionality (say, sending a message to the server), it halts at ">>> Downloading file..." even though I can see the downloaded file in the folder. Also, once I terminate the server-side program, it prints out everything else.
From other SO threads, I think this has something to do with the socket getting "blocked".
But then why is my file downloaded (I can see it in the folder)? The fwrite call responsible for this is inside the while loop, so if execution is entering the while loop, why doesn't it print anything?
How can I have the server tell the client that it's done sending it data and how can I have the client side program move forward?
You've already gotten a lot of comments, but I'll try to summarize at least a few of the bits and pieces into one place.
First of all, the problem: as you're doing things right now, there are basically three ways your call to read can return:
It returns a strictly positive value (i.e., at least 1) that tells you how many bytes you read.
It returns 0 to indicate that the socket was closed.
It returns a negative value to indicate an error.
But there's not a defined way for it to return and tell you: "the socket's still open, there was no error, but there's no data waiting to be read."
So, as others have said, if you're going to transfer a file (or some other defined chunk of data) you generally need to define some application-level protocol on top of TCP to support that. The most obvious starting point is that you send the size of the file first (typically as a single fixed-size chunk, such as 4 or 8 bytes), followed by that many bytes of data.
If you do just that, you can define something that at least can work. There are all sorts of possible errors it can miss, but at least if things all work well, it can be fine.
The next step beyond that is typically to add something like some sort of checksum/CRC so when you think a transfer is complete, you can verify the data to get at least a reasonable assurance that it worked (i.e., that the data you received matches what was sent).
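As a rough illustration of the size-first idea (an untested sketch with a 4-byte length header; a robust version would also loop on the header read and add the checksum mentioned above):

#include <stdint.h>
#include <arpa/inet.h>   /* htonl()/ntohl() */
#include <unistd.h>

/* Sender: 4-byte length in network byte order, then exactly that many bytes. */
static int send_blob(int sock, const char *data, uint32_t len)
{
    uint32_t hdr = htonl(len);
    if (write(sock, &hdr, sizeof hdr) != sizeof hdr)
        return -1;
    uint32_t sent = 0;
    while (sent < len) {
        ssize_t n = write(sock, data + sent, len - sent);
        if (n <= 0)
            return -1;
        sent += n;
    }
    return 0;
}

/* Receiver: read the header, then loop until all announced bytes have arrived. */
static ssize_t recv_blob(int sock, char *buf, uint32_t bufsize)
{
    uint32_t hdr;
    if (read(sock, &hdr, sizeof hdr) != sizeof hdr)
        return -1;
    uint32_t len = ntohl(hdr);
    if (len > bufsize)
        return -1;                  /* larger than our buffer */
    uint32_t got = 0;
    while (got < len) {
        ssize_t n = read(sock, buf + got, len - got);
        if (n <= 0)
            return -1;              /* error or premature close */
        got += n;
    }
    return (ssize_t)len;            /* done: the client can move on */
}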
Another general direction to consider is how you're doing your reading. There are a couple of choices here. One is to avoid calling read until you're sure it can/will succeed. If you're dealing only with one (or a few) sockets at a time, you can call select, which will tell you when your socket is ready to read, so issuing a read is guaranteed to succeed quickly. It might read less than you've asked for, but it will return rather than waiting indefinitely for data. If you have to deal with a lot of sockets, you might prefer to look up epoll, which does roughly the same thing but reduces overhead when you have to deal with many handles.
Another possible way to deal with this problem is to set the O_NONBLOCK option for your socket. In this case, attempting to read when no data is available will be treated as an error, so it'll return immediately with an error of EAGAIN or EWOULDBLOCK (you have to be prepared for either). This gives you a fairly easy way to at least proceed when you have no more data available, but does nothing about any of the other difficulties in transferring data effectively.
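In terms of the code in the question, that could look roughly like this (an untested sketch reusing the connector socket and receive buffer from above):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

void try_read(int connector, char *received_buffer, size_t bufsize)
{
    int flags = fcntl(connector, F_GETFL, 0);
    fcntl(connector, F_SETFL, flags | O_NONBLOCK);   /* normally done once, at setup */

    ssize_t n = read(connector, received_buffer, bufsize);
    if (n > 0) {
        /* got n bytes of data */
    } else if (n == 0) {
        /* peer closed the connection */
    } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* nothing to read right now: do other work and try again later */
    } else {
        /* a real error */
    }
}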
As others have noted, there are quite a few existing protocols for doing things like this and reinventing it may not be the best use of your time. On the other hand, some protocols can be somewhat painful (e.g., ftp's normal mode requires that you open/use two separate sockets). Others are complex enough that you probably don't want to try to implement them on your own, but libraries to support them well can be difficult to find.
Personally, I've found that websockets work pretty reasonably for quite a few tasks like this. They include framing (so what was sent as a single websocket write will be received with a single websocket read). They also use CRC to do error checking. So, for quite a few cases like this, it'll take care of most of the details more or less automatically. It also includes (and in most cases uses) what they call a ping/pong protocol to detect loss of connection much faster than TCP normally does on its own.
But as noted above, there are lots of alternatives, some of them designed much more specifically for transferring files (so what you receive isn't just the content of the file, but things like the name and other metadata attached to that content).

Handling lots of FILE pointers

TLDR
Is there a clean way to handle 1 to 65535 files throughout an entire program, without allocating global variables much of which may never be used, and without using linked lists? (mingw-w64 on Windows)
Long Story
I have a TCP server which receives data from a lot of clients (up to 65535) and saves it in a kind of database. The "database" is a directory/file structure which looks like this: data\%ADDR%\%ADDR%-%DATATYPE%-%UTCTIME%.wwss where %ADDR% is the address, %DATATYPE% is the type of data and %UTCTIME% is the UTC time in seconds when the first data packet arrived on this socket. So every time a new connection is accepted, it should create this file as specified.
How do I handle 65535 FILE handles correctly? First thought: Global variable.
FILE *PV_WWSS_FileHandles[0x10000];
//...
void tcpaccepted(uint16_t u16addr, uint16_t u16dataType, int64_t s64utc) {
    char cPath[MAX_PATH];
    snprintf(cPath, MAX_PATH, "c:\\%05u\\%05u-%04x-%I64d.wwss", u16addr, u16addr, u16dataType, s64utc);
    PV_WWSS_FileHandles[u16addr] = fopen(cPath, "wb+");
}
This seems very lazy, as it will likely never happen that all addresses are connected at the same time and so it allocates memory which is never used.
Second thought: creating a linked list which stores the handles. The bad thing here is that it could be quite CPU intensive, because I want to do this in a multithreading environment, and when e.g. 400 threads receive new data at the same time, they all have to go through the entire list to find their FILE handle.
You really should look at other people's code. Apache comes to mind. Let's assume you can open 2^16 file handles on your machine. That's a matter of tuning.
Now... consider first what a file handle is. It's generally a construct of your C standard library... which is keeping an array (the file handle is the index to that array) of open files. You're probably going to want to keep an array, too, if you want to keep other information on those handles.
If you're concerned about the resources you're occupying, consider that each open network filehandle causes the OS to keep a 4k or 8k (it's configurable) buffer x2 (in and out) along with the file handle structure. That's easily a gigabyte of memory in use at the OS level.
When you do your equivalent of select(), if your OS is smart, you'll get the filehandle back --- so you can use that to index your array of "what to do" for that file handle. If your select() is not smart, you'll have to check every open filehandle ... which would make any attempt at performance a laugh.
I said "look at other people's solutions." I mean it. The original apache used one filehandle per process (effectively). When select()'s were dumb, this was a good strategy. Bad in that typically, dumb OS's would wake too many processes --- but that was circa 1999. These days apache defaults to it's hybrid MPM model... which is a hybrid of multi-threading and multi-tasking. It services a certain number of clients per process (threads) and has multiple processes. This keeps the number of files per process more reasonable.
If you go back further, for simplicity, there's the inetd approach. Fork one (say) ftp process per connect. The world's largest ftp server (ftp.freebsd.org) ran that way for many years.
Do not store file handles in files (silly). Do not store file handles in linked lists (your most popular code route will kill you). Take advantage of the fact that file handles are small integers and use an array. realloc() can help here.
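As a rough sketch of that idea (hypothetical helper, untested; in a multithreaded server the grow step would also need a lock):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static FILE **g_handles = NULL;     /* grows on demand */
static size_t g_capacity = 0;

static int store_handle(uint16_t addr, FILE *f)
{
    if ((size_t)addr >= g_capacity) {
        size_t newcap = (size_t)addr + 1;
        FILE **tmp = realloc(g_handles, newcap * sizeof *tmp);
        if (!tmp)
            return -1;
        /* zero the new slots so unused addresses stay NULL */
        memset(tmp + g_capacity, 0, (newcap - g_capacity) * sizeof *tmp);
        g_handles = tmp;
        g_capacity = newcap;
    }
    g_handles[addr] = f;
    return 0;
}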
Heh... I see other FreeBSD people have chipped in ... in the comments. Anyways... look up FreeBSD and kqueue() if you're going to try keeping that many things open in one process.

What does opening a file actually do?

In all programming languages (that I use at least), you must open a file before you can read or write to it.
But what does this open operation actually do?
Manual pages for typical functions don't actually tell you anything other than that it 'opens a file for reading/writing':
http://www.cplusplus.com/reference/cstdio/fopen/
https://docs.python.org/3/library/functions.html#open
Obviously, through usage of the function you can tell it involves creation of some kind of object which facilitates accessing a file.
Another way of putting this would be, if I were to implement an open function, what would it need to do on Linux?
In almost every high-level language, the function that opens a file is a wrapper around the corresponding kernel system call. It may do other fancy stuff as well, but in contemporary operating systems, opening a file must always go through the kernel.
This is why the arguments of the fopen library function, or Python's open closely resemble the arguments of the open(2) system call.
In addition to opening the file, these functions usually set up a buffer that will be consequently used with the read/write operations. The purpose of this buffer is to ensure that whenever you want to read N bytes, the corresponding library call will return N bytes, regardless of whether the calls to the underlying system calls return less.
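For example, a raw read(2) may return fewer bytes than requested, which is one reason the library wrappers loop and buffer internally; a bare-bones version of that looping (a sketch, not how any particular libc actually implements it) looks like:

#include <unistd.h>

/* Keep calling read() until we have n bytes, hit end-of-file, or get an error. */
ssize_t read_fully(int fd, void *buf, size_t n)
{
    size_t total = 0;
    while (total < n) {
        ssize_t got = read(fd, (char *)buf + total, n - total);
        if (got < 0)
            return -1;      /* error */
        if (got == 0)
            break;          /* end of file */
        total += got;
    }
    return (ssize_t)total;
}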
I am not actually interested in implementing my own function; just in understanding what the hell is going on...'beyond the language' if you like.
In Unix-like operating systems, a successful call to open returns a "file descriptor" which is merely an integer in the context of the user process. This descriptor is consequently passed to any call that interacts with the opened file, and after calling close on it, the descriptor becomes invalid.
It is important to note that the call to open acts like a validation point at which various checks are made. If not all of the conditions are met, the call fails by returning -1 instead of the descriptor, and the kind of error is indicated in errno. The essential checks are:
Whether the file exists;
Whether the calling process is privileged to open this file in the specified mode. This is determined by matching the file permissions, owner ID and group ID to the respective ID's of the calling process.
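In user code a failed open simply shows up as -1 plus errno (a trivial sketch):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int fd = open("/no/such/file", O_RDONLY);
    if (fd == -1)
        /* e.g. "No such file or directory" (ENOENT) or "Permission denied" (EACCES) */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
    return 0;
}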
In the context of the kernel, there has to be some kind of mapping between the process' file descriptors and the physically opened files. The internal data structure that is mapped to the descriptor may contain yet another buffer that deals with block-based devices, or an internal pointer that points to the current read/write position.
I'd suggest you take a look at this guide through a simplified version of the open() system call. It uses the following code snippet, which is representative of what happens behind the scenes when you open a file.
int sys_open(const char *filename, int flags, int mode) {
    char *tmp = getname(filename);
    int fd = get_unused_fd();
    struct file *f = filp_open(tmp, flags, mode);
    fd_install(fd, f);
    putname(tmp);
    return fd;
}
Briefly, here's what that code does, line by line:
Allocate a block of kernel-controlled memory and copy the filename into it from user-controlled memory.
Pick an unused file descriptor, which you can think of as an integer index into a growable list of currently open files. Each process has its own such list, though it's maintained by the kernel; your code can't access it directly. An entry in the list contains whatever information the underlying filesystem will use to pull bytes off the disk, such as inode number, process permissions, open flags, and so on.
The filp_open function has the implementation
struct file *filp_open(const char *filename, int flags, int mode) {
    struct nameidata nd;
    open_namei(filename, flags, mode, &nd);
    return dentry_open(nd.dentry, nd.mnt, flags);
}
which does two things:
Use the filesystem to look up the inode (or more generally, whatever sort of internal identifier the filesystem uses) corresponding to the filename or path that was passed in.
Create a struct file with the essential information about the inode and return it. This struct becomes the entry in that list of open files that I mentioned earlier.
Store ("install") the returned struct into the process's list of open files.
Free the allocated block of kernel-controlled memory.
Return the file descriptor, which can then be passed to file operation functions like read(), write(), and close(). Each of these will hand off control to the kernel, which can use the file descriptor to look up the corresponding file pointer in the process's list, and use the information in that file pointer to actually perform the reading, writing, or closing.
If you're feeling ambitious, you can compare this simplified example to the implementation of the open() system call in the Linux kernel, a function called do_sys_open(). You shouldn't have any trouble finding the similarities.
Of course, this is only the "top layer" of what happens when you call open() - or more precisely, it's the highest-level piece of kernel code that gets invoked in the process of opening a file. A high-level programming language might add additional layers on top of this. There's a lot that goes on at lower levels. (Thanks to Ruslan and pjc50 for explaining.) Roughly, from top to bottom:
open_namei() and dentry_open() invoke filesystem code, which is also part of the kernel, to access metadata and content for files and directories. The filesystem reads raw bytes from the disk and interprets those byte patterns as a tree of files and directories.
The filesystem uses the block device layer, again part of the kernel, to obtain those raw bytes from the drive. (Fun fact: Linux lets you access raw data from the block device layer using /dev/sda and the like.)
The block device layer invokes a storage device driver, which is also kernel code, to translate from a medium-level instruction like "read sector X" to individual input/output instructions in machine code. There are several types of storage device drivers, including IDE, (S)ATA, SCSI, Firewire, and so on, corresponding to the different communication standards that a drive could use. (Note that the naming is a mess.)
The I/O instructions use the built-in capabilities of the processor chip and the motherboard controller to send and receive electrical signals on the wire going to the physical drive. This is hardware, not software.
On the other end of the wire, the disk's firmware (embedded control code) interprets the electrical signals to spin the platters and move the heads (HDD), or read a flash ROM cell (SSD), or whatever is necessary to access data on that type of storage device.
This may also be somewhat incorrect due to caching. :-P Seriously though, there are many details that I've left out - a person (not me) could write multiple books describing how this whole process works. But that should give you an idea.
"Any file system or operating system you want to talk about is fine by me." Nice!
On a ZX Spectrum, initializing a LOAD command will put the system into a tight loop, reading the Audio In line.
Start-of-data is indicated by a constant tone, and after that a sequence of long/short pulses follow, where a short pulse is for a binary 0 and a longer one for a binary 1 (https://en.wikipedia.org/wiki/ZX_Spectrum_software). The tight load loop gathers bits until it fills a byte (8 bits), stores this into memory, increases the memory pointer, then loops back to scan for more bits.
Typically, the first thing a loader would read is a short, fixed format header, indicating at least the number of bytes to expect, and possibly additional information such as file name, file type and loading address. After reading this short header, the program could decide whether to continue loading the main bulk of the data, or exit the loading routine and display an appropriate message for the user.
An End-of-file state could be recognized by receiving as many bytes as expected (either a fixed number of bytes, hardwired in the software, or a variable number such as indicated in a header). An error was thrown if the loading loop did not receive a pulse in the expected frequency range for a certain amount of time.
A little background on this answer
The procedure described loads data from a regular audio tape - hence the need to scan Audio In (it connected with a standard plug to tape recorders). A LOAD command is technically the same as open a file - but it's physically tied to actually loading the file. This is because the tape recorder is not controlled by the computer, and you cannot (successfully) open a file but not load it.
The "tight loop" is mentioned because (1) the CPU, a Z80-A (if memory serves), was really slow: 3.5 MHz, and (2) the Spectrum had no internal clock! That means that it had to accurately keep count of the T-states (instruction times) for every. single. instruction. inside that loop, just to maintain the accurate beep timing.
Fortunately, that low CPU speed had the distinct advantage that you could calculate the number of cycles on a piece of paper, and thus the real world time that they would take.
It depends on the operating system what exactly happens when you open a file. Below I describe what happens in Linux, which gives you an idea of what happens when you open a file; you can check the source code if you are interested in more detail. I am not covering permissions, as that would make this answer too long.
In Linux every file is represented by a structure called an inode. Each inode has a unique number and every file gets exactly one inode. This structure stores metadata for a file, for example file size, file permissions, timestamps and pointers to disk blocks, but not the actual file name itself. Each directory entry contains a file name and the inode number used for lookup. When you open a file, assuming you have the relevant permissions, a file descriptor is created using the unique inode number associated with the file name. As many processes/applications can point to the same file, the inode has a link field that maintains the total count of links to the file. If a file is present in a directory, its link count is one; if it has a hard link, its link count will be two; and if a file is opened by a process, the link count will be incremented by 1.
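As a small aside, the inode metadata described above can be inspected from user space with stat(2); a minimal sketch:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("/etc/passwd", &st) == 0) {
        printf("inode:      %lu\n", (unsigned long)st.st_ino);
        printf("link count: %lu\n", (unsigned long)st.st_nlink);
        printf("size:       %lld bytes\n", (long long)st.st_size);
        printf("mode:       %o\n", (unsigned)(st.st_mode & 0777));
    }
    return 0;
}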
Bookkeeping, mostly. This includes various checks like "Does the file exist?" and "Do I have the permissions to open this file for writing?".
But that's all kernel stuff - unless you're implementing your own toy OS, there isn't much to delve into (if you are, have fun - it's a great learning experience). Of course, you should still learn all the possible error codes you can receive while opening a file, so that you can handle them properly - but those are usually nice little abstractions.
The most important part on the code level is that it gives you a handle to the open file, which you use for all of the other operations you do with a file. Couldn't you use the filename instead of this arbitrary handle? Well, sure - but using a handle gives you some advantages:
The system can keep track of all the files that are currently open, and prevent them from being deleted (for example).
Modern OSs are built around handles - there's tons of useful things you can do with handles, and all the different kinds of handles behave almost identically. For example, when an asynchronous I/O operation completes on a Windows file handle, the handle is signalled - this allows you to block on the handle until it's signalled, or to complete the operation entirely asynchronously. Waiting on a file handle is exactly the same as waiting on a thread handle (signalled e.g. when the thread ends), a process handle (again, signalled when the process ends), or a socket (when some asynchronous operation completes). Just as importantly, handles are owned by their respective processes, so when a process is terminated unexpectedly (or the application is poorly written), the OS knows what handles it can release.
Most operations are positional - you read from the last position in your file. By using a handle to identify a particular "opening" of a file, you can have multiple concurrent handles to the same file, each reading from its own place (a small sketch follows below). In a way, the handle acts as a moveable window into the file (and a way to issue asynchronous I/O requests, which are very handy).
Handles are much smaller than file names. A handle is usually the size of a pointer, typically 4 or 8 bytes. On the other hand, filenames can have hundreds of bytes.
Handles allow the OS to move the file, even though applications have it open - the handle is still valid, and it still points to the same file, even though the file name has changed.
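For example, on a POSIX system each open() yields a descriptor with its own file offset (a minimal, untested sketch using a hypothetical example.txt):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    char a[4], b[4];
    int fd1 = open("example.txt", O_RDONLY);
    int fd2 = open("example.txt", O_RDONLY);

    read(fd1, a, 4);     /* reads bytes 0-3 */
    read(fd1, a, 4);     /* reads bytes 4-7: fd1's position advanced */
    read(fd2, b, 4);     /* still reads bytes 0-3: fd2 keeps its own position */

    close(fd1);
    close(fd2);
    return 0;
}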
There are also some other tricks you can do (for example, sharing handles between processes to have a communication channel without using a physical file; on Unix systems, files are also used for devices and various other virtual channels, so this isn't strictly necessary), but they aren't really tied to the open operation itself, so I'm not going to delve into that.
At the core of it, when opening for reading, nothing fancy actually needs to happen. All it needs to do is check that the file exists and that the application has enough privileges to read it, and create a handle on which you can issue read commands to the file.
It's on those commands that actual reading will get dispatched.
The OS will often get a head start on reading by starting a read operation to fill the buffer associated with the handle. Then when you actually do the read it can return the contents of the buffer immediately rather than needing to wait on disk I/O.
For opening a new file for writing, the OS will need to add an entry in the directory for the new (currently empty) file. And again a handle is created on which you can issue the write commands.
Basically, a call to open needs to find the file, and then record whatever it needs to so that later I/O operations can find it again. That's quite vague, but it will be true on all the operating systems I can immediately think of. The specifics vary from platform to platform. Many answers already on here talk about modern-day desktop operating systems. I've done a little programming on CP/M, so I will offer my knowledge about how it works on CP/M (MS-DOS probably works in the same way, but for security reasons, it is not normally done like this today).
On CP/M you have a thing called the FCB (as you mentioned C, you could call it a struct; it really is a 35-byte contiguous area in RAM containing various fields). The FCB has fields to write the file-name and a (4-bit) integer identifying the disk drive. Then, when you call the kernel's Open File, you pass a pointer to this struct by placing it in one of the CPU's registers. Some time later, the operating system returns with the struct slightly changed. Whatever I/O you do to this file, you pass a pointer to this struct to the system call.
What does CP/M do with this FCB? It reserves certain fields for its own use, and uses these to keep track of the file, so you had better not ever touch them from inside your program. The Open File operation searches through the table at the start of the disk for a file with the same name as what's in the FCB (the '?' wildcard character matches any character). If it finds a file, it copies some information into the FCB, including the file's physical location(s) on the disk, so that subsequent I/O calls ultimately call the BIOS which may pass these locations to the disk driver. At this level, specifics vary.
In simple terms, when you open a file you are actually requesting that the operating system load the desired file (copy the contents of the file) from secondary storage into RAM for processing. The reason behind this (loading a file) is that you cannot process the file directly from the hard disk because of its extremely slow speed compared to RAM.
The open command will generate a system call which in turn copies the contents of the file from secondary storage (hard disk) to primary storage (RAM).
And we 'close' a file because the modified contents of the file have to be reflected back to the original file on the hard disk. :)
Hope that helps.

Optimizing data stream to disk in C (also flash memory)

I have a C program running on Linux that acquires data from a USB device (sensor data), does some processing and streams the result to disk. Currently I save to a text file using fputs(); a line looks like this:
timestamp value1 value2 ... valueN
with a sample rate of up to 250 Hz.
The program should run on a RPi or similar board and possibly write the data to a flash memory (SD card).
I have following questions:
Should I be optimizing the data stream or let the OS do the job? More specifically, should I be trying to minimize how often data is actually written to disk (also given the use of a flash memory)?
I have read about setbuf() and setvbuf(), as I understand they should effectively delay writing until a "block" is filled. Are these appropriate or is there a better way other than perhaps implementing my own buffer?
Which output function is best suited for data streaming with the above in mind (fputs() / fprintf() / write())?
Should I be trying to increase randomness (so as to use all sectors) when writing to an SD card? If yes, what's the best way to achieve this?
Here some more thoughts:
I can consider using a binary format to decrease size, but I would prefer keeping the text format to simplify later data handling.
Using a hard drive is also an option in the final design, especially if a high acquisition rate is to be carried on over a long time.
The data rate being relatively low, I do not expect bandwidth problems with either a hard drive or an SD card. It is possible that the rate will be higher in the future (kHz or more).
Thanks for your answers.
EDIT 20130128
Thank you for all the answers so far, they give me some good insight. I'll sum it up a bit:
In general I should not have bandwidth issues; however, to avoid unnecessarily large log files I might consider a binary format. Yes, the log should be human readable; if not, I'll make an export function or similar. Yes, unwind's assumption is correct: about 10 or 15 data values per line.
The mentioned read/write cycles per cell should be enough for some time, at least in the testing phase, considering we don't always write and delete the same cells. I will play around with buffer size in setvbuf() and set the buffering mode to full buffering to see if I can optimize this while keeping a reasonable save interval (a few seconds or more also depending on sample rate).
In the final design I might use a hard drive to avoid most of the problems mentioned here, or a second SD card which can be easily replaced (might also be good for quickly retrieving the data). I will format this with one of the formats suggested here (FAT or JFFS2/F2FS).
Following zmo's suggestion I will try to make the system as read only as possible (at least the system partition), I was already considering this.
A Beaglebone, also mentioned by zmo, is my next choice if I'm not happy with the RPi (I read that its USB bus is not always stable, and USB is obviously very important for my application).
I have already implemented a UDP port to send data over the network; still, I would like to keep at least a local copy of that data, and maybe only send a subset of the data or already-processed data, as well as "control data".
Should I be optimizing the data stream or let the OS do the job? More specifically, should I be trying to minimize how often data is actually written to disk (also given the use of a flash memory)?
Well, you can usually assume that the OS does a pretty awesome job at buffering and handling output to the hard drive… As long as you don't do unbuffered writes.
Though, from my experience, you should not write logs to a SD Card, because it definitely kills the SD Card faster than you can imagine. On my first projects, I had installed linux on beaglebones, and between 6 months to 12 months after, all my SD Cards had to be replaced…
Since then, I've learned to run read only systems on the SD card and send any kind of regular updates over the network, the trick being to use a ramdisk for /tmp and /var.
In your case, using a hard drive is an easy solution (which will work smoothly), but you can also use a secondary SD Card where you write the logs. Then you'll be able to use a "stupid" filesystem such as a FAT one where you'll write your data aligned, as your data will be the only thing to be written on the SD. What is killing an SD card is lots of little reads/writes that happen a lot with temporary files, and defragmentation of the drive.
I have read about setbuf() and setvbuf(), as I understand they should effectively delay writing until a "block" is filled. Are these appropriate or is there a better way other than perhaps implementing my own buffer?
well, just keep it to full buffering, it will help write your data aligned on the filesystem.
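A rough sketch of what that could look like (hypothetical open_log() helper, untested; the buffer must stay valid for as long as the stream is open):

#include <stdio.h>
#include <stdlib.h>

FILE *open_log(const char *path)
{
    FILE *f = fopen(path, "a");
    if (!f)
        return NULL;
    size_t bufsize = 128 * 1024;              /* 128 KiB, tune to taste */
    char *buf = malloc(bufsize);
    /* _IOFBF = full buffering: fputs()/fprintf() only hit the card when the
     * buffer fills (or on fflush()/fclose()). */
    if (!buf || setvbuf(f, buf, _IOFBF, bufsize) != 0) {
        free(buf);
        fclose(f);
        return NULL;
    }
    return f;
}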
Which output function is best suited for data streaming with the above in mind (fputs() / fprintf() / write())?
they should all behave similarly for your problem.
Should I be trying to increase randomness (so as to use all sectors) when writing to an SD card? If yes, what's the best way to achieve this?
the firmware of the sdcard should be taking care of that for you. The only thing would be to use a simpler filesystem like FAT (or JFFS2/F2FS like ivan-voras suggests), because ext2/ext3/ext4 filesystems do automatic defragmentation which basically is moving around inodes to keep everything aligned. Though I'm not sure if it disables that behavior with SDcards and SSDs.
Writing to the SD card often will definitely kill it sooner, but it also means you can attempt to prolong this time by reducing the number of writes. As others have said, the best solution for you would be to write the logs over the network to a server or just another machine which has proper storage (in the simplest case, maybe you can use syslog(3) or just plain NFS).
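For the syslog(3) route, the program side is only a few calls (a sketch; "sensord" is a made-up identifier, and forwarding to another machine is configured in the syslog daemon, not in the program):

#include <syslog.h>

int main(void)
{
    openlog("sensord", LOG_PID, LOG_LOCAL0);
    /* ... for each sample line: */
    syslog(LOG_INFO, "%f %f %f", 1359376200.0, 1.23, 4.56);
    closelog();
    return 0;
}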
If you want to continue with the original plan, then using setvbuf(3) to enable block buffered mode and setting a large buffer size (like 128 KiB or 256 KiB) would be best. A large buffer size also means that you will lose unwritten data from the buffer if power goes out, etc.
However, a large buffer only delays the inevitable and you should search for other options. It's not as alarming as Lundin's answer states, because there are many cells and you're not always writing to the same one, so if you get the largest SD card you can buy, then using his method you can calculate approximately how many times you can rewrite the entire card before it fails. Using a flash-friendly file system such as F2FS or JFFS2 will be beneficial.
Here're my thoughts:
It might be a good idea to buffer some data in memory before writing to disk, but keep in mind that this might cause data loss in case of power failure.
I think this is highly dependent on the file system and type of storage you use. There is no generic answer but it could prove useful to implement and benchmark it on your specific configuration.
Considering the huge amount of data you're outputting, I'd choose a binary format (unless you want the file to be human readable)
The firmware of the flash drive should already take care of this. Basically this is the cornerstone of all modern SSDs. (SD card controllers should implement it too.)

Using interrupts during reading a file from disk

Assume that a large file is saved on disk and I want to run a computation on every chunk of data contained in the file.
The C/C++ code that I would write to do so would load part of the file, then do the processing, then load the next part, then do the processing of this next part, and so on.
If I am, however, interested in doing this in the shortest possible time, I could actually do the following: first, tell the DMA controller to load the first part of the file. When this part is loaded, tell the DMA controller to load the second part (into some other region of memory) and then immediately start processing the first part.
If I get an interrupt from the DMA while processing the first part, I finish the first part and afterwards tell the DMA to overwrite it with the third part of the file; then I process the second part.
If I do not get an interrupt from the DMA while processing the first part, I finish the first part and wait for the interrupt from the DMA.
Depending on how long the processing takes in relation to the disk read, this should be up to twice as fast. In reality, of course, one would have to measure. But that is not the question I am asking.
The question is: Is it possible to do this a) in C using some non-standard extension or b) in assembly? Or do operating systems not allow such things in general? The question is meant primarily in a single-thread context, although I also would be interested to know how to do it with two threads. Also, I am not interested in specific code; this is more of a theoretical question.
You're right that you will not get the benefit of this by default, because a blocking read stops your thread from doing any processing. Hans is right that modern OSes already take care of all the little details of DMA and interrupt completion routines.
You need to use the architecture you've described, of issuing a request in advance of when you will use the data. Issue asynchronous I/O requests (on Windows these are called OVERLAPPED). Then the flow will go exactly as you envision, but the DMA and interrupts are handled in the drivers.
On Windows, take a look at FILE_FLAG_OVERLAPPED (to CreateFile) and ReadFile (if you like events) or ReadFileEx (if you like callbacks). If you don't have to process the data in any particular order, then add a completion port to the mix, which queues the completion responses.
On Linux, OSX, and many other Unix-like OSes, look at aio_read. Or fadvise. Or use mmap with madvise.
And you can get these benefits without even writing native code. .NET recently added the ReadAsync method to its FileStream, which can be used with continuation-passing style in the form of Task objects, with async/await syntactic sugar in the C# compiler.
Typically, in a multi-mode (user/system) operating system, you do not have access to direct DMA or to interrupts. In systems that extend those features from kernel (system) mode down to user mode, the overhead eliminates the benefit of using them.
Ignoring that what you're asking to do requires a very specialized environment to support it, the idea is sound and common: declaring two (or more) buffers to enable DMA to the next while you process the first. When two buffers are used they're sometimes referred to as ping-pong buffers.
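Combining the aio_read suggestion above with the ping-pong idea, a rough, untested sketch might look like this (error handling omitted, link with -lrt on Linux; process_chunk() is a hypothetical placeholder for the computation):

#include <aio.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (1 << 20)                  /* 1 MiB per chunk */

static char bufs[2][CHUNK];

void process_file(const char *path, void (*process_chunk)(const char *, ssize_t))
{
    int fd = open(path, O_RDONLY);
    off_t offset = 0;
    int cur = 0;

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = bufs[cur];
    cb.aio_nbytes = CHUNK;
    cb.aio_offset = offset;
    aio_read(&cb);                       /* start reading the first chunk */

    for (;;) {
        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);      /* wait for the outstanding read */
        ssize_t got = aio_return(&cb);
        if (got <= 0)
            break;                       /* end of file or error */

        int done = cur;                  /* this buffer is ready to process */
        cur = 1 - cur;
        offset += got;

        /* Queue the next read into the other buffer before processing. */
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf = bufs[cur];
        cb.aio_nbytes = CHUNK;
        cb.aio_offset = offset;
        aio_read(&cb);

        process_chunk(bufs[done], got);  /* compute while the next chunk loads */
    }
    close(fd);
}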
