C fastest way to continuously write data to file [closed]

I have a string composed of some packet statistics, such as packet length, etc.
I would like to store this to a csv file, but if I use the standard fprintf to write to a file, it writes incredibly slowly, and I end up losing information.
How do I write information to a file as quickly as possible in order to minimize information loss from packets? Ideally I would like to support millions of packets per second, which means I need to write millions of lines per second.
I am using XDP to get packet information and send it to the userspace via an eBPF map if that matters.

Optimal performance will depend on the hard drive, drive fragmentation, the filesystem, the OS and the processor, but it will never be achieved by writing small chunks of data that do not align well with the filesystem's disk structure.
A simple solution would be to use a memory mapped file and let the OS asynchronously deal with actually committing the data to the file - that way it is likely to be optimal for the system you are running on without you having to deal with all the possible variables or work out the optimal write block size of your system.
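For illustration, a minimal sketch of the memory-mapped approach, assuming a POSIX system and a made-up fixed upper bound on the file size (error handling omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_FILE_SIZE (1024L * 1024L * 1024L)   /* assumed 1 GiB cap on the output file */

int main(void)
{
    int fd = open("stats.csv", O_RDWR | O_CREAT | O_TRUNC, 0644);
    ftruncate(fd, MAX_FILE_SIZE);                     /* reserve the range we will map */

    char *map = mmap(NULL, MAX_FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    size_t off = 0;

    /* "writing" is now just a memcpy; the kernel flushes dirty pages to disk
       asynchronously in the background */
    const char line[] = "1500,0x0800,192.168.0.1\n";
    memcpy(map + off, line, sizeof line - 1);
    off += sizeof line - 1;

    munmap(map, MAX_FILE_SIZE);
    ftruncate(fd, (off_t)off);                        /* trim the file to the bytes actually written */
    close(fd);
    return 0;
}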
Even with regular stream I/O you will improve performance drastically by writing to a RAM buffer. Making the buffer size a multiple of the block size of your file system is likely to be optimal. However, since file writes may block if there is insufficient buffering in the file system itself for queued writes or write-back, you may not want to make the buffer too large if the data generation and the data write occur in a single thread.
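For example, with stdio you can hand the stream a large buffer yourself; a small sketch, where the 1 MiB size is an assumption rather than a measured optimum:

#include <stdio.h>

int main(void)
{
    FILE *out = fopen("stats.csv", "w");
    static char buf[1 << 20];                 /* 1 MiB, a multiple of common filesystem block sizes */

    setvbuf(out, buf, _IOFBF, sizeof buf);    /* fully buffered: call before any other I/O on the stream */

    for (int i = 0; i < 1000000; i++)
        fprintf(out, "%d,%d\n", i, i * 2);    /* only hits the kernel when the buffer fills */

    fclose(out);                              /* flushes whatever is left */
    return 0;
}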
Another solution is to have a separate write thread, connected to the thread generating the data via a pipe or queue. The writer thread can then simply buffer data from the pipe/queue until it has a "block" (again, matching the file system block size is a good idea), then commit the block to the file. The pipe/queue then acts as a buffer, storing data generated while the thread is stalled writing to the file. The buffering afforded by the pipe, the block, the file system and the disk write-cache will likely accommodate any disk latency so long as the fundamental write performance of the drive is faster than the rate at which data to write is being generated - nothing but a faster drive will solve that problem.
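A rough sketch of the writer-thread idea, assuming POSIX threads and a pipe as the queue; the block size, file name and record format are all made up:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK_SIZE 65536                 /* assumed to roughly match the filesystem block size */

static int pipe_fd[2];

static void *writer_thread(void *arg)
{
    (void)arg;
    FILE *out = fopen("stats.csv", "w");
    char block[BLOCK_SIZE];
    ssize_t n;

    /* drain the pipe in big chunks and commit each one with a single fwrite */
    while ((n = read(pipe_fd[0], block, sizeof block)) > 0)
        fwrite(block, 1, (size_t)n, out);

    fclose(out);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pipe(pipe_fd);
    pthread_create(&tid, NULL, writer_thread, NULL);

    /* producer: format one record and hand it to the pipe; the kernel's pipe
       buffer absorbs bursts while the writer is blocked on the disk */
    for (int i = 0; i < 1000000; i++) {
        char line[64];
        int len = snprintf(line, sizeof line, "%d,%d\n", i, i % 1500);
        write(pipe_fd[1], line, (size_t)len);
    }

    close(pipe_fd[1]);                   /* EOF for the writer thread */
    pthread_join(tid, NULL);
    return 0;
}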

Use sprintf to write to a buffer in memory.
Make that buffer as large as possible, and when it gets full, use a single fwrite to dump the entire buffer to disk. Hopefully by that point it will contain many hundreds or thousands of lines of CSV data that will get written at once while you begin to fill up another in-memory buffer with more sprintf calls.
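A short sketch of that fill-then-dump pattern; the buffer size and the CSV fields are assumptions:

#include <stdio.h>
#include <string.h>

#define BUF_SIZE (4 * 1024 * 1024)   /* assumed 4 MiB staging buffer */

int main(void)
{
    static char buf[BUF_SIZE];
    size_t used = 0;
    FILE *out = fopen("stats.csv", "w");

    for (int i = 0; i < 10000000; i++) {
        char line[64];
        int len = snprintf(line, sizeof line, "%d,%d\n", i, i % 1500);

        if (used + (size_t)len > sizeof buf) {   /* buffer full: one big write */
            fwrite(buf, 1, used, out);
            used = 0;
        }
        memcpy(buf + used, line, (size_t)len);
        used += (size_t)len;
    }

    fwrite(buf, 1, used, out);                   /* flush the tail */
    fclose(out);
    return 0;
}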

Related

I want to receive Ethernet real-time data in C and save it to a file; the data size reaches thousands of MB, or even GB, but I don't know how to deal with it

I want to receive Ethernet real-time data in C and save it to a file, and the data size reaches several thousand MB, or even gigabytes. I have no previous experience in dealing with it, and I hope to get your advice and guidance.
At present, I use malloc to allocate a block of memory and read a frame of data from the Ethernet interface into it, but saving it to a file runs into difficulties, such as packet loss caused by the slow read/write speed of the output file. I want to ask whether there is a better approach; I am a beginner, so my data saving and file writing are done in the most basic way.

improve performance in file IO in C [closed]

I need to write a large number of integers to a file after performing heap operations on them, one by one. I am trying to merge sorted files into a single file. As of now, I write to the file after every operation. I am using a min-heap to merge the files.
My questions are:
1. When performing a file write, is the disk accessed on every write, or are whole chunks of memory written at a time?
2. Will it improve performance if I collect the output of the heap in an array of, say, size 1024 (or maybe more) and then perform a single write at once?
Thank you in advance.
EDIT: Will using setbuffer() help? I feel it should help to a certain extent.
1. When performing a file write, is the disk accessed on every write, or are whole chunks of memory written at a time?
No. Your output isn't written until the output buffer is full. You can force an immediate write with fflush, which flushes the output stream; otherwise, output is buffered.
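A tiny illustration of that buffering: nothing may reach the file until the buffer fills, unless you flush explicitly.

#include <stdio.h>

int main(void)
{
    FILE *out = fopen("out.txt", "w");
    fprintf(out, "buffered line\n");   /* sits in the stdio buffer for now */
    fflush(out);                       /* force the underlying write() immediately */
    fclose(out);
    return 0;
}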
2. Will it improve performance if I collect the output of the heap in an array of, say, size 1024 (or maybe more) and then perform a single write at once?
If you are not exhausting the heap, then no, you are not going to gain significant performance by putting the storage on the stack, etc. Buffering is always preferred, but if you store all the data in an array and then call write, you still have the same size output buffer to deal with.
When performing a file write, is the disk accessed on every write, or are whole chunks of memory written at a time?
This is up to the kernel. Buffers are flushed when you call fsync() on the file descriptor. fflush() only flushes the data buffered in the FILE structure; it doesn't flush the kernel buffers.
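A small sketch of the two layers: fflush() empties the FILE buffer into the kernel, fsync() pushes the kernel's buffers down to the device.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    FILE *out = fopen("out.txt", "w");
    fprintf(out, "important record\n");
    fflush(out);             /* stdio buffer -> kernel page cache */
    fsync(fileno(out));      /* kernel page cache -> storage device */
    fclose(out);
    return 0;
}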
Will it improve performance if I collect the output of the heap in an array of, say, size 1024 (or maybe more) and then perform a single write at once?
I ran some tests a while ago comparing the performance of write() and fwrite() against a custom implementation, and it turns out you can gain a fair speedup by calling write() directly with large chunks. This is essentially what fwrite() does, but due to the infrastructure it has to maintain, it is slower than a custom implementation. As for buffer size, 1024 is certainly too small; 8K or so would perform better.
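A sketch of that approach, accumulating records into an 8 KiB chunk and issuing one write() per chunk; the binary record format and file name are made up:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define CHUNK 8192                        /* the ~8K figure suggested above */

int main(void)
{
    int fd = open("merged.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    char chunk[CHUNK];
    size_t used = 0;

    for (int value = 0; value < 1000000; value++) {   /* stand-in for values popped from the heap */
        if (used + sizeof value > sizeof chunk) {
            write(fd, chunk, used);                   /* one syscall per ~8 KiB, not per integer */
            used = 0;
        }
        memcpy(chunk + used, &value, sizeof value);   /* binary output, for brevity */
        used += sizeof value;
    }

    write(fd, chunk, used);                           /* flush the partial last chunk */
    close(fd);
    return 0;
}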
It is operating system and implementation specific.
On most Linux systems (with a good filesystem like Ext4) the kernel will try hard to avoid disk accesses by caching a lot of file system data. See linuxatemyram.
But I would still recommend avoiding too many I/O operations and doing some buffering (if using stdio(3) routines, pass buffers of several dozen kilobytes to fwrite(3) and use setvbuf(3) and fflush(3) with care; alternatively, use direct syscalls like write(2) or mmap(2) with buffers of e.g. 64 KB...)
BTW, the posix_fadvise(2) syscall might marginally help performance (if used wisely).
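For instance, a minimal sketch of posix_fadvise(2); choosing POSIX_FADV_SEQUENTIAL here is an assumption about a sequential workload:

#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("stats.csv", O_WRONLY | O_CREAT, 0644);

    /* hint that this file will be written/read sequentially so the kernel can
       tune readahead and write-back; offset 0 and length 0 mean "the whole file" */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    /* ... normal write() calls go here ... */

    close(fd);
    return 0;
}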
In reality, the bottleneck is often the hardware. Use RAM filesystems (tmpfs) or fast SSD disks if you can.
On Windows systems (which I have never used), I have no idea, but the general intuition is that some buffering should help.

Getting data from MATLAB Simulink every 0.008 s into a .txt file

I need to get data from my Simulink model, write it to a txt file, and have another program read it, all of this every 0.008 s.
Is there any way to do it? All I have managed so far is to get the data into the workspace.
Also, the system is discrete.
You should use a To File block to save the data to disk. It will figure out the correct buffer size, etc., for you and write the data to disk. You just have to poll from the other program to get new data.
8 milliseconds' worth of data is generally not enough to justify the overhead of disk IO, so the To File block needs more than this to write to disk, and your other program needs more than this to read. This obviously introduces latency.
If you want a lower-latency solution, consider using the UDP or TCP communication blocks that exist in the DSP System Toolbox library.
Of course, it's impossible to say anything without a lot more detail.
How much data? What operating system? What happens if you "miss"? What kind of disk is the file on? Does it really have to be a file on-disk, can't you use e.g. pipes or something to avoid hitting disk? What does the "other program" have to do with the data?
8 milliseconds is not a lot of time for a disk to do anything; you're basically going to be assuming all accesses are in cache for this to work, so factor out the disk. Use a pipe or a RAM disk.
8 milliseconds is also not a lot of time for a typical desktop operating system.

How to prevent C read() from reading from cache

I have a program that is used to exercise several disk units in a raid configuration. One process synchronously (O_SYNC) writes random data to a file using write(). It then puts the name of the directory into a shared-memory queue, where a second process is waiting for the queue to have entries so it can read the data back into memory using read().
The problem that I can't seem to overcome is that when the second process attempts to read the data back into memory, none of the disk units show read accesses. The program has code to check whether or not the data read back in is equal to the data that was written to disk, and the data always matches.
My question is, how can I make the OS (IBM i) not buffer the data when it is written to disk so that the read() system call accesses the data on the disk rather than in cache? I am doing simple throughput calculations and the read() operations are always 10+ times faster than the write operations.
I have tried using the O_DIRECT flag, but cannot seem to get the data to write to the file. It could have to do with setting up the correct aligned buffers. I have also tried the posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED) system call.
I have read through this similar question but haven't found a solution. I can provide code if it would be helpful.
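For reference, a minimal sketch of an O_DIRECT write with an aligned buffer, assuming Linux-style O_DIRECT semantics and a 4096-byte block size; whether this applies on IBM i is another matter:

#define _GNU_SOURCE                       /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK 4096                        /* assumed logical block size */

int main(void)
{
    void *buf;
    posix_memalign(&buf, BLOCK, BLOCK);   /* O_DIRECT wants the buffer address aligned */
    memset(buf, 'A', BLOCK);

    int fd = open("direct.dat", O_WRONLY | O_CREAT | O_TRUNC | O_DIRECT, 0644);
    write(fd, buf, BLOCK);                /* the length must also be a multiple of the block size */

    close(fd);
    free(buf);
    return 0;
}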
My thought is that if you write ENOUGH data, then there simply won't be enough memory to cache it, and thus SOME data must be written to disk.
You can also, if you want to make sure that small writes to your file work, try writing ANOTHER large file (either from the same process or a different one - for example, you could start a process like dd if=/dev/zero of=myfile.dat bs=4k count=some_large_number) to force other data to fill the cache.
Another "trick" may be to "chew up" some (more like most) of the RAM in the system - just allocate a large lump of memory, then write to some small part of it at a time - for example, an array of integers, where you write to every 256th entry of the array in a loop, moving to one step forward each time - that way, you walk through ALL of the memory quickly, and since you are writing continuously to all of it, the memory will have to be resident. [I used this technique to simulate a "busy" virtual machine when running VM tests].
The other option is of course to nobble the caching system itself in the OS/filesystem driver, but I would be very worried about doing that - it will almost certainly slow the system down to a crawl, and unless there is an existing option to disable it, you may find it hard to do accurately/correctly/reliably.
...exercise several disk units in a raid configuration... How? IBM i doesn't allow a program access to the hardware. How are you directing I/O to any specific physical disks?
ANSWER: The write/read operations are done in parallel against the IFS, so the stream file manager selects which disks to target. By having enough threads reading/writing, the busyness of SYSBASE or an IASP can be driven up.
...none of the disk units show read accesses. None of them? Unless you are running the sole job on a system in restricted state, there is going to be read activity on the disks from other tasks. Is the system divided into multiple LPARs? Multiple ASPs? I'm suggesting that you may be monitoring disks that this program isn't writing to, because IBM i handles physical I/O, not programs.
ANSWER: I guess "none of them" is a slight exaggeration - I know which disks belong to SYSBASE, and those disks are not being targeted with many read requests. I was just trying to generalize for an audience not familiar with IBM i. In the picture below, you will see that the write requests are driving the % busyness up, but the read requests are not, even though they are targeting the same files.
...how can I make the OS (IBM i) not buffer the data when it is written to disk... Use a memory-starved main storage pool to maximise paging, write immense blocks of data so as to guarantee that the system and disk controller caches overflow, and use a busy machine so that other tasks are demanding disk I/O as well.

Efficient way to read/write a large number of sparse arrays to disk in C

I need to write around 103 sparse double arrays to disk (one at a time) and read them individually later in the program.
EDIT: Apologies for not framing the question clearly earlier. To be specific I am looking to store as much as possible in memory and save the currently unused variables on the disk. I am working on linux.
The fastest way would be to buffer the I/O. Instead of writing each array individually, you'd first copy as many as you can to a buffer. Once that buffer is full you would write the entire buffer to disk, clear the buffer, and repeat. This minimizes the amount of writes that occur to the disk and will increase I/O efficiency.
If you plan on reading the arrays later in sequential order, I recommend you also buffer the reads, so it reads more than it needs and you can work out of the buffer.
You could take it a step further and use asynchronous read/write operations so that your program can process other tasks while waiting on the disk.
If you are concerned about the size on disk it will consume, you can add another layer that will compress/uncompress the data stream as you write/read to and from the disk.
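As an example of such a compression layer, a small sketch using zlib's gzip stream functions (assuming zlib is available; the file name and data are placeholders):

#include <zlib.h>

int main(void)
{
    double array[1024] = {0};                /* stand-in for one sparse array */

    gzFile gz = gzopen("arrays.gz", "wb");
    gzwrite(gz, array, sizeof array);        /* data is deflated on the way to disk */
    gzclose(gz);
    return 0;
}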
The HDF5 data format is meant to write large amounts of data to disk efficiently.
This format is used by NASA and a large number of scientific applications:
http://www.hdfgroup.org/HDF5/
http://en.wikipedia.org/wiki/Hierarchical_Data_Format
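A minimal sketch of writing one double array with the HDF5 C API; the dataset name, dimensions and file name are assumptions:

#include <hdf5.h>

int main(void)
{
    double data[1000] = {0};                 /* stand-in for one array */
    hsize_t dims[1] = {1000};

    hid_t file  = H5Fcreate("arrays.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT);
    hid_t space = H5Screate_simple(1, dims, NULL);
    hid_t dset  = H5Dcreate2(file, "/array0", H5T_NATIVE_DOUBLE, space,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

    H5Dwrite(dset, H5T_NATIVE_DOUBLE, H5S_ALL, H5S_ALL, H5P_DEFAULT, data);

    H5Dclose(dset);
    H5Sclose(space);
    H5Fclose(file);
    return 0;
}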
