How does fread in C actually work?

I understand that fread() has the following function definition:
size_t fread(void *buffer, size_t size, size_t qty, FILE *inptr);
I also understand that inptr is a file pointer returned when a file is opened with the fopen() function. My question is: does inptr store the memory address of every single character/letter of the file? If so, do those memory addresses get copied from inptr to *buffer (the pointer to the buffer array)?
There is one more thing I am confused about. Each time fread() is called, size * qty bytes are copied/transferred. Is it the content of the file referred to by inptr that is copied, or the memory address of that content?
Would appreciate if someone can help me clear the confusion. Thank you :)

FILE is implemented by your C library and operating system. The functions operating on FILE are implemented by your system, too. You don't know the details in advance; to know, you need to browse the sources of your system.
inptr may be a pointer to memory allocated by your operating system, or it may be a number that your system uses to find its data. Either way, it is a handle that your system uses to locate the FILE-specific data, and your system decides what is in that data. For caching purposes, maybe all the characters are cached in some buffer; maybe not.
Now the fread call: fread reads data from the underlying entity behind the inptr handle. inptr is interpreted by your system to access the underlying memory, structure, device, hard drive, printer, keyboard, mouse, or whatever it may be. fread reads qty * size bytes of data and places them in the buffer. No pointers are placed there: the bytes read from the device are placed in the memory pointed to by buffer.
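For example, a typical call looks like this (a minimal sketch; "data.bin" is a hypothetical file name):

#include <stdio.h>

int main(void)
{
    unsigned char buffer[4096];            /* destination for the file's bytes */
    FILE *inptr = fopen("data.bin", "rb"); /* hypothetical file name */
    if (inptr == NULL)
        return 1;

    /* Ask for up to 4096 items of 1 byte each; fread returns how many
       complete items it actually placed into buffer. */
    size_t n = fread(buffer, 1, sizeof buffer, inptr);
    printf("read %zu bytes\n", n);

    fclose(inptr);
    return 0;
}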

Your questions are a bit confusing (which is probably why you're asking them) so I'll do my best to answer.
FILE *inptr is a handle to the open file. You do not read it directly; it is just used to tell related functions what to operate on. You can kinda think of it like a human reading a file name in a folder: the file name identifies the file, but the contents are accessed another way.
As for the data, it is read from the file, which is opened with fopen() and thereby given a file handle. The data does not live inside the FILE pointer, and typically you should not mess with the FILE pointer directly (don't try to read from or write to it yourself).
I tried not to get too technical about the operation, as it seems you are new to C. Just think of the FILE * as the computer's way of "naming" the file internally for its own usage, while the data buffer is merely the content.

You can think of fread as being implemented something like this:
size_t fread(char *ptr, size_t size, size_t nitems, FILE *fp)
{
    size_t i;
    for(i = 0; i < size * nitems; i++) {
        int c = getc(fp);
        if(c == EOF) break;
        *ptr++ = c;
    }
    return i / size;   /* number of complete items read */
}
(The real fread returns the number of complete items read, which is what the final i / size expresses in this simplified illustration.)
In other words, fread reads a bunch of characters as if by repeatedly calling getc(). So obviously this raises the question of how getc works.
What you have to know is that FILE * points to a structure which, one way or another, contains a buffer of some (not necessarily all) of the file's characters read into memory. So, in pseudocode, getc() looks like this:
int getc(FILE *fp)
{
    if(fp->buffer is empty) {
        fill fp->buffer by reading more characters from underlying file;
        if(that resulted in end-of-file)
            return EOF;
    }
    return(next character from fp->buffer);
}
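To make that concrete, here is a minimal sketch of what such a buffered structure could look like on a POSIX system (the struct and its field names are invented for illustration; a real FILE is considerably more complicated):

#include <unistd.h>   /* read() */

#define BUFSZ 4096

struct my_file {
    int fd;                   /* descriptor from open() */
    unsigned char buf[BUFSZ]; /* some (not all) of the file's characters */
    size_t pos, len;          /* next unread byte; bytes currently in buf */
};

int my_getc(struct my_file *fp)
{
    if (fp->pos >= fp->len) {                     /* buffer is empty */
        ssize_t n = read(fp->fd, fp->buf, BUFSZ); /* refill from the file */
        if (n <= 0)
            return -1;                            /* end-of-file (or error) */
        fp->len = (size_t)n;
        fp->pos = 0;
    }
    return fp->buf[fp->pos++];                    /* next character from buf */
}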

The answer to the question,
"how does fread() work?"
is basically
"it asks your operating system to read the file for you."
More or less the sole purpose of an operating system kernel is to perform actions like this on your behalf. The kernel hosts the device drivers for the disks and file systems, and is able to fetch data for your program no matter what the file is stored on (e.g. a FAT32 formatted HDD, a network share, etc).
The way in which fread() asks your operating system to fetch data from a file varies slightly between OS and CPU. Back in the good old days of MS-DOS, the fread() function would load up various parameters (calculated from the parameters your program gave to fread()) into CPU registers, and then raise an interrupt. The interrupt handler, which was actually part of MS-DOS, would then go and fetch the requested data, and place it in a given place in memory. The registers to be loaded and the interrupt to raise were all specified by the MS-DOS manuals. The parameters you pass to fread() are abstractions of those needed by the system call.
This is what's known as making a system call. Every operating system has a system calling interface. Libraries like glibc on Linux provide handy functions like fread() (which is part of the standard C library), and make the system call for you (which is not standardised between operating systems).
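On Linux, for instance, you can bypass stdio and invoke that system call yourself through the POSIX wrappers. This sketch is not how fread is actually written, but it shows the layer fread sits on top of ("example.txt" is a hypothetical file):

#include <fcntl.h>    /* open */
#include <unistd.h>   /* read, close */

int main(void)
{
    char buf[256];
    int fd = open("example.txt", O_RDONLY);
    if (fd < 0)
        return 1;

    /* read() is the thin wrapper around the system call that fread()
       ultimately relies on; it returns the number of bytes obtained. */
    ssize_t n = read(fd, buf, sizeof buf);

    close(fd);
    return n < 0;
}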
Note that this means that glibc is not a fundamental part of the operating system. It's just a library of routines that implements the C standard library around the system calls that Linux provides. This means you can use an alternative C library. For example, Android does not use glibc, even though it has a Linux kernel.
Similarly on Windows. Software on Windows (C, C++, the .NET runtime, etc.) is written against the Win32 API (implemented in DLLs such as kernel32.dll). The difference on Windows is that the NT kernel's system calling interface is not published; we don't officially know what it is.
This leads to some interesting things.
WINE on Linux recreates the Win32 API DLLs, not the NT kernel system call interface.
Windows Subsystem for Linux on Windows 10 does recreate the Linux system calling interface (which is possible because it is public knowledge).
Solaris, QNX and FreeBSD pull the same trick: they can run Linux binaries by recreating the Linux system calling interface.
Even more oddly, it looks like Microsoft has built an NT kernel system interface shim for Linux (i.e. the thing that WINE hasn't done) to allow MS SQL Server to run on Linux; in effect, a Windows subsystem for Linux. They've not given this away.

Related

Can I adapt a function that writes to disk to write to memory

I have third-party library with a function that does some computation on the specified data, and writes the results to a file specified by file name:
int manipulateAndWrite(const char *filename,
                       const FOO_DATA *data);
I cannot change this function, or reimplement the computation in my own function, because I do not have the source.
To get the results, I currently need to read them from the file. I would prefer to avoid the write to and read from the file, and obtain the results into a memory buffer instead.
Can I pass a filepath that indicates writing to memory instead of a filesystem?
Yes, you have several options, although only the first suggestion below is supported by POSIX. The rest of them are OS-specific, and may not be portable across all POSIX systems, although I do believe they work on all POSIXy systems.
You can use a named pipe (FIFO), and have a helper thread read from it concurrently to the writer function.
Because there is no file per se, the overhead is just the syscalls (write and read); basically just the overhead of interprocess communication, nothing to worry about. To conserve resources, do create the helper thread with a small stack (using pthread_attr_setstacksize() etc.), as the default stack size tends to be huge (on the order of several megabytes); 2*PTHREAD_STACK_MIN should be plenty for helper threads.
You should ensure the named pipe is in a safe directory, accessible only to the user running the process, for example.
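A minimal sketch of this approach (error handling mostly omitted; manipulateAndWrite comes from the hypothetical third-party library, and the FIFO path is invented):

#include <fcntl.h>
#include <pthread.h>
#include <sys/stat.h>   /* mkfifo */
#include <unistd.h>

/* Helper thread: drains the FIFO while the library function writes into it. */
static void *drain_fifo(void *arg)
{
    const char *path = arg;
    int fd = open(path, O_RDONLY);   /* blocks until the writer opens its end */
    char chunk[4096];
    ssize_t n;
    while ((n = read(fd, chunk, sizeof chunk)) > 0) {
        /* append chunk[0..n) to your in-memory result buffer here */
    }
    close(fd);
    return NULL;
}

/* Usage sketch:
 *   mkfifo("/some/safe/dir/result.fifo", 0600);
 *   pthread_t t;
 *   pthread_create(&t, NULL, drain_fifo, "/some/safe/dir/result.fifo");
 *   manipulateAndWrite("/some/safe/dir/result.fifo", data);
 *   pthread_join(t, NULL);
 *   unlink("/some/safe/dir/result.fifo");
 */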
In many POSIXy systems, you can create a pipe or a socket pair, and access it via /dev/fd/N, where N is the descriptor number in decimal. (In Linux, /proc/self/fd/N also works.) This is not mandated by POSIX, so may not be available on all systems, but most do support it.
This way, there is no actual file per se, and the function writes to the pipe or socket. If the data written by the function is at most PIPE_BUF bytes, you can simply read the data from the pipe afterwards; otherwise, you do need to create a helper thread to read from the pipe or socket concurrently to the function, or the write will block.
In this case, too, the overhead is minimal.
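A sketch of the pipe-plus-/dev/fd variant (assuming a hypothetical library header declaring FOO_DATA and manipulateAndWrite; this simple form is only safe if the result fits in the pipe, otherwise use a helper thread as described above):

#include <stdio.h>
#include <unistd.h>
#include "foolib.h"   /* hypothetical header for FOO_DATA, manipulateAndWrite */

int capture_via_pipe(const FOO_DATA *data, char *out, size_t outsize, ssize_t *outlen)
{
    int fds[2];
    char path[32];

    if (pipe(fds) == -1)
        return -1;

    /* Name the write end so the library can open() it like a file. */
    snprintf(path, sizeof path, "/dev/fd/%d", fds[1]);
    int rc = manipulateAndWrite(path, data);

    close(fds[1]);                         /* writer closed: reader will see EOF */
    *outlen = read(fds[0], out, outsize);  /* only safe for small results */
    close(fds[0]);
    return rc;
}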
On ELF-based POSIXy systems (basically all), you can interpose the open(), write(), and close() syscalls or C library functions.
(In Linux, there are two basic approaches, one using the linker --wrap, and one using dlsym(). Both work fine for this particular case. This ability to interpose functions is based on how ELF binaries are linked at run time, and is not directly related to POSIX.)
You first set up the interposing functions, so that open() detects if the filename matches your special "in-memory" file, and returns a dedicated descriptor number for it. (You may also need to interpose other functions, like ftruncate() or lseek(), depending on what the function actually does; in Linux, you can run the binary under strace to examine which syscalls it actually uses.)
When write() is called with the dedicated descriptor number, you simply memcpy() it to a memory buffer. You'll need to use global variables to describe the allocated size, size used, and the pointer to the memory buffer, and probably be prepared to resize/grow the buffer if necessary.
When close() is called with the dedicated descriptor number, you know the memory buffer is complete, and the contents ready for processing.
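A heavily simplified sketch of the dlsym() flavour (the magic path, descriptor number, and fixed-size buffer are all invented for illustration; a real interposer must also handle O_CREAT modes, a growing buffer, and whatever other syscalls the library uses):

#define _GNU_SOURCE
#include <dlfcn.h>
#include <string.h>
#include <unistd.h>

#define MAGIC_PATH "/in-memory/result"    /* invented "file name" */
#define MAGIC_FD   1000                   /* dedicated fake descriptor number */

static char   g_buf[1 << 20];             /* captured output (fixed size here) */
static size_t g_used;

int open(const char *path, int flags, ...)
{
    if (strcmp(path, MAGIC_PATH) == 0) {
        g_used = 0;
        return MAGIC_FD;                  /* never hand out a real descriptor */
    }
    int (*real_open)(const char *, int, ...);
    *(void **)&real_open = dlsym(RTLD_NEXT, "open");
    return real_open(path, flags);
}

ssize_t write(int fd, const void *buf, size_t len)
{
    if (fd == MAGIC_FD) {                 /* capture instead of writing */
        if (g_used + len > sizeof g_buf)
            len = sizeof g_buf - g_used;  /* a real interposer would grow the buffer */
        memcpy(g_buf + g_used, buf, len);
        g_used += len;
        return (ssize_t)len;
    }
    ssize_t (*real_write)(int, const void *, size_t);
    *(void **)&real_write = dlsym(RTLD_NEXT, "write");
    return real_write(fd, buf, len);
}

int close(int fd)
{
    if (fd == MAGIC_FD)
        return 0;                         /* g_buf/g_used now hold the result */
    int (*real_close)(int);
    *(void **)&real_close = dlsym(RTLD_NEXT, "close");
    return real_close(fd);
}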
You can use a temporary file on a RAM filesystem. While the data is technically written to a file and read back from it, the operations involve RAM only.
You should arrange for a default path to one to be set at compile time, and for individual users to be able to override that for their personal needs, for example via an environment variable (YOURAPP_TMPDIR?).
There is no need for the application to try and look for a RAM-based filesystem: choices like this are, and should be, up to the user. The application should not even care what kind of filesystem the file is on, and should just use the specified directory.
Alternatively, you could avoid that library function altogether. Take a look at this on how to write to in-memory files:
Is it possible to create a C FILE object to read/write in memory

Should fsync be used after each fclose?

On a Linux (Ubuntu Platform) device I use a file to save mission critical data.
From time to time (once in about 10,000 cases), the file gets corrupted for unspecified reasons.
In particular, the file is truncated (instead of some kbyte it has only about 100 bytes).
Now, in the sequence of the software
the file is opened,
modified and
closed.
Immediately after that, the file might be opened again (4), and something else is being done.
Until now I hadn't realised that fflush (which is called upon fclose) doesn't write to the file system, but only to an intermediate buffer. Could it be that the time between 3) and 4) is too short and the change from 2) is not yet written to disc, so when I reopen with 4) I get a truncated file which, when it is closed again, leads to permanent loss of that data?
Should I use fsync() in that case after each file write?
What do I have to consider for power outages? It is not unlikely that the data corruption is related to power down.
fwrite writes to an internal buffer first, then sometimes (at fflush or fclose, or when the buffer is full) calls the OS function write.
The OS is also doing some buffering and writes to the device might get delayed.
fsync is assuring that the OS is writing its buffers to the device.
In your case, where you open-write-close, you don't need to fsync. The OS knows which parts of the file are not yet written to the device. So if a second process wants to read the file, the OS knows it has the file content in memory and will not read stale content from the device.
Of course when thinking about power outage it might (depending on the circumstances) be a good idea to fsync to be sure that the file content is written to the device (which as Andrew points out, does not necessarily mean that the content is written to disc, because the device itself might do buffering).
Until now I hadn't realised that fflush (which is called upon fclose) doesn't write to the file system, but only to an intermediate buffer. Could it be that the time between 3) and 4) is too short and the change from 2) is not yet written to disc, so when I reopen with 4) I get a truncated file which, when it is closed again, leads to permanent loss of that data?
No. A system that behaved that way would be unusable.
Should I use fsync() in that case after each file write?
No, that will just slow things down.
What do I have to consider for power outages? It is not unlikely that the data corruption is related to power down.
Use a filesystem that's resistant to such corruption. Possibly even consider using a safer modification algorithm such as writing out a new version of the file with a different name, syncing, and then renaming it on top of the existing file.
If what you're doing is something like this:
FILE *f = fopen("filename", "w");
while(...) {
    fwrite(data, n, m, f);
}
fclose(f);
Then what can happen is that another process can open the file while it's being written (between the open and write system calls that the C library runs behind the scenes, or between separate write calls). Then they would see only a partially written file.
The workaround to that is to write the file with another name, and rename() it over the actual filename. The downside is that you need double the amount of space.
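A sketch of that write-then-rename pattern (the .tmp suffix and function name are invented; for full crash safety some filesystems also want an fsync() on the containing directory, as noted below):

#include <stdio.h>
#include <unistd.h>   /* fsync */

int save_atomically(const char *path, const void *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);

    FILE *f = fopen(tmp, "wb");
    if (!f)
        return -1;
    if (fwrite(data, 1, len, f) != len) { fclose(f); return -1; }

    fflush(f);           /* stdio buffer -> kernel */
    fsync(fileno(f));    /* kernel buffers -> device */
    fclose(f);

    /* rename() atomically replaces the old file: readers see either the
       complete old contents or the complete new contents, never a mix. */
    return rename(tmp, path);
}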
If you are sure the opening of the file happens only after the write, then that cannot happen. But then there has to be some synchronization between the writer and reader so that the latter does not start reading too early.
fsync() tells the system to write the changes to the actual storage. It is a bit of an oddball among the POSIX system calls, since essentially nothing is specified about a system's behaviour if it crashes, and a crash is the only situation where it matters whether some data is on the actual storage rather than in some cache. Even with fsync() it's still possible for the storage hardware to cache the data, or for an unrelated corruption to trash the file system when the system crashes.
If you're happy to let the OS do its job and don't need to think about crashes, you can ignore fsync() completely and just let the data be written when the OS sees fit. If you do care about crashes, you have to look more closely into what guarantees the filesystem makes (or doesn't); e.g. at least at some point, the ext* developers pretty much demanded that applications also fsync() the containing directory.

What is stdin in C language?

I want to build my own scanf function. The basic idea is to read data from one memory address and save it to another memory address.
What is stdin? Is it a memory address, like 000ffaa?
If it is a memory address, what exactly is it, so that I can build my own scanf function? Thanks!
No, stdin is not "a memory address".
It's an I/O stream, basically an operating-system level abstraction that allows data to be read (or written, in the case of stdout).
You need to use the proper stream-oriented I/O functions to read from the stream.
Of course you can read from RAM too, so it's best to structure your function around a callback that reads one character; then you can adapt that function to read either from RAM or from stdin.
Something like:
int my_scanf(int (*getchar_callback)(void *state), void *state, const char *fmt, ...);
is usually reasonable. The state pointer is some user-defined state that the getchar_callback() function requires, and my_scanf() passes it along on every call.
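For example, a callback that reads from an in-memory string could look like this (the names are invented; my_scanf() itself would hold the format-parsing logic):

struct string_state {
    const char *s;    /* the in-memory "input stream" */
};

/* Matches the getchar_callback signature: next character, or -1 at end. */
int string_getchar(void *state)
{
    struct string_state *ss = state;
    return *ss->s ? (unsigned char)*ss->s++ : -1;
}

/* Usage sketch:
 *   struct string_state ss = { "42 hello" };
 *   my_scanf(string_getchar, &ss, "%d %s", &num, word);
 */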
stdin is an "input stream", which is an abstract term for something that takes input from the user or from a file. It is an abstraction layer sitting on top of the actual file handling and I/O. The purpose of streams is mainly to make your code portable between different systems.
Reading/writing to memory is much more low-level and has nothing to do with streams as such. In order to poke at a stream's internals in a meaningful way, you would have to know how a particular C library implements the stream, which may not be public information. In some cases, as on Windows, streams are defined by the OS itself and are accessed through API calls.
If you are looking to build your own scanf function, you would have to look into specific API functions for a specific OS, then build your own abstraction layer on top of those.
On Unix, everything is a file:
https://en.wikipedia.org/wiki/Everything_is_a_file
Or, as it is also put:
Everything is a file descriptor
On a Unix system you can find /dev/stdin, which is a symbolic link to /dev/fd/0, which is a character special file.

What does opening a file actually do?

In all programming languages (that I use at least), you must open a file before you can read or write to it.
But what does this open operation actually do?
Manual pages for typical functions don't actually tell you anything other than that it 'opens a file for reading/writing':
http://www.cplusplus.com/reference/cstdio/fopen/
https://docs.python.org/3/library/functions.html#open
Obviously, through usage of the function you can tell it involves creation of some kind of object which facilitates accessing a file.
Another way of putting this would be, if I were to implement an open function, what would it need to do on Linux?
In almost every high-level language, the function that opens a file is a wrapper around the corresponding kernel system call. It may do other fancy stuff as well, but in contemporary operating systems, opening a file must always go through the kernel.
This is why the arguments of the fopen library function, or Python's open, closely resemble the arguments of the open(2) system call.
In addition to opening the file, these functions usually set up a buffer that will subsequently be used with the read/write operations. The purpose of this buffer is to ensure that whenever you want to read N bytes, the corresponding library call will return N bytes, regardless of whether the calls to the underlying system calls return less.
I am not actually interested in implementing my own function; just in understanding what the hell is going on...'beyond the language' if you like.
In Unix-like operating systems, a successful call to open returns a "file descriptor", which is merely an integer in the context of the user process. This descriptor is subsequently passed to any call that interacts with the opened file, and after calling close on it, the descriptor becomes invalid.
It is important to note that the call to open acts like a validation point at which various checks are made. If not all of the conditions are met, the call fails by returning -1 instead of the descriptor, and the kind of error is indicated in errno. The essential checks are:
Whether the file exists;
Whether the calling process is privileged to open this file in the specified mode. This is determined by matching the file permissions, owner ID and group ID to the respective ID's of the calling process.
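You can watch these checks surface as errno values with a few lines of code (a minimal sketch; "secret.txt" is a hypothetical path):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("secret.txt", O_RDONLY);
    if (fd == -1) {
        /* ENOENT: the file does not exist; EACCES: permission denied */
        printf("open failed: %s\n", strerror(errno));
        return 1;
    }
    /* ... use fd with read(), lseek(), etc. ... */
    close(fd);
    return 0;
}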
In the context of the kernel, there has to be some kind of mapping between the process' file descriptors and the physically opened files. The internal data structure that is mapped to the descriptor may contain yet another buffer that deals with block-based devices, or an internal pointer that points to the current read/write position.
I'd suggest you take a look at this guide through a simplified version of the open() system call. It uses the following code snippet, which is representative of what happens behind the scenes when you open a file.
0  int sys_open(const char *filename, int flags, int mode) {
1      char *tmp = getname(filename);
2      int fd = get_unused_fd();
3      struct file *f = filp_open(tmp, flags, mode);
4      fd_install(fd, f);
5      putname(tmp);
6      return fd;
7  }
Briefly, here's what that code does, line by line:
Allocate a block of kernel-controlled memory and copy the filename into it from user-controlled memory.
Pick an unused file descriptor, which you can think of as an integer index into a growable list of currently open files. Each process has its own such list, though it's maintained by the kernel; your code can't access it directly. An entry in the list contains whatever information the underlying filesystem will use to pull bytes off the disk, such as inode number, process permissions, open flags, and so on.
The filp_open function has the implementation
struct file *filp_open(const char *filename, int flags, int mode) {
    struct nameidata nd;
    open_namei(filename, flags, mode, &nd);
    return dentry_open(nd.dentry, nd.mnt, flags);
}
which does two things:
Use the filesystem to look up the inode (or more generally, whatever sort of internal identifier the filesystem uses) corresponding to the filename or path that was passed in.
Create a struct file with the essential information about the inode and return it. This struct becomes the entry in that list of open files that I mentioned earlier.
Store ("install") the returned struct into the process's list of open files.
Free the allocated block of kernel-controlled memory.
Return the file descriptor, which can then be passed to file operation functions like read(), write(), and close(). Each of these will hand off control to the kernel, which can use the file descriptor to look up the corresponding file pointer in the process's list, and use the information in that file pointer to actually perform the reading, writing, or closing.
If you're feeling ambitious, you can compare this simplified example to the implementation of the open() system call in the Linux kernel, a function called do_sys_open(). You shouldn't have any trouble finding the similarities.
Of course, this is only the "top layer" of what happens when you call open() - or more precisely, it's the highest-level piece of kernel code that gets invoked in the process of opening a file. A high-level programming language might add additional layers on top of this. There's a lot that goes on at lower levels. (Thanks to Ruslan and pjc50 for explaining.) Roughly, from top to bottom:
open_namei() and dentry_open() invoke filesystem code, which is also part of the kernel, to access metadata and content for files and directories. The filesystem reads raw bytes from the disk and interprets those byte patterns as a tree of files and directories.
The filesystem uses the block device layer, again part of the kernel, to obtain those raw bytes from the drive. (Fun fact: Linux lets you access raw data from the block device layer using /dev/sda and the like.)
The block device layer invokes a storage device driver, which is also kernel code, to translate from a medium-level instruction like "read sector X" to individual input/output instructions in machine code. There are several types of storage device drivers, including IDE, (S)ATA, SCSI, Firewire, and so on, corresponding to the different communication standards that a drive could use. (Note that the naming is a mess.)
The I/O instructions use the built-in capabilities of the processor chip and the motherboard controller to send and receive electrical signals on the wire going to the physical drive. This is hardware, not software.
On the other end of the wire, the disk's firmware (embedded control code) interprets the electrical signals to spin the platters and move the heads (HDD), or read a flash ROM cell (SSD), or whatever is necessary to access data on that type of storage device.
This may also be somewhat incorrect due to caching. :-P Seriously though, there are many details that I've left out - a person (not me) could write multiple books describing how this whole process works. But that should give you an idea.
Any file system or operating system you want to talk about is fine by me. Nice!
On a ZX Spectrum, initializing a LOAD command will put the system into a tight loop, reading the Audio In line.
Start-of-data is indicated by a constant tone, and after that a sequence of long/short pulses follow, where a short pulse is for a binary 0 and a longer one for a binary 1 (https://en.wikipedia.org/wiki/ZX_Spectrum_software). The tight load loop gathers bits until it fills a byte (8 bits), stores this into memory, increases the memory pointer, then loops back to scan for more bits.
Typically, the first thing a loader would read is a short, fixed format header, indicating at least the number of bytes to expect, and possibly additional information such as file name, file type and loading address. After reading this short header, the program could decide whether to continue loading the main bulk of the data, or exit the loading routine and display an appropriate message for the user.
An End-of-file state could be recognized by receiving as many bytes as expected (either a fixed number of bytes, hardwired in the software, or a variable number such as indicated in a header). An error was thrown if the loading loop did not receive a pulse in the expected frequency range for a certain amount of time.
A little background on this answer
The procedure described loads data from a regular audio tape - hence the need to scan Audio In (it connected with a standard plug to tape recorders). A LOAD command is technically the same as open a file - but it's physically tied to actually loading the file. This is because the tape recorder is not controlled by the computer, and you cannot (successfully) open a file but not load it.
The "tight loop" is mentioned because (1) the CPU, a Z80-A (if memory serves), was really slow: 3.5 MHz, and (2) the Spectrum had no internal clock! That means that it had to accurately keep count of the T-states (instruction times) for every. single. instruction. inside that loop, just to maintain the accurate beep timing.
Fortunately, that low CPU speed had the distinct advantage that you could calculate the number of cycles on a piece of paper, and thus the real world time that they would take.
It depends on the operating system what exactly happens when you open a file. Below I describe what happens in Linux, as it gives you an idea of what opening a file involves, and you can check the source code if you are interested in more detail. I am not covering permissions, as it would make this answer too long.
In Linux every file is recognised by a structure called an inode. Each inode has a unique number, and every file gets exactly one inode. The inode stores metadata for a file, for example the file size, file permissions, timestamps and pointers to disk blocks, but not the actual file name itself. Each directory entry contains a file name and the corresponding inode number for lookup.
When you open a file, assuming you have the relevant permissions, a file descriptor is created using the unique inode number associated with the file name. As many processes/applications can point to the same file, the inode has a link field that maintains the total count of links to the file: if a file is present in one directory its link count is one, and if it also has a hard link its link count will be two. (Opening the file additionally bumps an in-memory reference count, which is kept separately from the on-disk link count.)
Bookkeeping, mostly. This includes various checks like "Does the file exist?" and "Do I have the permissions to open this file for writing?".
But that's all kernel stuff - unless you're implementing your own toy OS, there isn't much to delve into (if you are, have fun - it's a great learning experience). Of course, you should still learn all the possible error codes you can receive while opening a file, so that you can handle them properly - but those are usually nice little abstractions.
The most important part on the code level is that it gives you a handle to the open file, which you use for all of the other operations you do with a file. Couldn't you use the filename instead of this arbitrary handle? Well, sure - but using a handle gives you some advantages:
The system can keep track of all the files that are currently open, and prevent them from being deleted (for example).
Modern OSs are built around handles - there's tons of useful things you can do with handles, and all the different kinds of handles behave almost identically. For example, when an asynchronous I/O operation completes on a Windows file handle, the handle is signalled - this allows you to block on the handle until it's signalled, or to complete the operation entirely asynchronously. Waiting on a file handle is exactly the same as waiting on a thread handle (signalled e.g. when the thread ends), a process handle (again, signalled when the process ends), or a socket (when some asynchronous operation completes). Just as importantly, handles are owned by their respective processes, so when a process is terminated unexpectedly (or the application is poorly written), the OS knows what handles it can release.
Most operations are positional - you read from the last position in your file. By using a handle to identify a particular "opening" of a file, you can have multiple concurrent handles to the same file, each reading from their own places. In a way, the handle acts as a moveable window into the file (and a way to issue asynchronous I/O requests, which are very handy).
Handles are much smaller than file names. A handle is usually the size of a pointer, typically 4 or 8 bytes. On the other hand, filenames can have hundreds of bytes.
Handles allow the OS to move the file, even though applications have it open - the handle is still valid, and it still points to the same file, even though the file name has changed.
There's also some other tricks you can do (for example, share handles between processes to have a communication channel without using a physical file; on unix systems, files are also used for devices and various other virtual channels, so this isn't strictly necessary), but they aren't really tied to the open operation itself, so I'm not going to delve into that.
At the core of it, when opening for reading, nothing fancy actually needs to happen. All it needs to do is check that the file exists and that the application has enough privileges to read it, then create a handle on which you can issue read commands to the file.
It's on those commands that actual reading will get dispatched.
The OS will often get a head start on reading by starting a read operation to fill the buffer associated with the handle. Then, when you actually do the read, it can return the contents of the buffer immediately rather than needing to wait on disk IO.
For opening a new file for writing, the OS will need to add an entry in the directory for the new (currently empty) file. And again a handle is created on which you can issue the write commands.
Basically, a call to open needs to find the file, and then record whatever it needs to so that later I/O operations can find it again. That's quite vague, but it will be true on all the operating systems I can immediately think of. The specifics vary from platform to platform. Many answers already on here talk about modern-day desktop operating systems. I've done a little programming on CP/M, so I will offer my knowledge about how it works on CP/M (MS-DOS probably works in the same way, but for security reasons, it is not normally done like this today).
On CP/M you have a thing called the FCB (as you mentioned C, you could call it a struct; it really is a 35-byte contiguous area in RAM containing various fields). The FCB has fields to write the file-name and a (4-bit) integer identifying the disk drive. Then, when you call the kernel's Open File, you pass a pointer to this struct by placing it in one of the CPU's registers. Some time later, the operating system returns with the struct slightly changed. Whatever I/O you do to this file, you pass a pointer to this struct to the system call.
What does CP/M do with this FCB? It reserves certain fields for its own use, and uses these to keep track of the file, so you had better not ever touch them from inside your program. The Open File operation searches through the table at the start of the disk for a file with the same name as what's in the FCB (the '?' wildcard character matches any character). If it finds a file, it copies some information into the FCB, including the file's physical location(s) on the disk, so that subsequent I/O calls ultimately call the BIOS which may pass these locations to the disk driver. At this level, specifics vary.
In simple terms, when you open a file you are asking the operating system to make the file available for processing, so that its contents can be copied from secondary storage (the hard disk) into RAM as you read them. The reason for this staging through RAM is that you cannot efficiently process the file directly from the hard disk, because of its extremely slow speed compared to RAM.
The open command generates a system call which sets up the bookkeeping that lets subsequent calls copy the contents of the file from secondary storage (hard disk) to primary storage (RAM).
And we 'close' a file so that the modified contents are flushed back to the original file on the hard disk. :)
Hope that helps.

Is there a better way to manage file pointer in C?

Is it better to use fopen() and fclose() at the beginning and end of every function that uses the file, or is it better to pass the file pointer to each of these functions? Or even to make the file pointer a member of the struct the file is related to?
I have two projects going on and each one use one method (because I thought about passing the file pointer after I began the first one).
When I say better, I mean in terms of speed and/or readability. What's best practice?
Thank you !
It depends. You certainly should document what function is fopen(3)-ing a FILE handle and what function is expecting to fclose(3) it.
You might put the FILE* in a struct, but then you should have a convention about who should read and/or write the file, when, and who closes it; see the sketch below.
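For example, if the struct owns the file, pairing open/close helper functions makes that convention explicit (a minimal sketch; all names are invented):

#include <stdio.h>

struct logger {
    FILE *fp;   /* owned: opened by logger_open, closed by logger_close */
};

int logger_open(struct logger *lg, const char *path)
{
    lg->fp = fopen(path, "a");
    return lg->fp ? 0 : -1;
}

void logger_write(struct logger *lg, const char *msg)
{
    fprintf(lg->fp, "%s\n", msg);   /* every caller reuses the one open handle */
}

void logger_close(struct logger *lg)
{
    if (lg->fp) {
        fclose(lg->fp);
        lg->fp = NULL;
    }
}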
Be aware that open files are a somewhat expensive resource in a process (= your running program). BTW, this is also operating system and file system specific. And FILE handles are buffered; see fflush(3) & setvbuf(3).
On small systems, the maximal number of fopen-ed file handles could be as small as a few dozen. On a current Linux desktop, a process could have a few thousand opened file descriptors (each FILE keeps one internally, along with its buffers). In any case, they are a rather precious and scarce resource (on Linux, you might limit them with setrlimit(2)).
Be aware that disk IO is very slow w.r.t. CPU.
