Printing FAT32 file system information by reading a device file - C

I want to write a program that prints information about a FAT32 file system, such as the number of sectors per cluster, by reading a device file, for example a device file called fat32.disk.
How can I do this in C? Should I read the file using fopen("fat32.disk", "r")?
Then how can I locate the information stored in the Volume ID of fat32.disk? Something like sectors_per_cluster = 0x0D in my code?
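A minimal sketch of one way to do it, assuming fat32.disk begins with a standard FAT32 boot sector: open the image in binary mode ("rb"), read the first 512 bytes, and pull the Volume ID fields out of the BIOS Parameter Block at their fixed offsets (bytes per sector at 0x0B, sectors per cluster at 0x0D).

/* Sketch: read the FAT32 boot sector and print sectors per cluster.
 * Offsets follow the standard BPB layout: bytes per sector at 0x0B
 * (2 bytes, little-endian), sectors per cluster at 0x0D (1 byte).
 * "fat32.disk" is the example image name from the question. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *fp = fopen("fat32.disk", "rb");   /* "rb": read raw bytes, no text translation */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    unsigned char boot[512];
    if (fread(boot, 1, sizeof boot, fp) != sizeof boot) {
        fprintf(stderr, "short read of boot sector\n");
        fclose(fp);
        return 1;
    }
    fclose(fp);

    uint16_t bytes_per_sector    = boot[11] | (boot[12] << 8);  /* offset 0x0B */
    uint8_t  sectors_per_cluster = boot[13];                    /* offset 0x0D */

    printf("bytes per sector:    %u\n", bytes_per_sector);
    printf("sectors per cluster: %u\n", sectors_per_cluster);
    return 0;
}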

Related

Does the stat()/fstat() function ultimately open or read the file to get attributes?

In my program there is a function that frequently calls stat() to get the attributes of a file in flash storage. Sometimes, after a power-off and reboot, the contents of the file are lost. I noticed that stat() ultimately calls the file system driver in the Linux kernel.
My questions are: will the Linux kernel file system open or read the file to get the file attributes? Is it possible for a power-off during stat() or fstat() to corrupt the file in flash?
All the stat() call does is retrieve the contents of the file's i-node; the file itself isn't touched. However, the file's i-node will be in memory, and if the file was updated in any way [even by being held open by this or another process], the mtime and similar fields will need to be updated, and the i-node will get written back, perhaps wrongly. Poof! No file.
But this behavior is not unique to flash.
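For reference, a minimal example of the call being discussed; it only fills a struct stat from the i-node metadata and never reads the file's data ("somefile" is a placeholder path).

/* Sketch: stat() returns i-node metadata only; no file data is read. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("somefile", &st) == -1) {   /* "somefile" is a placeholder */
        perror("stat");
        return 1;
    }
    printf("size:  %lld bytes\n", (long long)st.st_size);
    printf("mtime: %lld\n", (long long)st.st_mtime);
    return 0;
}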

How to change/fake the 'file size' on the fly in C code

I am working on a user-level filesystem using FUSE, and my requirement is this:
When I issue a read for File A, I want to superimpose the contents of another file (say File B) and present them as File A's contents.
I have already achieved this by modifying the buffer: I capture the request in my FUSE read handler, internally read File B, and copy its contents into the buffer passed in for File A, without making any actual read call for File A. So the read for File A returns with File B's contents copied into its buffer.
Also, File A is smaller than File B.
When checked with a debugger, File A's buffer contents look fine (they contain the whole of File B), but when File A is displayed (say with Vi), I can see only as many characters as File A's size. Since File B is larger, the full data is never shown, even though the buffer returned for File A (with File B's data copied in) has more to display. This is because File A's size is smaller, and the display stops once File A's character count is reached.
I tried looking into struct stat, but it is read-only and shows me File A's size, which is smaller than File B's.
struct stat stat1;
stat(fileA, &stat1);
So my question is: how do I fake/change the size of File A on the fly, so that all the data (superimposed from the larger File B) can be displayed?
You won't be able to do this because many applications request file size before reading the file and then read only the reported amount of data.
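To illustrate the point, here is a sketch of the pattern a typical reader follows: it stats the file first and then reads only the reported number of bytes, so anything past File A's advertised size is never requested ("fileA" is the placeholder name from the question).

/* Sketch of a typical reader: size the buffer from stat(), then read
 * at most that many bytes -- data past the reported size is ignored. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("fileA", &st) == -1) {
        perror("stat");
        return 1;
    }

    char *buf = malloc(st.st_size);          /* sized from the reported size */
    FILE *fp  = fopen("fileA", "rb");
    if (buf == NULL || fp == NULL) {
        perror("open/alloc");
        return 1;
    }

    size_t got = fread(buf, 1, st.st_size, fp);  /* reads at most st.st_size bytes */
    fwrite(buf, 1, got, stdout);

    fclose(fp);
    free(buf);
    return 0;
}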

Is the file position per inode?

I am confused by the concept of the file position as used by lseek. Is this file position maintained at the inode level, or is it a simple variable that could have different values for different processes working on the same file?
Per the lseek docs, the file position is associated with the open file pointed to by a file descriptor, i.e. the thing that is handed to you by open. Because of functions like dup and fork, multiple descriptors can point to a single description, but it's the description that holds the location cursor.
Think about it: if it were associated with the inode, then multiple processes could not access a file in a sensible manner, since every access to that file by one process would affect the others.
Thus, a single process can track as many different file positions as it has file descriptors for a given file.
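A small sketch of that behaviour, assuming a file named data.txt exists: two independent open() calls get independent offsets, while a dup() of the first descriptor shares its open file description and therefore its offset.

/* Sketch: separate open() calls have separate offsets; dup() shares one. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd1 = open("data.txt", O_RDONLY);   /* "data.txt" is a placeholder */
    int fd2 = open("data.txt", O_RDONLY);   /* independent description */
    int fd3 = dup(fd1);                     /* shares fd1's description */
    if (fd1 == -1 || fd2 == -1 || fd3 == -1) {
        perror("open/dup");
        return 1;
    }

    lseek(fd1, 100, SEEK_SET);

    printf("fd1 offset: %lld\n", (long long)lseek(fd1, 0, SEEK_CUR)); /* 100 */
    printf("fd2 offset: %lld\n", (long long)lseek(fd2, 0, SEEK_CUR)); /*   0 */
    printf("fd3 offset: %lld\n", (long long)lseek(fd3, 0, SEEK_CUR)); /* 100 */

    close(fd1);
    close(fd2);
    close(fd3);
    return 0;
}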
It's not kept in the inode, but in the open file handle inside the kernel.
An inode is part of the on-disk layout of *nix-specific file systems; FAT32, for example, has no inodes, yet it is supported by Linux.
To understand the relation between file descriptors and open files, we need to examine three data structures:
the per-process file descriptor table
the system-wide table of open file descriptions
the file system i-node table
For each process, the kernel maintains a table of open file descriptors. Each entry in this table records information about a single file descriptor, including:
a set of flags controlling the operation of the file descriptor
a reference to the open file description
The kernel maintains a system-wide table of all open file descriptions. An open file description stores all information relating to an open file, including:
the current file offset (as updated by read() and write(), or explicitly modified using lseek())
status flags specified when opening the file
the file access mode (read-only, write-only, or read-write, as specified to open())
settings relating to signal-driven I/O, and
a reference to the i-node object for this file.
Reference: page 94, The Linux Programming Interface by Michael Kerrisk.
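A short sketch of the shared-offset behaviour described above, assuming a file named data.txt exists: after fork(), parent and child share one open file description, so a seek in the child is visible to the parent.

/* Sketch: after fork(), the file offset lives in the shared open file
 * description, so the child's lseek() is seen by the parent. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    int fd = open("data.txt", O_RDONLY);    /* "data.txt" is a placeholder */
    if (fd == -1) {
        perror("open");
        return 1;
    }

    pid_t pid = fork();
    if (pid == 0) {                 /* child: move the shared offset */
        lseek(fd, 50, SEEK_SET);
        _exit(0);
    }

    wait(NULL);                     /* parent: wait, then read the offset back */
    printf("offset after child's lseek: %lld\n",
           (long long)lseek(fd, 0, SEEK_CUR));   /* prints 50 */

    close(fd);
    return 0;
}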

Processing files in C under different operating systems

I am working with files in C. They are very, very large files.
I read and write to these files (they are just formatted text files).
I give no extension to them;
I just say fopen("filename","r").
Now a file has contents like
1 line1 100
2 line2 200
.
.
20000 line 20000 14567
The problem is that I am running it on Mac OS (Leopard, to be precise), and when I open the same file (no extension) on another OS (Windows, to be precise), the program fails to read the contents of the file.
I suppose it's because the formatting of the file differs.
Are there any solutions, such as a standard file format or extension, that do not conflict across operating systems?
If you're working with plaintext files, remember that in Unix and Unix-like OSes lines end with \n and in Windows they end with \r\n.
If you do a file transfer as plaintext between operating systems, your client may change the line endings to be compatible with your target OS. You can set the transfer to binary so that you get an exact byte representation of the original file.
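If you want the C reader itself to tolerate both conventions, one common approach (sketched below with a placeholder file name) is to strip a trailing '\r' after fgets(), so that both \n and \r\n line endings parse the same way.

/* Sketch: read lines and strip either Unix or Windows line endings. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *fp = fopen("filename", "r");   /* "filename" is a placeholder */
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof line, fp) != NULL) {
        size_t len = strlen(line);
        /* drop a trailing '\n' and, if present, the '\r' before it */
        while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
            line[--len] = '\0';
        printf("%s\n", line);
    }

    fclose(fp);
    return 0;
}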

Reading and writing 1000 bytes from a file in VB6

I am developing an application in VB6. In my application I am trying to copy various files into a single file. The problem is that I am trying to read 1000 bytes from the source file and write them to the target file in reverse order, then another 1000 bytes, and so on until I reach the end of the source file. I did similar work in Java using a file pointer, but here I am not finding the solution. Please help.
This tutorial covers how to read from and write to binary files; there is a section about reading blocks of data from a file.
You could create a buffer for this purpose. Here is some code to get you started. (I don't have VB6 at the moment, so the code is not verified.)
Example Code:
Dim Buffer As String * 1000
Open "C:\Windows\FileName.txt" For Binary As #1
Get #1, 1, Buffer
Close #1
Moreover, in your case you will need to keep track of the position in the file:
Get #fileHandle, position, Buffer
Also use Put to write the read buffer to another file:
Put #fileHandle, position, Buffer
