Reading and writing 1000 bytes from a file in VB6 - file

I am developing an application in VB6. In my application I am trying to copy various files into a single file. The problem is that I need to read 1000 bytes from the source file and write them to the target file in reverse order, then another 1000 bytes, and so on until I reach the end of the source file. I did similar work in Java using a file pointer, but here I am not finding a solution. Please help.

This tutorial covers how to read from and write to binary files; it has a section about reading blocks of data from a file.

You could create a buffer for this purpose. Here is some code to get you started. (I don't have VB6 at the moment, so the code is not verified.)
Example Code:
Dim Buffer As String * 1000
Open "C:\Windows\FileName.txt" For Binary As #1
Get #1, 1, Buffer
Close #1
Moreover, in your case you will need to keep track of the position in the file:
Get #fileHandle, position, Buffer
Then use Put to write the buffer you just read to the other file:
Put #fileHandle, position, Buffer

Related

How to write in the middle of a file in C

Is it possible to write in the middle of a file? For example, I want to insert a string at the 5th position in the 2nd line of a file in C.
I'm not very familiar with the C functions related to handling files, so if someone could help me I would appreciate it.
I tried using fputs but I couldn't insert characters at the desired location.
Open a new output file.
Read the input file line by line (fgets), writing each line out to the new file as you read.
When you hit the place where you want to insert, write the new line(s).
Then carry on copying the old lines to the new file.
Close input and output.
Rename the output file to the input file name.
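Here is a minimal sketch of those steps; the file names input.txt/output.txt, the target line number, and the inserted text are placeholders for illustration:
#include <stdio.h>

// Copy input.txt to output.txt line by line, inserting new content when
// we reach the chosen line. File names, line number and inserted text
// are illustrative placeholders.
int main(void)
{
    FILE *in = fopen("input.txt", "r");
    FILE *out = fopen("output.txt", "w");
    if (!in || !out) { perror("fopen"); return 1; }

    char line[1024];
    int lineno = 0;
    int insert_before_line = 2;            /* where the new line(s) go */

    while (fgets(line, sizeof line, in)) {
        lineno++;
        if (lineno == insert_before_line)
            fputs("inserted text\n", out); /* write the new line(s) */
        fputs(line, out);                  /* carry on copying old lines */
    }

    fclose(in);
    fclose(out);
    /* rename the output file over the input file (on some platforms you
       may need to remove("input.txt") first) */
    rename("output.txt", "input.txt");
    return 0;
}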
Continuing from my comments above. Here's what I'd do:
Create two large, static char[] buffers of the same size--each large enough to store the largest file you could possibly ever need to read in (ex: 10 MiB). Ex:
#define MAX_FILE_SIZE_10_MIB (10*1024*1024)
static char buffer_file_in[MAX_FILE_SIZE_10_MIB];
static char buffer_file_out[MAX_FILE_SIZE_10_MIB];
Use fopen(filename, "r+") to open the file as read/update. See: https://cplusplus.com/reference/cstdio/fopen/. Read the chars one-by-one using fgetc() (see my file_load() function for how to use fgetc()) into the first large char buffer you created, buffer_file_in. Continue until you've read the whole file into that buffer.
Find the location of the place you'd like to do the insertion. Note: you could do this live as you read the file into buffer_file_in the first time by counting newline chars ('\n') to see what line you are on. Copy chars from buffer_file_in to buffer_file_out up to that point. Now, write your new contents into buffer_file_out at that point. Then, finish copying the rest of buffer_file_in into buffer_file_out after your inserted chars.
Seek to the beginning of the file with fseek(file_pointer, 0, SEEK_SET);
Write the buffer_file_out buffer contents into the file with fwrite().
Close the file with fclose().
There are some optimizations you could do here, such as storing the index where you want to begin your insertion and, rather than copying the chars up to that point into buffer_file_in, simply copying the remainder of the file after that point into buffer_file_in, then later seeking to that point and writing only your new contents plus the rest of the file. This avoids unnecessarily rewriting the part of the file before the insertion point.
(Probably preferred) you could also just copy the file and the changes you insert straight into buffer_file_out in one shot, then write that back to the file starting at the beginning of the file. This would be very similar to #pm100's approach, except using 1 file + 1 buffer rather than 2 files.
Look for other optimizations and reductions of redundancy as applicable.
My approach above uses 1 file and 1 or 2 buffers in RAM, depending on implementation. #pm100's approach uses 2 files and 0 buffers in RAM (very similar to what my 1 file and 1 buffer approach would look like), depending on implementation. Both approaches are valid.
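For illustration, a rough sketch of that single-file, one-buffer variant, assuming the whole file fits in the static buffer; the file name, byte offset, and inserted text are placeholders:
#include <stdio.h>
#include <string.h>

#define MAX_FILE_SIZE_10_MIB (10*1024*1024)
static char buffer_file_out[MAX_FILE_SIZE_10_MIB];

// Read input.txt into one big buffer while splicing in new content at a
// chosen byte offset, then write the buffer back over the file.
int main(void)
{
    FILE *fp = fopen("input.txt", "r+");
    if (!fp) { perror("fopen"); return 1; }

    const char *insertion = "inserted text";  /* placeholder content */
    size_t ins_len = strlen(insertion);
    long insert_at = 10;                      /* byte offset of the insertion */

    size_t out = 0;
    long pos = 0;
    int c;
    while ((c = fgetc(fp)) != EOF && out + ins_len + 1 < sizeof buffer_file_out) {
        if (pos == insert_at) {               /* splice in the new contents here */
            memcpy(buffer_file_out + out, insertion, ins_len);
            out += ins_len;
        }
        buffer_file_out[out++] = (char)c;
        pos++;
    }

    /* the result is never shorter than the original, so we can simply
       overwrite the file from the start */
    fseek(fp, 0, SEEK_SET);
    fwrite(buffer_file_out, 1, out, fp);
    fclose(fp);
    return 0;
}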

Reading content from a file and storing it in a string in C

I've written a simple HTTP server in C and am now trying to serve HTML files.
For this I need to send a response containing the content of the HTML file.
How do I do that best?
Do I read the file line by line, and if so, how do I store the lines in a single string?
Thanks already!
Here is an example of reading a text file by chunks which, if the file is big, would be faster than reading the file line by line.
As #tadman said in his comment, text files aren't generally big, so reading them in chunks doesn't make any real difference in speed, but web servers serve other files too, like photos or movies, which are big. So if you are only going to read text files, reading line by line might be simpler (you could use fgets instead of fread), but if you are going to read other kinds of files, reading all of them in chunks means you can do it the same way for all of them.
However, as #chux said in his comment, there is another difference between reading text files and binary files. The difference is that text files are opened in text mode, fopen(filename, "r"), while binary files must be opened in binary mode, fopen(filename, "rb"). A web server could probably open all files in binary mode, because web browsers ignore whitespace anyway, but other kinds of programs need to know what the line endings will be, so it can make a difference.
https://onlinegdb.com/HkM---r2X
#include <stdio.h>

int main(void)
{
    // we will make the buffer 200 bytes in size
    // this is big enough for the whole file
    // in reality you would probably stat the file
    // to find its size and then malloc the memory
    // or you could read the file twice:
    // - first time counting the bytes
    // - second time reading the bytes
    char buffer[200] = "", *current = buffer;
    // we will read 20 bytes at a time to show that the loop works
    // in reality you would pick something approaching the page size
    // perhaps 4096? Benchmarking might help choose a good size
    size_t bytes, chunk = 20, size = sizeof(buffer) / sizeof(char);
    // open the text file in text mode
    // if it was a binary file you would need "rb" instead of "r"
    FILE *file = fopen("test.html", "r");
    if (file)
    {
        // loop through reading the bytes, stopping when a short read
        // signals end of file or when the next chunk would no longer
        // fit in the buffer (leaving room for the terminator)
        do {
            bytes = fread(current, sizeof(char), chunk, file);
            current += bytes;
        } while (bytes == chunk && (size_t)(current - buffer) + chunk < size);
        // close the file
        fclose(file);
        // terminate the buffer so that string functions will work
        *current = '\0';
        // print the buffer
        printf("%s", buffer);
    }
    return 0;
}

Prepend text to a file in C

I want to add a standard header to a file (or group of files) using C. These files could be quite large, so it would be a bad idea to load them into memory, or copy them into temporary files (I think).
Is there a way to simply prepend the header directly to each file?
The header itself is quite small, not more than 1 KB
You cannot insert data into a file.
However, there is no need to load the entire file in memory. Just create a new file, write the data you are inserting, then copy the contents of the original file to the new file (do it block by block instead of loading the entire file into memory).
Finally, delete the original file and rename the new file to match the original file.
This is the straightforward way to do it, and it is reasonably efficient.
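A minimal sketch of that approach, with placeholder file names, a placeholder header string, and error handling kept short:
#include <stdio.h>

// Prepend a small header to data.bin by writing a new file and renaming it.
// File names and header contents are illustrative placeholders.
int main(void)
{
    const char *header = "MYHEADER v1\n";      /* up to ~1 KB */

    FILE *in = fopen("data.bin", "rb");
    FILE *out = fopen("data.bin.tmp", "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    /* write the header first */
    fputs(header, out);

    /* then copy the original file block by block */
    char block[4096];
    size_t n;
    while ((n = fread(block, 1, sizeof block, in)) > 0)
        fwrite(block, 1, n, out);

    fclose(in);
    fclose(out);

    /* replace the original with the new file (on some platforms you may
       need to remove("data.bin") first) */
    if (rename("data.bin.tmp", "data.bin") != 0) { perror("rename"); return 1; }
    return 0;
}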
It should be possible without a temporary file - you can read the file from the end, block by block, writing each block back at (original_position + header_size). The first block would be written back at header_size, leaving room for the header.
However, you don't really want to do this. It would corrupt the file if aborted (think: out of disk space, other I/O error, power down, whatever).
Thus, you should actually use a temporary file: write everything you need to it, then rename it to the original file's name (assuming you create the temporary file on the same file system, otherwise you'd need to copy).
Edit: to clarify what I mean, here is a simplified solution for when the whole file fits in RAM:
allocate a buffer the same size as the file
open the file and read it into the buffer
seek(file, header_size) and write the buffer there
seek(file, 0) and write the header
If the file is too big, you can allocate a smaller buffer and repeat the reads/writes, starting with a read at file_size - buffer_size and a write at file_size - buffer_size + header_size. Then repeat with the next chunk: read at file_size - 2 * buffer_size, write at file_size - 2 * buffer_size + header_size, and so on.
But let me repeat: you risk corrupting your file if it fails!
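For completeness, here is a rough sketch of that in-place, back-to-front shuffle; the file name and header string are placeholders, further error handling is omitted, and the warning above applies: an interruption leaves the file corrupted.
#include <stdio.h>
#include <string.h>

// Shift the existing contents of data.bin forward by the header size,
// working backwards from the end so no block is overwritten before it
// has been read, then write the header into the gap at the front.
// WARNING: if this is interrupted, the file is left corrupted.
int main(void)
{
    const char *header = "MYHEADER v1\n";   /* placeholder header */
    long header_len = (long)strlen(header);

    FILE *fp = fopen("data.bin", "r+b");
    if (!fp) { perror("fopen"); return 1; }

    fseek(fp, 0, SEEK_END);
    long remaining = ftell(fp);             /* original file size */

    char buf[4096];
    while (remaining > 0) {
        long block = remaining < (long)sizeof buf ? remaining : (long)sizeof buf;
        long pos = remaining - block;       /* read position of this block */

        fseek(fp, pos, SEEK_SET);
        size_t got = fread(buf, 1, (size_t)block, fp);

        fseek(fp, pos + header_len, SEEK_SET);  /* write it header_len further on */
        fwrite(buf, 1, got, fp);

        remaining = pos;
    }

    /* finally write the header into the space we opened at the start */
    fseek(fp, 0, SEEK_SET);
    fwrite(header, 1, (size_t)header_len, fp);
    fclose(fp);
    return 0;
}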

How to efficiently read data that needs seeking from an stdin pipe

I'm looking for the best way to read data from an stdin pipe in C programming.
Problem: I need to seek within this data, i.e. I need to read data from the start of the stream after reading some data at the end of that same stream.
Small use case: gunzip -c 4GbDataFile.gz | myprogram
Another one:
On local host : nc -l -p 1234 | myprogram
On remote host : gunzip -c 4GbDataFile.gz | nc -q 0 theotherhost 1234
I know that a fifo can be read only once. So, at the moment:
I slurp everything from stdin into memory and work from this allocated memory.
It's ugly, but it works. An obvious issue is that if someone sends a huge (or continuous) stream to my app, I'll end up with a big allocated memory chunk or I'll run out of memory. (Think of an 8 GB file.)
What I thought of next:
I set a size limit (maybe user-defined) on that memory chunk. Once I've read this much data from stdin:
Either I stop there, "Errr. Out of memory, bazinga. Forget it." style,
or I start dumping what I am reading to a file and work from that file once all the data is read.
But then, what is the point? I cannot find out the origin of the data that I am reading. If it is a local 8 GB file, I'll be dumping it to another 8 GB file on the same system.
So, my question is:
How do you efficiently read a lot of data from an stdin pipe when you have to seek back and forth in it?
Thanks in advance for your answers.
Edit:
My program needs to read metadata somewhere in the given file (depending on the file format), which may be at the end of the stream. Then it may read back other data at the start of the stream, then at another place, etc. In short: it needs to have access to any byte of the data.
An example would be reading data from an archive file without knowing the file format before starting to read from stdin: I need to check the archive metadata, find the archived file names and offsets, etc.
So I'll make a local copy of the stdin content and work from it. Thanks everyone for your input ;)
You need to get your requirements clear. If you need to seek(), then you can't realistically work straight off stdin; you should take the input file name as an argument instead.
The data structure in your 4GbDataFile just doesn't lend itself to what you want to do. Think outside the box. Don't hammer your program into something it shouldn't even attempt. Try to fix the input format where it is generated so you don't need to seek back 4 GB.
In case you do like hammering: 4GB of in-core memory is pretty expensive. Instead, save the data read from stdin in a file, then open the file (or mmap it) and seek to your heart's content.
I think you should read the infamous Useless Use of Cat Award.
TL;DR: change cat 4gbfile | yourprogram to yourprogram < 4gbfile.
If you really insist on having it work with data from a pipe, you'll have to store it in a temporary file at startup then replace file descriptor 0 with a copy of the fd for the temp file, using dup2.
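A rough sketch of that dup2 idea (POSIX assumed), spooling the pipe into an unnamed temporary file and then installing it as the new descriptor 0; the buffer size and the little demonstration at the end are arbitrary:
#include <stdio.h>
#include <unistd.h>

// Spool everything arriving on the stdin pipe into an unnamed temporary
// file, then make that file the process's new fd 0 so the rest of the
// program can seek on "stdin" as if a regular file had been redirected in.
int main(void)
{
    FILE *tmp = tmpfile();              /* deleted automatically on exit */
    if (!tmp) { perror("tmpfile"); return 1; }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, stdin)) > 0)
        fwrite(buf, 1, n, tmp);

    fflush(tmp);
    rewind(tmp);

    /* replace file descriptor 0 with the temp file's descriptor */
    if (dup2(fileno(tmp), STDIN_FILENO) == -1) { perror("dup2"); return 1; }
    clearerr(stdin);                    /* forget the EOF we hit on the pipe */

    /* stdin is now seekable; for example, find out how much data arrived */
    fseek(stdin, 0, SEEK_END);
    printf("buffered %ld bytes from the pipe\n", ftell(stdin));
    rewind(stdin);
    return 0;
}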

Delete a character from a file in C

How can I delete a few characters from a file using a C program?
I could not find any predefined functions for it.
To explain the purpose: I am trying to send a file through a socket, and if N bytes are sent successfully, I want to delete those bytes from the file. At the end, the file will be empty.
Any other way to do this efficiently?
Thanks
Pradeep
If they're at the end, truncate the file to the appropriate length. If they're not, then you'll need to rewrite the file.
Your way is pretty inefficient for large files, since you would have to copy "the rest of the file" some bytes closer to the beginning each time, which costs a lot. I would rather record the "current sending position" somewhere outside the file and update that information. That way, you don't have to copy the rest of the file so often.
There is no straightforward way to delete bytes from the beginning of a file. You will have to start from where you want to trim the file, and read from there to the end of the file, writing to the start of the file.
It might make more sense to just track how many bytes you have already written to the file in some other file.
You should use an index that points to the beginning of the data you haven't sent yet.
It is not necessary to delete what you have already sent; just skip past it, and once you have sent the whole file, delete it.
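As an illustration of that idea, here is a small sketch; send_bytes() is a hypothetical stand-in for your real socket code, and the file name is a placeholder. If you need to survive restarts, the same sent_offset could be written to a small side file, as suggested above.
#include <stdio.h>

/* Hypothetical stand-in for the real socket send; assume it returns the
 * number of bytes actually transmitted. */
static size_t send_bytes(const char *buf, size_t len)
{
    return len; /* pretend everything was sent */
}

// Send data.bin chunk by chunk, keeping an index of how far we got
// instead of deleting the sent bytes; remove the file only at the end.
int main(void)
{
    FILE *fp = fopen("data.bin", "rb");   /* placeholder file name */
    if (!fp) { perror("fopen"); return 1; }

    long sent_offset = 0;                 /* index of first unsent byte */
    char buf[4096];
    size_t n;

    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        size_t sent = send_bytes(buf, n);
        sent_offset += (long)sent;
        if (sent < n)                     /* short send: resume from here */
            fseek(fp, sent_offset, SEEK_SET);
    }

    fclose(fp);
    remove("data.bin");                   /* whole file sent: now delete it */
    return 0;
}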
If the chars are one after the other, then why don't you give fseek() a try?
