What are possible reasons that cmd stops writing to a file with redirection?

This is on Windows 7.
I have a batch script that executes a C++ program and redirects all of its output to a file with ">".
The program takes input from servers and displays everything. We need all of this information, so we log the output to a file. But after a short while, we see that the program stops writing to the file and just stops there, while the program itself continues running.
The file size also shows 0 bytes (does the OS not update it until the file is closed?), but we can see the content of the file with Notepad++; it just no longer seems to update.
The file is about 250,000 lines long and we see that our data simply got cut off at the end. For example, where a table of data should list 123 567 436 975, we only see 123 567 43. The last line isn't even finished.
There is a lot of data to write and a lot of network transmission. Does the program simply give up on output when there is too much data? Is there a way around this?

Try disabling buffering: setbuf(stdout, NULL);.
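A minimal sketch of where that call belongs, assuming a plain C or C++ program writing to stdout (the call must come before the first output; the rest of the program here is hypothetical):

#include <stdio.h>

int main(void)
{
    /* Disable stdout buffering before anything is printed.
       setvbuf(stdout, NULL, _IONBF, 0) is the equivalent, more flexible call. */
    setbuf(stdout, NULL);

    /* Every printf now reaches the redirected file immediately
       instead of sitting in a block buffer until it fills up. */
    printf("connected, waiting for server data...\n");
    return 0;
}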
Anyway, in newer versions of Windows, when a file is being created and data is being written (the classic >file scenario), the growth of the file is not always visible.
In this case, the dir command shows a 0-byte file, or stops showing increasing values.
Try reading the file with type file >nul and then dir file. This "should" refresh the file size information, but it isn't really needed: the file is growing, it just isn't shown.

Related

Size limit with output redirection or files created with fopen?

Redirecting the output of a program to a file:
program > file.log 2>&1
does not include all the rows I see when running on the console without redirection. There are no errors. Windows 10. I get roughly 50k rows in a file of 1,800 KB.
I get more rows in the file if I reduce the size of each row (rounding off numbers).
I did try writing the file directly with fopen, but I still do not get all the output.
program > file.log 2>&1
Expected result: the log file contains the same output I see displayed on the console.
Actual result: a log file that is truncated, whether I redirect the console output or create the file directly with fopen. No issues seen on stderr or when running the program in debug mode.
Having done redirection a lot, I can say with reasonable confidence you have one of exactly four issues.
1) You ran out of disk space.
2) You ran out of disk space quota.
3) You reached the maximum file size for that volume. Note that FAT32 (which includes almost all USB sticks) has a maximum file size of 4 GB.
4) You are saving to NTFS and need to defragment your hard disk.

Does Linux automatically close files?

I'm creating scheduling software; I have hundreds of datasets described in text files.
I'm using dirent.h with a loop to read the text files. For each file I compute a schedule and append the result (CPU time, dataset name, tardiness, ...) to another text file; this file is common to all schedules.
I'm opening/closing the result file just once (fopen() before the loop, fclose() after the loop when all the schedules are done).
I have no problem on Windows 7, but under Linux the file seems to be closed by the system after a kind of timeout. Only 9-10 datasets get scheduled (~2 hours), and after that the program is stuck because it can't write to the result file :/
Has anyone run into this kind of trouble and found a solution?
Linux does not close the file automatically. Something is wrong in your code.
Try running your program using "strace" and identify where the close() happens.
strace -f -o 1.txt ./my_best_app_ever
Open the 1.txt file using a text editor (or less) and see what your app is doing.

Retrieve data from COM port using a batch file

I'm trying to automatically retrieve data from a COM port using a batch file.
I'm able to configure the COM port and to send the command in order to ask my device for the info.
The problem is that I'm not able to capture the data that the device sends back. I've tried with RealTerm: the device works and sends the info back to the PC, but I really need the batch file to do it automatically. Here is the code:
echo off
MODE COMxx ...
COPY retrievecommand.txt \\.\COMxx:
COPY \\.\COMxx: data.txt
Any suggestions?
Use the TYPE command in a loop built with the DOS GOTO command and a label. Use 'append output' to capture text, like TYPE COM1: >> Data.txt. The double > means it continually appends to Data.txt; a single > ('redirect output') would replace the contents of Data.txt on every loop iteration (when COM data is present on the port). Add a second line that redirects to the screen so you can watch the activity too, e.g. TYPE COM1: > CON (CON means the console or monitor screen, but you can omit it since the console is the default anyway).
Control-Z is not needed by the TYPE command. It will just dump text continually until the operator presses Control-C and then Y to break the loop. You really don't need to stop the loop unless you are done with the batch file altogether. The Data.txt file will be available to other programs live and will not raise a 'Sharing Violation' if you access it with another program such as NOTEPAD.EXE while this batch file is still looping.
Also, if you add a third line to the batch file that says TYPE COM1: > Data1.txt (notice only one redirect), you will have a single snapshot of text that is replaced on the next iteration. Sometimes that is helpful if you need only one line of data. There are creative ways to extract one line of data into another text file using the DOS FIND command.
When reading, the COPY command will continue until it detects the end of file. As the source is a device (with a potentially infinite stream) it only knows to stop when it detects an end of file marker. This is the Ctrl-Z (0x1A) character.
The suggestion in the duplicate question of using the TYPE command to read is likely to result in the same problem.
There is no standard mechanism to read a single line. If you can port your application to PowerShell, you should be able to read single lines with the results you expect.

Managing log file size

I have a program which logs its activity.
I want to implement a log file mechanism to keep the log file under a certain size, let's say 10 MB.
The log file itself just holds commands the program executed; those commands are variable length.
Right now, the program runs in a Windows environment, but I'm likely to port it to UNIX soon.
I've come up with two methods for managing the log files:
1. Keep multiple files of lower size, and if the new command exceeds the current file length, truncate the oldest file to zero size, and start writing there.
2. Keep a header in the file which holds metadata about the first command in the file and the next place to write to in the file. Also, I think each command would then need to hold metadata about its own length.
My questions are as follows:
In terms of efficiency which of these methods would you use, and why?
Is there a UNIX command / function to do this easily?
Thanks a lot for your help,
Nihil.
On UNIX/Linux platforms there's a logrotate program that manages logfiles. Details can be found for example here:
http://linuxcommand.org/man_pages/logrotate8.html

Deleting x amount of lines from the top of a text file

I'm writing an application which will use a log file. What I want to do is, when the application starts, check whether the log file is above a certain size, and if it is, delete 'x' lines from the top of the log file to shorten it. What would be a good way of going about this? Would it be easier to write the most recent entries at the top of the file and then delete from the bottom when I do delete?
So you have a log file, let's call it "log.log"
First, move log.log to log.log.bak. Open it for reading. Read line by line until you have read x number of lines. Open a new file, log.log, for writing. Continue reading lines from log.log.bak and for each line write it to log.log. When there are no more lines, close log.log and log.log.bak and then delete log.log.bak.
Some pseudo code:
x = number of lines to delete
move log.log to log.log.bak
open log.log.bak for reading
while i have read < x lines
    read a line and throw it away
open log.log for writing
while there are more lines in log.log.bak
    read a line from log.log.bak
    write the line to log.log
close log.log
close log.log.bak
delete log.log.bak
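For reference, a rough C version of that pseudocode; the file names and line count are placeholders, and error handling is minimal:

#include <stdio.h>

/* Drop the first `x` lines of log.log by copying the remainder through a backup file.
   Assumes individual lines fit in the buffer. */
void trim_log(int x)
{
    rename("log.log", "log.log.bak");

    FILE *in  = fopen("log.log.bak", "r");
    FILE *out = fopen("log.log", "w");
    if (!in || !out) {                   /* real code should report the error */
        if (in)  fclose(in);
        if (out) fclose(out);
        return;
    }

    char line[4096];
    int skipped = 0;
    while (fgets(line, sizeof line, in)) {
        if (skipped < x) {               /* read the first x lines and throw them away */
            skipped++;
            continue;
        }
        fputs(line, out);                /* write every remaining line to the new log.log */
    }

    fclose(in);
    fclose(out);
    remove("log.log.bak");
}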
There are plenty of questions left open:
What environment are you in? In a Un*x environment you could simply 'tail -100 input.txt > trimmed.txt', which keeps just the last 100 lines.
Can you load the file into memory, or will it be too large?
Can you use an intermediate file?
What languages do you have available/are you familiar with?
How often will you perform the trimming? If you're writing more often than trimming, write to the bottom (which is fast) and perform the expensive trimming operation rarely.
If you have C available, you can find the file size with fseek(f, 0, SEEK_END); long size = ftell(f); (immediately after opening f).
If you need to trim, you can fseek(f, size - desired_size, SEEK_SET); and then while (fgetc(f) != '\n') {}, which will take you to the end of the line you intersect.
Then copy what remains to a new file.
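Put together, that looks roughly like the sketch below; the file names and desired_size are placeholders, and the copy-back step at the end is just one way of finishing the job:

#include <stdio.h>

/* Keep roughly the last `desired_size` bytes of `path`, starting at a line boundary. */
void trim_to_size(const char *path, long desired_size)
{
    FILE *f = fopen(path, "rb");
    if (!f) return;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);                    /* file size, found right after opening */

    if (size <= desired_size) {              /* nothing to trim */
        fclose(f);
        return;
    }

    fseek(f, size - desired_size, SEEK_SET);
    int c;
    while ((c = fgetc(f)) != EOF && c != '\n')
        ;                                    /* advance to the end of the line we landed in */

    FILE *out = fopen("trimmed.tmp", "wb");  /* copy what remains to a new file */
    if (out) {
        while ((c = fgetc(f)) != EOF)
            fputc(c, out);
        fclose(out);
    }
    fclose(f);

    remove(path);                            /* swap the trimmed copy into place */
    rename("trimmed.tmp", path);
}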
I wouldn't remove a certain number of lines. Basically you have to process the whole file if you do that and that could be a lot of processing. The normal practice is just to roll the log file (rename it something else, often just appending the date) and start again.
Also bear in mind that the file being a certain size is no guarantee that the requisite number of lines are there; in that case you're just renaming the file the expensive way.
I also often find it useful for each run of the app to start at the top of a fresh file.
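A minimal sketch of rolling the file that way at startup, assuming C and a placeholder log name of app.log:

#include <stdio.h>
#include <time.h>

/* Rename app.log to app-YYYYMMDD-HHMMSS.log so the new run starts a fresh file. */
void roll_log(void)
{
    char rolled[64];
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    strftime(rolled, sizeof rolled, "app-%Y%m%d-%H%M%S.log", t);
    rename("app.log", rolled);    /* fails harmlessly if app.log doesn't exist yet */
}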
Keep several log files File1 - FileN for each kind of log.
Fill File1, then File2, and so on, moving to the next file when the current one passes some fixed size.
When you fill FileN, wrap back around: delete File1 and start over, deleting and rewriting File1, File2, ...
This gives you a cyclical fixed size log.
Note: this requires you to keep track of which is the current log file to write to. This can be stored in a separate log file.
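A rough sketch of that scheme, assuming C; the file-name pattern, size limit, and file count are placeholders, and the `current` index is what would need to be persisted between runs, as noted above:

#include <stdio.h>

#define LOG_FILES     4                    /* File1 .. FileN */
#define MAX_LOG_BYTES (10L * 1024 * 1024)  /* fixed size per file, e.g. 10 MB */

static int current = 0;                    /* index of the file currently being written */

/* Append one message; move on to the next file (truncating it) once the
   current one passes the size limit. */
void log_msg(const char *msg)
{
    char name[32];
    snprintf(name, sizeof name, "File%d.log", current + 1);

    FILE *f = fopen(name, "a");
    if (!f) return;
    fprintf(f, "%s\n", msg);
    long size = ftell(f);                  /* position after appending == file size */
    fclose(f);

    if (size >= MAX_LOG_BYTES) {
        current = (current + 1) % LOG_FILES;          /* wrap from FileN back to File1 */
        snprintf(name, sizeof name, "File%d.log", current + 1);
        FILE *next = fopen(name, "w");                /* truncate the next file before reuse */
        if (next) fclose(next);
    }
}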

Resources