Pervasive SQL (10.3): File size exceeding 2GB resulting in a .^01 file being created

We have a database with a data file exceeding 2GB, which resulted in a .^01 file being generated with the same file name. We now have a .DAT file and a .^01 file with the same name.
I have subsequently deleted the unnecessary data (old history, no longer required) and the .DAT file is now only 372MB, but the .^01 file remains.
I would like to clone the .DAT file, save the data, and reload it into the cloned (blank) file. I normally use BUTIL (-CLONE, -SAVE and -LOAD) but am unsure what I need to do with the .^01 file, as BUTIL -SAVE FileName.^01 FileName.seq returns an error because it does not recognise the ^:
BUTIL-14: The file that caused the error is FileName.01.
BUTIL-100: MicroKernel error = 12. The MicroKernel cannot find the specified file.
I would greatly appreciate some direction/input in this regard
Thank you and kind regards,

You don't need to do anything with the .^XX file(s). They are called Extended files and are automatically handled by the PSQL engine. A BUTIL -CLONE / -COPY will read all of the data (original file and extended file(s)) and copy it to the new file.
To rebuild it, you should do something like:
BUTIL -CLONE <NEWFILE.DAT> <OLDFILE.DAT>
BUTIL -COPY <OLDFILE.DAT> <NEWFILE.DAT>
Also, if the file grows above 2GB again, the Extended File (.^01) will come back.
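As a rough sketch, with a hypothetical file named HISTORY.DAT (substitute your real file name, make sure the engine has the file closed, and keep a backup first), the rebuild could look like this at a command prompt:
rem create an empty clone with the same record and key definitions as the old file
butil -clone HISTORYNEW.DAT HISTORY.DAT
rem copy every record (base file plus the .^01 segment) into the new file
butil -copy HISTORY.DAT HISTORYNEW.DAT
After verifying the data in the new file, the old HISTORY.DAT and its HISTORY.^01 can be set aside and HISTORYNEW.DAT renamed to take their place.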

Related

Btrieve file only shows partial data

Almost ready to throw up the white flag but thought I'd throw it out there. I have an OLD program from 1994 that uses a Btrieve database and renders basic membership info for a gym. The .btr file that holds the data will open in Notepad, and I can search and find all the records, although the formatting is nearly unreadable. When it opens in the program, there is a huge chunk of records missing. It seems to stop at specific records when scrolling up and down.
I know almost nothing about btrieve as it predates my IT career by many years and I've honestly never seen it. Any suggestions on where I should troubleshoot or tools that may help would be much appreciated.
This sounds like the file may be corrupted, although I would expect errors if it were corrupted. One way to rebuild the file is to use BUTIL (and a couple of OS commands).
The steps to rebuild are:
Make a backup of the original file to another directory.
Rename the original file. I like to change the extension to .OLD.
Delete the original file. It will be recreated in the next step.
Issue the BUTIL -CLONE command (BUTIL -CLONE <new file> <old file>).
Issue the BUTIL -COPY command (BUTIL -COPY <old file> <new file>).
The rebuild is complete.
I've used the commands below in the past (changing 'filename' and the extensions to match my files).
copy filename.btr someother\location\filename.btr
ren filename.btr filename.old
del filename.btr
butil -clone filename.btr filename.old
butil -copy filename.old filename.btr

Reconstruct odt file with missing content.xml file

I have an .odt file that's corrupt. I looked online, and apparently if you can get to the content.xml file, there's a chance the file can be repaired. In my case, however, when I convert the file to a .zip and extract it, that file isn't there. Yet the .odt file is 2.9MB and has content in it when you convert it to a .txt file.
How can I recreate the content.xml file from the .txt file?
You might not want to hear this, but depending on where the corruption happened, there is nothing you can do.
The idea behind the method you are describing is that if the corruption only concerns, for example, the styles.xml, you can still recover the contents by looking at content.xml. For more details on this, see https://en.wikipedia.org/wiki/OpenDocument_technical_specification#Format_internals
However, from your zip extract, it looks like the only uncorrupted file is styles.xml, which doesn't help you much.
What you can try is the following: rename your .odt file so that it ends in .zip, and then try to recover that file using one of the many zip-repair tools available on the internet, until you get a valid content.xml file.
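For instance, on a system with the Info-ZIP tools installed (broken.odt is just a placeholder name, and any other zip-repair utility would do), the attempt could look roughly like:
# work on a copy, treating the .odt as an ordinary zip archive
cp broken.odt broken.zip
# try to reconstruct a readable archive from whatever structure is left
zip -FF broken.zip --out repaired.zip
# check whether a content.xml entry came back, and if so pull it out
unzip -l repaired.zip
unzip repaired.zip content.xml
Whether content.xml is recoverable at all depends on where in the archive the corruption sits.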

What is the difference between hadoop -appendToFile and hadoop -put when used for continuously updating stream data into HDFS

As per the Hadoop source code, the following descriptions are pulled from the classes:
appendToFile
"Appends the contents of all the given local files to the
given dst file. The dst file will be created if it does not exist."
put
"Copy files from the local file system into fs. Copying fails if the file already exists, unless the -f flag is given.
Flags:
-p : Preserves access and modification times, ownership and the mode.
-f : Overwrites the destination if it already exists.
-l : Allow DataNode to lazily persist the file to disk. Forces
replication factor of 1. This flag will result in reduced
durability. Use with care.
-d : Skip creation of temporary file(<dst>._COPYING_)."
I am trying to regularly update a file in HDFS as it is being updated dynamically from a streaming source on my local file system.
Which one should I use out of appendToFile and put, and Why?
appendToFile modifies the existing file in HDFS, so only the new data needs to be streamed/written to the filesystem.
put rewrites the entire file, so the entire new version of the file needs to be streamed/written to the filesystem.
You should favor appendToFile if you are just appending to the file (i.e. adding logs to the end of a file). This function will be faster if that's your use case. If the file is changing more than just simple appends to the end, you should use put (slower but you won't lose data or corrupt your file).
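As a rough sketch (the paths below are just placeholders), the two approaches look like this on the command line:
# put: re-upload the whole local file, overwriting the copy already in HDFS
hadoop fs -put -f /local/stream/events.log /data/events.log
# appendToFile: upload only a local file containing the newly written data
hadoop fs -appendToFile /local/stream/events.delta /data/events.log
With appendToFile you are responsible for feeding it only the new bytes (here a hypothetical events.delta file); with put -f the full file is re-copied on every update.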

Loading files into MAGMA

I'm trying to load files into MAGMA and am running into some trouble. Ostensibly, the command load "filename"; should be sufficient. I've attempted it, but keep getting the same result:
>> load "filename";
^
User error: Could not open file "filename" (No such file or directory)
The file is saved in my documents folder, so I'm not sure what the issue is. Do I have to specify the path? Save the file in a particular place?
I've tried reformatting, using both txt and rtf files, so I don't think that's the issue.
To load a file into MAGMA, you can place your file in the installation folder, for example C:\Program Files (x86)\Magma.
Also, if your file has a file extension, you should include it.
Suppose you want to load a txt file named a: with load "a"; you get an error. You must type load "a.txt";.
Try using the GetCurrentDirectory() command to find your current directory location. Then you can use SetPath() to change where MAGMA searches for your file. This should fix it.
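As a rough illustration (the directory shown is just a placeholder for wherever your file actually lives), the checks at the MAGMA prompt could look like:
> GetCurrentDirectory();                       // see which directory MAGMA is reading from
> load "C:/Users/me/Documents/filename.txt";   // or simply give load the full path
Giving load the full path, or moving the file into the directory reported by GetCurrentDirectory(), avoids the "No such file or directory" error.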

Detecting the database a .DAT file belongs to

I have a set of .DAT files present along side a set of .IDX files with the same name.
The goal is to be able to open these files and read their contents, parsing them into a new format. The problem: I have no idea what database the data is stored in! The files contain no headers or clues, they are binary, and the source from which I received them has no idea as to the storage mechanism.
So the question is: what are some common databases which store their data in .DAT files and their indexes in .IDX files with the same name? Is there an application I can use on Linux or Windows which can detect the database?
EDIT :-
File names:
price.dat
price.idx
Here is a hex dump of the beginning of the .DAT file:
030D04806420500FFE3E0500002078581001C000738054E0C0099804138100402550080442090082403C101F7406010080C0A010201002010C006FC0246C0403FE00B041C051F0091BFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FE5FEFF47C06608480FA91F003C0213101F1BFDFE804220100F500D2A00388430801E04028D4390D128B46804024010A067269FCA546003C0844060E11F084B9E1377850
Here is a hex dump of the beginning of the .IDX file:
030D04805820100FFD7E0000397FEB60050410007300246A3060068220009BE0401030088B3903F740E010C80402410281402030094004C708004DC058880FFC052F015EBFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FFE108F8447E11AF848FE12CF84D7E13EF851FE150F8567E162F85AFE174F85F7E186F863FE198F8687E1AAF86CFE1BCF8717E1CEF875FE1E0F87A7E1F2F87EF5FEFF005E30901714
Both files start out with the same prefix, 030D0480; I wonder if this is a good start?
Did a quick search on Google but it didn't return anything...
END EDIT :-
Any other ideas?
Thanks much in advance!
There is a FairCom ODBC driver called 'ctreeODBC_RO.exe' which should be capable of reading them.
