Help required with ancient, unknown storage system - database

Morning all,
I've gone and told a customer I could migrate some of their old data out of a DOS-based system into the new system I've developed for them. However, I said that without actually looking at the files that store the data in the old system - I just figured a quick Google would solve the problem for me... I was wrong!
Anyway, this program has a folder with hundreds... well, 800 files with all sorts of file extensions: .ave, .bak, .brw, .dat, .001, .002 ... .007, .dbf, .dbe and .his.
.Bak obviously isn't a SQL backup file.
Does anyone have programming experience with any of those file types who could point me towards some way to read and extract the data?
I can't mention the program name because I don't think the original developer would allow it...
Thanks.

I'm willing to bet that the .dbf file is in DBase format, which is really straightforward. The contents of that might provide clues to the rest of them.
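If you want to sanity-check that guess before installing anything, the dBASE header is simple enough to dump by hand. A minimal sketch - it assumes the common dBASE III+ layout with little-endian fields, and the file name is just whatever .dbf you're probing:
// dbfpeek.cpp - dump the header and field list of a dBASE-style .dbf
// (sketch only; assumes the common dBASE III+ layout, little-endian fields)
#include <cstdio>
#include <cstdint>
int main(int argc, char* argv[]) {
    if (argc < 2) { std::printf("usage: dbfpeek file.dbf\n"); return 1; }
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("open"); return 1; }
    unsigned char hdr[32];
    if (std::fread(hdr, 1, sizeof hdr, f) != sizeof hdr) { std::fclose(f); return 1; }
    uint32_t records   = hdr[4] | (hdr[5] << 8) | (hdr[6] << 16) | ((uint32_t)hdr[7] << 24);
    unsigned headerLen = hdr[8]  | (hdr[9]  << 8);
    unsigned recordLen = hdr[10] | (hdr[11] << 8);
    std::printf("version byte 0x%02X, %u records, header %u bytes, record %u bytes\n",
                hdr[0], records, headerLen, recordLen);
    // Field descriptors follow: 32 bytes each, terminated by a single 0x0D byte.
    unsigned char fld[32];
    while (std::fread(fld, 1, 1, f) == 1 && fld[0] != 0x0D) {
        if (std::fread(fld + 1, 1, 31, f) != 31) break;
        std::printf("  field %-11.11s  type %c  length %u\n",
                    (const char*)fld, fld[11], (unsigned)fld[16]);
    }
    std::fclose(f);
    return 0;
}
If the field names and types it prints look sensible, you know you're dealing with plain dBASE tables and the rest gets much easier.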

The Unix 'file' utility can be used to recognize many file types by their 'magic number'. It examines the file's contents and compares them with thousands of known formats. If the files are in any kind of common format, this can probably save you a good amount of work.
If they're NOT in a common format, it may send you chasing after red herrings. Take its suggestions as just that: suggestions.
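If the files live on a Windows box with no 'file' utility handy, the same magic-number idea is easy to fake for the handful of formats you can name; everything it doesn't recognize goes to the hex editor. A rough sketch - the signature list is only a starter set to extend as you learn more:
// sniff.cpp - a poor man's 'file': compare the first few bytes of each file
// against a handful of well-known magic numbers (list is far from complete)
#include <cstdio>
#include <cstring>
int main(int argc, char* argv[]) {
    for (int i = 1; i < argc; ++i) {
        FILE* f = std::fopen(argv[i], "rb");
        if (!f) { std::perror(argv[i]); continue; }
        unsigned char b[8] = {0};
        size_t n = std::fread(b, 1, sizeof b, f);
        std::fclose(f);
        if (n == 0) continue;
        const char* guess = "unknown - try a hex editor";
        if (b[0] == 'M' && b[1] == 'Z')              guess = "DOS/Windows executable";
        else if (b[0] == 'P' && b[1] == 'K')         guess = "ZIP archive (or zip-based format)";
        else if (b[0] == 0x1F && b[1] == 0x8B)       guess = "gzip compressed data";
        else if (std::memcmp(b, "\x89PNG", 4) == 0)  guess = "PNG image";
        else if (std::memcmp(b, "Rar!", 4) == 0)     guess = "RAR archive";
        std::printf("%-20s %s\n", argv[i], guess);
    }
    return 0;
}
Point it at the suspect files; anything it can name is one less file to reverse engineer by hand.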

To complement the sites suggested by Greg and Dmitriy, there's also the repository of file formats at http://www.wotsit.org ("What's its format?").
If that doesn't help, a good hex editor (with dump display) is your friend... I've always found it amazing how easy it can be to read and recognize many file formats.

Could be anything. Best bet is to open it with a hex editor and see what you can see.
Most older systems used a basic ISAM which had one file per table containing a set of fixed-length data records. The other files would probably be indexes.
As you only need the data, not the indexes, just look for the files with repeating data patterns (it often looks like pretty patterns on the hex editor screen).
When you find the file with the data, try to locate a known record, e.g. "Mr Smith", and see if you can work out the other fields. Integers are often stored byte for byte, dates are often encoded as days from a known start date, and money could be in BCD.
If you see a strong pattern, then most likely each record is a fixed length. There will probably be a header block on the file, say 128 or 256 bytes, and then the fixed-length records.
Many old systems were written in COBOL. There is plenty of info on the net about COBOL formats, and some companies even sell COBOL ODBC drivers!
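To test those decoding guesses quickly (byte-for-byte integers, days-from-a-start-date dates, BCD money), a throwaway program beats squinting at a hex dump. A rough sketch - the header size, record length and record number are all guesses you refine as you go:
// recpeek.cpp - print one guessed fixed-length record as ASCII, then show a
// few candidate interpretations of its bytes (sketch; all parameters are guesses)
#include <cstdio>
#include <cstdlib>
#include <cstdint>
#include <vector>
int main(int argc, char* argv[]) {
    if (argc < 5) {
        std::printf("usage: recpeek file header_bytes record_len record_number\n");
        return 1;
    }
    long headerBytes = std::atol(argv[2]);
    long recordLen   = std::atol(argv[3]);
    long recordNum   = std::atol(argv[4]);
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("open"); return 1; }
    std::fseek(f, headerBytes + recordNum * recordLen, SEEK_SET);
    std::vector<unsigned char> rec(recordLen);
    if (std::fread(rec.data(), 1, recordLen, f) != (size_t)recordLen) { std::fclose(f); return 1; }
    std::fclose(f);
    // Printable characters as-is, everything else as '.' - same as a hex editor's text pane.
    for (long i = 0; i < recordLen; ++i)
        std::putchar(rec[i] >= 0x20 && rec[i] < 0x7F ? rec[i] : '.');
    std::putchar('\n');
    // Candidate interpretation at each 4-byte boundary: a little-endian integer,
    // which might be a count, an amount in cents, or "days since some start date".
    for (long i = 0; i + 4 <= recordLen; i += 4) {
        uint32_t v = rec[i] | (rec[i+1] << 8) | (rec[i+2] << 16) | ((uint32_t)rec[i+3] << 24);
        std::printf("offset %3ld: uint32 = %u\n", i, v);
    }
    // Packed BCD is also worth trying for money fields: two decimal digits per
    // byte, so 0x12 0x34 0x56 would be the digits 123456.
    return 0;
}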

I think Greg is right about the .dbf file. You should try to find some information about the other file formats using sites like http://filext.com and http://dotwhat.net. The .bak file is usually a copy of another file with the same name but a different extension; for example, there may be a database.dbf file and a database.bak file with a backup of it. You should also ask your customer (if possible) for any details/documentation/source code of the application that used those files.

Back in the DOS days, programmers used to make up their own file extensions pretty much as they saw fit. The DBF might well be a dBase file, which is easy enough to read, and the .BAK is probably a backup of one of the other important files, or just a backup left by a text editor.
For the remaining files, the first thing I would do is check whether they are in a readable ASCII format by opening them in a text editor.
If this doesn't give you a good result, try opening them in a binary editor that shows hex and ASCII side by side with control characters blanked out. Look for repeating patterns that might correspond to record fields. For example, say the .HIS was something like an order history file; it might contain embedded product codes or names. If this is the case, count the number of bytes between such fields. If it is a regular number, you probably have a flat binary file of records. This is best decoded by opening the file in the app, looking for values in a given record, and searching for the corresponding values in the binary file. Time consuming, and a pain in the ass, but workable enough once you get the hang of it.
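If you want to automate the byte-counting step, something like this rough sketch does it: feed it a value you expect to appear once per record (a product code, a surname), and a constant gap between the reported offsets is a strong hint at the record length. It assumes the value is stored as plain, uncompressed ASCII:
// stride.cpp - print every offset where a known string occurs in a binary file,
// plus the gap since the previous occurrence; a constant gap suggests fixed-length records
#include <cstdio>
#include <cstring>
#include <vector>
int main(int argc, char* argv[]) {
    if (argc < 3) { std::printf("usage: stride file needle\n"); return 1; }
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("open"); return 1; }
    // Slurp the whole file - DOS-era data files are tiny by today's standards.
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<char> buf(size);
    if (std::fread(buf.data(), 1, size, f) != (size_t)size) { std::fclose(f); return 1; }
    std::fclose(f);
    const char* needle = argv[2];
    long nlen = (long)std::strlen(needle);
    long prev = -1;
    for (long i = 0; i + nlen <= size; ++i) {
        if (std::memcmp(buf.data() + i, needle, nlen) == 0) {
            if (prev >= 0) std::printf("offset %ld (gap %ld)\n", i, i - prev);
            else           std::printf("offset %ld\n", i);
            prev = i;
        }
    }
    return 0;
}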
Happy hacking!

.DBF is a dBASE or early FoxPro database.
.DAT was used by Btrieve, and IIRC Paradox for DOS.
The .DBE and .00x files are probably either temporary or index files related to the .DAT files.
.DBF is easy. They'll open with MS Access or Excel (pre-2007 versions of Office, anyway), or with ADO or ODBC.
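If you'd rather pull the .dbf data from code than through Access or Excel, the ODBC route looks roughly like this. It's only a sketch: the driver name, folder, and table name ("customer", i.e. customer.dbf) are assumptions, so check the ODBC Data Source Administrator for what your machine actually has installed:
// dbfodbc.cpp - read a .dbf table through the legacy dBase ODBC driver (Windows,
// non-Unicode build). Sketch only: driver name, folder and table are assumptions.
#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <cstdio>
int main() {
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    // Dbq points at the folder; every .dbf inside then behaves like a table.
    SQLCHAR* conn = (SQLCHAR*)"Driver={Microsoft dBase Driver (*.dbf)};Dbq=C:\\olddata;";
    if (!SQL_SUCCEEDED(SQLDriverConnect(dbc, NULL, conn, SQL_NTS,
                                        NULL, 0, NULL, SQL_DRIVER_NOPROMPT))) {
        std::printf("connect failed - check the driver name in the ODBC administrator\n");
        return 1;
    }
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR*)"SELECT * FROM customer", SQL_NTS);
    SQLCHAR field[256];
    SQLLEN len = 0;
    while (SQL_SUCCEEDED(SQLFetch(stmt))) {
        // Grab just the first column as text; a real exporter would loop over columns.
        SQLGetData(stmt, 1, SQL_C_CHAR, field, sizeof field, &len);
        std::printf("%s\n", (char*)field);
    }
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}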
If the .DAT files are indeed Btrieve, you're in a world of hurt. They're a mess, even if you can get your hands on the right version of the data dictionary and a copy of the Btrieve structure. (Been there, done that, wore out the t-shirt before I got done.)

As others have suggested, I recommend a hex editor if you can't figure out what those files are, and that .dbf is probably dBase.
BAK seems to be a backup file. I'm thinking that *.001, *.002, etc might be part of the backup. Are they all the same size? Maybe the backup was broken up into smaller pieces so that it could fit onto removable media?
Finally, take this as a life lesson. Before sending that Statement of Work over, if the customer asks you to import data from System A to System B, always ask for a sample schema, sample data, and sample files. Lots of times, things that seem straightforward end up being nightmares.
Good luck!

Be sure to use the Modified date on the files as a clue: if the .001, .002, etc. all have similar timestamps, maybe along with the .BAK, they might be part of the backup. Also, there may be some old cruft in the directory you can (somewhat safely) ignore. Look for .BAT files and try to dissect them as well.

One hint: if the .dbf files are from dBase, FoxPro, or one of the other products that used that format, then you may be able to read them using ODBC. My system still has the ODBC driver for .dbf (Vista, with VS 2008 - how it got there I'd have to hunt up, but I'd guess it was MDAC, Microsoft Data Access Components, which put it there). So you may not have a "world of unpicking" to do, if the ODBC driver will read the .dbf files.
I seem to remember (with a little confidence, from 20+ years ago dBase III tinkering) that dBase used .001, .002, ... files for memo (big text) fields.
Good luck trying to salvage the data.

The DBF format is fairly common.
The other files are puzzling.
I'm guessing that either you're dealing with old Btrieve files (bad), or (hopefully) with the results of some ill-conceived backup scheme where someone backed up his database into the same directory rather than onto another drive, in which case you could ignore these.

Years ago I used Data Junction (it's now part of Pervasive) to migrate data from lots of file types to others. Have a look, unless you want to write a parser.

.dat can also be old Clarion 2.1 files... It works on an ISAM basis also, with key/index files

Related

Read Thunderbird address book MAB file contents

I have several address lists in my Thunderbird address book.
Every time I need to edit an address that is contained in several lists, it is a pain in the neck to find which list contains the address to be modified.
As a helper tool, I want to read the files and, in just one search, give the user a list of which
xxx.MAB files include the searched address.
With that list, the user can simply go and edit just the right address lists.
I would like to know a minimum about the format of the mentioned MAB files, so I can OPEN + SEARCH for strings in the files.
thanks in advance
juan
PS: I have asked on the Mozilla forum, but there are no plans from Mozilla to consolidate the addresses into one master file and have the different lists just contain links to the master. There is one individual thinking of doing that, but he has no idea when, due to lack of resources.
On this forum there is a similar question mentioning MORK files, but my current Thunderbird seems to have all its addresses contained in MAB files.
I am afraid there is no answer that will give you a proper solution for this question.
Mork is a textual database format used for Address Book Data (.mab files) and Mail Folder Summaries (.msf files).
The format, written by David McCusker, is a mix of various numerical namespaces; it is undocumented and seems to no longer be developed, maintained or supported. The only way to get to grips with it is to reverse engineer it in parallel with looking at source code that uses this format.
However, experienced people have tried to write parsers for this file format without much success. According to Wikipedia, former Netscape engineer Jamie Zawinski had this to say about the format:
...the single most brain-damaged file format that I have ever seen in my nineteen year career
This page states the following:
In brief, let's count its (Mork's) sins:
Two different numerical namespaces that overlap.
It can't decide what kind of character-quoting syntax to use: Backslash? Hex encoding with dollar-sign?
C++ line comments are allowed sometimes, but sometimes // is just a pair of characters in a URL.
It goes to all this serious compression effort (two different string-interning hash tables) and then writes out Unicode strings without using UTF-8: writes out the unpacked wchar_t characters!
Worse, it hex-encodes each wchar_t with a 3-byte encoding, meaning the file size will be 3x or 6x (depending on whether wchar_t is 2 bytes or 4 bytes.)
It masquerades as a "textual" file format when in fact it's just another binary-blob file, except that it represents all its magic numbers in ASCII. It's not human-readable, it's not hand-editable, so the only benefit there is to the fact that it uses short lines and doesn't use binary characters is that it makes the file bigger. Oh wait, my mistake, that isn't actually a benefit at all."
The frustration shines through here and it is obviously not a simple task.
Consequently, there apparently exist no parsers outside Mozilla products that are actually able to parse this format.
I have reverse engineered complex file formats in the past and know it can be done with patience and the right amount of energy.
Sadly, this seems to be your only option as well. A good place to start would be to take a look at Thunderbird's source code.
I know this doesn't give you a straight-up solution but I think it is the only answer to the question considering the circumstances for this format.
And of course, you can always look into the extension API to see if that allows you to access the data you need in a more structured way than handling the file format directly.
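That said, if all you need is "which .mab file mentions this address", a brute-force byte scan may be good enough without parsing Mork at all. A rough sketch - it assumes the address appears in the file as a plain ASCII substring, which is usually true for ASCII addresses but, given the encoding sins quoted above, not guaranteed:
// mabgrep.cpp - list which of the given .mab files contain a substring
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
int main(int argc, char* argv[]) {
    if (argc < 3) {
        std::printf("usage: mabgrep address file1.mab [file2.mab ...]\n");
        return 1;
    }
    const std::string needle = argv[1];
    for (int i = 2; i < argc; ++i) {
        std::ifstream in(argv[i], std::ios::binary);
        std::string contents((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
        if (contents.find(needle) != std::string::npos)
            std::printf("%s\n", argv[i]);   // these are the lists the user has to edit
    }
    return 0;
}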
Sample code which reads mork
Node.js: https://www.npmjs.com/package/mork-parser
Perl: http://metacpan.org/pod/Mozilla::Mork
Python: https://github.com/KevinGoodsell/mork-converter
More links: https://wiki.mozilla.org/Mork

File structure of EDB file

I have an offline .EDB file (Exchange database) that I want to pull information from, such as the Computer name and the Flags etc. I have found the following offsets from http://www.edbsearch.com/edb.html, which indicate that the Computer name etc. comes from byte 0x24 0x10. However, looking at the EDB file in a hex editor, the value appears to be non-existent at that offset. It appears later on within the file, but not in a constant place.
Is there a constant offset from which I can reliably pull the Computer name out of the .EDB file? I am working on backups from another computer, but all of the solutions that I have found are for live versions of .EDB files - which are useless for me as I have offline databases.
Many thanks,
With database replication (CCR in 2007, DAGs in 2010+), the concept of a computer name isn't that helpful. After a failover/switchover, what should the computer name be?
I don't think that the Computer Name is populated anymore. If eseutil.exe -mh doesn't report it, then it's not there.
Also check out JetGetDatabaseFileInfo. http://msdn.microsoft.com/en-us/library/windows/desktop/gg269239(v=exchg.10).aspx Note that the documentation is for esent.dll (Windows), and that ese.dll (Exchange) is not documented. While esent.dll and ese.dll are very similar, and for simple things (such as this) you can treat them similarly and get away with it, they are NOT identical, and you will sometimes come upon incompatibilities. In other words: Do it at your own risk, your mileage may vary, etc. etc. :)
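Here is roughly what that call looks like against an offline file, with the caveats above (sketch only; esent.dll is the Windows ESE, not Exchange's ese.dll, so it may refuse or misreport an Exchange database, and ulVersion/dbstate are just the two header fields I'd look at first):
// edbinfo.cpp - ask esent.dll about an offline database file (sketch only)
#include <windows.h>
#include <esent.h>
#include <cstdio>
#pragma comment(lib, "esent.lib")
int main(int argc, char* argv[]) {
    if (argc < 2) { std::printf("usage: edbinfo file.edb\n"); return 1; }
    JET_DBINFOMISC info = {};
    JET_ERR err = JetGetDatabaseFileInfo(argv[1], &info, sizeof(info), JET_DbInfoMisc);
    if (err != JET_errSuccess) {
        std::printf("JetGetDatabaseFileInfo failed with error %ld\n", (long)err);
        return 1;
    }
    // dbstate tells you whether the database was shut down cleanly, which is
    // worth knowing before running eseutil against a backup copy.
    std::printf("ulVersion = %lu, dbstate = %lu\n",
                (unsigned long)info.ulVersion, (unsigned long)info.dbstate);
    return 0;
}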
-martin

Why do file formats have magic numbers?

For example, Portable Executable has several, including the famous "MZ" at the beginning, as well as the "PE\0\0" at the start of the PE header. The Rar file format has the "Rar!" header at the beginning, and several others have similar "magic values" in the file.
What purpose do such magic values serve?
Because users change the file extension, or other programs steal the file extension, it allows the application to cancel processing of a file in an unknown format instead of trying its best and then failing anyway.
The concept of magic numbers goes back to Unix and pre-dates the use of file extensions.
The original idea of the shell was that all 'executables' would look the same - it didn't matter how the file had been created or what program should be used to evaluate it. The shell would look at the contents of the file and determine the appropriate way to run it. Microsoft came along and chose a different approach, and the era of file extensions was born. Then, to make things 'nicer' for users, Microsoft chose to 'hide' these extensions, and the era of trojan files was born: files which look like they are of one type but really have a different extension and are processed by a different program.
If two applications store data differently, but are constructed such that a file for one might possibly also be a valid (but meaningless) file for the other, very bad things can happen. A program may think it has successfully loaded the file (unaware that the data is meaningless) and then write back a file which to it would be semantically identical, but which would no longer be meaningfully readable by the application that wrote it (or anything else for that matter).
Using magic numbers doesn't entirely prevent this, but it can help at least somewhat.
BTW, trying to guess about the format of data is often very dangerous. For example, suppose one has a list of what are probably dates in the format nn-nn-nn. If one doesn't know what format the dates are in, there may be enough information to pretty well guess the format (e.g. if one of the records is 12-31-99, then absent information to the contrary, the dates are probably mm-dd-yy) but if all dates are within the first 12 days of a month, the data could easily be misinterpreted. Suppose, though, the data were preceded by something saying "MM-DD-YY". Then the risks of misinterpretation could be reduced.
To quickly identify the type of the file, or to mark known positions within it (like the "PE\0\0" tag at the start of the PE header).
Your question should not be "why do file formats have magic numbers", but rather "what are the advantages of file formats having magic numbers"!
Suggestions:
Programs that undelete files by reading disk free space may recognize file types
Your UNIX knows whether an executable file is to be interpreted (shebang) or is binary
When you lose extensions, programs like file can detect what your files are
Designers of file formats consider it always safer when applications can easily check that they are reading a file in the expected format.
As you have a header anyway, it does not cost much to put a magic number at the start of it.
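As a concrete illustration of that last point, the writer's side is a couple of fwrite calls and the reader's side one comparison; the "MYF1" tag below is just a made-up example:
// magic.cpp - put a 4-byte magic number at the front of a home-grown format
// and refuse files that don't carry it ("MYF1" is an invented tag)
#include <cstdio>
#include <cstring>
static const char MAGIC[4] = { 'M', 'Y', 'F', '1' };
bool save(const char* path, const char* payload) {
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(MAGIC, 1, sizeof MAGIC, f);            // magic first...
    std::fwrite(payload, 1, std::strlen(payload), f);  // ...then the real data
    std::fclose(f);
    return true;
}
bool load(const char* path) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    char tag[4] = {0};
    bool ok = std::fread(tag, 1, sizeof tag, f) == sizeof tag
              && std::memcmp(tag, MAGIC, sizeof MAGIC) == 0;
    if (ok) {
        // ...parse the rest of the file here...
    }
    std::fclose(f);
    return ok;    // bail out early on files that aren't ours
}
int main() {
    save("example.myf", "hello");
    std::printf("load: %s\n", load("example.myf") ? "recognized" : "rejected");
    return 0;
}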

What should I know before poking around an unknown archive file for things?

A game that I play stores all of its data in a .DAT file. There has been some work done by people in examining the file. There are also some existing tools, but I'm not sure about their current state. I think it would be fun to poke around in the data myself, but I've never tried to examine a file, much less anything like this before.
Is there anything I should know about examining a file format for data extraction purposes before I dive headfirst into this?
EDIT: I would like very general tips, as examining file formats seems interesting. I would like to be able to take File X and learn how to approach the problem of learning about it.
You'll definitely want a hex editor before you get too far. It will let you see the raw data as numbers instead of as large empty blocks in whatever font notepad is using (or whatever text editor).
Try opening it in any archive extractors you have (e.g. zip, 7z, rar, gz, tar, etc.) to see if it's just a renamed archive format (.PK3 is something like that).
Look for headers of known file formats somewhere within the file, which will help you discover where certain parts of the data are stored (e.g. search for the PNG signature bytes to find any PNG files stored uncompressed within).
If you do find where a certain piece of data is stored, take note of its location and length, and see if you can find numbers equal to either of those values near the beginning of the file; these usually act as pointers to the actual data.
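A small throwaway scanner makes those last two steps less tedious; this sketch just reports where a few common signatures occur inside the archive, and the table is only a starting point you'd extend with whatever the game is likely to contain:
// scandat.cpp - scan an archive for embedded signatures of known formats and
// print their offsets (sketch; extend the table with formats you expect to find)
#include <cstdio>
#include <cstring>
#include <vector>
struct Sig { const char* name; const char* bytes; size_t len; };
int main(int argc, char* argv[]) {
    if (argc < 2) { std::printf("usage: scandat file.dat\n"); return 1; }
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("open"); return 1; }
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);
    std::vector<unsigned char> buf(size);
    if (std::fread(buf.data(), 1, size, f) != (size_t)size) { std::fclose(f); return 1; }
    std::fclose(f);
    const Sig sigs[] = {
        { "PNG",  "\x89PNG\r\n\x1a\n", 8 },
        { "ZIP",  "PK\x03\x04",        4 },
        { "gzip", "\x1f\x8b",          2 },
        { "OggS", "OggS",              4 },   // Ogg audio, common in games
        { "RIFF", "RIFF",              4 },   // WAV/AVI containers
    };
    for (long i = 0; i < size; ++i)
        for (const Sig& s : sigs)
            if (i + (long)s.len <= size && std::memcmp(buf.data() + i, s.bytes, s.len) == 0)
                std::printf("%-5s signature at offset %ld\n", s.name, i);
    return 0;
}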
Sometimes you just have to guess, or intuit, what a certain value means, and if you're wrong, well, keep moving. There's not much you can do about it.
I have found that http://www.wotsit.org is particularly useful for known file type formats, for help finding headers within the .dat file.
Back up the file first. Once you've restricted the amount of damage you can do, just poke around as Ed suggested.
Looking at your rep level, I guess a basic primer on hexadecimal numbers, endianness, representations for various data types, and all that would be a bit superfluous. A good tool that can show the data in hex is of course essential, as is the ability to write quick scripts to test complex assumptions about the data's structure. All of these should be obvious to you, but might perhaps help someone else so I thought I'd mention them.
One of the best ways to attack an unknown file format, when you have some control over its contents, is to take a differential approach. Save a file, make a small and controlled change, and save again. Do a binary compare of the two files to find the difference - preferably using a tool that can detect inserts and deletions. If you're dealing with an encrypted file, a small change will trigger a massive difference. If it's just compressed, the difference will not be localized. And if the file format is trivial, a simple change in state will result in a simple change to the file.
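The compare itself is trivial to write if you don't have a good binary diff tool to hand. This rough sketch only reports byte-for-byte differences, so an insertion or deletion shows up as "everything differs from here on", which is itself a useful signal:
// bindiff.cpp - compare two files byte by byte and print the offsets that differ
#include <cstdio>
int main(int argc, char* argv[]) {
    if (argc < 3) { std::printf("usage: bindiff before.dat after.dat\n"); return 1; }
    FILE* a = std::fopen(argv[1], "rb");
    FILE* b = std::fopen(argv[2], "rb");
    if (!a || !b) { std::perror("open"); return 1; }
    long offset = 0, diffs = 0;
    for (;;) {
        int ca = std::fgetc(a);
        int cb = std::fgetc(b);
        if (ca == EOF && cb == EOF) break;
        if (ca != cb) {
            ++diffs;
            if (diffs <= 100)    // don't flood the console for encrypted/compressed files
                std::printf("offset %ld: %02X -> %02X\n", offset,
                            ca == EOF ? 0 : ca, cb == EOF ? 0 : cb);
        }
        ++offset;
    }
    std::printf("%ld differing bytes\n", diffs);
    std::fclose(a);
    std::fclose(b);
    return 0;
}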
The other thing is to look at some of the common compression techniques, notably zip and gzip, and learn their "signatures". Most of these formats are "self identifying" so when they start decompressing, they can do quick sanity checks that what they're working on is in a format they understand.
Barring encryption, an archive file format is basically some kind of indexing mechanism (a directory of sorts), plus a way to locate those elements within the archive via pointers in the index.
With the ubiquity of the standard compression algorithms, it's mostly a matter of finding where those blocks start, and trying to hunt down the index, or table of contents.
Some will have the index all in one spot (like a file system does); others will simply precede each element within the archive with its identity information. But in the end, somewhere, there is information about offsets from one block to another, there is information about data types (for example, if they're storing GIF files, GIFs have a signature as well), etc.
Those are the patterns that you're trying to hunt down within the file.
It would be nice if you could somehow get your hands on two versions of data in the same format. For example, for a game, you might be able to get the initial version off the CD and a newer, patched version. These can really highlight the information you're looking for.

Writing more to a file than just plain text

I have always been able to read and write basic text files in C++, but so far no one has discussed much more than that.
My question is this:
If developing a file type by myself for use by an application I also create, how would I go about writing the data to a file and preserve the layout, formatting, etc.? Are there any standards, or does it just depend on the creativity of the programmer?
You basically have to come up with your own file format and write binary data.
You can also serialize your object model and write the output to a file, but that's usually less efficient.
Better to use an existing database, or use xml (or other) for simple needs. If you want to write a file in a format that already exists, find a library that supports it.
You have to know the binary file format for the file you are trying to create. Consider Joel's post on this topic: the 97-2003 file format is a 349-page spec.
Nearly all the time, to do something like that, you use an API to avoid the grunt work. Be careful, however, because figuring out "what works" by trial and error can result in an upgrade of the program breaking your code. Plus, you have to take into account other operating systems, minor version differences, patches, etc.
There are a number of standards of course. The likely one to use is some flavor of xml since there are libraries and tools that already exist to help you work with it, but nothing is stopping you from inventing your own.
Well you could store the data in a format you could read, but which maintained the integrity of your data (XML or JSON for instance).
Or (shudder) you could come up with your own proprietary binary format, and use that.
You would go at it exactly the same way as you would a text file: writing your data byte by byte, encoded in such a way that when you read the file back you know what you are reading.
For a spreadsheet application you could even use a text-based format (OOXML, OpenDocument) to store presentation and content information.
Or you could define binary data structures and write them directly to the file.
The choice between a text or binary format depends on the application. For a configuration file you may prefer a text file which can be modified outside your app; for a database you will most likely choose a binary format for performance reasons.
See wotsit.org for information on file formats for various file types. Example: You can figure out exactly how to write out a .BMP file and how it is composed.
Writing to a database can be done by using a wrapper class in your language, mainly passing it SQL commands.
If you create a binary file, you can write any file to it. The only drawback is that you have to know exactly where it starts and where it ends.
Use XML (something open, descriptive, and validatable), and stick with text. There are standards for this sort of thing as well, including ODF.
You can open the file as binary, instead of text (how one does this depends somewhat on the platform), and from there you can write the data directly out to disk. The only real caveat to this is endianness, which can become an issue when moving the files from one architecture to another (x86 to PPC, for instance).
Writing binary data to disk is really no harder than writing text, and really, your creativity is key for how you store the data.
The general problem is usually referred to as serialization of your application state, in your case with a source/target of a file in whatever format makes sense for you. These days the preferred input/output format is XML, and you may want to look into the existing standards in this field. The problem then becomes how to map from the state of your system to the particular schema. Boost has a serialization framework that you may want to check out.
/Allan
There are a variety of approaches you can take, but in general you'll want some sort of serialization library. Boost.Serialization or Google's Protocol Buffers are good examples. The basic idea is that you have memory structures (classes and objects) that represent your data, and you want to write that data to a file in a way that can be used to reconstruct those structures again.
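For a taste of what the library route looks like, here is a minimal Boost.Serialization sketch; the Contact type and the file name are made up for illustration:
// contacts.cpp - minimal Boost.Serialization example (hypothetical Contact type)
#include <fstream>
#include <string>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
#include <boost/serialization/string.hpp>   // required to serialize std::string members
struct Contact {
    int id;
    std::string first, last;
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & id;      // the same member function handles both saving and loading
        ar & first;
        ar & last;
    }
};
int main() {
    {   // save
        std::ofstream out("contacts.txt");
        boost::archive::text_oarchive oa(out);
        Contact c;
        c.id = 1; c.first = "John"; c.last = "Smith";
        oa << c;
    }
    {   // load - reconstructs the object from the file
        std::ifstream in("contacts.txt");
        boost::archive::text_iarchive ia(in);
        Contact c;
        ia >> c;
    }
    return 0;
}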
If you're hesitant to use a library, you can do it all manually, but realize that you can end up writing a lot of redundant code, or developing your own library. See fopen, fread, fwrite and fclose for a starting point.
A typical binary file format for custom data is an "indexed file format" consisting of
-------
|index|
-------
|data |
-------
Where the index contains records "pointing" to the data.
The index consists of records containing an offset and a size. The offset tells you where in the file the data is stored and the size tells you the size of the data at that offset (i.e. the number of bytes to read).
#include <stddef.h>   /* for size_t */

typedef struct {
    size_t offset;
    size_t size;
} Index;

typedef struct {
    int  ID;
    char First[20];
    char Last[20];
    char *RandomInfo;   /* variable-length text: store the text it points to, not the pointer value */
} Data;
Suppose you want to store 50 records in the file: you would create 50 index structures and 50 data structures. The 50 index structures would be written to the file first, followed by the 50 data structures.
To read the file you would read in the 50 index structures, then from the data in the read-in index structures you could tell where to "seek" to read the data records.
Look up (fopen, fread, fwrite, fclose, ftell) for functions to read/write the data.
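A rough sketch of that layout in action, using the structures above with just two records (note that for the variable-length RandomInfo you write the text it points to into the data area, not the pointer value itself; the sample names and file name are made up):
// indexdemo.cpp - sketch of the indexed layout described above, with two records
#include <cstdio>
#include <cstring>
typedef struct { size_t offset; size_t size; } Index;
int main() {
    const int   ids[2]   = { 1, 2 };
    const char* first[2] = { "John", "Jane" };
    const char* last[2]  = { "Smith", "Doe" };
    const char* info[2]  = { "likes long walks", "prefers email" };
    Index idx[2];
    size_t pos = sizeof idx;                       // data area starts right after the index block
    for (int i = 0; i < 2; ++i) {
        idx[i].offset = pos;
        idx[i].size   = sizeof(int) + 20 + 20 + std::strlen(info[i]);
        pos += idx[i].size;
    }
    FILE* f = std::fopen("people.dat", "wb");
    std::fwrite(idx, sizeof(Index), 2, f);          // index block first...
    for (int i = 0; i < 2; ++i) {                   // ...then the data records themselves
        char name[20];
        std::fwrite(&ids[i], sizeof(int), 1, f);
        std::memset(name, 0, sizeof name); std::strncpy(name, first[i], 19);
        std::fwrite(name, 1, sizeof name, f);
        std::memset(name, 0, sizeof name); std::strncpy(name, last[i], 19);
        std::fwrite(name, 1, sizeof name, f);
        std::fwrite(info[i], 1, std::strlen(info[i]), f);
    }
    std::fclose(f);
    // Reading back: load the index block, then fseek straight to any record.
    f = std::fopen("people.dat", "rb");
    Index back[2];
    std::fread(back, sizeof(Index), 2, f);
    std::fseek(f, (long)back[1].offset, SEEK_SET);
    int id = 0;
    std::fread(&id, sizeof(int), 1, f);
    std::printf("record 1 has ID %d\n", id);
    std::fclose(f);
    return 0;
}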
You usually use a third-party library for these things. For example, you would link in a database library for, say, Oracle that would allow you to talk to the database. Because the underlying file types (e.g. Excel spreadsheet vs. OpenOffice, Oracle vs. MySQL, etc.) differ, these libraries abstract away your need to care how the file is constructed.
Hope that helps you find what you're looking for!
1985 called and said they have some help IFF you are willing to read up. The Interchange File Format (IFF) is still in use today and provides some basic metadata around binary files, such as RIFF and WAV audio. (Unfortunately, TIFF is a false friend.) It allegedly even inspired PNG, so it can't be that bad.

Resources