I am aware that a .dbf database stores (text) fields larger than 254 characters in a separate .dbt file, linking to them via an M (memo) field.
I have a legacy database which I can plainly see contains a field with a stated (max) length of 255 characters.
When I edit that file and save it with OpenOffice Calc, it creates both a .dbf and a .dbt. I would like to leave the edited file in the format I found it in, that is, with a 255-character field.
Is that possible?
Does it depend on the character set (the only option I can see when using OpenOffice and the Excel versions that supported .dbf)?
There are many versions of dbf files; some of them, such as Clipper's, actually allowed character fields longer than 254 to be stored in the dbf file itself.
I don't believe OpenOffice or Excel supports writing those formats, so if you want to make changes (and not just read the data) you will need to find another tool.
As far as telling the xBase version goes, you could start with the Data File Header Structure for the dBASE Version 7 Table File.
As for DBF file editors, Google is your friend. A quick search gave me, for example, GTK DBF Editor and CDBF. I remember using the latter about three years back with a client who was still running Clipper 5.2 apps under Microsoft Windows 98.
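A quick programmatic way to tell the variant: the first byte of a .dbf file's header encodes the format version. A minimal sketch (the filename and function name are mine; the value table covers common variants only, and some vendors reused values):

```python
# Common values of the first header byte in a .dbf file (not exhaustive).
DBF_VERSIONS = {
    0x02: "FoxBASE",
    0x03: "dBase III+/FoxPro, no memo",
    0x30: "Visual FoxPro",
    0x83: "dBase III+ with .dbt memo",
    0x8B: "dBase IV with memo",
    0xF5: "FoxPro 2.x with memo",
}

def dbf_version(path):
    """Return a best-guess description of a .dbf file's xBase variant."""
    with open(path, "rb") as f:
        first_byte = f.read(1)[0]
    return DBF_VERSIONS.get(first_byte, f"unknown (0x{first_byte:02X})")
```

That won't tell you everything (memo handling details differ between variants), but it narrows down which tool you need.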
I created a new sqlite3 database using the latest sqlite3.exe v3.25 tools downloaded from https://www.sqlite.org/download.html
I want to verify that the sqlite database created is v3.25 to take advantage of the latest features.
How do I use sqlite3.exe tool to verify that the version is v3.25? I welcome other means to do so.
I tried the most-voted answer here:
How to find SQLITE database file version
However, that answer doesn't help: it returns the library version, not the database file version.
I am using Windows 10.
As per the official specification of the database format, there is a header at the beginning of the database file spanning the first 100 bytes.
There are a few items in the header that contain version numbers:
Offset  Size  Description
18      1     File format write version. 1 for legacy; 2 for WAL.
19      1     File format read version. 1 for legacy; 2 for WAL.
60      4     The "user version" as read and set by the user_version pragma.
92      4     The version-valid-for number.
96      4     SQLITE_VERSION_NUMBER
As per your question it looks like you want to get SQLITE_VERSION_NUMBER.
That is actually not a "database version" (there is no such thing), but rather the version of the SQLite library that most recently modified the database file.
To get that value you simply have to read the 4-byte big-endian integer at offset 96.
To that end, you can use the Python code here ("I welcome other means to do so").
I tried creating a database with python code containing the lines
import sqlite3
print('sqlite3 sqlite_version: ', sqlite3.sqlite_version)
(... create database ...)
which printed
sqlite3 sqlite_version: 3.31.1
and then reading the database header with the linked code.
In particular, lines
sqlite_version_number = unpack('>i', header[96:])[0]
print('SQLITE_VERSION_NUMBER: ' + str(sqlite_version_number))
printed
SQLITE_VERSION_NUMBER: 3031001
confirming that "the database version number" (in quotes for the reasons explained above) was read correctly.
Notes:
1. This works on both Windows and Linux, as long as you have Python available.
2. You can find other ways of reading the header. The point is identifying where to look for the information you need.
   2.1. sqlite3 CLI: I couldn't find a command that would show the required info.
   2.2. Linux: dd can do the job for you.
   2.3. PowerShell: [System.IO.File] or Get-Content could possibly help. I did not explore this further.
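Putting the header layout and the snippets above together, a minimal self-contained sketch (the database filename is a placeholder; any SQLite file works):

```python
import sqlite3
from struct import unpack

# Create (or touch) a database so there is a header on disk to read.
con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS t (x)")
con.commit()
con.close()

# The first 100 bytes of a SQLite database file are the header.
with open("example.db", "rb") as f:
    header = f.read(100)

# SQLITE_VERSION_NUMBER: 4-byte big-endian integer at offset 96,
# encoded as X*1000000 + Y*1000 + Z for release X.Y.Z.
version_number = unpack(">i", header[96:100])[0]
x, rest = divmod(version_number, 1_000_000)
y, z = divmod(rest, 1_000)
print(f"SQLITE_VERSION_NUMBER: {version_number} -> {x}.{y}.{z}")
```

Since we just wrote the file with the running library, the decoded value will match sqlite3.sqlite_version.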
I have data stored in .ddm, .pnt, .fdt and .bin files.
How can I export (or extract or transform) data from those file formats into .csv?
I think it's an ADABAS database.
Yes, those file extensions look like an Adabas database.
You need an Adabas/Natural environment to run the database; you can then write a simple Natural program that reads the database content and writes it to a text "work file" with ";" delimiters and a .csv extension. I haven't come across any tool for manually unpacking the database files.
As peterozgood pointed out, you would normally use Natural for that.
If you're using Natural on Windows or Unix you can code the following
DEFINE WORK FILE nn TYPE 'CSV'
...where nn is a number between 1 and 32, identifying the desired work file.
(This may also be specified by your admin in the so-called Natparm, along with the codepage and delimiter.)
Then you can output data to the file by coding
WRITE WORK FILE nn operand1 ... operandN
Natural will automatically create the csv.
Fields will be separated by the delimiter and quoted and escaped as necessary.
(the delimiter may be specified in the Natparm or as a startup parameter)
Unfortunately this functionality is not available with Mainframe Natural.
(CSV that is. Workfiles are of course available)
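For comparison, the quoting and escaping behavior described above is the same idea that Python's csv module implements; a quick illustration with a ";" delimiter (illustrative only, not Natural):

```python
import csv
import io

# Fields containing the delimiter or a quote character get quoted,
# and embedded quotes are doubled, just like the Natural CSV work file.
buf = io.StringIO()
writer = csv.writer(buf, delimiter=";", quoting=csv.QUOTE_MINIMAL)
writer.writerow(["SMITH", 'said "hi"', "a;b", 42])
print(buf.getvalue())
```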
I am building a basic POS app for my cousin's pharmacy store so that he can dump the software he is currently using and save on license costs. All the medicine names, which he has painfully entered into the software, have been stored in a file with a .d01 extension.
What I want is a way to read the contents of the .d01 file programmatically so that I can import the names of the medicines into my app.
The software my cousin uses is built in FoxPro (because I see a lot of .cdx, .idx, and .dbf files), and the file I want to import has a .d01 extension. When I open the file in Notepad it looks something like this:
http://img192.imageshack.us/img192/5528/foxpro.jpg
So I assume it's some kind of database table or something. Can anyone please help me read this file, as I am not at all familiar with FoxPro?
Thanks a lot in advance to all those who take the time to reply.
Hey guys, thank you very much for replying so promptly. I tried the solution suggested by Otávio and it worked; I will now write a small utility to read the dbf.
It has a good chance of being just a regular .dbf file. Copy it somewhere safe, change the extension to dbf and see if you can open it from foxpro.
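If you'd rather check programmatically before renaming anything, the fixed 32-byte DBF header plus 32-byte field descriptors are easy to parse. A sketch (the function name and buffer sizes are mine; the byte offsets follow the common dBase III layout):

```python
from struct import unpack_from

def sniff_dbf(path):
    """Try to parse a file as a dBase III-style table; it will raise or
    return nonsense for a non-DBF file, which is itself informative."""
    with open(path, "rb") as f:
        data = f.read(32 + 32 * 128)  # header + up to 128 field descriptors
    version = data[0]
    # Header: record count at offset 4, header length at 8, record length at 10.
    n_records, header_len, record_len = unpack_from("<IHH", data, 4)
    fields, pos = [], 32
    # Field descriptors are 32 bytes each, terminated by a 0x0D byte.
    while pos + 32 <= len(data) and data[pos] != 0x0D:
        name = data[pos:pos + 11].split(b"\x00")[0].decode("ascii", "replace")
        ftype = chr(data[pos + 11])   # C, N, D, L, M, ...
        flen = data[pos + 16]
        fields.append((name, ftype, flen))
        pos += 32
    return version, n_records, record_len, fields
```

If the field names and types come back looking sane, it's almost certainly a DBF and the rename-and-open approach will work.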
Although it may have .cdx files, the actual paste of the file does not show a visually recognizable header format for a VFP table, even if it were part of a database container; the characters around each column name don't look right. It may be from another language that also utilized "Compound Indexes". I even saw an article about Sybase's iAnywhere, too.
In the worst-case scenario, if it turns out to have a fixed length per row and no dynamic column sizes, you could take the file, strip off what appears to be the header, leave just the data, and stream-read it in based on whatever constant record length you determine. Yes, brute force, but it's an option. Again, it doesn't LOOK like a VFP table.
BTW, what is the name of the software he is using? I'd look into that to see if there's any other indication of its source.
It looks sort of like a DBF file - maybe Clipper or something.
I have a bunch of *.TBC files from a very old MS-DOS application called TURBOLAB. Does anyone know which DB system uses files with a TBC extension?
I've tried renaming the files to *.dbf to check whether they are dBase files, with no luck.
Any idea?
Judging by the application and era (old MS-DOS), *.tbc is probably a fixed-length binary record format written by the application's developers.
Try opening the file in a text editor like TextPad first to see if you can read the contents. If you can, I have a fixed-length text file reader that you can adapt to your needs; if you cannot, you may need to determine field lengths and data types through trial and error.
Also, are there associated files for each *.tbc? A paired file could indicate field lengths and data types (or that information could be stored at the top of a *.tbc file itself).
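As an illustration of that trial-and-error approach, here is a sketch of a fixed-length binary record reader. The record layout (20-byte name, 4-byte little-endian integer, 8-byte code) is entirely made up; you would substitute whatever widths you deduce from the file:

```python
from struct import Struct

# Hypothetical 32-byte record layout; adjust the format string once you
# have worked out the real field widths and types by inspection.
RECORD = Struct("<20si8s")

def read_records(path):
    rows = []
    with open(path, "rb") as f:
        while chunk := f.read(RECORD.size):
            if len(chunk) < RECORD.size:
                break  # trailing partial record or EOF padding
            name, value, code = RECORD.unpack(chunk)
            rows.append((name.rstrip(b"\x00 ").decode("ascii", "replace"),
                         value,
                         code.rstrip(b"\x00 ").decode("ascii", "replace")))
    return rows
```

A useful sanity check: if the file size is an exact multiple of your guessed record size (possibly after skipping a header), you are probably close.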
I Googled this: FlexPro. I hope it helps. Sounds pricey, I hope your data is worth it.
Judging by the application and era (old MS-DOS), *.tbc is probably a fixed-length binary record format written by the application's developers.
I think you are right. Unfortunately there are no matching file names. If each of those files is a 'table', there are like ~150 tables in this database. Too much work for such an old app. I guess my customer will have to start from scratch using my app.
Thanks anyway for your help.
I am looking for either a NAnt Task for SQL Server bcp, or the file format for bcp native output.
I suppose I could build a NAntContrib task for bcp, but I don't have time at the moment (do we ever?).
Has anybody strolled this path before? Advice?
Thanks - Jon
bcp doesn't have a single native file output format as such, but it can produce a binary file where each field is prefixed by a 1-4 byte header containing the length of the field. The prefix length and the row/column delimiter formats are specified in the format file (described here).
If using prefixed files, SQL Server bcp uses a -1 length in the header to denote nulls.
'Native' in bcp-speak refers to binary representations of the column data. This question has some discussion of these formats.
You could always use NAnt's <exec> task to drive bcp...