I created a new sqlite3 database using the latest sqlite3.exe v3.25 tools downloaded from https://www.sqlite.org/download.html
I want to verify that the sqlite database created is v3.25 to take advantage of the latest features.
How do I use sqlite3.exe tool to verify that the version is v3.25? I welcome other means to do so.
I tried the most-voted answer here:
How to find SQLITE database file version
However, the answer doesn't help. It returns the library version, not the database file version.
I am using Windows 10.
As per the official specification of the database format, there is a header at the beginning of the database file spanning the first 100 bytes.
There are a few items in the header that contain version numbers:
Offset  Size  Description
18      1     File format write version. 1 for legacy; 2 for WAL.
19      1     File format read version. 1 for legacy; 2 for WAL.
60      4     The "user version" as read and set by the user_version pragma.
92      4     The version-valid-for number.
96      4     SQLITE_VERSION_NUMBER
As per your question, it looks like you want to get SQLITE_VERSION_NUMBER.
That is actually not a "database version" (there is no such thing), but rather the version of the SQLite library that most recently modified the database file.
To get that value you simply have to read the 4-byte big-endian integer at offset 96.
For that, you can use the Python code here (you did say "I welcome other means to do so").
I tried creating a database with Python code containing the lines
import sqlite3
print('sqlite3 sqlite_version: ', sqlite3.sqlite_version)
(... create database ...)
which printed
sqlite3 sqlite_version: 3.31.1
and then reading the database header with the linked code.
In particular, lines
sqlite_version_number = unpack('>i', header[96:])[0]
print('SQLITE_VERSION_NUMBER: ' + str(sqlite_version_number))
printed
SQLITE_VERSION_NUMBER: 3031001
confirming the correct reading of "the database version number" (in quotes for the reasons explained above).
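For completeness, here is a minimal self-contained sketch of the same approach; the path test.db is a placeholder for your database file:

# Read SQLITE_VERSION_NUMBER straight from the file header; 'test.db' is a placeholder.
from struct import unpack

with open('test.db', 'rb') as f:
    header = f.read(100)  # the SQLite header spans the first 100 bytes

# 4-byte big-endian integer at offset 96: SQLITE_VERSION_NUMBER
sqlite_version_number = unpack('>I', header[96:100])[0]
print('SQLITE_VERSION_NUMBER:', sqlite_version_number)

# The number encodes X*1000000 + Y*1000 + Z for version X.Y.Z, e.g. 3031001 -> 3.31.1
x, rest = divmod(sqlite_version_number, 1000000)
y, z = divmod(rest, 1000)
print(f'Version: {x}.{y}.{z}')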
Notes:
1. This would work on both Windows and Linux, as long as you have Python available.
2. You can find other ways of reading the header. The point is identifying where to look for the information you need.
2.1. SQLite CLI: I couldn't find a command that would show the required info.
2.2. Linux: dd can do the job for you.
2.3. PowerShell: [System.IO.File] or Get-Content could possibly help. I did not explore this further.
I am aware that a .dbf database stores (text) fields larger than 254 characters in separate .dbt files, linking to them with an M (memo) field.
I have a legacy database which I can plainly see contains a field with a stated (max) length of 255 characters.
When I edit that file and save it with OpenOffice Calc, it creates a .dbf and a .dbt. I would like to leave the edited file in the format I found it in, that is, with a 255-character field.
Is that possible?
Does it depend on the character set (the only option I can see when using OpenOffice and Excel, versions that supported .dbf)?
There are many versions of dbf files; some of them, such as Clipper's, actually allowed character fields longer than 254 to be stored in the dbf file itself.
I don't believe OpenOffice or Excel supports writing to those formats, so if you want to make changes (and not just read the data) you will need to find another tool.
As far as telling the xBase version goes, you could start with the Data File Header Structure for the dBASE Version 7 Table File.
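If you want to inspect that header programmatically, here is a minimal Python sketch based on the fixed 32-byte DBF header layout; the filename legacy.dbf is a placeholder, and the date interpretation can vary between xBase dialects:

# Read the fixed 32-byte DBF header; 'legacy.dbf' is a placeholder filename.
from struct import unpack

with open('legacy.dbf', 'rb') as f:
    header = f.read(32)

version = header[0]                            # xBase version/flags byte
yy, mm, dd = header[1], header[2], header[3]   # date of last update (years since 1900)
records, header_size, record_size = unpack('<IHH', header[4:12])

print(f'Version byte: 0x{version:02x}')
print(f'Last update: {1900 + yy:04d}-{mm:02d}-{dd:02d}')
print(f'{records} records, header {header_size} bytes, record {record_size} bytes')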
As for DBF file editors, Google is your friend. A quick search gave me, for example, GTK DBF Editor and CDBF. The latter I remember using about three years back with a client who was still running Clipper 5.2 apps under Microsoft Windows 98.
In the Emacs Lisp Reference Manual:
— Function: file-name-sans-versions filename &optional keep-backup-version
This function returns filename with any file version numbers, backup version numbers, or trailing tildes discarded.
If keep-backup-version is non-nil, then true file version numbers understood as such by the file system are discarded from the return value, but backup version numbers are kept.
What does the "true file version numbers" in the second paragraph mean?
Systems such as OpenVMS use a version number on files, such that when you create a file, say HELLO.TXT, the actual filename is HELLO.TXT;1, where the "1" is the version number. If you edit the file and save it, the file system automatically saves a complete new copy as HELLO.TXT;2.
Whenever you open the file, you get the highest version number automatically, so usually you don't need to worry about version numbers at all. You can always specify an exact version number if you want, or use ;-1 to get the version one prior, ;-2 for two versions prior, and so on; ;-0 opens the oldest version available. Things like large databases would update the file in place, which didn't create new versions.
More details: http://en.wikipedia.org/wiki/Files-11
I want to transfer a test file to a mainframe, but the file has lines exceeding 80 characters, which is the default for FTP. Because the created dataset has a record length of 80, I am getting
451-File transfer failed. File contains records that are longer than the LRECL of the new file.
error. I tried this:
curl --ftp-ssl -k -v -B -T BBBBB -u USERNAME:PASS ftp://HOST_NAME:PORT/'DATASET_NAME(BBBBB)'
To solve this problem, I added -Q "site lrecl=250", but this didn't help.
Are you creating a dataset, or are you creating a member in a PDS? The syntax DATASET_NAME(BBBBB) implies you could be creating a member in an existing PDS.
The LRECL characteristics are defined in the PDS definition and cannot be changed by the send command.
If it is an existing PDS, you will need to create a new dataset/PDS, either through the send command or by creating a new dataset on the mainframe with the correct characteristics and then doing the send.
Pre-allocate, on the mainframe, a dataset with the characteristics that you want, like the LRECL (record length) and RECFM (record format: are all the records a fixed length, or can they differ?).
If you ftp to that dataset, does that give you a problem?
I don't think that 80 is a "default" value for FTP; it is likely just the LRECL of the dataset you are trying to stuff the data into.
Somewhere amongst the Mainframe Support staff are people who know the standards for your Mainframe's ftp usage. It would be worth locating them, explaining what you have and what you need to do, and ask them the correct way to do it. Better than struggling away now, then have it "bounced" along the way as being "non-standard".
If this is a one-time or seldom repeated task, you could transfer the file to the Unix file system and then use OGET to create the "classic" z/OS file.
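For what it's worth, the same transfer can be scripted with Python's standard ftplib; this is only a sketch, with HOST_NAME, USERNAME, PASS, and the dataset name taken as placeholders from the curl example, and (as noted above) the SITE attributes only take effect when the target dataset is newly allocated:

# Sketch of the transfer using Python's standard ftplib over FTPS; all names are placeholders.
from ftplib import FTP_TLS

ftp = FTP_TLS()
ftp.connect('HOST_NAME', 21)   # placeholder host and port
ftp.login('USERNAME', 'PASS')  # placeholder credentials
ftp.prot_p()                   # encrypt the data channel, as with curl --ftp-ssl

# Allocation attributes; honored only when the target dataset is created anew.
ftp.sendcmd('SITE LRECL=250 RECFM=FB')

with open('BBBBB', 'rb') as f:  # the local file, as in the curl example
    ftp.storlines("STOR 'DATASET_NAME(BBBBB)'", f)
ftp.quit()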
I am building a basic POS app for my cousin's pharmacy store so that he can dump the software he is currently using and save on license costs. All the medicine names, which he has painfully entered into that software, are stored in a file with a .d01 extension.
What I want is a way to read the contents of the .d01 file programmatically so that I can import the names of the medicines into my app.
The software my cousin uses is built in FoxPro (because I see a lot of .cdx, .idx, and .dbf files), and the file I want to import has a .d01 extension. When I open the file in Notepad it looks something like this:
http://img192.imageshack.us/img192/5528/foxpro.jpg
So I assume it is some kind of database table or something. Can anyone please help me read this file, as I am not at all familiar with FoxPro?
Thanks a lot in advance to all those who take the time to reply.
Hey guys, thank you very much for replying so promptly. I tried the solution suggested by Otávio and it worked; I will now write a small utility to read the dbf.
It has a good chance of being just a regular .dbf file. Copy it somewhere safe, change the extension to .dbf, and see if you can open it from FoxPro.
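If the rename works, one quick way to dump the records is the third-party dbfread package (pip install dbfread); this is just a sketch, and the filename and encoding are assumptions:

# Dump records from the renamed file; 'medicines.dbf' and cp1252 are assumptions.
from dbfread import DBF

for record in DBF('medicines.dbf', encoding='cp1252'):
    print(record)  # each record is a mapping of field name -> value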
Although it may have .cdx files, the actual paste of the file does not appear to be a visually recognizable header format of a VFP table, even if it is part of a database container. The characters around each column name don't look right. It may be from another language that also utilized "compound indexes"; I even saw an article about Sybase's iAnywhere too. In the worst-case scenario, if it turns out to have a fixed length per row and no dynamic column sizes, you could take the file, strip off what appears to be the header, leave just the data, and stream-read it in based on whatever constant record length you determine. Yes, brute force, but it is an option. Again, it doesn't LOOK like a VFP table.
BTW, what is the name of the software he is using? I'd look into that to see if there is any other indication of its source.
It looks sort of like a DBF file - maybe Clipper or something.
I have a bunch of *.TBC files from a very old MS-DOS application called TURBOLAB. Does anyone know which DB system uses files with a TBC extension?
I've tried renaming the files to *.dbf to check if they are dBase files with no luck.
Any idea?
Judging by the application and era (old MS-DOS), *.tbc is probably a fixed-length binary record format written by the application's developers.
Try opening the file in a text editor like TextPad first to see if you can read the contents. If you can, I have a fixed-length text file reader that you can adapt to your needs. If you cannot, you may need to determine field lengths and data types through trial and error.
Also, are there associated files for each *.tbc? A paired file could indicate field lengths and data types (or that information could be stored at the top of a *.tbc file itself).
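For that trial-and-error inspection, a minimal hex dump in Python can help you spot a header and repeating record boundaries; the filename TABLE01.TBC is a placeholder:

# Hex-dump the start of an unknown binary file; 'TABLE01.TBC' is a placeholder.
with open('TABLE01.TBC', 'rb') as f:
    data = f.read(256)  # the first 256 bytes usually cover any header

for offset in range(0, len(data), 16):
    chunk = data[offset:offset + 16]
    hexpart = ' '.join(f'{b:02x}' for b in chunk)
    asciipart = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
    print(f'{offset:04x}  {hexpart:<47}  {asciipart}')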
I Googled this: FlexPro. I hope it helps. Sounds pricey, I hope your data is worth it.
"Judging by the application and era (old MS-DOS) *.tbc is probably a fixed length binary record format written by the application's developers."
I think you are right. Unfortunately, there are no matching file names. If each of those files is a "table", there are something like 150 tables in this database; too much work for such an old app. I guess my customer will have to start from scratch using my app.
Thanks anyway for your help.