What does "file version numbers" mean? - file

In Emacs Lisp Reference Manual:
— Function: file-name-sans-versions filename &optional keep-backup-version
This function returns filename with any file version numbers, backup version numbers, or trailing tildes discarded.
If keep-backup-version is non-nil, then true file version numbers understood as such by the file system are discarded from the return value, but backup version numbers are kept.
What does the "true file version numbers" in the second paragraph mean?

Systems such as OpenVMS use a version number on files, such that when you create a file, say HELLO.TXT, the actual filename is HELLO.TXT;1, where the "1" is the version number. If you edited the file and saved it, the file system would automatically save a complete new copy as HELLO.TXT;2.
Whenever you open the file, you get the highest version number automatically, so usually you don't need to worry about versions at all. You can always specify an exact version number if you want, or use ;-1 to get one version prior, ;-2 for two versions prior, and so on; ;-0 opens the oldest version available. Things like large databases would update the file in place, which didn't create new versions.
More details: http://en.wikipedia.org/wiki/Files-11

How to verify that my sqlite3 database is version 3.25?

I created a new sqlite3 database using the latest sqlite3.exe v3.25 tools downloaded from https://www.sqlite.org/download.html
I want to verify that the sqlite database created is v3.25 to take advantage of the latest features.
How do I use sqlite3.exe tool to verify that the version is v3.25? I welcome other means to do so.
I tried the most voted answer here:
How to find SQLITE database file version
However, the answer doesn't help. It returns the library version, not the database file version.
I am using Windows 10.
As per the official specification of the database format, there is a header at the beginning of the database file spanning the first 100 bytes.
There are a few items in the header that contain version numbers:
Offset Size Description
18 1 File format write version. 1 for legacy; 2 for WAL.
19 1 File format read version. 1 for legacy; 2 for WAL.
60 4 The "user version" as read and set by the user_version pragma.
92 4 The version-valid-for number.
96 4 SQLITE_VERSION_NUMBER
As per your question it looks like you want to get SQLITE_VERSION_NUMBER.
That is actually not a "database version" (and there is no such thing), but rather the version of the SQLite library that most recently modified the database file.
To get that value you simply have to read the 4-byte big-endian integer at offset 96.
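Since you welcome other means, here is a minimal C sketch of that read. It is only an illustration: it assumes the database path is passed as the first argument and does little error checking beyond the basics.
#include <stdio.h>
/* Print the SQLITE_VERSION_NUMBER stored at offset 96 of an SQLite database file. */
int main(int argc, char **argv)
{
    unsigned char buf[4];
    if (argc < 2) {
        fprintf(stderr, "usage: %s database.db\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL || fseek(f, 96, SEEK_SET) != 0 || fread(buf, 1, 4, f) != 4) {
        fprintf(stderr, "could not read the header of %s\n", argv[1]);
        return 1;
    }
    fclose(f);
    /* 4-byte big-endian integer, encoded as X*1000000 + Y*1000 + Z for release X.Y.Z */
    unsigned long v = ((unsigned long)buf[0] << 24) | ((unsigned long)buf[1] << 16)
                    | ((unsigned long)buf[2] << 8)  |  (unsigned long)buf[3];
    printf("SQLITE_VERSION_NUMBER: %lu (%lu.%lu.%lu)\n",
           v, v / 1000000, (v / 1000) % 1000, v % 1000);
    return 0;
}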
Alternatively, you can use the Python code linked here (you did say "I welcome other means to do so").
I tried creating a database with Python code containing the lines
import sqlite3
print('sqlite3 sqlite_version: ', sqlite3.sqlite_version)
(... create database ...)
which printed
sqlite3 sqlite_version: 3.31.1
and then reading the database header with the linked code.
In particular, lines
sqlite_version_number = unpack('>i', header[96:])[0]
print('SQLITE_VERSION_NUMBER: ' + str(sqlite_version_number))
printed
SQLITE_VERSION_NUMBER: 3031001
confirming the correct reading of "the database version number" (in quotes for the reasons explained above).
Notes:
This would work in both Windows and Linux, as long as you have Python available.
You can find other ways of reading the header. The point is identifying where to look for the information you need.
2.1. SQLite. I couldn't find a built-in command that shows the required info.
2.2. Linux. dd can do the job for you.
2.3. PowerShell. [System.IO.File] or Get-Content can possibly help. I did not explore this further.

ClearCase automatic merge puts the content of the file in single line

We are using ClearCase UCM INTEROP setup in our organization.
We are facing a strange issue where the file content is put on one line when doing a merge, which causes the build to fail.
Does anyone know why this is happening?
There is another thing that can sneak up and bite you here.
View text modes: there are three in ClearCase, and you can get unexpected results if you use the wrong two.
Those modes are:
insert_cr (also known as msdos). This inserts carriage returns on cleartext creation (file open/edit/etc.) and strips them at checkin.
transparent. Doesn't make any changes to the line terminations.
strip_cr. This converts Windows line terminations to Unix ones on cleartext creation. It then re-inserts them on checkin.
If you use strip_cr for unix views, and insert_cr for Windows ones, you will have all kinds of line termination issues. And in some cases this can break builds.
You need to use two modes that are adjacent to one another in the list above: Windows insert_cr with Unix transparent, or Windows transparent with Unix strip_cr.
You may want to look at the problem file in another editor (notepad++ is one that handles both line termination styles well) and see whether you have Unix or Windows line terminations.
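If you would rather check from the command line than an editor, here is a rough sketch in C (nothing ClearCase-specific, just counting the two kinds of line termination in whatever file you pass it):
#include <stdio.h>
/* Count CRLF (Windows) and bare LF (Unix) line terminations in a file. */
int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");   /* binary mode so \r is not stripped for us */
    if (f == NULL) {
        perror(argv[1]);
        return 1;
    }
    long crlf = 0, lf = 0;
    int prev = 0, c;
    while ((c = fgetc(f)) != EOF) {
        if (c == '\n') {
            if (prev == '\r') crlf++;
            else lf++;
        }
        prev = c;
    }
    fclose(f);
    printf("CRLF (Windows) endings: %ld\nLF (Unix) endings: %ld\n", crlf, lf);
    return 0;
}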
It would also help to know the platform where the merge was attempted. If it was done on Windows, then #VoC's steps will work as they call the Windows GUI text merge tool. You may want to try the same thing using the Windows/Unix CLI merge tool (cleardiff) if the files are "text_file" elements. We have seen cases where the 2 CLI and GUI merge tools behave differently.
It depends on:
what "put on one line while doing a merge" means (extra line added, or the all text is in one line, which would mean \n isn't treated as a newline character)
the type of file being merge (text, xml, binary, ...), as the merge manager involved would differ.
Start by trying to reproduce the issue, as explained in this technote:
Copy the "base" version into base.txt.
Copy the "from" version into from.txt and copy the "to" version into to.txt.
Run the following command:
cleardiffmrg -out foo.out -base base.txt from.txt to.txt
See if ignoring whitespaces helps:
cleardiffmrg -blank_ignore -out foo.out -base base.txt from.txt to.txt.
Check your text modes associated with the view in which you are doing the merge.
Try to make sure the newline sequence used in both versions of the file is the same (\n for instance, both on Windows and Unix); a small conversion sketch follows below.
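A tiny filter along these lines is usually enough for that; it is only a sketch, and all it does is strip the \r of each \r\n pair while leaving everything else alone:
#include <stdio.h>
/* Read stdin, write stdout, turning \r\n into \n (usage: crlf2lf < in.txt > out.txt). */
int main(void)
{
    int c, prev = 0;
    while ((c = getchar()) != EOF) {
        if (prev == '\r' && c != '\n')
            putchar('\r');            /* keep a lone \r that is not part of a \r\n pair */
        if (c != '\r')
            putchar(c);
        prev = c;
    }
    if (prev == '\r')
        putchar('\r');                /* a trailing lone \r */
    return 0;
}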

Memo fields in dbf

I am aware that a .dbf database holds (text) fields larger than 254 characters in a separate .dbt file, linking to them with an M (memo) field.
I have a legacy database which I can plainly see contains a field with a stated (max) length of 255 characters.
When I edit that file and save it with OpenOffice Calc, it creates a .dbf and a .dbt. I would like to leave the edited file in the format I found it in, that is, with a 255-character field.
Is that possible?
Does it depend on the character set (the only option I can see when using OpenOffice and Excel, versions that supported .dbf)?
There are many versions of dbf files; some of them, such as Clipper, actually allowed for character fields longer than 254 to be stored in the dbf file itself.
I don't believe OpenOffice or Excel supports writing to those formats, so if you want to make changes (and not just read the data) you will need to find another tool.
As far as telling the xBase version goes, you could start with the Data File Header Structure for the dBASE Version 7 Table File.
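If you want to see how your particular file declares that field, a small sketch along these lines can help. It assumes the common dBASE III-style layout (signature byte at offset 0, 32-byte field descriptors starting at offset 32, terminated by 0x0D); other dialects differ in the details.
#include <stdio.h>
/* Dump the signature byte and field descriptors of a dBASE III-style .dbf file. */
int main(int argc, char **argv)
{
    unsigned char hdr[32], fld[32];
    if (argc < 2) {
        fprintf(stderr, "usage: %s file.dbf\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL || fread(hdr, 1, 32, f) != 32) {
        fprintf(stderr, "could not read the header\n");
        return 1;
    }
    printf("signature byte: 0x%02X\n", hdr[0]);   /* identifies the xBase variant */
    /* Field descriptors are 32 bytes each, following the 32-byte header and
       terminated by a 0x0D byte. Byte 11 is the type, byte 16 the declared length
       (some dialects reuse byte 17 as a high byte for long character fields). */
    while (fread(fld, 1, 32, f) == 32 && fld[0] != 0x0D)
        printf("field %-11.11s  type %c  length %u\n",
               (const char *)fld, fld[11], (unsigned)fld[16]);
    fclose(f);
    return 0;
}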
As for DBF file editors, Google is your friend. A quick search gave me, for example, GTK DBF Editor and CDBF. The latter I remember using about three years back with a client who was still running Clipper 5.2 apps under Microsoft Windows 98.

Tcl determine file name from browser upload

I have run into a problem in one of my Tcl scripts where I am uploading a file from a Windows computer to a Unix server. I would like to get just the original file name from the Windows file and save the new file with the same name. The problem is that [file tail windows_file_name] does not work: it returns the whole file name, like "c:\temp\dog.jpg", instead of just "dog.jpg". file tail works correctly on a Unix file name such as "/usr/tmp/dog.jpg", so for some reason it is not detecting that the file name is in Windows format. However, Tcl on my Windows computer works correctly with either name format. I am using Tcl 8.4.18, so maybe it is too old? Is there another trick to get it to split correctly?
Thanks
The problem here is that on Windows, both \ and / are valid path separators as far as the Windows API is concerned (even though only \ is deemed "official" on Windows). On the other hand, on a POSIX system the only valid path separator is /, and the only two bytes which can't appear in a pathname component are / and \0 (a byte with value 0).
Hence, on a POSIX system, "C:\foo\bar.baz" is a perfectly valid short filename, and running
file normalize {C:\foo\bar.baz}
would yield /path/to/current/dir/C:\foo\bar.baz. By the same logic, [file tail $short_filename] is the same as $short_filename.
The solution is to either do what Glenn Jackman proposed or to somehow pass the short name from the browser via some other means (some JS bound to an appropriate file entry?). Also you could attempt to detect the user's OS from the User-Agent header.
To make Glenn's idea more agnostic to the user's platform, you could go like this:
Scan the file name for "/".
If none is found, do set fname [string map {\\ /} $fname], then go to the next step.
Use [file tail $fname] to extract the tail name.
It's not very bullet-proof, but supposedly better than nothing.
You could always do [lindex [split $windows_file_name \\] end]

C - Reading multiple files

I just had a general question about how to approach a certain problem I'm facing. I'm fairly new to C, so bear with me here.
Say I have a folder with 1000+ text files. The files are not named in any kind of numbered order, but they are alphabetical. For my problem I have files of stock data, each file named after the company's respective ticker. I want to write a program that will open each file, read the data, find the historical low, compare it to the current price, calculate the percent change, and then print it.
Searching and calculating are not a problem; the problem is getting the program to go through and open each file. The only way I can see to attack this is to create a text file containing all of the ticker symbols, have the program read that into an array, and then run a loop that opens the first filename in the array, performs the calculations, prints the output, closes the file, and then loops back around to the second element (the next ticker symbol) in the array. This would be fairly simple to set up (I think), but I'd really like to avoid typing out over a thousand file names into a text file. Is there a better way to approach this? Not really asking for code (unless there is some amazing function in C that will do this for me ;) ), just some advice from more experienced C programmers.
Thanks :)
Edit: This is on Linux, sorry I forgot to mention that!
Under Linux/Unix (BSD, OS X, POSIX, etc.) you can use opendir / readdir to go through the directory structure. There is no need to generate static files that need to be updated when the file system already has the information you want. If you only want a sub-set of stocks at a given time, then using glob would be quicker; there is also scandir.
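For example, a minimal sketch along these lines (it assumes the ticker files live in a directory called data, which is only a placeholder, and it just prints each name where your processing would go):
#include <stdio.h>
#include <dirent.h>
int main(void)
{
    DIR *dir = opendir("data");          /* "data" is a placeholder for your own directory */
    struct dirent *entry;
    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')     /* skip ".", ".." and hidden files */
            continue;
        /* entry->d_name is the ticker file name: open it here, find the low,
           compare with the current price, print the result, close it. */
        printf("would process %s\n", entry->d_name);
    }
    closedir(dir);
    return 0;
}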
I don't know what the equivalent Win32 (Windows / Platform SDK) functions are called, if you are developing using Visual C++ as your C compiler. Searching the MSDN Library should help you.
Assuming you're running on linux...
ls /path/to/text/files > names.txt
is exactly what you want.
opendir() on Linux:
http://linux.die.net/man/3/opendir
Example:
http://snippets.dzone.com/posts/show/5734
In pseudo code it would look like this; I cannot give exact code as I'm not 100% sure this is the correct approach...
for each directory entry
    scan the filename
    extract the ticker name from the filename
    open the file
    read the data
    create a record consisting of the filename, the data, ...
    close the file
    add the record to a list/array
sort the list/array into alphabetical order based on the ticker name in the filename
You could vary it slightly if you wish: scan the filenames in the directory entries and sort them first, building a record for each filename, then go back to the start of the list/array and open each one individually, reading the data and putting it into its record.
Hope this helps,
best regards,
Tom.
There are no functions in standard C that have any notion of a "directory". You will need to use some kind of platform-specific function to do this. For some examples, take a look at this post from Cprogramming.com.
Personally, I prefer using the opendir()/readdir() approach as shown in the second example. It works natively under Linux and also on Windows if you are using Cygwin.
Approach 1) I would just have a specific directory in which I have ONLY these files containing the ticker data and nothing else. I would then use the C readdir API to list all files in the directory and iterate over each one performing the data processing that you require. Which ticker the file applies to is determined only by the filename.
Pros: Easy to code
Cons: It really depends where the files are stored and where they come from.
Approach 2) Change the file format so the ticker files start with a magic code identifying them as ticker files, followed by a string containing the ticker name. As before, use readdir to iterate through all files in the folder and open each one, check that the magic number is set, read the ticker name from the file, and process the data as before.
Pros: More flexible than before. Filename needn't reflect name of ticker
Cons: Harder to code, file format may be fixed.
"but I'd really like to avoid typing out over a thousand file names into a text file. Is there a better way to approach this?"
I solved the exact same problem a while back, albeit for personal use :)
What I did was use the OS shell commands to generate a list of those files, redirect the output to a text file, and have my program run through it.
On UNIX, there's the handy glob function:
#include <glob.h>     /* glob(), globfree() */
#include <stdio.h>
#include <string.h>
glob_t results;
memset(&results, 0, sizeof(results));
glob("*.txt", 0, NULL, &results);    /* match every .txt file in the current directory */
for (size_t i = 0; i < results.gl_pathc; i++)
    printf("%s\n", results.gl_pathv[i]);
globfree(&results);
On Linux or a related system, you could use the fts library. It's designed for traversing file hierarchies: man fts,
or even something as simple as readdir
If on Windows, you can use the Directory Management APIs. More specifically, the FindFirstFile function, used with wildcards, in conjunction with FindNextFile.
