I am trying to understand the different file extensions for the PFXplus PowerFlex database. Could someone briefly explain what each file is for?
.k1
.k2
.k3
...
.k13
.k14
.k15
.fd
.def
.hdr
.prc
.pc3
Data files:
OK, so .dat is the data file.
.k1 through .k15 are index files.
These are the critical data files at runtime (combined with filelist.cfg, pffiles.tab, or similar, which define what files are available overall).
.fd is the file definition, needed for compiling programs.
.tag (which you did not mention) is needed only if you need to access field names at run time (such as when using a generic report tool).
.def is the file definition in human-readable form; it is not needed by any process, but is produced so a programmer or user can understand the file structure.
Run time:
The .ptc files are the compiled threads interpreted by the PowerFlex runtime.
The .prc file is a resource file used at runtime in conjunction with the .ptc file: it defines how a character-based program should look in a GUI environment in "g-mode". It was the cheap way to upgrade character-based programs when Windows first became popular.
.hdr and .pc3 escape me at the moment, but are vaguely familiar. .hdr is probably another data file used with compression or special field types in later versions of PFXplus; .pc3 may in fact be the .ptc files...
I want to localize my application using the catopen()/catgets() family of functions.
As far as I understand, in the absence of the NLSPATH variable, message catalogs will be looked up under /usr/share/locale/xx_YY/LC_MESSAGES.
What is the "traditional" file extension for message catalog files? I see some code examples using *.cat while others don't use any extension at all. Is it dependent on a particular UNIX flavour?
On my Linux boxes I see plenty of *.mo files, but those are GNU gettext archives. It seems catgets() can rarely be seen "in the wild" nowadays.
I meant this to be a comment, but it's a bit too long :P
Looking at the doc you've linked to, it seems probable that the code isn't opinionated as to file extension. Since you're not using MIME or anything to automatically find a handler for this file, the only requirement is likely to be that the name is correct. In UNIX, especially in the shell, file extensions often mean nothing to the system; for example, any file extension can be used on an executable script as long as the executable bit is set and the shebang line at the top of the file specifies an appropriate interpreter.
It's possible the user community, if one still exists for this crufty-sounding library, has a standard naming convention that the docs don't describe, but I wouldn't sweat it too much. It's trivial to change file names, even if it means a recompile (command-line variables would make the program agnostic as to file name and extension).
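For what it's worth, here is a minimal sketch of the catopen()/catgets() flow in C. The catalog name "myapp.cat" and the set/message numbers are made up for the example, which also illustrates that the extension is entirely your choice:

#include <nl_types.h>
#include <stdio.h>

int main(void)
{
    /* The name is whatever you pass in; ".cat" is a common but not
       mandatory convention.  A name without a '/' is searched for via
       NLSPATH (or the system default catalog path). */
    nl_catd cat = catopen("myapp.cat", NL_CAT_LOCALE);

    /* Set 1, message 1; the last argument is the fallback returned if
       the catalog or message is missing, so this is safe either way. */
    printf("%s\n", catgets(cat, 1, 1, "Hello, world!"));

    if (cat != (nl_catd)-1)
        catclose(cat);
    return 0;
}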
I have to work with a C/C++ build environment that drops intermediate files all over the place:
.i files containing the output of the C-preprocessor (roughly raw C)
.s files containing the input of the C-assembler
CEDET (I assume the semantic analyzer) eventually finds these files and attempts to index them. This results in jumping to .i files containing raw C for definitions, and generally slows down parsing and loading of the .semanticdb.
I never open these files in emacs, so they must be being loaded by the background analyser.
Is it possible to prevent the analyser from loading these files? I can't find any configuration options that define the file-types that are parsed by the background analyser.
If you never need C mode for these files, here's a quick fix:
;; Open preprocessor output and assembly in fundamental-mode so that
;; the semantic analyzer never treats them as C sources.
(add-to-list 'auto-mode-alist '("\\.i\\'" . fundamental-mode))
(add-to-list 'auto-mode-alist '("\\.s\\'" . fundamental-mode))
The answer from abo-abo gave me the clues I needed. The grep implementation of semantic-symref-perform-search (used by EDE) uses auto-mode-alist to find the files that match a given semantic mode (based on the current buffer's mode, e.g. `c-mode) when trying to resolve symbols.
The final fix I used is to specifically eliminate the default entries in the auto-mode-alist using:
;; Remove the default mappings so .i/.ii files are never parsed as C/C++.
;; Wrapping delete in setq ensures the binding is updated even if the
;; matching entry happens to be the first element of the list.
(setq auto-mode-alist (delete '("\\.i\\'" . c-mode) auto-mode-alist))
(setq auto-mode-alist (delete '("\\.ii\\'" . c++-mode) auto-mode-alist))
Adding fundamental-mode entries as suggested by abo-abo seems to work as well; however, I was concerned that, since the c-mode entries were still in the list, a change in implementation could reactivate them.
This may be compiler specific, in which case I am using the IAR EWARM 5.50 compiler (firmware development for the STM32 chip).
Our project consists of a bunch of C-code libraries that we compile first, and then the main application which compiles its C-code and then links in those libraries (pretty standard stuff).
However, if I use a hex editor and open up any of the library object files produced, or the final application binary, I find a whole bunch of plain-text references to the file paths of the C files that were compiled (e.g. I see "C:\Development\trunk\Common\Encryption\SHA_1.c").
Two issues with this:
we don't really want the file paths being easily readable, as they reveal our design somewhat
the size of the binary grows if your C files are located in a long subdirectory path (the binary contains the full path, not just the name). This is especially important when we're dealing with firmware that has a limited amount of code space (256 KB).
Any thoughts on this? I've tried all the switches in the compiler I can think of to "remove debug information", etc., but those paths are still in there.
"The command-line option --no_path_in_file_macros has been added. It removes the path leaving only the filename for the symbols FILE and BASE_FILE."
It is described in the IAR release notes:
http://supp.iar.com/FilesPublic/UPDINFO/005832/arm/doc/infocenter/iccarm_history.ENU.html
Or you can look for uses of the __FILE__ and __BASE_FILE__ macros and remove them if you do not want to use the flag.
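For context, this is how such paths usually get into the image in the first place: every expansion of __FILE__ (directly, or indirectly via assert() and logging macros) embeds the path string the compiler was invoked with. A generic C illustration, not specific to IAR:

#include <stdio.h>

/* Each use of __FILE__ places the invocation path in the binary as a
   string literal, e.g. "C:\Development\trunk\Common\Encryption\SHA_1.c"
   when the compiler is given a full path. */
#define TRACE(msg) printf("%s:%d: %s\n", __FILE__, __LINE__, (msg))

int main(void)
{
    TRACE("started");  /* with --no_path_in_file_macros, only the bare
                          filename would be embedded here */
    return 0;
}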
I have a program which compiles and runs scripts.
To create a standalone version of the script, I reserve a large static buffer to hold the compiled script. The compiled script is copied into a copy of the program and it can then be run from that copy.
This works fine. It has some disadvantages however:
the buffer is static and takes up space if there's no compiled program in it.
if the script to be included exceeds the buffer's size, I need to build a new version with a larger buffer.
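Roughly, the current scheme looks like this (a sketch; the marker name and sizes are made up):

#include <stdio.h>

#define MAX_SCRIPT (64 * 1024)

/* The build tool searches a copy of this executable for the "SCRIPTv1"
   marker and overwrites the bytes that follow with the compiled script.
   Being initialized data, the whole area occupies file space even when
   empty -- the first disadvantage above. */
static char script_area[8 + MAX_SCRIPT] = "SCRIPTv1";

int main(void)
{
    const char *script = script_area + 8;
    if (*script)
        puts("running embedded script...");  /* hand off to the interpreter */
    else
        puts("no embedded script");
    return 0;
}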
I'd like to add the compiled script to the end of the program, but naively doing so doesn't work as the exe loader chokes on the new file size.
Is there a way to manipulate the exe so it would be acceptable for the loaders (mind this is a cross platform program)?
I would think that this is unlikely to be possible without being platform specific. Time for a common interface with different implementations (so the code that saves/loads the script is common, but the executable manipulation is specific).
On Windows you'll hit the problem that a running executable file is locked against modification. This can be worked around by working on copies (the only completely deterministic way to rename back is to perform the move at boot, but scheduling a job might be acceptable).
On Windows the easiest way to add data to an image (executable or DLL) is using resources. Define a custom resource type, add it into the image (UpdateResource function), and later retrieve it with LoadResource.
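A minimal sketch of that route; the resource type name "SCRIPT" and the ID 1 are arbitrary choices for this example:

#include <windows.h>

/* Store a compiled script in a copy of the exe as a custom resource. */
BOOL embed_script(const char *exeCopyPath, const void *data, DWORD size)
{
    HANDLE h = BeginUpdateResourceA(exeCopyPath, FALSE);
    if (!h)
        return FALSE;
    if (!UpdateResourceA(h, "SCRIPT", MAKEINTRESOURCEA(1),
                         MAKELANGID(LANG_NEUTRAL, SUBLANG_NEUTRAL),
                         (LPVOID)data, size)) {
        EndUpdateResourceA(h, TRUE);      /* discard the partial update */
        return FALSE;
    }
    return EndUpdateResourceA(h, FALSE);  /* commit */
}

/* In the running copy, locate the embedded script again. */
const void *find_script(DWORD *size)
{
    HRSRC r = FindResourceA(NULL, MAKEINTRESOURCEA(1), "SCRIPT");
    if (!r)
        return NULL;
    *size = SizeofResource(NULL, r);
    return LockResource(LoadResource(NULL, r));
}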
You said "script", so I suppose you have a separate file containing the script (a text file?). You could write a simple program that reads the script file and convert it in a compilable form (e.g. a C source containing the initialization of an array of byte). There are also tools you can use to convert an arbitrary file into a linkable object (.o or .obj). In the past I have used the command "objcopy" from GNU bimutils. In particular, on linux:
objcopy -I binary -O elf32-i386 mydata mydata.o
This command creates an object and three public symbols you can use to find the start, the end and the size of your data block:
_binary_mydata_start
_binary_mydata_end
_binary_mydata_size
Something similar may work on Windows as well, provided that you install a Windows version of GNU binutils (e.g. via Cygwin).
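On the C side, the embedded block is then reachable through those symbols. A minimal sketch (note that the symbols themselves are addresses marking the data, not ordinary variables):

#include <stdio.h>

/* Declared here, defined by the linker from mydata.o; their addresses
   delimit the embedded data block. */
extern const char _binary_mydata_start[];
extern const char _binary_mydata_end[];

int main(void)
{
    size_t size = (size_t)(_binary_mydata_end - _binary_mydata_start);
    printf("embedded script: %zu bytes at %p\n",
           size, (const void *)_binary_mydata_start);
    /* ...hand _binary_mydata_start and size to the script interpreter... */
    return 0;
}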
I have a huge project, written entirely in C, and a single makefile that is used to compile it. The project's C files contain lots of capitalization problems: tons of header file names are spelled with the wrong case in lots of C files.
The problem is that I need to migrate this project to compile on a Linux machine, and since Linux is case-sensitive I get tons of errors.
Is there an elegant way to run the makefile on Linux and tell it to ignore case?
Any other solution will be welcome as well.
Thanks a lot.
Motti.
You'll have to fix everything by hand and rename every file or fix every #include. Even for a huge project (comparable to the Linux kernel), it should be possible to do this in an hour or two. Automation may be possible, but the manual way should be better, because a script won't be able to guess which name is right: the filename, or the name used in the #include.
Besides, this situation is the fault of the original project developer. If he or she hadn't been sloppy and had named every header in every #include correctly, this wouldn't have happened. Technically, this is a code problem similar to a syntax error. The only right way to deal with it is to fix it.
I think it wouldn't take too long to write a small script which goes through the directories first, then fixes the #include lines. Explained:
1. Scan the headers' folder and collect the filenames.
2. Make a lowercased copy of each name; you now have pairs of original and lowercased names.
3. Scan the C source files and find each line containing #include.
4. Lowercase the included filename.
5. Look that lowercased filename up in the list collected from the headers.
6. Replace the name in the source line with the original-case name from the headers (see the sketch below).
You should put the modified files into a separate folder structure to avoid overwriting the whole source with buggy output. Don't forget to create the target folders during the source tree scan.
I recommend a scripting language for this task; I prefer PHP, but only because it's the scripting language I know. Yep, it will run for a while, but only once.
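To illustrate the core replacement idea (steps 3-6) in C rather than PHP: the sketch below reads one source file on stdin, rewrites #include "..." names to their on-disk spelling, and writes the result to stdout. The headers[] list stands in for the names collected by scanning the real include directories (step 1); the names in it are made up.

#include <stdio.h>
#include <string.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* Canonical header names as they exist on disk (collected in step 1). */
static const char *headers[] = { "MyDefs.h", "Utils.h", "Config.h" };
#define NHEADERS (sizeof headers / sizeof headers[0])

int main(void)
{
    char line[4096];
    while (fgets(line, sizeof line, stdin)) {
        int fixed = 0;
        char *open = strstr(line, "#include") ? strchr(line, '"') : NULL;
        char *close = open ? strchr(open + 1, '"') : NULL;
        if (close) {
            *close = '\0';   /* isolate the name between the quotes */
            for (size_t i = 0; i < NHEADERS && !fixed; i++) {
                if (strcasecmp(open + 1, headers[i]) == 0) {
                    /* Emit the prefix, the on-disk spelling, the rest. */
                    printf("%.*s\"%s\"%s", (int)(open - line), line,
                           headers[i], close + 1);
                    fixed = 1;
                }
            }
            if (!fixed)
                *close = '"';   /* no match: restore the quote */
        }
        if (!fixed)
            fputs(line, stdout);
    }
    return 0;
}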
(I bet that you will have other difficulties with that project, this problem is not a typical indicator of high quality work.)
Well, I can only tell you that you need to change the case of those header files. I don't know of any way to make it automatic, but you can use cscope to do it in an easier way.
http://www.linux-tutorial.info/modules.php?name=ManPage&sec=1&manpage=cscope
You can mount the files on a case-insensitive file system. FAT comes to mind. ntfs-3g does not appear to support this.
I use the find-all and replace-all functionality of Source Insight when I have to do a complete replacement. Your problem seems quite big, but you can try replacing every header file name in all occurrences in the source files using the "Find All" + "Replace" functionality. You can use Notepad++ for doing the same, too.
A long time ago there was a great tool under MPW (Macintosh Programmer's Workshop) called Canon. It was used to canonize text files, i.e. make all symbols found in a given reference list have the same usage of upper/lower case. A tool like that would be ideal for this task; I wonder if anything similar exists under Linux?