I am looking through some proprietary source code: sample programs demonstrating the use of a library.
The code is written in C and C++, using make as the build system.
Every file ends with an empty, commented-out bracket pair: /*[]*/ for source files and #[]# for makefiles. What could be the reason for this?
The code is compiled for ARM with GCC, using extensions.
It is most likely a placeholder for some sort of automatic expansion.
Typically something like macrodef (or one of the source code control filters) would expand such items to contain some relevant text. Since typically only the contents of the comment-protected brackets would be expanded, the comments would remain in place, protecting the source code from the expanded text at compilation time.
However, what you are currently looking at is probably the outer containing brackets with all of the internal expansions removed. This may have been done during a code migration from one source code control system to another. This is highly speculative, but it appears they did not take the effort to migrate the expansion items and instead simply removed them.
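For comparison, if the mechanism was something like RCS/CVS keyword substitution (purely a guess; the actual tool and the expanded text are unknown), a source file might once have ended with something like

/*[ $Id: sample.c,v 1.4 2002/06/11 09:15:42 jdoe Exp $ ]*/

and a makefile with

#[ $Id: Makefile,v 1.2 2002/06/10 14:01:11 jdoe Exp $ ]#

with the surrounding comment markers keeping the expanded text out of the actual build.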
On one project I used to work on, every C source file contained a comment at the very end:
/* End of file */
The reason for that was the gcc warning
warning: no newline at end of file
So we had this comment (with a newline after it) to make sure people did not write anything after it :)
I am trying to learn C and I have this C file that I want to view the macros of. Is there a tool to view the macros of a compiled C file?
No. That's literally impossible.
The preprocessor is a textual replacement that happens before the main compile pass. There is no difference between using a macro and putting the code the macro expands to in its place.*
*Ignoring the debugger output. But even then you can do it if you know the right #pragma to tell it the file and line number.
They're always defined in the header file(s) that you've imported with #include, or that those files in turn #include.
This may involve a lot of digging. It may involve going into files that make no sense to you because they're not written for casual inspection.
Any macros of any importance are usually documented. They may use other more complex implementation-specific macros that you shouldn't concern yourself with ordinarily, but if you're curious how they work the source is all there.
That being said, this is only relevant if you have the source and, more specifically, a complete build environment. Once compiled, all these definitions, like the source itself, do not appear in the executable and cannot be inferred directly from the executable, especially not from a release build.
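When you do have the source and the right include paths, the closest thing to "viewing the macros" is asking the preprocessor itself (standard GCC options; foo.c is just a placeholder name):

gcc -E foo.c -o foo.i    # write the fully macro-expanded source to foo.i
gcc -dM -E foo.c         # dump every #define visible at the end of the file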
Unlike Java or C#, C compiles directly to machine code, so there's no way to easily reverse that back to the source. There are "decompilers" that try, but they can only really guess at the original source. VM-based languages like Java and C# only lightly compile the code, so there are a lot of hints as to how that code was generated, and reversing it is an easier process.
I tried to use vim for C program editing. Is there a way to auto-generate the function skeletons declared in a header file?
Situations like:
"my_code.h"
int temp(int*);
and "my_code.c"
<<< auto-write here >>> something like
int temp(int *p) { return 0; }
int main()
{
}
I'm using the c.vim plug-in. I tried to find such a feature, but I couldn't make it work.
There are code completion scripts, yes. However, this is not something you generally want. It works for simple things like basic C functions and fails horribly beyond that (e.g. templates in C++). You don't save any time by using such plugins; mastering Vim's motion/yank/paste commands gives the same result in the same amount of time, and you become more familiar with a modal editor. Is it really that hard to copy-paste the function prototype and add some braces {}?
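For instance, reusing the my_code.h declaration from the question (the parameter name is my own choice), the entire "skeleton" is just:

/* my_code.c */
#include "my_code.h"

int temp(int *p)
{
    /* TODO: real implementation */
    return 0;
}

int main(void)
{
    return 0;
}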
If you want something to help as a reminder to write function definitions to go with function prototypes, consider using the taglist plugin.
snippets are like the built-in :abbreviate on steroids, usually with parameter insertions, mirroring, and multiple stops inside them. One of the first, very famous (and still widely used) Vim plugins is snipMate (inspired by the TextMate editor); unfortunately, it's not maintained any more; though there is a fork. A modern alternative (that requires Python though) is UltiSnips. There are more, see this list on the Vim Tips Wiki.
There are three things to evaluate: first, the features of the snippet engine itself; second, the quality and breadth of the snippets provided by the author or others; third, how easy it is to add new snippets.
Additionally, there are also template plugins that pre-initialize a new, empty file with a skeleton, often including a file header and copyright statement. Search vim.org; you'll find plenty.
I have a :GOTOIMPL command in lh-cpp that generates an empty function definition from a function declaration.
However, you'll have to execute the command on each function declaration and go back to the header file. I never took the time to batch the process from a header file to an empty implementation file, as this is not a use case I have; other solutions exist...
In other words, there are projects that do the job from the command line (and which you could then call from Vim), for instance https://github.com/Davidbrcz/header-expander, as well as other plugins, like protodef: http://www.vim.org/scripts/script.php?script_id=2624.
I have a single executable which consists of many .c source files across several directories.
Currently I need to run static analysis on the whole source code, not on each file separately.
I just found that GCC LTO (link-time optimisation) works by compressing GIMPLE, which mirrors the preprocessed source.
Also, when the compiler crashes during the LTO linking phase, it asks for the preprocessed sources to be sent with the bug report.
By merging source files, I mean combining all the files used to create the executable into a single file. Compiling and linking that single file would produce the executable, effectively doing manually what LTO does (but that's not the aim here; static analysers don't support things like IPO/LTO).
Doing this manually would definitely take hours…
So, is there a way to merge C source files automatically? Or at least to get the LTO preprocessed sources? (It seems the -save-temps option does nothing interesting during linking with LTO.)
CIL (C Intermediate Language) has a 'merger' feature which I've successfully used for some simple merge operations.
I've used it to merge moderately complicated programs - around a hundred files in different folders. Of course, if your codebase includes any C++ the CIL merger won't be able to merge that.
No, because for example two files might have conflicting static declarations. There are other ways that moving source code into a single file might make it stop working, and diagnosing every possible one would require solving the Halting Problem. (Does an arbitrary program ever use the result of __FILE__ in such a way that it fails if two sections of code are in the same file?) File-scope declarations are the most likely to occur in the real world, though.
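A minimal illustration (file and identifier names invented for the example): each file compiles on its own, but their concatenation does not.

/* a.c */
static int limit = 10;              /* private to a.c */

/* b.c */
static const char *limit = "none";  /* private to b.c */

/* a.c + b.c pasted into one file: error: conflicting types for 'limit' */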
That said, you can try just concatenating the files and seeing what error messages you get. Most headers should keep working if you #include them twice. A conflicting identifier name can be fixed by a search-and-replace in the original files.
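One low-effort way to do the concatenation is a so-called "unity build" file that simply #includes every .c file (the paths below are made up), which you then feed to the static analyser as a single translation unit:

/* all.c -- file names are examples only */
#include "lexer.c"
#include "parser.c"
#include "main.c"

You will still hit the static/#define clashes described above, but those are usually mechanical renames.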
I have a bunch of C files and I need to check them against a set of indentation rules. These rules are custom made. I basically need to flag warnings for all indentation violations. Is there any code or tool that does basic parsing of a C file? I am planning to add my own checks to code that already exists.
Since you plan on extending existing code, why not use astyle?
Once you have the correct flags it formats your code accordingly so that the results will always be the same. I (sadly) haven't found any dry-run flags, but a diff of your original file and the newly created astyle file should give you all violations.
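Something along these lines, for example (the style options here are only placeholders for whatever your custom rules map to):

astyle --style=allman --indent=spaces=4 foo.c   # reformats foo.c, keeping the original as foo.c.orig
diff -u foo.c.orig foo.c                        # every hunk is an indentation violation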
I unfortunately was doing a little code archeology today (while refactoring out some old dangerous code) and found a little fossil like this:
# line 7 "foo.y"
I was completely flabbergasted to find such an archaic treasure in there. I read up on it on a website for C programming, but it didn't explain WHY anyone would want to use it. I was therefore left to surmise that the programmer put it in purely for the sheer joy of lying to the compiler.
Note:
(Mind you, the fossil was actually on line 3 of the .cpp file. Oh, and the directive was indeed pointing to a .y file that was almost identical to this file.)
Does anyone have any idea why such a directive would be needed? Or what it could be used for?
It's generally used by automated code generation tools (like yacc or bison) to set the line number to the value of the line in the actual source file rather than the C source file.
That way, when you get an error that says:
a += xyz;
^ No such identifier 'xyz' on line 15 of foo.y
you can look at line 15 of the actual source file to see the problem.
Otherwise, it says something ridiculous like No such identifier 'xyz' on line 1723 of foo.c, and you have to manually correlate that line in your auto-generated C file with the equivalent line in your real file. Trust me, unless you want to get deeply involved in the internals of lexical and semantic analysis (or you want a brain haemorrhage), you don't want to go through the code generated by yacc (bison may generate nicer code; I don't know, nor do I really care, since I write the higher-level code).
It has two forms as per the C99 standard:
#line 12345
#line 12345 "foo.y"
The first sets just the reported line number, the second changes the reported filename as well, so you can get an error in line 27 of foo.y instead of foo.c.
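A tiny hand-written demonstration of the effect (normally the generator emits the directive for you):

/* generated.c */
#line 15 "foo.y"
int parse(void)
{
    return xyz;   /* deliberately undeclared */
}

Compiling generated.c, GCC reports the undeclared 'xyz' at foo.y:17 rather than at a line of generated.c, because the directive resets the compiler's notion of the current file name and line number.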
As to "the programmer put it in purely for the sheer joy of lying to the compiler", no. We may be bent and twisted but we're not usually malevolent :-) That line was put there by yacc or bison itself to do you a favour.
The only place I've seen this functionality as being useful is for generated code. If you're using a tool that generates the C file from source defined in another form, in a separate file (i.e. the .y file), using #line can help the user know where the "real" problem is, and where they should go to correct it (the .y file where they put the original code).
The purpose of the #line directive is mainly for use by tools - code generators can use it so that debuggers (for example) can keep context of where things are in the user's code or so error messages can refer the user to the location in his source file.
I've never seen that directive used by a programmer manually putting it in, and I'm not sure how useful that would be.
It has a deeper purpose. The original C preprocessor was a separate program from the compiler. After it had merged several .h files into the .c file, people still wanted error messages to refer to line 42 of stdio.h or line 17 of main.c. Without some means of communication, the compiler would otherwise have no way to know which source file originally held the offending line of code.
It also influences the tables needed by any source-level debugger to translate between generated code and source file and line number.
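You can still watch this happening today: run the preprocessor on its own and its output is full of linemarkers, a compact spelling of #line. With a hypothetical main.c that includes stdio.h, the start of gcc -E main.c looks roughly like this (the exact paths and trailing flags vary by system and GCC version):

# 1 "main.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "/usr/include/stdio.h" 1 3 4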
Of course, in this case, you are looking at a file that was written by a tool (probably named yacc or bison) that is used to create parsers from a description of their grammar. This file is not really a source file. It was created from the real source text.
If your archaeology is leading you to an issue with the parser, then you will want to identify which parser generator is actually being used, and do a little background reading on parsers in general so you understand why it is doing things this way at all. The documentation for yacc, bison, or whatever the tool is will likely also be helpful.
I've used #line and #error to create a temporary *.c file that you compile so that your IDE gives you a browsable list of errors found by some third-party tool.
For example, I piped the output file from PC-LINT into a perl script which converted the human-readable errors into #line and #error lines. Then I compiled this output, and my IDE let me step through each error using F4. A lot faster than manually opening up each file and jumping to a particular line.
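The generated throwaway file is just one pair of directives per finding; something like the following (file names, line numbers, and messages are illustrative, since the real content comes from the PC-LINT report):

/* lint_errors.c -- generated by the perl script */
#line 42 "widget.c"
#error suspicious cast from pointer to int
#line 108 "gadget.c"
#error variable may be used before being set

Each #error shows up in the compiler's error list at the file and line set by the preceding #line, so F4 jumps straight to widget.c line 42 in the real source.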