I was exploring GCC-supported pragmas and I just didn't get what the manual says about #pragma GCC dependency:
#pragma GCC dependency allows you to check the relative dates of the current file and another file. If the other file is more recent than the current file, a warning is issued. This is useful if the current file is derived from the other file, and should be regenerated. The other file is searched for using the normal include search path. Optional trailing text can be used to give more information in the warning message.
Can anyone explain this part with some minimal code?
This is useful if the current file is derived from the other file
How can the current file be derived from the other file? I can understand how another file can be derived from the current file, but not vice versa.
How can the current file be derived from the other file? I can understand how another file can be derived from the current file, but not vice versa.
The primary case served is when a C source file is created by a program, using the designated other file as an input. The C source is derived from the other file by running the program. It is to be presumed that differences in the other file would cause the code generator program to produce the C file differently, at least a little, else the pragma in question would not be used.
Thus, if the designated other file's last-modification timestamp is more recent than the C file's, then it is highly suspect to be compiling the C file at all, for it probably does not correspond to the current version of the other file. Instead, one should regenerate the C source from the other file by running the code generator program again, obtaining a whole new version of the C file to replace the current one. The new one will, of course, have a last modification timestamp newer than the other file's, because the other file had to exist before the new version of the C file could be generated from it.
Example:
There is a classic program named lex whose purpose is to help write programs that process text, especially the text of programming languages or rich data languages (the details are not important). The input file for this program describes how to recognize and categorize the basic units of this language, which are called "tokens". If the language being parsed were C, then tokens would include language keywords, numeric constants, and operators. The input file for lex is typically tens of lines long.
lex reads such an input file and writes a C source file defining several functions and some internal tables that implement the "scanning" behavior required: reading the input text and breaking it up into tokens, which it reports back to its caller. The C source generated by this program is typically a few thousand lines long, which, compared to the much smaller input file, explains in a nutshell why lex is useful.
To build a program that scans the language in question, one provides functions (in a different source file) that call those generated by lex, and compiles them along with the lex-generated C source to obtain a complete program. Say the lex input file is named language.l, and the output of running lex on that file is named language.c. If I want to change the behavior of the scanner functions then the thing to do is to modify (small, simple) language.l and then re-run lex to regenerate language.c.
When I change language.l in any meaningful way, language.c becomes out of date until I generate a new version of it from language.l by re-running lex. If I compile the outdated version of language.c then the result does not reflect the current version of language.l. This usually constitutes an error on the part of the person building the program, and #pragma GCC dependency provides a mechanism for eliciting a warning from the compiler in that situation.
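A minimal sketch of how this looks in practice (following the lex example above; the trailing warning text is arbitrary). Near the top of the generated language.c you would place:

/* language.c -- generated by lex from language.l; do not edit by hand. */

/* Warn at compile time if language.l is newer than this file. */
#pragma GCC dependency "language.l" re-run lex to regenerate this file

#include <stdio.h>

/* Stub body, just so the sketch compiles on its own. */
int main(void)
{
    puts("scanner stub");
    return 0;
}

If you modify language.l and then compile language.c without re-running lex, GCC issues a warning along the lines of "current file is older than language.l", followed by the trailing text from the pragma.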
I want to know how library functions (like printf) are made in C. I am using the Borland C++ compiler.
They are defined in lib files (*.lib); the header files only have prototypes.
Lib files cannot be read in text editors, so please let me know how they can be read.
C is a compiled language, so the C source code gets translated to binary machine-language code.
Because of that, you can't see the actual source code of any given library you have.
If you want to know how it works, you can see if it's an open source library, find the source code of the particular revision that generated the version you're using, and read it.
If it's not open source, you could try decompiling: use a tool that tries to guess what the original source code might have looked like to generate the machine code your library contains. As you can guess, this is not an accurate process, since compiling isn't an isomorphic process. And, as you probably wouldn't have guessed, it could even be illegal, though I'm not really sure what conditions that depends on, if any.
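To make that split concrete, here is a minimal sketch (the names myio.h, myio.c, and my_putstr are invented for illustration):

/* myio.h -- what ships alongside the library: only the prototype. */
int my_putstr(const char *s);

/* myio.c -- the vendor's implementation; you normally receive only
   the compiled object code packaged inside the .lib file, never this
   source. */
#include <stdio.h>

int my_putstr(const char *s)
{
    return fputs(s, stdout);
}

With a GCC toolchain, for instance, the vendor would run something like gcc -c myio.c followed by ar rcs libmyio.a myio.o, and then distribute only the header and the binary archive; Borland's toolchain does the equivalent to produce .lib files.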
I am looking through some proprietary source code: sample programs in using a library.
The code is written in C and C++, using make as the build system.
Each and every file ends with a commented-out []: /*[]*/ for source files and #[]# for makefiles. What could be the reason for this?
The code is compiled for ARM with GCC, using extensions.
It is most likely a placeholder for some sort of automatic expansion.
Typically something like macrodef (or one of the source code control keyword filters) would expand such items to contain some relevant text. Since normally only the text between the comment-protected brackets expands, the comments remain in place, shielding the source code from the expanded content at compilation time.
However, what you are currently looking at is probably just the outer containing brackets with all of the internal expansions removed. This may have been done during a code migration from one source code control system to another. The idea is highly speculative, but it appears they did not take the trouble to migrate the expansion items and simply removed them instead.
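For comparison, keyword expansion typically looks like this. The $Id$ keyword below is RCS/CVS syntax; whether this particular codebase used that exact mechanism is, as noted, speculation:

/* Before expansion: the keyword sits between the protected brackets. */
/*[$Id$]*/

/* After checkout, the version control system expands it in place: */
/*[$Id: sample.c,v 1.4 2010/06/11 09:15:00 jdoe Exp $]*/

/* With the expansion content stripped, say during a migration, only
   the empty shell remains: */
/*[]*/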
On one project I used to work, every C source file contained a comment at the very end:
/* End of file */
The reason for that was the gcc warning
Warning : No new line at end of file
So we had this comment (with a newline after it) to be sure people did not write anything after it :)
I have multiple C source files and their respective header files. I am trying to parse these files using a parser generator, e.g. ANTLR.
In the ANTLR parser grammar, you can declare your header files using
@parser::includes
{ #include "a.h" }
You can start parsing the first file e.g.
CommonTree tree = Parser.start("a.c");
and the parser will parse the header file a.h, but how do you parse the files if you have multiple source files, e.g. b.c, c.c, and so on, with their respective header files?
C is a pig to parse --- the semantic type of a token depends on what it's been declared as. Consider:
T(*b)[4]
If T is a type name, then this is a variable declaration. If it's an identifier, it's a function call. In order to resolve this, any C parser that expects to actually work is going to have to keep a full type environment, which means it needs to be an unpleasantly large chunk of a C compiler.
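A short sketch of the two readings (they cannot coexist in one translation unit, so the expression reading is shown in a comment):

/* Reading 1: T names a type, so this line declares b as a pointer
   to an array of four T. */
typedef int T;
T (*b)[4];

/* Reading 2: T names a function. The very same tokens then form an
   expression: call T with argument *b, and index the result.

       int *T(int n);      // T returns a pointer, so [4] can index it
       int v = 0, *p = &v;
       int y = T(*p)[4];   // parsed as (T(*p))[4]

   Without a type environment recording what T is, the parser cannot
   choose between the two. */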
There are ANTLR parsers for C that get all this stuff right but they're not trivial to use and I don't have any experience of them, so can't comment there.
Instead you might want to go look at using external tools to parse your C into something that's easier to deal with. gcc-xml is one such; it uses gcc itself to parse source files and then spit out XML that's much easier to handle.
Which is more efficient with the C code I am working with: moving my code into an existing C program, or having an .h file #included so the program calls into a separate .c file?
When this is compiled into an .exe, how does having the code incorporated into the original source compare with having an .h file and a separate .c file?
I am not sure how many lines of code the program I would incorporate this code into has, but my code is only about a thousand lines.
Thanks,
DemiSheep
There's no difference in efficiency between keeping code in a single .c file or in multiple .c files linked into the same executable, since once the executable is created, chances are it will contain the same binary code whichever method you choose. This can be easily verified with a binary disassembler, of course.
What a single .c file can change, however, is the speed of compilation, which may be faster for a single file than for a bunch of files that have to be linked together. IIRC, one of the reasons SQLite's preferred source distribution method is a single huge "amalgamated C file" is compilation speed.
That said, you should really not concern yourself with issues of this kind. Do break your programs into separate modules, each with a clean interface and a separate implementation file.
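A minimal sketch of that layout (the names greet.h, greet.c, and main.c are illustrative):

/* greet.h -- the interface */
#ifndef GREET_H
#define GREET_H
void greet(const char *name);
#endif

/* greet.c -- the implementation */
#include <stdio.h>
#include "greet.h"

void greet(const char *name)
{
    printf("Hello, %s!\n", name);
}

/* main.c -- the caller only needs the interface */
#include "greet.h"

int main(void)
{
    greet("world");
    return 0;
}

Built with, say, gcc -c greet.c main.c && gcc greet.o main.o -o app, the result is effectively the same binary you would get from one merged file.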
I know this already has a good answer that says there's no difference, but I wanted to add this:
The #include directive tells the preprocessor to treat the contents of a specified file as if those contents had appeared in the source program at the point where the directive appears.
I would have written it myself, but the MSDN docs say it quite nicely.
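You can watch that substitution happen yourself (the file names here are made up). Given:

/* defs.h */
#define ANSWER 42

/* main.c */
#include "defs.h"

int main(void)
{
    return ANSWER;
}

running the preprocessor alone, e.g. gcc -E main.c, prints main.c with the contents of defs.h pasted in at the point of the #include, which is exactly the behavior the quoted passage describes.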
I unfortunately was doing a little code archaeology today (while refactoring out some old dangerous code) and found a little fossil like this:
# line 7 "foo.y"
I was completely flabbergasted to find such an archaic treasure in there. I read up on it on a website for C programming. However it didn't explain WHY anyone would want to use it. I was left to myself therefore to surmise that the programmer put it in purely for the sheer joy of lying to the compiler.
Note:
(Mind you, the fossil was actually on line 3 of the .cpp file. Oh, and the file was indeed pointing to a .y file that was almost identical to this file.)
Does anyone have any idea why such a directive would be needed? Or what it could be used for?
It's generally used by automated code generation tools (like yacc or bison) to set the reported line number to the corresponding line in the original source file rather than in the generated C source file.
That way, when you get an error that says:
a += xyz;
^ No such identifier 'xyz' on line 15 of foo.y
you can look at line 15 of the actual source file to see the problem.
Otherwise, it says something ridiculous like No such identifier 'xyz' on line 1723 of foo.c and you have to manually correlate that line in your auto-generated C file with the equivalent in your real file. Trust me, unless you want to get deeply involved in the internals of lexical and semantic analysis (or you want a brain haemorrhage), you don't want to go through the code generated by yacc (bison may generate nicer code, I don't know, nor do I really care, since I write the higher-level code).
It has two forms as per the C99 standard:
#line 12345
#line 12345 "foo.y"
The first sets just the reported line number, the second changes the reported filename as well, so you can get an error in line 27 of foo.y instead of foo.c.
As to "the programmer put it in purely for the sheer joy of lying to the compiler", no. We may be bent and twisted but we're not usually malevolent :-) That line was put there by yacc or bison itself to do you a favour.
The only place I've seen this functionality being useful is for generated code. If you're using a tool that generates the C file from source defined in another form in a separate file (i.e. the ".y" file), using #line can help the user know where the "real" problem is, and where they should go to correct it (the .y file where they wrote the original code).
The purpose of the #line directive is mainly for use by tools: code generators can use it so that debuggers, for example, can keep track of where things are in the user's code, or so that error messages can refer the user to the location in their source file.
I've never seen that directive inserted manually by a programmer, and I'm not sure how useful that would be.
It has a deeper purpose. The original C preprocessor was a separate program from the compiler. After it had merged several .h files into the .c file, people still wanted to know whether an error message was coming from line 42 of stdio.h or line 17 of main.c. Without some means of communication, the compiler would have no way to know which source file originally held the offending line of code.
It also influences the tables needed by any source-level debugger to translate between generated code and source file and line number.
Of course, in this case, you are looking at a file that was written by a tool (probably named yacc or bison) that is used to create parsers from a description of their grammar. This file is not really a source file. It was created from the real source text.
If your archaeology is leading you to an issue with the parser, then you will want to identify which parser generator is actually being used, and do a little background reading on parsers in general so you understand why it does things this way at all. The documentation for yacc, bison, or whatever the tool is will likely also be helpful.
I've used #line and #error to create a temporary *.c file that you compile to let your IDE give you a browsable list of errors found by some third-party tool.
For example, I piped the output file from PC-LINT into a Perl script which converted the human-readable errors into #line and #error lines. Then I compiled this output, and my IDE let me step through each error using F4. A lot faster than manually opening up each file and jumping to a particular line.
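A sketch of what such a generated file might look like (the paths and messages below are invented; the real ones came from the PC-LINT report):

/* lint_errors.c -- generated from the lint report; compiling it turns
   each finding into a compiler diagnostic the IDE can jump to. */
#line 57 "src/motor.c"
#error Loss of precision in assignment (unsigned int to int)
#line 212 "src/comms.c"
#error Return value of send_packet ignored

Each #error fires at the file and line established by the preceding #line, so the IDE's error list points straight at the flagged locations in the original sources.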