I'm writing a library and have some __weak functions which need to be overwritten by the programmers. I want to make sure they do so, so I did this:
__weak void checkButtons(){
#warning "Implement this"
}
And put the prototype in the header:
__weak void checkButtons();
And implement it in another file:
void checkButtons(){
//Some codes here
}
By the way, the problem is that at compile time, the compiler shows the #warning message.
compiling library.c...
library.c(8): warning: #1215-D: #warning directive: message "Implement this"
#warning message "Implement this"
library.c: 1 warning, 0 errors
I think if a __weak function is overridden, the default (weak) function should not be compiled, should it?
I don't know why this happens. Any other idea to force the user?
the default (weak) function should not be compiled, should it?
Everything is compiled: weak functions are compiled, and normal functions are compiled. When linking the program, the linker chooses the (compiled) normal version of the symbol over the (also compiled) weak version of the symbol. Linking happens after compilation.
why this happens.
You have written a #warning in your code, and it is going to be seen and processed by the compiler regardless of whether the function is used or not. #warning directives are handled by the preprocessor, before compilation.
How to force the user to implement __weak functions
First, do not use __weak in the header file. It would make weak every function definition that sees this declaration.
// header
void checkButtons();
Secondly, if you want to force the user to define the function, then don't use __weak and don't define the function at all. Weak symbols are used when you want to provide a default definition for the function; if you don't want a default, don't use them. Without a function definition, the user will receive an "undefined reference"-kind-of error message from the linker.
I'm writing a library and have some functions which need to be overwritten by the programmers
A better and more conventional way would be for your library to take function pointers that point to the functions implemented by the user of your library. Such a design is "better": it allows code reuse and easier unit testing, and it saves a lot of refactoring in case of later changes.
Related
Does anyone know what this error means?
My piece of code is shown below:
// test.c
inline void fun() {
typedef struct {
int i;
} S;
}
GCC compiles it without an error, while clang (clang 12.0.0) reports one:
root:~/test # clang -c test.c
test.c:2:19: error: unsupported: anonymous type given name for linkage purposes by typedef declaration after its linkage was computed; add a tag name here to establish linkage prior to definition
typedef struct {
^
S
test.c:4:7: note: type is given name 'S' for linkage purposes by this typedef declaration
} S;
^
1 error generated.
According to the error text from clang, it looks like I need to add a tag name to the anonymous struct. After adding a tag name, the error goes away. However, this piece of code comes from a team project in my department, so I need a strong reason to modify it. Does anyone know what this error means?
UPDATE: Someone mentioned in the comments below that the code compiles but fails to link with GCC. That is because GCC optimization is not turned on: compile with -O2 and the link will pass.
I mentioned this in a comment, but I think I might be able to expand it to a proper answer. The problem here is in how the inline keyword works in C. Quite frankly, I don't know what the C standards committee was smoking when they came up with this ridiculous concept, and it's probably best not to know. Don't ask why the rules are this way: I don't know.
There is an important difference between extern inline and static inline functions, as well as ones declared simply inline.
inline functions may be defined in a header and included in multiple compilation units. This is because they aren't considered "real" definitions: no symbol is emitted for them, thus there is no multiple-definition error. However, the compiler is not required to actually inline the function call. It may instead try to call it. Since no symbol exists, there will be an error at link time. Never use plain inline functions in C. They make no sense at all.
extern inline functions provide the definition for an inline function. If the above case happens, and the compiler tries to call the function, so long as some file includes an extern inline definition, there won't be an error because it emits a symbol. This making sense? No? Good.
static inline functions are the only ones that make any sense whatsoever. Because they're static and therefore local, no external symbol needs to exist. If the compiler decides to inline a call, it does. If not, it calls the function. No problems. Declare your function static inline and it will compile.
But that wasn't your question. Why is clang complaining about the struct? Why doesn't gcc complain? The answer is "idunnoman". My best guess is that clang is taking the rules very seriously, in which case, because there is no "real" definition for fun, there is also no real definition for your anonymous struct, and therefore it can't be typedef'd.
But that would also be the case if it were a named struct. And why does the linker care whether the structure has a name? Once again, I can only guess it has to do with the fact that there can be numerous definitions for an inline function, and clang wants the structure to have a constant name between them. But different compilation units can have different structures with the same name, so that doesn't make sense either.
Ugh. I thought I had an answer for you but now I'm even more confused than when I started. I'll still post this but it probably ought to be downvoted. Oh well.
This might be a stupid (and really simple) question, but I've wanted to ask since I don't know where to find an answer. I'm working through a book, and I started googling something; I was actually kind of curious why, if we have files like these:
file1.c
#include <stdio.h>
#include "file2.h"
int main(void){
printf("%s:%s:%d \n", __FILE__, __FUNCTION__, __LINE__);
foo();
return 0;
}
file2.h
void foo(void);
and
file2.c
#include <stdio.h>
#include "file2.h"
void foo(void) {
printf("%s:%s:%d \n", __FILE__, __func__, __LINE__);
return;
}
compiling it with:
gcc file1.c file2.c -o file -Wall
Why is it good practice to include the header file file2.h, which contains the prototype of the foo function, in the same file where foo is defined? I totally understand including it in file1.c, since we should use the header file to define the interface of each module rather than writing it "raw", but why include the header with the prototype in the file where the function is defined (file2.c)? The -Wall option flag also doesn't say anything if I don't include it, so why do people say it is "the correct way"? Does it help avoid errors, or is it just for clearer code?
Those code samples are taken from this discussion:
Compiling multiple C files in a program
Where some user said it is 'the correct way'.
To answer this question, you should have a basic understanding of the difference between the compiler and the linker. In a nutshell, the compiler compiles each translation unit (C file) alone; then it's the linker's job to link all the compiled files together.
For instance, in the above code the linker is the one who searches for where the function foo() called from main() exists and links to it.
The compiler step comes first then the linker.
Let's demonstrate an example where including file2.h in file2.c comes in handy:
file2.h
void foo(void);
file2.c
#include <stdio.h>
#include "file2.h"
void foo(int i) {
printf("%s:%s:%d \n", __FILE__, __func__, __LINE__);
return;
}
Here the prototype of foo() differs from its definition.
By including file2.h in file2.c, the compiler can check whether the prototype of the function matches its definition; if not, you will get a compiler error.
What will happen if file2.h is not included in file2.c?
Then the compiler won't find any issue, and we have to wait until the linking step, when the linker may find that there is no match for the function foo() called from main() and throw an error.
Why bother then if the linker, later on, will find out the error anyway?
Because in big projects there might be hundreds of source files that take a long time to compile, so waiting for the linker to raise the error at the end wastes a great amount of time.
This is The Only True ReasonTM:
If the compiler encounters a call of a function without a prototype, it derives one from the call, see standard chapter 6.5.2.2 paragraph 6. If that does not match the real function's interface, it's undefined behavior in most cases. At best it does no harm, but anything can happen.
Compilers emit such diagnostics, as warnings or errors, only at a high enough warning level. That's why you should always use the highest warning level possible and include the header file in the implementation file. You will not want to miss this chance to have your code checked automatically.
C doesn’t mangle symbols usually (there are some exceptions eg. on Windows). Mangled symbols would carry type information. Without it, the linker trusts that you didn’t make mistakes.
If you don’t include the header, you can declare the symbol to be one thing, but then define it to be whatever else. Eg. in the header you might declare foo to be a function, and then in the source file you can define it to be a totally incompatible function (different calling convention and signature), or even not a function at all – say a global variable. Such a project may link but won’t be functional. The error may be in fact hidden, so if you don’t have solid tests in place, you won’t catch it until a customer lets you know. Or worse, there’s a news article about it.
In C++ the symbol carries information about its type, so if you declare one thing and then define something with same base name but an incompatible type, the linker will refuse to link the project, since a particular symbol is referenced but never defined.
So, in C you include the header to prevent mistakes that the tools can’t catch, that will result in a broken binary. In C++, you do it so that you’ll get perhaps an error during compilation instead of later in the link phase.
Why does C++ have header files and .cpp files?
C++ compilation
A compilation in C++ is done in 2 major phases:
The first is the compilation of "source" text files into binary "object" files: each CPP file is compiled on its own, without any knowledge of the other CPP files (or even libraries), unless information is fed to it through raw declarations or header inclusion. Each CPP file is usually compiled into a .OBJ or a .O "object" file.
The second is the linking together of all the "object" files, and thus, the creation of the final binary file (either a library or an executable).
Where does the HPP fit in all this process?
A poor lonesome CPP file...
The compilation of each CPP file is independent from all other CPP files, which means that if A.CPP needs a symbol defined in B.CPP, like:
// A.CPP
void doSomething()
{
doSomethingElse(); // Defined in B.CPP
}
// B.CPP
void doSomethingElse()
{
// Etc.
}
It won't compile because A.CPP has no way to know "doSomethingElse" exists... Unless there is a declaration in A.CPP, like:
// A.CPP
void doSomethingElse() ; // From B.CPP
void doSomething()
{
doSomethingElse() ; // Defined in B.CPP
}
Then, if you have C.CPP which uses the same symbol, you then copy/paste the declaration...
COPY/PASTE ALERT!
Yes, there is a problem. Copy/pastes are dangerous, and difficult to maintain. Which means that it would be cool if we had some way to NOT copy/paste, and still declare the symbol... How can we do it? By the include of some text file, which is commonly suffixed by .h, .hxx, .h++ or, my preferred for C++ files, .hpp:
// B.HPP (here, we decided to declare every symbol defined in B.CPP)
void doSomethingElse() ;
// A.CPP
#include "B.HPP"
void doSomething()
{
doSomethingElse() ; // Defined in B.CPP
}
// B.CPP
#include "B.HPP"
void doSomethingElse()
{
// Etc.
}
// C.CPP
#include "B.HPP"
void doSomethingAgain()
{
doSomethingElse() ; // Defined in B.CPP
}
How does include work?
Including a file will, in essence, parse and then copy-paste its content in the CPP file.
For example, in the following code, with the A.HPP header:
// A.HPP
void someFunction();
void someOtherFunction();
... the source B.CPP:
// B.CPP
#include "A.HPP"
void doSomething()
{
// Etc.
}
... will become after inclusion:
// B.CPP
void someFunction();
void someOtherFunction();
void doSomething()
{
// Etc.
}
One small thing - why include B.HPP in B.CPP?
In the current case, this is not needed: B.HPP has the doSomethingElse function declaration, and B.CPP has the doSomethingElse function definition (which is, by itself, a declaration). But in the more general case, where B.HPP is used for declarations (and inline code), there could be no corresponding definition (for example, enums, plain structs, etc.), so the include would be needed if B.CPP uses those declarations from B.HPP. All in all, it is "good taste" for a source file to include its own header by default.
Conclusion
The header file is thus necessary, because the C++ compiler is unable to search for symbol declarations alone, and thus, you must help it by including those declarations.
One last word: You should put header guards around the content of your HPP files, to be sure multiple inclusions won't break anything, but all in all, I believe the main reason for existence of HPP files is explained above.
#ifndef B_HPP_
#define B_HPP_
// The declarations in the B.hpp file
#endif // B_HPP_
or even simpler (although not standard)
#pragma once
// The declarations in the B.hpp file
Well, the main reason would be for separating the interface from the implementation. The header declares "what" a class (or whatever is being implemented) will do, while the cpp file defines "how" it will perform those features.
This reduces dependencies so that code that uses the header doesn't necessarily need to know all the details of the implementation and any other classes/headers needed only for that. This will reduce compilation times and also the amount of recompilation needed when something in the implementation changes.
It's not perfect, and you would usually resort to techniques like the Pimpl Idiom to properly separate interface and implementation, but it's a good start.
Because C, where the concept originated, is 30 years old, and back then, it was the only viable way to link together code from multiple files.
Today, it's an awful hack which totally destroys compilation time in C++, causes countless needless dependencies (because class definitions in a header file expose too much information about the implementation), and so on.
Because in C++, the final executable code does not carry any symbol information, it's more or less pure machine code.
Thus, you need a way to describe the interface of a piece of code, that is separate from the code itself. This description is in the header file.
Because C++ inherited them from C. Unfortunately.
Because the people who designed the library format didn't want to "waste" space for rarely used information like C preprocessor macros and function declarations.
Since you need that info to tell your compiler "this function is available later when the linker is doing its job", they had to come up with a second file where this shared information could be stored.
Most languages after C/C++ store this information in the output (Java bytecode, for example), or they don't use a precompiled format at all and are always distributed in source form and compiled on the fly (Python, Perl).
It's the preprocessor way of declaring interfaces. You put the interface (method declarations) into the header file, and the implementation into the cpp. Applications using your library only need to know the interface, which they can access through #include.
Often you will want to have a definition of an interface without having to ship the entire code. For example, if you have a shared library, you would ship a header file with it which defines all the functions and symbols used in the shared library. Without header files, you would need to ship the source.
Within a single project, header files are used, IMHO, for at least two purposes:
Clarity, that is, by keeping the interfaces separate from the implementation, it is easier to read the code
Compile time. By using only the interface where possible, instead of the full implementation, the compile time can be reduced because the compiler can simply make a reference to the interface instead of having to parse the actual code (which, ideally, would only need to be done a single time).
Responding to MadKeithV's answer,
This reduces dependencies so that code that uses the header doesn't
necessarily need to know all the details of the implementation and any
other classes/headers needed only for that. This will reduce
compilation times, and also the amount of recompilation needed when
something in the implementation changes.
Another reason is that a header gives a unique id to each class.
So if we have something like
class A {..};
class B : public A {...};
class C {
#include "A.cpp"
#include "B.cpp"
.....
};
We will get errors when we try to build the project, since A is part of B. With headers we would avoid this kind of headache...
gcc has __attribute__((weak)) which allows to create a weak symbol such as a function. This allows the user to redefine a function. I would like to have the same behavior in XC8.
More info:
I am writing a driver for XC8 and I would like to delegate low level initialization to a user defined function.
I know it is possible to redefine a function: there is the putch function, implemented in XC8's source files, which is called by the printf function. The user is allowed to reimplement putch inside his application. There are two functions with the same name, yet no error is raised.
putch's implementation in XC8's source files has a comment saying "Weak implementation. User implementation may be required", so it must be possible.
I looked at pragmas in XC8's user guide, but there is no directive related to this question.
A linker will only search static libraries to resolve a symbol that is not already resolved by the input object files, so replacing static-library functions can be done without weak linkage. Weak linkage is useful for code provided as source or object code rather than as a static library.
So if no weak linkage directive is supported, you could create a static library for the "weak" symbols and link that.
The XC8 manual documents behaviour for both the IAR compatibility directive __weak and a weak pragma, and in both cases the directives are ignored (supported only in XC16 and XC32), so you will have to use the above suggested method, which is in any case far more portable - if somewhat inconvenient.
In the case of putch() I suspect that this is not working as you believe. I would imagine that this is not a matter of weak linkage at all; in the static library containing printf() an unresolved link to putch() exists, and the linker resolves it with whatever you provide; if you were to compile and link both the Microchip implementation and yours from source code you would get a linker error; equally if you were to provide no implementation whatsoever you would get a linker error.
The XC8 compiler does support the "weak" attribute.
The weak attribute causes the declaration to be emitted as a weak symbol. A weak symbol indicates that if a global version of the same symbol is available, that version should be used instead. When the weak attribute is applied to a reference to an external symbol, the symbol is not required for linking.
For example:
extern int __attribute__((weak)) s;
int foo(void)
{
if (&s)
return s;
return 0; /* possibly some other value */
}
In the above program, if s is not defined by some other module, the program will still link but s will not be given an address.
The conditional verifies that s has been defined (and returns its value if it has). Otherwise '0' is returned.
There are many uses for this feature, mostly to provide generic code that can link with an optional library.
A variable can also be qualified with the "weak" attribute.
For example:
char __attribute__((weak)) input;
char input __attribute__((weak));
Wikipedia says:
A weak symbol denotes a specially annotated symbol during linking of
Executable and Linkable Format (ELF) object files. By default, without
any annotation, a symbol in an object file is strong. During linking,
a strong symbol can override a weak symbol of the same name. In
contrast, two strong symbols that share a name yield a link error
during link-time. When linking a binary executable, a weakly declared
symbol does not need a definition. In comparison, (by default) a
declared strong symbol without a definition triggers an undefined
symbol link error. Weak symbols are not mentioned by C or C++ language
standards; as such, inserting them into code is not very portable.
Even if two platforms support the same or similar syntax for marking
symbols as weak, the semantics may differ in subtle points, e.g.
whether weak symbols during dynamic linking at runtime lose their
semantics or not.
What are weak functions and what are their uses? I am using an STM32F429 microcontroller. There are some weak functions in the library, but I can't understand what they are and what their use is!
I searched for it on Google but didn't get a satisfactory answer.
When a function is prepended with the __weak qualifier, it basically means that if you (the coder) do not define it, this definition is used.
Let us take a look at my arch-nemesis "HAL_UART_RxCpltCallback()".
This function lives within the HAL of the STM32F4-HAL code base that you can download from ST-Micro.
Within the file stm32f4xx_hal_uart.c you will find this function defined as:
__weak void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
/* NOTE: This function Should not be modified, when the callback is needed,
the HAL_UART_RxCpltCallback could be implemented in the user file
*/
}
So, as the note within the code here says, place this function inside your own user files. However, when you do that, do not include the __weak term. This means that the linker will take your definition of the HAL_UART_RxCpltCallback() function and not the one defined within the stm32f4xx_hal_uart.c file.
This gives the generic code base the ability to always compile. You don't have to write a whole bunch of functions that you are not interested in but it will compile. When it comes time to writing your own, you just have to NOT define yours as __weak and write it.
Simple? Helpful?
Cheers!!
Let's say we have a common (library) protocol interface protocol.c, and upon receiving data we wish to execute application specific logic in our communication interface com.c. This can be solved with a weak function.
/// protocol.h
void protocol_recCallback(protocol_t *prt);
/// protocol.c
__weak void protocol_recCallback(protocol_t *prt) {}
void protocol_rx(protocol_t *prt)
{
// Common protocol interface
protocol_recCallback(prt); // This will call application specific function in com.c
}
/// com.c
#include "protocol.h"
void protocol_recCallback(protocol_t *prt)
{
// Application specific code is executed here
}
Advantage:
If protocol_recCallback() is not defined in com.c, the linker will not emit an undefined reference; the empty __weak default will be called instead.
__weak functions are functions that can be overridden by a user function with the same name; they are used, for example, to define vector tables and default interrupt handlers.
A normal function definition is considered strong, meaning the name cannot be redefined; you will get a compiler/linker error if you try.
Declaring the function as weak means it can be overridden by user code:
void Default_Handler(void) {
    while(1);
}

void USART1_IRQHandler(void) __attribute__((weak, alias("Default_Handler")));

/* vector table placed in its own linker section; STACK_START is defined elsewhere */
uint32_t vectors[75] __attribute__((section(".isr_vector"))) = {
    [0]  = STACK_START,
    [52] = (uint32_t)USART1_IRQHandler,
};
uart1.c (user code)
void USART1_IRQHandler(void) {
...
}
In the sample code above, USART1_IRQHandler is defined as a weak function aliased to Default_Handler.
The user can override this function using the same name without any compiler/linker error: if the user defines USART1_IRQHandler in uart1.c, this new definition will be used.
In addition to "This gives the generic code base the ability to always compile", __weak lets you regenerate your code (in CubeMX) without losing your own, non-__weak, callback implementations.
If you write your code like this:
__weak void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
/* NOTE: This function Should not be modified, when the callback is needed,
the HAL_UART_RxCpltCallback could be implemented in the user file
*/
}
and then regenerate in CubeMX for some reason, your code inside the stub will blow up (it gets overwritten)!