I am currently working on the Intel xp32 platform using an ARM environment, and I am getting errors like the following.
My COMPILER OPTIONS
REM ----- SET COMPILER OPTIONS
SET GCOPTIONS=-DLOGSYS_FLAG -D_SOFTPAY -D__arm --thumb --diag_suppress 1,611,815,550,962,1300
Error
Error: L6242E: Cannot link object synctask.o as its attributes are incompatible with the image attri
butes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte
datatypes.
Error: L6242E: Cannot link object syncdial.o as its attributes are incompatible with the image attri
butes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte
datatypes.
Can anybody tell me how to solve it?
I tried changing the compiler options to:
REM ----- SET COMPILER OPTIONS
SET GCOPTIONS= --thumb -g+ --apcs --adsabi --diag_suppress 1,611,815,550,962
And got a new error:
C:\ARM2\RVCT\PROGRAMS\2.0.1\359\WIN_32-PENTIUM\armlink" -o sp2000.axf -entry _vrxgo -scatter _vrxcc.sct -keep *(.0_vrx_pgmhdr) -keep *(.1_vrx_libtbl) -keep *(.2_vrx_libend) -via _vrxcc.via
Error: L6242E: Cannot link object V_misc.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object new.opi as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object delete.opi as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object Glue.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object V_card.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object V_gds.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVCWrappers.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object MVT.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EST.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object emvutils.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object emvfuncs.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object SYS.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVSelection.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVAIDList.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVCmdSet.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVRiskMgmtData.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Error: L6242E: Cannot link object EMVTxnStatus.o as its attributes are incompatible with the image attributes.
... require 4-byte alignment of 8-byte datatypes clashes with require 8-byte alignment of 8-byte datatypes.
Fatal error: L6045U: Invalid relocation #8 in EMVTxnStatus.o(.text). Type 102 is reserved for the GNU tool chain.
Finished: 17 information, 0 warning, 17 error and 1 fatal error messages.
Can anybody help me solve this? :(
From what I understand, the main reason people separate function declarations and definitions is so that the functions can be used in multiple compilation units. So then I was wondering: what's the point of violating DRY this way, if structures don't have prototypes and would still cause ODR problems across compilation units? I decided to try to define a structure twice using a header across two compilation units, then combine them, but the code compiled without any errors.
Here is what I did:
main.c:
#include "test.h"
int main() {
    return 0;
}
a.c:
#include "test.h"
test.h:
#ifndef TEST_INCLUDED
#define TEST_INCLUDED
struct test {
    int a;
};
#endif
Then I ran the following gcc commands.
gcc -c a.c
gcc -c main.c
gcc -o final a.o main.o
Why does the above work and not give an error?
C's one definition rule (C17 6.9p5) applies to the definition of a function or an object (i.e. a variable). struct test { int a; }; does not define any object; rather, it declares the identifier test as a tag of the corresponding struct type (6.7.2.3 p7). This declaration is local to the current translation unit (i.e. source file) and it is perfectly fine to have it in several translation units. For that matter, you can even declare the same identifier as a tag for different types in different source files, or in different scopes, so that struct test is an entirely different type in one file / function / block than another. It would probably be confusing, but legal.
If you actually defined an object in test.h, e.g. struct test my_test = { 42 };, then you would be violating the one definition rule, and the behavior of your program would be undefined. (But that does not necessarily mean you will get an error message; multiple definitions are handled in various different ways by different implementations.)
The key section in the standard is nearly indigestible, but §6.2.7 Compatible type and composite type covers the details, with some forward references:
¶1 Two types have compatible type if their types are the same. Additional rules for determining whether two types are compatible are described in 6.7.2 for type specifiers, in 6.7.3 for type qualifiers, and in 6.7.6 for declarators.55) Moreover, two structure, union, or enumerated types declared in separate translation units are compatible if their tags and members satisfy the following requirements: If one is declared with a tag, the other shall be declared with the same tag. If both are completed anywhere within their respective translation units, then the following additional requirements apply: there shall be a one-to-one correspondence between their members such that each pair of corresponding members are declared with compatible types; if one member of the pair is declared with an alignment specifier, the other is declared with an equivalent alignment specifier; and if one member of the pair is declared with a name, the other is declared with the same name. For two structures, corresponding members shall be declared in the same order. For two structures or unions, corresponding bit-fields shall have the same widths. For two enumerations, corresponding members shall have the same values.
¶2 All declarations that refer to the same object or function shall have compatible type; otherwise, the behavior is undefined.
¶3 A composite type can be constructed from two types that are compatible; it is a type that is compatible with both of the two types and satisfies the following conditions:
If both types are array types, the following rules are applied:
If one type is an array of known constant size, the composite type is an array of that size.
Otherwise, if one type is a variable length array whose size is specified by an expression that is not evaluated, the behavior is undefined.
Otherwise, if one type is a variable length array whose size is specified, the composite type is a variable length array of that size.
Otherwise, if one type is a variable length array of unspecified size, the composite type is a variable length array of unspecified size.
Otherwise, both types are arrays of unknown size and the composite type is an array of unknown size.
The element type of the composite type is the composite type of the two element types.
If only one type is a function type with a parameter type list (a function prototype), the composite type is a function prototype with the parameter type list.
If both types are function types with parameter type lists, the type of each parameter in the composite parameter type list is the composite type of the corresponding parameters.
These rules apply recursively to the types from which the two types are derived.
¶4 For an identifier with internal or external linkage declared in a scope in which a prior declaration of that identifier is visible,56) if the prior declaration specifies internal or external linkage, the type of the identifier at the later declaration becomes the composite type.
55) Two types need not be identical to be compatible.
56) As specified in 6.2.1, the later declaration might hide the prior declaration.
Emphasis added
The second part of ¶1 covers explicitly the case of structures, unions and enumerations declared in separate translation units. It is crucial to allowing separate compilation. Note footnote 55 too. However, if you use the same header to define a given structure (union, enumeration) in separate translation units, the chances of you not using a compatible type are small. It can be done if there is conditional compilation and the conditions are different in the two translation units, but you usually have to be trying quite hard to run into problems.
C99 (and later standards) requires certain types to be available in the header <stdint.h>. The exact-width types, e.g. int8_t, int16_t, etc., are optional, and the standard motivates why.
But uintptr_t and intptr_t are also optional, and I don't see a reason for them being optional instead of required.
On some platforms pointer types are much larger than any integral type. I believe an example of such a platform would be the IBM AS/400, whose virtual instruction set defines all pointers as 128-bit. A more recent example of such a platform is Elbrus, which uses 128-bit pointers that are hardware descriptors rather than normal addresses.
I have this MISRA C:2004 violation: "typedefs that indicate size and signedness should be used in place of the basic types".
For example, in this piece of code, I did not understand the right solution to avoid the violation:
static int handlerCalled = 0;
int llvm_test_diagnostic_handler(void) {
LLVMContextRef C = LLVMGetGlobalContext();
LLVMContextSetDiagnosticHandler(C, &diagnosticHandler, &handlerCalled);
The MISRA rule is aimed at the fact that C does not define the exact size, range, or representation of its standard integer types. The stdint.h header mitigates this issue by providing several families of typedefs expressing the implementation-supported integer types that provide specific combinations of signedness, size, and representation. Each C implementation provides a stdint.h header appropriate for that implementation.
You should comply with the MISRA rule by using the types defined in your implementation's stdint.h header, choosing the types that meet your needs from among those it actually supports (or those you expect it to support). For example, if you want a signed integer type exactly 32 bits wide, with no padding bits, and expressed in two's complement representation, then that is int32_t -- if your implementation provides that at all (it would be surprising, but not impossible, for such a type not to be available).
For example,
#include <stdint.h>
// relies on the 'int32_t' definition from the above header:
static int32_t handlerCalled = 0;
The point I was raising in my comment was that you seemed to say that you not only included the header, but also defined your own typedef for uint32_t. You must not define your own typedef for this or other types in the scope of stdint.h. At best it is redundant to do so, but at worst it satisfies the MISRA checker yet breaks your code.
I have observed that a lot of userland code uses types defined in
/usr/include/asm-generic/int-ll64.h, such as __u16, __u32, and the like. The comment in the header says:
Integer declarations for architectures which use "long long" for
64-bit types
Would it be safe to apply macros from stdint.h defining maximum values for unsigned integer types, e.g. UINT32_MAX, to objects of type __u32?
I am using a C library provided to me already compiled. I have limited information on the compiler, version, options, etc., used when compiling the library. The library interface uses enum both in structures that are passed and directly as passed parameters.
The question is: how can I assure or establish that when I compile code to use the provided library, that my compiler will use the same size for those enums? If it does not, the structures won't line up, and the parameter passing may be messed up, e.g. long vs. int.
My concern stems from the C99 standard, which states that the enum type:
shall be compatible with char, a signed integer type, or an unsigned
integer type. The choice of type is implementation-defined, but shall
be capable of representing the values of all the members of the
enumeration.
As far as I can tell, so long as the largest value fits, the compiler can pick any type it darn well pleases, effectively on a whim, potentially varying not only between compilers, but different versions of the same compiler and/or compiler options. It could pick 1, 2, 4, or 8-byte representations, resulting in potential incompatibilities in both structures and parameter passing. (It could also pick signed or unsigned, but I don't see a mechanism for that being a problem in this context.)
Am I missing something here? If I am not missing something, does this mean that enum should never be used in an API?
Update:
Yes, I was missing something. While the language specification doesn't help here, as noted by @Barmar the Application Binary Interface (ABI) does. Or if it doesn't, then the ABI is deficient. The ABI for my system indeed specifies that an enum must be a signed four-byte integer. If a compiler does not obey that, then it is a bug. Given a complete ABI and compliant compilers, enum can be used safely in an API.
APIs that use enum are depending on the assumption that the compiler will be consistent, i.e. given the same enum declaration, it will always choose the same underlying type.
While the language standard doesn't specifically require this, it would be quite perverse for a compiler to do anything else.
Furthermore, all compilers for a particular OS need to be consistent with the OS's ABI. Otherwise, you would have far more problems, such as the library using 64-bit int while the caller uses 32-bit int. Ideally, the ABI should constrain the representation of enums, to ensure compatibility.
More generally, the language specification only ensures compatibility between programs compiled with the same implementation. The ABI ensures compatibility between programs compiled with different implementations.
From the question:
The ABI for my system indeed specifies that an enum must be a signed four-byte integer. If a compiler does not obey that, then it is a bug.
I'm surprised about that. I suspect that in reality your compiler will select a 64-bit (8-byte) size for your enum if you define an enumeration constant with a value larger than 2^32.
On my platforms (MinGW gcc 4.6.2 targeting x86, and gcc 4.4 on Linux targeting x86_64), the following code shows that I get both 4- and 8-byte enums:
#include <stdio.h>

enum { a } foo;
enum { b = 0x123456789 } bar;

int main(void) {
    printf("%lu\n", sizeof(foo));
    printf("%lu", sizeof(bar));
    return 0;
}
I compiled with -Wall -std=c99 switches.
I guess you could say that this is a compiler bug. But the alternatives of removing support for enumerated constants larger than 2^32 or always using 8-byte enums both seem undesirable.
Given that these common versions of GCC don't provide a fixed size enum, I think the only safe action in general is to not use enums in APIs.
Further notes for GCC
Compiling with "-pedantic" causes the following warnings to be generated:
main.c:4:8: warning: integer constant is too large for 'long' type [-Wlong-long]
main.c:4:12: warning: ISO C restricts enumerator values to range of 'int' [-pedantic]
The behavior can be tailored via GCC's -fshort-enums and -fno-short-enums switches.
Results with Visual Studio
Compiling the above code with VS 2008 x86 causes the following warnings:
warning C4341: 'b' : signed value is out of range for enum constant
warning C4309: 'initializing' : truncation of constant value
And with VS 2013 x86 and x64, just:
warning C4309: 'initializing' : truncation of constant value