SetWindowPos compile error in 64-bit Windows - c

I'm currently changing our codebase to make it compile under 64-bit architecture. Most of the changes I'm having to make are obvious, but this one has got me stumped. SetWindowPos has a second argument, hWndInsertAfter, that can be either a window handle, or one of the predefined values HWND_TOP, HWND_BOTTOM, HWND_TOPMOST and HWND_NOTOPMOST (see here for MSDN info). These values are defined in WinUser.h.
In 32-bit architecture, using one of those in a call to SetWindowPos works fine, but in 64-bit, the compiler complains thus:
warning C4306: 'type cast' : conversion from 'int' to 'HWND' of
greater size
This is because the #defines are casting [32-bit] integers as HWNDs, e.g.:
#define HWND_TOPMOST ((HWND)-1)
What do I need to change to make this compile in 64-bit architecture without the compiler throwing a warning? I can disable the warnings using #pragma warning( disable: 4306 ), or make my own definition using a 64-bit int in the #define, but surely there's a "proper" Microsoft way of doing this?

The warning is triggered because you're casting the 32-bit int value -1 to a 64-bit pointer type void* without any intervening cast to a 64-bit integer type such as intptr_t. MSVC should have suppressed the warning in this case since (A) it's triggered only by the expansion of the system-provided macro HWND_TOPMOST and (B) the offending int is a decimal literal, but apparently MSVC's developers didn't think of those heuristics.
There's nothing you can do in your code to silence the warning, unless you're happy with
#undef HWND_TOPMOST
#define HWND_TOPMOST ((HWND)(intptr_t)-1)
Alternatively, you can try to suppress it in the IDE. This thread suggests
Project Settings | C/C++ | General and turn off "Detect 64-bit portability issues"
or pass /wd4306 on the command line.

Ok, after MUCH testing, the problem was that my file was a .c file. I renamed it to .cpp and SetWindowPos then compiled without error (and conversely, in the new test app I created to try a 'bare bones' solution, when I renamed the default .cpp file to a .c file, it started complaining).
It looks like C compilation won't let you cast 32-bit int values to 64-bit pointers without a warning. Which makes sense, but doesn't explain why it works in .cpp files. If anyone has any ideas why, do note it in the comments...

Related

C: Warning when casting int to int* on Windows 64-bit machine when working on 32-bit program

I'm working on a legacy 32-bit program where there are a lot of casts like DWORD* a = (DWORD*)b, where b is a native int, and I get lots of these warnings:
Cast to 'DWORD *' (aka 'unsigned int *') from smaller integer type 'int' [clang: -Wint-to-pointer-cast]
Since the sizes are equal when compiling a 32-bit program it's fine, but I don't see how Clang would know that. What can I do to satisfy this warning other than disabling it entirely?
EDIT: The premise of the question is bad due to my misunderstanding of Clang, a compiler, and clangd, the language server which invokes Clang. The language server didn't know I was targeting x86.
So the problem is (DWORD*)b but b is of type int. This means the code needs to be redesigned, because somebody is stuffing pointers into int. Microsoft made a special type for a pointer-sized integer: DWORD_PTR. Yeah sure there's one in stdint.h and you can use that one if you want, but if you're already using DWORD you might as well use DWORD_PTR. The problem didn't happen on this line. The problem happened on the line where b was assigned the value from a pointer.
Change the type of b to intptr_t, uintptr_t, or DWORD_PTR and back-propagate the change until the errors go away. If you come to a place where you can't, that part of the code needs to be redesigned.
Microsoft's own compiler now yields warnings for this stuff even in 32-bit compilation when the type isn't one of the pointer-in-integer types. Best to heed the warnings.
Stuffing pointers in integers is not a recommended practice anymore, but the Win32 API does it all over the place, so when in Rome ...

strtoull() Availability in C89

I have been reading through the documentation for strtoul()/strtoull() from here, and under the "Conforming To" section towards the bottom, it makes these two points:
strtoul(): POSIX.1-2001, POSIX.1-2008, C89, C99 SVr4.
strtoull(): POSIX.1-2001, POSIX.1-2008, C99.
These two lines, in addition to other references throughout the document indicate to me that the function strtoull should not be available when compiling a program using the c89/c90 standard. However, when I run a quick test with gcc, it allows me to call this function, regardless of the standard that I specify.
First, the code I am using to test:
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    unsigned long long x;
    const char *str = "1234";
    x = strtoull(str, NULL, 10);
    printf("%llu\n", x);
    return 0;
}
And here is my compilation command:
gcc test.c -std=c89 -pedantic -Wall -Wextra
Now, in fairness it does warn me of the compatibility issue:
test.c: In function ‘main’:
test.c:6:16: warning: ISO C90 does not support ‘long long’ [-Wlong-long]
unsigned long long x;
^~~~
test.c:9:6: warning: implicit declaration of function ‘strtoull’; did you mean ‘strtoul’? [-Wimplicit-function-declaration]
x = strtoull(str, NULL, 10);
^~~~~~~~
strtoul
test.c:11:9: warning: ISO C90 does not support the ‘ll’ gnu_printf length modifier [-Wformat=]
printf("%llu\n", x);
^~~~~~~~
These warning messages are exactly what I would expect given the documentation. It notifies me that the function I have specified cannot be found, and even that the C90 standard doesn't support unsigned long long. However, when I attempt to run this code, it works just fine, with no crashing or other types of errors. It prints the value 1234, as desired. So, based on this experiment, I have a few questions that I was hoping someone more seasoned than I could answer.
Is this a matter of me not providing the necessary compilation flags to enforce the 'strict' C89 standard?
Is this a case of me misunderstanding the documentation, or is there some documentation for gcc itself that I should refer to? And, if so, where could I find it?
Is there something fundamental about the compiling/linking process that I am not understanding, which explains this issue?
Why would I be warned of an incompatibility, even warned that the function I am calling does not exist, but the code still works with no issue?
Does this experiment imply that the -std=c89 -pedantic flags do not actually enforce the C89/C90 standard?
As a final note, I am not trying to say I want to use this function in C89, I was just curious about the compatibility restriction, and then confused about the answer.
Thanks in advance for any responses!
From a C89/C90 compiler's point of view, the only thing wrong with your code is the use of unsigned long long which looks like a syntax error. The standard requires only that the compiler produce a "diagnostic" in this case, and GCC has done so with its "ISO C90 does not support long long" warning. There is no requirement that this error should be fatal, and the compiler can decide to handle the code some other way if it wants. GCC obviously chooses to understand it as the long long type which it supports as an extension to C89.
The use of strtoull then just looks like some function that you made up, as C89 had no way of knowing that this name would be special in some future version of the standard. (Well, they did specify that more functions starting with str could be added to <string.h> in the future, but that doesn't make your code illegal for C89.) You haven't declared it, but C89 allowed implicit declarations, so it's understood to be declared as int strtoull();, i.e. returning int and with unspecified arguments. AFAIK no diagnostic was required for implicit declarations, but GCC chooses to issue one anyway. So it's treated like any other call to a function not defined in this source file, and the compiler presumes that some other part of your program (including the libraries you use) will define it.
And in fact some other part of your program does define it, namely libc, since your libc conforms to C99 and later. (You know, hopefully, that libc is not part of GCC.) C library authors generally don't provide a version of the library that only includes functions from a particular standard version, since having so many different libraries around would be awkward and inefficient. So linking succeeds.
Note, though, that because of the implicit declaration, the program may not actually work correctly. The compiler will generate code incorrectly assuming that strtoull returns int, which, depending on your system's calling conventions, may cause all sorts of problems. On x86-64, it means that your program will only look at the low 32 bits of the result and will sign-extend them to 64 bits. So if you try to convert a number that doesn't fit in 32 bits, you'll get the wrong result.
If you want a program that would work on a system that only supports C89 and nothing else, it's your responsibility to look at the diagnostics issued by the compiler and fix the corresponding problems. The -pedantic-errors option mentioned in comments can help with this, as it causes compilation to fail when such diagnostics are issued.
It would also help if you could find a C89-only libc, but that's not GCC's problem. But its implicit declaration warnings do give you some assistance in noticing that you have called a function which you may not have intended for your program to define.
As a final point, it's historically been part of GCC's design philosophy that they don't think "enforcing the standard" is really part of what they want to do. They saw their goal as writing a compiler that helps people write and compile programs that are useful, not a linter that checks for conformance with coding standards; they figured the latter should be a separate project, and not one that they were interested in. As such, they were liberal in providing extensions to the standard language, and not particularly diligent in providing ways for programs to avoid using them. They did provide the -pedantic option but apparently with some reluctance, as you can tell from the derogatory name.

Crash in flex/bison parser code while reading parameters with visual studio (x64) working in x32

I'm trying to port legacy 32-bit parser code generated from flex/bison.
I must use Visual Studio 2019 and compile for an x64 target.
A crash occurs (read access violation) while parsing the parameters in this code:
case 42:
{ registerTypedef( (yyvsp[(2) - (4)]), (yyvsp[(3) - (4)]) ); }
break;
Here is the called function's definition:
void registerTypedef(char* typeref, char* typeName)
{
    //SG_TRACE_INFO("registerTypedef %s %s", typeName, typeref);
    std::string typeNameStr = typeName;
    std::string typeRefStr = typeref;
    TheSGFactory::GetInstance().SG_Factory::RegisterTypeDef(typeNameStr, typeRefStr);
}
The corresponding rule is the following:
declaration_typedef
: TYPEDEF TYPEDEF_NAME IDENTIFIER ';' { registerTypedef( $2, $3 ); }
| TYPEDEF basic_type IDENTIFIER ';' { registerTypedef( $2, $3 ); }
;
It looks like yyvsp is accessed with a negative index: (2) - (4) = -2.
This should be OK, as the same code works perfectly with the 32-bit compiler.
The C99 standard seems to be OK with this also.
I have tried the latest flex/bison versions available under Windows and Unix; the generated code is quite similar and the issue is the same.
Is there a magic Visual Studio option to make it accept a negative index?
Is there a magic flex/bison parameter that would fix this issue?
Thanks a lot!
You're almost certainly looking in the wrong place.
yyvsp always points to the top of the parser stack, so negative indexes are perfectly normal. And totally legal. The problem will be that the thing that's supposed to be a char* isn't a valid pointer, probably because the default semantic value type was not changed from int. On 32-bit architectures, you can often get away with stashing pointers into ints, since they are likely the same size. But 64-bit compiles will break, since half of the pointer will be truncated.
This error should be apparent if you compile with warnings enabled.
Note that nothing guarantees that YYSTYPE is the same in the lexical scanner and in the parser, since they are independent programs generated from different source files by different code generators. So it might be wrong in either or both. (Compiler warnings will help distinguish the cases.)
Your best bet is to ensure that YYSTYPE is correctly defined in the bison-generated header file to avoid type mismatch issues. The easiest way to do that is with the %define api.value.type bison declaration, but that's a relatively recent feature. The older style was to put #define YYSTYPE whatever in a bison %code requires block. And the even older style was to duplicate the YYSTYPE definition in both the .y and .l files. (Or to "fix" the problem by suppressing or ignoring compiler warnings, leaving the problem for some future maintenance programmer. :-) )
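As a sketch of the two styles (assuming every semantic value in this grammar is a char*, which matches registerTypedef's signature):

```
/* modern bison (3.x or later): */
%define api.value.type {char *}

/* older style, placed in the .y file (and mirrored in the .l file): */
%code requires {
  #define YYSTYPE char *
}
```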
I think there were two issues here:
#rici was right concerning the YYSTYPE definitions being different: they MUST be the same, in my case char*.
The callback lexer code was using strdup(). Visual Studio 2019 by default resolved this function to an implicitly declared function returning int:
yylval = strdup(yytext);
This was corrupting the stack content.
I had to force #include <string.h> to use the POSIX version returning char *.
Note: I already needed to force-include <stdlib.h> so that other "C" functions point to their correct versions (alloca ...).
Mystery solved ! Thanks a lot to all contributors

lldb in Xcode treats an integer called I as a complex number

I have C code in which an int I gets declared and initialized. When I'm debugging within Xcode, if I try to print the value of I, Xcode tries to find a complex number:
(lldb) p I
error: <lldb wrapper prefix>:43:31: expected unqualified-id
using $__lldb_local_vars::I;
^
<user expression 3>:1760:11: expanded from here
#define I _Complex_I
^
<user expression 3>:7162:20: expanded from here
#define _Complex_I ( __extension__ 1.0iF )
When I try the same thing (stopping at the same exact line in the code) in the command line, without using xcode, it works fine:
(lldb) p I
(int) $0 = 56
I'm including the following headers:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>
which shouldn't even include complex numbers, no? I definitely don't have a macro that defines I to be the complex variable. The one I run in Xcode, I compile with the default Xcode tools. The one I run in the command line, I use gcc. Is this the difference, somehow? Is Xcode including more libraries than I ask it to? Why is this happening and how can I prevent it?
Edit: I should also add that the variable explorer in Xcode shows the value of I correctly, as an integer.
$__lldb_local_vars is an artificial namespace that lldb injects into the wrapper it sets up for your expression before compilation so that clang can find the frame's local variables and their types. The problem comes as others have noted because we also run the preprocessor when compiling your expression, and your variable name collides with a preprocessor symbol in the expression context.
Normally, debug information does not record macros at all, so you aren't seeing the complex.h version of I from your own use of it in your code. Rather, you are seeing the I macro because something has caused the Darwin module to be imported into lldb's expression context.
That can happen in two ways, either because you explicitly asked for it by running:
(lldb) expr #import Darwin
or because you built this program with -fmodules and your code imported the Darwin module by inserting a statement like the above.
Doing this by hand is a common trick, used explicitly to make #defines from the module visible to the expression parser. Since it is the visibility of the macro that is causing problems, you will have to stop doing that if you want this expression to succeed.
OTOH, if lldb is doing this because the debug information recorded that some part of your code imported this module, you can turn off the behavior by putting:
settings set target.auto-import-clang-modules 0
in your ~/.lldbinit and restarting your debug session.
BTW, the p command (or the expression command that p is an alias for) evaluates the text you provide as a full expression, in the language and the context of the current frame, with as much access to symbols, defines and the like as lldb can provide. Most users also want to be able to access class information that might not be directly visible in the current frame, so it tends to cast as wide a net as possible looking for symbols and types in order to enable this.
It is a very powerful feature, but as you are seeing sometimes the desire to provide this wide access for expressions can cause conflicting definitions. And anyway, it is way more powerful than needed just to view a local variable.
lldb has another command: frame var (convenient alias v) that prints local variable values by directly accessing the memory pointed to by the debug information and presenting it using the type from the debug info. It supports a limited subset of C-like syntax for subelement reference; you can use * to dereference, . or -> and if the variable is an array [0] etc...
So unless you really do need to run an expression (for instance to access a computed property or call another function), v will be faster and because its implementation is simpler and more direct, it will have less chance of subtle failures than p.
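For example, in a hypothetical session stopped at the same line as the transcripts above (the 56 is taken from the command-line run shown earlier):

```
(lldb) v I
(int) I = 56
```

Because v reads the variable straight from memory using the debug info, the complex.h macro never enters the picture.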
If you also want to access the object definition of some ObjC or Swift local variable, the command vo or frame var -O will fetch the description of the local variable it finds using the v method.
I definitely don't have a macro that defines I to be the complex variable.
It looks like lldb is getting confused somehow, not an issue with your code, but without a MRE it is hard to say.
The one I run in xcode, I compile with the default xcode tools. The one I run in the command line, I use gcc. Is this the difference, somehow?
xcode uses "Apple clang" (an old, custom version) with libc++ by default, as far as I know. gcc is quite different and it may not even use libc++.
Having said that, since xcode shows the variable as an integer but lldb does not, it looks like something else is going on.
Is xcode including more libraries than I ask it to?
I don't think so given the program works and Xcode shows the value as an integer.
Why is this happening and how can I prevent it?
Hard to say since it is a closed source tool. Try to make an MRE. It usually helps debugging the issue and finding workarounds.
By definition, a complex number is not simply an int.
Additionally, as mentioned, the I macro for complex numbers is defined in <complex.h>:
To construct complex numbers you need a way to indicate the imaginary
part of a number. There is no standard notation for an imaginary
floating point constant. Instead, complex.h defines two macros that
can be used to create complex numbers.
Macro: const float complex _Complex_I
This macro is a representation of the complex number “0+1i”. Multiplying a real floating-point value by _Complex_I gives a complex number whose value is purely imaginary. You can use this to construct complex constants:
3.0 + 4.0i = 3.0 + 4.0 * _Complex_I
Note that _Complex_I * _Complex_I has the value -1, but the type of that value is complex.
_Complex_I is a bit of a mouthful. complex.h also defines a shorter name for the same constant.
Macro: const float complex I
This macro has exactly the same value as _Complex_I. Most of the time it is preferable. However, it causes problems if you want to use the identifier I for something else. You can safely write
#include <complex.h>
#undef I
Reference here for GNU implementation
Include this header file (or the equivalent from your environment), and there is no need to define it yourself.

Does C compiler distinguish type defined in user's code and library code?

#include <sys/types.h>
//Line 2: typedef unsigned int uint;
//Line 3: typedef unsigned int uint;
int main() {
uint a;
return 0;
}
Given the above C code, it compiles successfully, since uint is defined in <sys/types.h>. Even though it's not standardized, it's added there for SysV compatibility, as commented in the code.
Uncomment the second line of the above code still results successful compiling. As I understand it's not allowed to redefine one type, confirmed by uncommenting the second and third lines, which will result in one compiling error.
How come the compiler is smart enough to know uint is defined in standard library or user's code? Both gcc and clang give consistent behavior.
Edit:
Linking is not part of the game in this case. The error is reproduced with compilation only, i.e. with the -c option.
Line number is added to reduce confusion.
Edit:
Uncomment the second line of the above code still results successful compiling. As I understand it's not allowed to redefine one type, confirmed by uncommenting the second and third lines, which will result in one compiling error.
I have no idea why I wrote this. Apparently, uncommenting Line 2 and Line 3 doesn't result in a compilation error with gcc. Clang gives an error because its default compiling options are much stricter; this can be tuned by passing some parameters.
The discussion here describes whether multiple typedefs are allowed or not, which turns out to be quite complicated. Anyway, just try to avoid duplicate typedefs.
Repeating a declaration is perfectly valid in C, so if you uncomment both lines as you describe you will not see any error, contrary to what you say.
Having two different declarations for a name would be an error.
Repeating a definition is an error as well, but a typedef is not a definition (despite the def), it is a declaration.
Standard library is also user code, usually written by another user.
Uncomment the second line of the above code still results successful compiling. As I >understand it's not allowed to redefine one type, confirmed by uncommenting the second and third lines, which will result in one compiling error.
On my gcc it does not. (version 4.5.3)
How come the compiler is smart enough to know uint is defined in standard library or user's code? Both gcc and clang give consistent behavior.
The compiler makes no distinction between user code and the standard library. Although it could distinguish standard library files from user code, I really do not see any reason to do so. All it sees is textual data that it can lex/parse/codegen.
