I recently took over some C and firmware responsibilities at work, and am having trouble with what seems like a basic issue but one that I can't find the answer to. I'm not very experienced with C, but I've had many years of experience with higher level languages.
The firmware is written for a PIC18F4865 and I can't get it to compile and program correctly. It was originally written on MPLAB IDE 8 using the HI-TECH PICC18 compiler, but I moved up to MPLAB X IDE and have been having problems.
First, I was using the same HI-TECH PICC18 compiler and it appeared to program successfully, but the device was not reading correctly. I then switched to the XC8 compiler and began to get an error message during compile that I can't get around.
C:/_Sable/Firmware_C/lib\eeprom.h:10: error: no identifier in declaration
C:/_Sable/Firmware_C/lib\eeprom.h:10: error: ";" expected
The eeprom.h file is
#ifndef _EEPROM_H_
#define _EEPROM_H_
#define EE_ADDR(member) (offsetof(struct ee_map_s, (member)))
extern unsigned char eeprom_read(unsigned int); // this is line 10
extern void eeprom_write(unsigned int, unsigned char);
extern void ee_read(unsigned char, void *vp, unsigned char);
extern void ee_write(unsigned char, void *vp, unsigned char);
#endif
I looked around online and saw that this error can occur in a previously included file, and I checked that file and all appeared to be fine. I even rearranged the include order, thinking that the error message would change if that were the case, but the error still complains about this line.
I then thought maybe the function declaration was invalid because none of the parameters are named, so I changed line 10 to:
extern unsigned char eeprom_read(unsigned int addr);
This didn't change anything. But I did have a weird feeling that when I cleaned and built again, it was not re-compiling eeprom.h. I don't know if that happens or how to force it to recompile.
I don't know if fixing this will fix the firmware issues I'm having or if I need to go back to MPLAB IDE 8, but it is still something I'd like to fix.
Some header file is using a macro to #define eeprom_read into something else, possibly the empty string. If you use a different function name, #undef eeprom_read, or do something else to cause the header to no longer make that macro, it should work.
I'm trying to port legacy 32-bit parser code generated from flex/bison.
I must use Visual Studio 2019 and compile for an x64 target.
A crash occurs (read access violation) while parsing the parameters in this code:
case 42:
{ registerTypedef( (yyvsp[(2) - (4)]), (yyvsp[(3) - (4)]) ); }
break;
Here is the definition of the called function:
void registerTypedef(char* typeref, char* typeName)
{
//SG_TRACE_INFO("registerTypedef %s %s", typeName, typeref);
std::string typeNameStr = typeName;
std::string typeRefStr = typeref;
TheSGFactory::GetInstance().SG_Factory::RegisterTypeDef(typeNameStr, typeRefStr);
}
The corresponding rule is the following:
declaration_typedef
: TYPEDEF TYPEDEF_NAME IDENTIFIER ';' { registerTypedef( $2, $3 ); }
| TYPEDEF basic_type IDENTIFIER ';' { registerTypedef( $2, $3 ); }
;
It looks like yyvsp is accessed with a negative index: (2) - (4) = -2.
This should be OK, as the same code works perfectly with a 32-bit compiler.
The C99 standard seems to be OK with this as well.
I have tried the latest flex/bison versions available under Windows and Unix; the generated code is quite similar and the issue is the same.
Is there a magic Visual Studio option to make it accept a negative index?
Is there a magic flex/bison parameter that would fix this issue?
Thanks a lot!
You're almost certainly looking in the wrong place.
yyvsp always points to the top of the parser stack, so negative indexes are perfectly normal, and totally legal. The problem will be that the thing that's supposed to be a char* isn't a valid pointer, probably because the default semantic value type was never changed from int. On 32-bit architectures you can often get away with stashing pointers into ints, since they are likely the same size. But 64-bit compiles will break, since half of the pointer will be truncated.
This error should be apparent if you compile with warnings enabled.
Note that nothing guarantees that YYSTYPE is the same in the lexical scanner and in the parser, since they are independent programs generated from different source files by different code generators. So it might be wrong in either or both. (Compiler warnings will help distinguish the cases.)
Your best bet is to ensure that YYSTYPE is correctly defined in the bison-generated header file to avoid type mismatch issues. The easiest way to do that is with the %define api.value.type bison declaration, but that's a relatively recent feature. The older style was to put #define YYSTYPE whatever in a bison %code requires block. And the even older style was to duplicate the YYSTYPE definition in both the .y and .l files. (Or to "fix" the problem by suppressing or ignoring compiler warnings, leaving the problem for some future maintenance programmer. :-) )
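Both styles can be sketched like this (a fragment of a hypothetical .y file; pick one or the other, and the value type char * is just what this question needs):

```yacc
/* Newer bison (3.x): let bison emit the type everywhere it is needed */
%define api.value.type {char *}

/* Older style: put the definition where both the parser and the
   generated header (which the .l file includes) will see it */
%code requires {
  #define YYSTYPE char *
}
```

The .l file should then #include the bison-generated header rather than defining YYSTYPE by hand, so the scanner and parser always agree.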
I think there were two issues here:
@rici was right about the differing YYSTYPE types: they MUST be the same, in my case char*.
The lexer callback code was using strdup(). Visual Studio 2019, by default, resolves this to an implicitly declared function returning int:
yylval = strdup(yytext);
This was corrupting the stack content.
I had to force #include <string.h> to get the POSIX version returning char *.
Note: I already needed to force-include <stdlib.h> so that other C functions (alloca, ...) point to the correct versions.
Mystery solved! Thanks a lot to all contributors.
Really simple: Visual Studio 2017 reports the error "variable "InputCode" is not a type name" for the following code:
#ifndef INPUT_H
#define INPUT_H
typedef unsigned InputCode;
struct KeyboardInfo
{
char *name; /* OS dependent name; 0 terminates the list */
unsigned code; /* OS dependent code */
InputCode standardcode; /* CODE_xxx equivalent from list below, or CODE_OTHER if n/a */
};
#endif
There is nothing wrong with the code presented, neither when interpreted as C nor when interpreted as C++.
In particular, contrary to some of the commentary on the question, unsigned is a Standard-supported alias for unsigned int in both languages, just as long is a Standard-supported alias for long int. There is thus no inherent problem with the typedef declaration itself, which, indeed, VS does not flag.
Wherever a typedef declaration is in scope, the identifier it declares -- InputCode in this case -- is valid for use as a type name, exactly as the code seems to expect. There is therefore no problem with the struct KeyboardInfo declaration, either.
If the Visual Studio compiler or IDE complains about the code presented then that constitutes a flaw in Visual Studio. However, you might find that VS compiles the code successfully despite the IDE flagging an issue in it.
I found that the problem was not in the first error reported during compilation but in an error that was reported later. Very strange behaviour; with gcc or g++, the first reported error is always the real problem.
I am using Visual Studio 2017.
First I wrote the following code:
void main()
{
printf("abcdefgh %d hjhjh %d", 5, 6);
getch();
}
It ran perfectly fine.
But after that I modified the code to the following:
void main()
{
char abc[100];
strcpy_S(abc, "premraj");
printf("%s", abc);
printf("abcdefgh %d hjhjh %d", 5, 6);
getch();
}
But now I am getting an error with getch stating that "'getch' undefined, assuming extern returning int".
But this new code was built on the existing code, which recognized getch perfectly. How can it not recognize getch the second time?
I checked out the following question :
getch() is working without conio.h - how is that possible?
which describes a similar problem, but in my case the error appeared only after the modifications.
There is an informative answer there by a user named "Fatal Error", but I would still like to know what is behind this intriguing behaviour that appeared only after the modifications. What can be the reason?
P.S.: These were my header includes the first time:
#include <stdio.h>
and these for the second time:
#include <stdio.h>
#include <string.h>
Once upon a time, if you called a function which the compiler had never heard of, like this:
#include <stdio.h>
int main()
{
int x = foo();
printf("%d\n", x);
}
Anyway, if you did that, the compiler quietly assumed that foo() was a function returning int. That is, the compiler behaved just as if you had typed
extern int foo();
somewhere before you called foo.
But in, I think, C99, the language was changed. It was no longer legal to call a function you had not explicitly declared. Because there was lots and lots of code out there that was written under the previous set of rules, most compilers did not immediately begin rejecting the old-style code. Some continued to quietly assume that unrecognized functions returned int. Others -- like yours -- began noisily assuming that unrecognized functions returned int, emitting warnings along the lines of "'foo' undefined, assuming extern returning int".
It sounds like your situation is that some time ago, your code containing calls to getch() was accepted without warning, but today, you're getting the warning "'getch' undefined, assuming extern returning int". What changed?
One possibility is that your code changed slightly. If your code used to contain the line
#include <conio.h>
somewhere, that file would have contained a declaration along the lines of
extern int getch();
and this would have given the compiler the declaration that it needed, and you would not have gotten the warning. But if today your code does not contain that #include line, that explains why the warning started cropping up.
It's also possible that your compiler has changed somehow. It's possible you're using a new version of the compiler, one that's more fussy and that has gone from quietly assuming to noisily assuming. Or, it's possible that your compiler options have changed. Many compilers can be configured to accept different variants of the language, corresponding to the different versions of the language standards that have been released over the years. For example, if some time ago your compiler was compiling for language standard "C89", but today it's doing "C99" or "C11", that would explain why it's now being noisy with this warning.
The change in the compiler could be a change in the defaults as configured by a system administrator, or a change in the way you're invoking the compiler, or a change in your project's Makefile, or a change in the language settings in your IDE, or something like that.
A few more points:
getch is not a Standard C function; it's specific to Windows. Your program would be more portable, in general, if you didn't use it. Are you sure you need it? (I know what it's for; what I don't know is whether there's some other way of keeping your program's output window on the screen after it exits.)
You should get in the habit of declaring main() as int, not void. (void will work well enough, but it's not correct, and if nothing else, you'll get lots of negative comments about it.)
I think there's something wrong with your call to strcpy_S, too: the function is spelled strcpy_s, and it takes the destination size as its second argument, e.g. strcpy_s(abc, sizeof abc, "premraj").
I'm currently changing our codebase to make it compile under 64-bit architecture. Most of the changes I'm having to make are obvious, but this one has got me stumped. SetWindowPos has a second argument, hWndInsertAfter, that can be either a window handle, or one of the predefined values HWND_TOP, HWND_BOTTOM, HWND_TOPMOST and HWND_NOTOPMOST (see here for MSDN info). These values are defined in WinUser.h.
In 32-bit architecture, using one of those in a call to SetWindowPos works fine, but in 64-bit, the compiler complains thus:
warning C4306: 'type cast' : conversion from 'int' to 'HWND' of greater size
This is because the #defines are casting [32-bit] integers as HWNDs, e.g.:
#define HWND_TOPMOST ((HWND)-1)
What do I need to change to make this compile in 64-bit architecture without the compiler throwing a warning? I can disable the warnings using #pragma warning( disable: 4306 ), or make my own definition using a 64-bit int in the #define, but surely there's a "proper" Microsoft way of doing this?
The warning is triggered because you're casting the 32-bit int value -1 to a 64-bit pointer type void* without any intervening cast to a 64-bit integer type such as intptr_t. MSVC should have suppressed the warning in this case since (A) it's triggered only by the expansion of the system-provided macro HWND_TOPMOST and (B) the offending int is a decimal literal, but apparently MSVC's developers didn't think of those heuristics.
There's nothing you can do in your code to silence the warning, unless you're happy with
#undef HWND_TOPMOST
#define HWND_TOPMOST ((HWND)(intptr_t)-1)
Alternatively, you can try to suppress it in the IDE. This thread suggests
Project Settings | C/C++ | General and turn off "Detect 64-bit portability issues"
or pass /wd4306 on the command line.
Ok, after MUCH testing, the problem was that my file was a .c file. I renamed it to .cpp and SetWindowPos then compiled without error (and conversely, in the new test app I created to try a 'bare bones' solution, when I renamed the default .cpp file to a .c file, it started complaining).
Looks like .c files don't want to allow casting 32-bit int values to 64-bit pointers. Which makes sense, but doesn't explain why it works in .cpp files. If anyone has any ideas on why this is, do note it in the comments...
#include <sys/types.h>
//Line 2: typedef unsigned int uint;
//Line 3: typedef unsigned int uint;
int main() {
uint a;
return 0;
}
Given the above C code, it compiles successfully, since uint is defined in <sys/types.h>. Even though it's not standardized, it's added there for Sys V compatibility, as commented in the code.
Uncommenting the second line of the above code still results in successful compilation. As I understand it, it's not allowed to redefine a type, which is confirmed by uncommenting both the second and third lines, which results in a compile error.
How come the compiler is smart enough to know uint is defined in standard library or user's code? Both gcc and clang give consistent behavior.
Edit:
Linking is not part of the game in this case. The error is reproduced with compile only, i.e (-c option).
Line number is added to reduce confusion.
Edit:
Uncommenting the second line of the above code still results in successful compilation. As I understand it, it's not allowed to redefine a type, which is confirmed by uncommenting both the second and third lines, which results in a compile error.
I have no idea why I wrote this. Apparently, uncommenting Line 2 and Line 3 doesn't result in a compilation error with gcc. Clang gives an error because its default compile options are much stricter; this can be tuned by passing some parameters.
This describes whether multiple typedefs are allowed or not, which turns out to be quite complicated. Anyway, just try to avoid duplicate typedefs.
Repeating a declaration is perfectly valid in C, so if you uncomment both lines as you describe you will not see any error, contrary to what you say.
Having two different declarations for the same name would be an error.
Repeating a definition is an error as well, but a typedef is not a definition (despite the "def"); it is a declaration. (Note that C11 explicitly allows a typedef name to be redeclared with the same type; C99 did not, which is one reason compilers in different modes disagree here.)
Standard library is also user code, usually written by another user.
Uncomment the second line of the above code still results successful compiling. As I understand it's not allowed to redefine one type, confirmed by uncommenting the second and third lines, which will result in one compiling error.
On my gcc it does not. (version 4.5.3)
How come the compiler is smart enough to know uint is defined in standard library or user's code? Both gcc and clang give consistent behavior.
The compiler makes no distinction between user code and the standard library. Although a compiler could distinguish standard library files from user code, I really do not see any reason to do so. All it sees is textual data that it can lex/parse/codegen.