I am trying to calculate the Greatest Common Divisor (GCD) of two integers.
C Code:
#include <stdio.h>

int gcd(int x, int y);

int main()
{
    int m, n;
    printf("Enter two integers: \n");
    scanf("%d%d", &m, &n);
    printf("GCD of %d & %d is = %d", m, n, gcd(m, n));
    return 0;
}
int gcd(int x, int y)
{
    int i, j = 1, temp1, temp2;
    for (i = 1; i <= (x < y ? x : y); i++)
    {
        temp1 = x % i;
        temp2 = y % i;
        if (temp1 == 0 and temp2 == 0)
            j = i;
    }
    return j;
}
In the if statement, note the logical operator: it is and, not && (a typo on my part). Yet the code compiles and runs without any warning or error.
Is there an and operator in C? I am using Orwell Dev-C++ 5.4.2 (in C99 mode).
&& and and are alternative tokens and are functionally the same. From section 2.6 Alternative tokens of the C++ draft standard:

Alternative   Primary
and           &&

is one of the entries in Table 2 - Alternative tokens, and subsection 2 says:
In all respects of the language, each alternative token behaves the same, respectively, as its primary token, except for its spelling. The set of alternative tokens is defined in Table 2.
As Potatoswatter points out, using and will most likely confuse most people, so it is probably better to stick with &&.
It is important to note that Visual Studio is not compliant here in C++ and apparently does not plan to be.
Edit
I am adding a C-specific answer, since this was originally an answer to a C++ question but the questions were merged. The relevant quote from the C99 draft standard is section 7.9 Alternative spellings <iso646.h>, whose paragraph 1 says:
The header defines the following eleven macros (on the left) that expand
to the corresponding tokens (on the right):
and includes this line as well as several others:
and &&
We can also find a good reference here.
Update
Looking at your latest code update, I am not sure that you are really compiling in C mode; the release notes for Orwell Dev-C++ 5.4.2 say it uses GCC 4.7.2. I cannot get this to build in either gcc-4.7 or gcc-4.8 using -x c to force C language mode (see the live code here), although if you comment out the gcc line and use g++ it builds OK. It also builds OK under gcc if you uncomment #include <iso646.h>.
Check out the page here: iso646.h
This header defines 11 macros that are the text equivalents of some common operators.
and is one of those defines.
Note that I can only test this for a C++ compiler so I'm not certain if you can use this with a strict C compiler.
EDIT: I've just tested it with a C compiler here, and it does work.
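For reference, a minimal sketch of what such a C test might look like (the file name and values are mine, not from the original question):

/* and_test.c - compiles as C only because of the <iso646.h> include */
#include <stdio.h>
#include <iso646.h>   /* defines 'and' as a macro expanding to && */

int main(void)
{
    int temp1 = 0, temp2 = 0;
    if (temp1 == 0 and temp2 == 0)   /* preprocesses to: temp1 == 0 && temp2 == 0 */
        printf("both are zero\n");
    return 0;
}

Built with something like gcc -std=c99 and_test.c, this compiles only because of the <iso646.h> include; remove that line and a C compiler rejects the and.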
and is just an alternative token for &&.
We can easily quote the standard here:
2.6 Alternative tokens [lex.digraph]
In all respects of the language, each alternative token behaves the same, respectively, as its primary token, except for its spelling. The set of alternative tokens is defined in Table 2.
In Table 2:

Alternative | Primary
and         | &&
But I suggest you use &&. People used to C/C++ may get confused by and...
Since the questions are now merged and we are also talking about C, you can check this page on ciso646, which defines the alternative tokens.
This header defines 11 macro constants with alternative spellings for those C++ operators not supported by the ISO646 standard character set.
From the C99 draft standard :
7.9 Alternative spellings <iso646.h>
The header defines the following eleven macros (on the left) that expand
to the corresponding tokens (on the right):
and &&
Basically, and is just the text version of && in C.
You do, however, need to #include <iso646.h>, or it isn't going to compile.
You can read more here:
http://msdn.microsoft.com/en-us/library/c6s3h5a7%28v=vs.80%29.aspx
If the code in your question compiles without errors, either you're not really compiling in C99 mode or (less likely) your compiler is buggy. Or the code is incomplete, and there's a #include <iso646.h> that you haven't shown us.
Most likely you're actually invoking your compiler in C++ mode. To test this, try adding a declaration like:
int class;
A C compiler will accept this; a C++ compiler will reject it as a syntax error, since class is a keyword. (This may be a bit more reliable than testing the __cplusplus macro; a misconfigured development system could conceivably invoke a C++ compiler with the preprocessor in C mode.)
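As a hedged sketch, a complete test file might look like this (the file name is mine):

/* mode_test.c - rejected by a C++ compiler, accepted by a C compiler */
int class;   /* 'class' is a keyword in C++ but an ordinary identifier in C */

int main(void)
{
    class = 42;
    return class;
}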
In C99, the header <iso646.h> defines 11 macros that provide alternative spellings for certain operators. One of these is
#define and &&
So you can write
if(temp1 ==0 and temp2 == 0)
in C only if you have a #include <iso646.h>; otherwise it's a syntax error.
<iso646.h> was added to the language by the 1995 amendment to the 1990 ISO C standard, so you don't even need a C99-compliant compiler to use it.
In C++, the header is unnecessary; the same tokens defined as macros by C's <iso646.h> are built-in alternative spellings. (They're defined in the same section of the C++ standard, 2.6 [lex.digraph], as the digraphs, but a footnote clarifies that the term "digraph" doesn't apply to lexical keywords like and.) As the C++ standard says:
In all respects of the language, each alternative token behaves the
same, respectively, as its primary token, except for its spelling.
You could use #include <ciso646> in a C++ program, but there's no point in doing so (though it will affect the behavior of #ifdef and).
I actually wouldn't advise using the alternative tokens, either in C or in C++, unless you really need to (say, in the very rare case where you're on a system where you can't easily enter the & character). Though they're more readable to non-programmers, they're likely to be less readable to someone with a decent knowledge of the C and/or C++ language -- as demonstrated by the fact that you had to ask this question.
It compiles for you because, I think, you included the iso646.h header file (ciso646 in C++).
According to it, and is identical to &&. If you don't include that header, you get a compiler error.
The and operator is the text equivalent of && (ref: AND operator).
The or operator is the text equivalent of || (ref: OR operator).
So resA and resB are identical.
&& and and are synonyms and mean Logical AND in C++. For more info check Logical Operators in C++ and Operator Synonyms in C++.
Related
I was under the impression that you could only start variable names with letters and _, however while testing around, I also found out that you can start variable names with $, like so:
Code
#include <stdio.h>
int main() {
    int myvar = 13;
    int $var = 42;
    printf("%d\n", myvar);
    printf("%d\n", $var);
}
Output
13
42
According to this resource, you can't start variable names with $ in C, which is wrong (at least when compiling with my gcc version, Apple LLVM version 10.0.1 (clang-1001.0.46.4)). Other resources I found online also seem to suggest that variables can't start with $, which is why I'm confused.
Do these articles just fail to mention this nuance, and if so, why is this a feature of C?
In the C 2018 standard, clause 6.4.2, paragraph 1 allows implementations to allow additional characters in identifiers.
It defines an identifier to be an identifier-nondigit character followed by any number of identifier-nondigit or digit characters. It defines digit to be “0” to “9”, and it defines the identifier-nondigit characters to be:
a nondigit, which is one of underscore, “a” to “z”, or “A” to “Z”,
a universal-character-name, or
other implementation-defined characters.
Thus, implementations may define other characters that are allowed in identifiers.
The characters included as universal-character-name are those listed in ranges in Annex D of the C standard.
The resource you link to is wrong in several places:
Variable names in C are made up of letters (upper and lower case) and digits.
This is false; identifiers may include underscores and the above universal characters in every conforming implementation and other characters in implementations that permit them.
$ not allowed -- only letters, and _
This is incorrect. The C standard does not require an implementation to allow “$”, but it does not disallow an implementation from allowing it. “$” is allowed by some implementations and not others. It can be said not to be a part of strictly conforming C programs, but it may be a part of conforming C programs.
This answers your question:
In GNU C, you may normally use dollar signs in identifier names. This is because many traditional C implementations allow such identifiers. However, dollar signs in identifiers are not supported on a few target machines, typically because the target assembler does not allow them.
This is allowed in GCC and LLVM because many traditional C implementations allow identifiers like this.
One such reason is that VMS commonly uses these, where a lot of system library routines have names like SYS$SOMETHING.
Here's a link to the GCC docs describing this:
https://gcc.gnu.org/onlinedocs/gcc/Dollar-Signs.html
Depends on the dialect of C and the options selected. Historically, some C compilers supported $ to be compatible with existing libraries when C was new. You may need a command-line option to enable $, or another to turn it off if strictly conforming C is valuable to you.
A spot of history: in my early years I got into enough mainframe rooms to know that $ is one of what IBM mainframes called the "national characters" of $, #, and @ that could show up in identifiers of programming languages like PL/1 and mainframe assembler. This carried down to some mainframe spin-offs, such as the IBM 1130. Early impact printers, which printed using pieces of shaped type slugs, and CRT terminals could swap out these characters to meet the national needs of foreign customers. The IBM 1403 printer had many "print chains" to choose from for different human languages and technical purposes.
Some non-IBM languages picked up at least some of these characters. GNU C, VMS, and JavaScript kept "$". "$" is the only character of old that seems to have survived to this day, even as an option, in most languages. The odd thing is that back in the early IBM days the underscore was invalid in identifier names.
TL;DR: it's the assembler, not the compiler.
OK, so I did some research into this. It's not really the compiler that disallows it; what excludes it is the assembly pass. Trying to do the following fails:
#include <stdio.h>

extern int $func();

int main() {
    int myvar = 13;
    int $var = 42;
    printf("%d\n", myvar);
    printf("%d\n", $var);
    $func();
}
joshua@nova:/tmp$ gcc -c test.c
/tmp/ccg7zLVB.s: Assembler messages:
/tmp/ccg7zLVB.s:31: Error: operand type mismatch for `call'
joshua@nova:/tmp$
I pulled K&R C version 2 (this covers ANSI C) off my shelf and it says "Identifiers are a sequence of letters and digits. The first character must be a letter; the underscore _ character counts as a letter. Upper and lower case letters are different. Identifiers may have any length ... [obsolete verbiage omitted]."
This reference has clearly aged; almost everybody now accepts high Unicode as letters. What's going on is that the back-end assembler sees symbols bytewise, and every byte with the high bit set counts as a letter. If you're crazy enough to use Shift-JIS outside of string literals, chaos can ensue; but otherwise this tends to work well enough.
I accessed a draft of C18, which gives the grammar:

identifier-nondigit:
    nondigit
    universal-character-name
    other implementation-defined characters

Therefore, implementations are allowed to permit additional characters.
For universal-character-name, we have a restriction: "A universal character name shall not specify a character whose short identifier is less than 00A0 other than 0024 ($), 0040 (@), or 0060 (`), nor one in the range D800 through DFFF inclusive."
The following code still chokes at the assembly pass as expected:
#include <stdio.h>
extern int \U00000024func();
int main()
{
    return \U00000024func();
}
The following code builds:
#include <stdio.h>
extern int func\U00000024();
int main()
{
    return func\U00000024();
}
Macros enable one to easily alias keywords in C, but can they be used to change preprocessor keywords too, so that instead of:
#include <stdlib.h>
#define se if
one may write
#inkludu <stdlib.h>
#difinu se if
In other words, can preprocessing directives be aliased, preferably outside of the code itself, for example with a compiler argument such as -D for gcc?
A simple test such as the following will fail:
#define difinu define
#difinu valoro 2

int main() {
    int aĵo = valoro;
    return 0;
}
with the following error:
% clang makro.c -o makro
makro.c:2:2: error: invalid preprocessing directive
#difinu valoro 2
^
makro.c:5:16: error: use of undeclared identifier 'valoro'
int aĵo = valoro;
^
2 errors generated.
No. Macros do not change the way preprocessor directives are handled (though what gets compiled can of course change according to conditional directives like #if). In particular, the following is wrong:
///WRONG CODE
#define inc include
/// the following directive is not recognized as an include
#inc <stdio.h>
BTW, having
#define se if
is really a bad habit, even if it is possible. It makes your code se (x<0) {printf("negative x=%d\n", x);} much more difficult to read.
Consider perhaps preprocessing your source with some other preprocessor like m4 or GPP. This amounts to generate your C (or C++) code from something else.
My feeling is that metaprogramming is often a good idea: you write some specialized program which would e.g. emit C or C++ code (and you improve your build procedure, e.g. your Makefile, accordingly). But you might design a real C or C++ code generator (which would work on and process some kind of AST). Parser generators (incorrectly known as compiler-compilers) like ANTLR or bison are a good example of this. And Qt has moc.
See also this & that answers to related questions.
Don't forget to read several textbooks (notably related to compilers) before attempting your own code generator (or domain specific language implementations).
On a historical note, however, please note that yes, some time ago there were compilers that made it possible to redefine preprocessor directives, and a proof of this fact is the following entry from IOCCC 1985, which obviously compiled happily on a VAX 780/4.2BSD in those days:
http://ioccc.org/1985/lycklama/lycklama.c
which starts with:
#define o define
#o ___o write
#o ooo (unsigned)
Localized languages do exist. Algol 68 (available under Linux) is a very old such language (1968), and, by the way, one of the superior languages out there:
IF x < y THEN x ELSE y FI := 3;
( x < y | x | y ) := 3;
IF x < y THEN sin ELSE cos FI(3.14)
In your case I would write an additional preprocessor, say from .eocpp and .eohpp to .cpp and .hpp. It could translate the special characters, or even do a dictionary translation.
"#define" is the keyword to pre processor for "MACRO substitution ( text replacement) for compilation/compiler". consider code compilation stages in c, 'pre processor->compiler->assembler->linker->loader'..
so, when you compile this code, pre processor trying to search the keyword "#difinu" which is not present.so you are getting the error from preprocessor stage itself.
moreover "#define" is single keyword, how can you expect pre processor to treat this as "#"+"define" . For example
#define game olympic

int main() {
    int abcgame = 10;   // will it become "abcolympic"??
    return 0;
}
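You can confirm this by running only the preprocessor; a sketch (output trimmed, and exact formatting varies by compiler):

$ gcc -E game.c
int main() {
    int abcgame = 10;
    return 0;
}

abcgame survives untouched because the preprocessor replaces complete identifier tokens only; game on its own would become olympic, but abcgame is a single, different token.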
I am a first year computer science student and my professor said #define is banned in the industry standards along with #if, #ifdef, #else, and a few other preprocessor directives. He used the word "banned" because of unexpected behaviour.
Is this accurate? If so why?
Are there, in fact, any standards which prohibit the use of these directives?
First I've heard of it.
No; #define and so on are widely used. Sometimes too widely used, but definitely used. There are places where the C standard mandates the use of macros — you can't avoid those easily. For example, §7.5 Errors <errno.h> says:
The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct positive values, and which are suitable for use in #if preprocessing directives; …
Given this, it is clear that not all industry standards prohibit the use of the C preprocessor macro directives. However, there are 'best practices' or 'coding guidelines' standards from various organizations that prescribe limits on the use of the C preprocessor, though none ban its use completely — it is an innate part of C and cannot be wholly avoided. Often, these standards are for people working in safety-critical areas.
One standard you could check is the MISRA C (2012) standard; that tends to proscribe things, but even it recognizes that #define et al are sometimes needed (section 8.20, rules 20.1 through 20.14, covers the C preprocessor).
The NASA GSFC (Goddard Space Flight Center) C Coding Standards simply say:
Macros should be used only when necessary. Overuse of macros can make code harder to read and maintain because the code no longer reads or behaves like standard C.
The discussion after that introductory statement illustrates the acceptable use of function macros.
The CERT C Coding Standard has a number of guidelines about the use of the preprocessor, and implies that you should minimize the use of the preprocessor, but does not ban its use.
Stroustrup would like to make the preprocessor irrelevant in C++, but that hasn't happened yet. As Peter notes, some C++ standards, such as the JSF AV C++ Coding Standards (Joint Strike Fighter, Air Vehicle) from circa 2005, dictate minimal use of the C preprocessor. Essentially, the JSF AV C++ rules restrict it to #include and the #ifndef XYZ_H / #define XYZ_H / … / #endif dance that prevents multiple inclusions of a single header. C++ has some options that are not available in C — notably, better support for typed constants that can then be used in places where C does not allow them to be used. See also static const vs #define vs enum for a discussion of the issues there.
It is a good idea to minimize the use of the preprocessor — it is often abused at least as much as it is used (see the Boost preprocessor 'library' for illustrations of how far you can go with the C preprocessor).
Summary
The preprocessor is an integral part of C and #define and #if etc cannot be wholly avoided. The statement by the professor in the question is not generally valid: #define is banned in the industry standards along with #if, #ifdef, #else, and a few other macros is an over-statement at best, but might be supportable with explicit reference to specific industry standards (but the standards in question do not include ISO/IEC 9899:2011 — the C standard).
Note that David Hammen has provided information about one specific C coding standard — the JPL C Coding Standard — that prohibits a lot of things that many people use in C, including limiting the use of the C preprocessor (and limiting the use of dynamic memory allocation, and prohibiting recursion — read it to see why, and decide whether those reasons are relevant to you).
No, use of macros is not banned.
In fact, use of #include guards in header files is one common technique that is often mandatory and encouraged by accepted coding guidelines. Some folks claim that #pragma once is an alternative to that, but the problem is that #pragma once - by definition, since pragmas are a hook provided by the standard for compiler-specific extensions - is non-standard, even if it is supported by a number of compilers.
That said, there are a number of industry guidelines and encouraged practices that actively discourage all usage of macros other than #include guards because of the problems macros introduce (not respecting scope, etc). In C++ development, use of macros is frowned upon even more strongly than in C development.
Discouraging use of something is not the same as banning it, since it is still possible to legitimately use it - for example, by documenting a justification.
Some coding standards may discourage or even forbid the use of #define to create function-like macros that take arguments, like
#define SQR(x) ((x)*(x))
because a) such macros are not type-safe, and b) somebody will inevitably write SQR(x++), which is bad juju.
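A small sketch of why SQR(x++) is bad juju (the variable names are mine):

#include <stdio.h>

#define SQR(x) ((x)*(x))

int main(void)
{
    int x = 5;
    /* SQR(x++) expands to ((x++)*(x++)): x is modified twice with no
       intervening sequence point, which is undefined behavior in C. */
    int y = SQR(x++);
    printf("x = %d, y = %d\n", x, y);   /* no guaranteed output */
    return 0;
}

A real function evaluates its argument exactly once and avoids the problem entirely.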
Some standards may discourage or ban the use of #ifdefs for conditional compilation. For example, the following code uses conditional compilation to properly print out a size_t value. For C99 and later, you use the %zu conversion specifier; for C89 and earlier, you use %lu and cast the value to unsigned long:
#if __STDC_VERSION__ >= 199901L
# define SIZE_T_CAST
# define SIZE_T_FMT "%zu"
#else
# define SIZE_T_CAST (unsigned long)
# define SIZE_T_FMT "%lu"
#endif
...
printf( "sizeof foo = " SIZE_T_FMT "\n", SIZE_T_CAST sizeof foo );
Some standards may mandate that instead of doing this, you implement the module twice, once for C89 and earlier, once for C99 and later:
/* C89 version */
printf( "sizeof foo = %lu\n", (unsigned long) sizeof foo );
/* C99 version */
printf( "sizeof foo = %zu\n", sizeof foo );
and then let Make (or Ant, or whatever build tool you're using) deal with compiling and linking the correct version. For this example that would be ridiculous overkill, but I've seen code that was an untraceable rat's nest of #ifdefs that should have had that conditional code factored out into separate files.
However, I am not aware of any company or industry group that has banned the use of preprocessor statements outright.
Macros cannot be "banned". The statement is nonsense. Literally.
For example, section 7.5 Errors <errno.h> of the C Standard requires the use of macros:
1 The header <errno.h> defines several macros, all relating to the reporting of error conditions.
2 The macros are
EDOM
EILSEQ
ERANGE
which expand to integer constant expressions with type int, distinct
positive values, and which are suitable for use in #if preprocessing
directives; and
errno
which expands to a modifiable lvalue that has type int and thread
local storage duration, the value of which is set to a positive error
number by several library functions. If a macro definition is
suppressed in order to access an actual object, or a program defines
an identifier with the name errno, the behavior is undefined.
So, not only are macros a required part of C, in some cases not using them results in undefined behavior.
No, #define is not banned. Misuse of #define, however, may be frowned upon.
For instance, you may use
#define DEBUG
in your code so that later on, you can designate parts of your code for conditional compilation using #ifdef DEBUG, for debug purposes only. I don't think anyone in his right mind would want to ban something like this. Macros defined using #define are also used extensively in portable programs, to enable/disable compilation of platform-specific code.
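A minimal sketch of that pattern (the messages and structure are mine):

#include <stdio.h>

#define DEBUG   /* comment this out for a release build */

int main(void)
{
#ifdef DEBUG
    fprintf(stderr, "debug build: extra diagnostics enabled\n");
#endif
    printf("hello\n");
    return 0;
}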
However, if you are using something like
#define PI 3.141592653589793
your teacher may rightfully point out that it is much better to declare PI as a constant with the appropriate type, e.g.,
const double PI = 3.141592653589793;
as it allows the compiler to do type checking when PI is used.
Similarly (as mentioned by John Bode above), the use of function-like macros may be disapproved of, especially in C++ where templates can be used. So instead of
#define SQ(X) ((X)*(X))
consider using
double SQ(double X) { return X * X; }
or, in C++, better yet,
template <typename T> T SQ(T X) { return X * X; }
Once again, the idea is that by using the facilities of the language instead of the preprocessor, you allow the compiler to type check and also (possibly) generate better code.
Once you have enough coding experience, you'll know exactly when it is appropriate to use #define. Until then, I think it is a good idea for your teacher to impose certain rules and coding standards, but preferably they themselves should know, and be able to explain, the reasons. A blanket ban on #define is nonsensical.
That's completely false; macros are heavily used in C. Beginners often use them badly, but that's not a reason to ban them from the industry. A classic bad usage is #define successor(n) n + 1. If you expect 2 * successor(9) to give 20, then you're wrong, because that expression will be translated as 2 * 9 + 1, i.e. 19, not 20. Use parentheses to get the expected result.
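The conventional fix is to parenthesize both the macro argument and the whole expansion; a sketch:

/* Bad: 2 * successor(9) expands to 2 * 9 + 1, which is 19. */
#define successor_bad(n) n + 1

/* Good: 2 * successor(9) expands to 2 * ((9) + 1), which is 20. */
#define successor(n) ((n) + 1)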
No. It is not banned. And truth to be told, it is impossible to do non-trivial multi-platform code without it.
No, your professor is wrong, or you misheard something.
#define is a preprocessor directive, and preprocessor macros are needed for conditional compilation and for conventions that aren't simply built into the C language. For example, in a recent C standard, namely C99, support for booleans was added; the familiar bool, true, and false spellings are provided not natively by the language but as preprocessor #defines over the built-in _Bool type. See this reference to stdbool.h.
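For example, a minimal sketch of C99 stdbool.h usage:

#include <stdio.h>
#include <stdbool.h>   /* bool, true, and false are macros over _Bool */

int main(void)
{
    bool ready = false;
    ready = true;
    if (ready)
        printf("ready\n");
    return 0;
}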
Macros are used pretty heavily in GNU-land C, and without conditional preprocessor commands there'd be no way to properly handle multiple inclusions of the same source files, so that makes them seem like essential language features to me.
Maybe your class is actually on C++, which, despite many people's failure to do so, should be distinguished from C, as it is a different language; I can't speak for macros there. Or maybe the professor meant he's banning them in his class. Anyhow, I'm sure the SO community would be interested in hearing which standard he's talking about, since I'm pretty sure all C standards support the use of macros.
Contrary to all of the answers to date, the use of preprocessor directives is oftentimes banned in high-reliability computing. There are two exceptions to this, the use of which are mandated in such organizations. These are the #include directive, and the use of an include guard in a header file. These kinds of bans are more likely in C++ rather than in C.
Here's but one example: 16.1.1 Use the preprocessor only for implementing include guards, and including header files with include guards.
Another example, this time for C rather than C++: JPL Institutional Coding Standard for the C Programming Language . This C coding standard doesn't go quite so far as banning the use of the preprocessor completely, but it comes close. Specifically, it says
Rule 20 (preprocessor use)
Use of the C preprocessor shall be limited to file inclusion and simple macros. [Power of Ten Rule 8].
I'm neither condoning nor decrying those standards. But to say they don't exist is ludicrous.
If you want your C code to interoperate with C++ code, you will want to declare your externally visible symbols, such as function declarations, in the extern "C" namespace. This is often done using conditional compilation:
#ifdef __cplusplus
extern "C" {
#endif
/* C header file body */
#ifdef __cplusplus
}
#endif
Look at any header file and you will see something like this:
#ifndef _FILE_NAME_H
#define _FILE_NAME_H

// Exported functions, structs, defines, etc. go here

#endif /* _FILE_NAME_H */

These defines are not only allowed but critical, because the header file may be referenced from many files and would otherwise be included again each time. Without the guard you would be redefining everything between the guard lines multiple times, which at best fails to compile and at worst leaves you scratching your head later wondering why your code doesn't work the way you want it to.
The compiler also provides predefined macros, as seen here with gcc, that let you test for things like the version of the compiler, which is very useful. I'm currently working on a project that needs to compile with avr-gcc, but we also have a testing environment that we run our code through. To prevent the AVR-specific files and registers from keeping our test code from running, we do something like this:
#ifdef __AVR__
//avr specific code here
#endif
With this in the production code, the complementary test code can compile without avr-gcc, and the code above is only compiled when using avr-gcc.
If you had just mentioned #define, I would have thought maybe he was alluding to its use for enumerated constants, which are better expressed with enum to avoid stupid errors such as assigning the same numerical value twice.
Note that even for this situation, it is sometimes better to use #defines than enums, for instance if you rely on numerical values exchanged with other systems and the actual values must stay the same even if you add/delete constants (for compatibility).
However, adding that #if, #ifdef, etc. should not be used either is just weird. Of course, they should probably not be abused, but in real life there are dozens of reasons to use them.
What he may have meant could be that (where appropriate), you should not hardcode behaviour in the source (which would require re-compilation to get a different behaviour), but rather use some form of run-time configuration instead.
That's the only interpretation I could think of that would make sense.
I saw a line of C that looked like this:
!ErrorHasOccured() ??!??! HandleError();
It compiled correctly and seems to run OK. It seems to check whether an error has occurred, and if it has, to handle it. But I'm not really sure what it's actually doing or how it's doing it. It does look like the programmer is trying to express their feelings about errors.
I have never seen the ??!??! before in any programming language, and I can't find documentation for it anywhere. (Google doesn't help with search terms like ??!??!). What does it do and how does the code sample work?
??! is a trigraph that translates to |. So it says:
!ErrorHasOccured() || HandleError();
which, due to short circuiting, is equivalent to:
if (ErrorHasOccured())
HandleError();
See Guru of the Week (it deals with C++, but it is relevant here), which is where I picked this up.
Possible origin of trigraphs; or, as @DwB points out in the comments, it's more likely due to EBCDIC being difficult (again). This discussion on the IBM developerWorks board seems to support that theory.
From ISO/IEC 9899:1999 §5.2.1.1, footnote 12 (h/t @Random832):
The trigraph sequences enable the input of characters that are not defined in the Invariant Code Set as
described in ISO/IEC 646, which is a subset of the seven-bit US ASCII code set.
Well, why this exists in general is probably different than why it exists in your example.
It all started half a century ago with repurposing hardcopy communication terminals as computer user interfaces. In the initial Unix and C era that was the ASR-33 Teletype.
This device was slow (10 cps), noisy, and ugly, and its view of the ASCII character set ended at 0x5F, so it had (look closely at the picture below) none of the keys:
{ | } ~
The trigraphs were defined to fix a specific problem. The idea was that C programs could use the ASCII subset found on the ASR-33 and in other environments missing the high ASCII values.
Your example is actually two of ??!, each meaning |, so the result is ||.
However, people writing C code almost by definition had modern equipment,[1] so my guess is: someone showing off or amusing themself, leaving a kind of Easter egg in the code for you to find.
It sure worked; it led to a wildly popular SO question.
(Image: ASR-33 Teletype)
[1] For that matter, the trigraphs were invented by the ANSI committee, which first met after C became a runaway success, so none of the original C code or coders would have used them.
It's a C trigraph. ??! is |, so ??!??! is the operator ||
As already stated, ??!??! is essentially two trigraphs (??! and ??! again) mushed together that get replaced (translated) to ||, i.e. the logical OR, by the preprocessor.
The following table containing every trigraph should help disambiguate alternate trigraph combinations:
Trigraph Replaces
??( [
??) ]
??< {
??> }
??/ \
??' ^
??= #
??! |
??- ~
Source: C: A Reference Manual 5th Edition
So trigraphs that look like ??(??) will eventually map to [], ??(??)??(??) will get replaced by [][], and so on; you get the idea.
Since trigraphs are substituted during preprocessing, you can use cpp to get a view of the output yourself, using a silly trigr.c program:
int main(void){ const char *s = "??!??!"; }
and processing it with:
cpp -trigraphs trigr.c
You'll get a console output of
int main(void){ const char *s = "||"; }
As you can see, the option -trigraphs must be specified, or else cpp will issue a warning; this indicates how trigraphs are a thing of the past, of no modern value other than confusing people who might bump into them.
As for the rationale behind the introduction of trigraphs, it is better understood when looking at the history section of ISO/IEC 646:
ISO/IEC 646 and its predecessor ASCII (ANSI X3.4) largely endorsed existing practice regarding character encodings in the telecommunications industry.
As ASCII did not provide a number of characters needed for languages other than English, a number of national variants were made that substituted some less-used characters with needed ones.
(emphasis mine)
So, in essence, some needed characters (those for which a trigraph exists) were replaced in certain national variants. This led to the alternative representation using trigraphs made up of characters that all variants still had around.
I'm using VS 2010 Pro.
First, C doesn't have a bool type? I just have to use int with 0/1. Seems odd as most languages consider boolean a standard type.
Also, I have Visual Studio 2010 Pro, but it doesn't have a "C project" type. I just created an empty C++ project; the file names end in .c.
The problem with this is that the keywords are messed up (the editor shows bool as highlighted/valid, but the compiler doesn't like it).
I went to repair/add components and they have C#, F#, C++, Visual Basic; but no C?
The newest C standard (C99) does have a bool type. Just include stdbool.h and you can use it. Unfortunately, MSVC does not have proper support for C at all; it offers only partial C89.
The current C language (C99) has a bool type (actually _Bool, but including stdbool.h declares a typedef alias bool for it), but since you're using MSVC, that's not available to you. In any case, using boolean types in C is completely non-idiomatic and largely useless. Just use int like everyone else. Or if you need a giant array of them, make your own bit-array implementation.
C did not have an actual Boolean type until C99.
As a result, idiomatic C doesn't really use boolean-valued symbols or expressions as such (i.e., you won't see many explicit tests against "true" or "false"). Instead, any zero-valued integral expression or a NULL pointer will evaluate to "false", and any non-zero-valued integral expression or a non-NULL pointer will evaluate to "true". So you'll see a lot of code like:
foo *bar = malloc(sizeof *bar * ...);
if (bar) // equivalent to writing bar != NULL
{
// bar is non-NULL
}
Relational and equality expressions such as a == b or c < d will evaluate to an integral type with a value of either 1 (true) or 0 (false).
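You can verify this directly; a tiny sketch:

#include <stdio.h>

int main(void)
{
    int a = 2, b = 3;
    printf("%d\n", a == b);  /* prints 0: false */
    printf("%d\n", a < b);   /* prints 1: true */
    return 0;
}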
Some people introduce their own TRUE or FALSE symbolic constants by doing something like
#define TRUE (1) // or (!FALSE), or (1==1), or...
#define FALSE (0) // or (!TRUE), or (1==0), or ...
Unfortunately, some of those people occasionally manage to misspell 0 or 1 (or the expressions that are supposed to evaluate to 0 or 1); I once spent an afternoon chasing my tail because someone screwed up and dropped a header where TRUE == FALSE.
Not coincidentally, that was the day I stopped using symbolic constants for Boolean values altogether.
See R.'s answer for information about the bool type.
Unfortunately, MSVC doesn't support C99 when it's compiling C code - it has bits and pieces (generally things in the C99 library that are required by C++), but for the most part it only supports C90.
As for bool still being highlighted in the editor - the highlighting in MSVC may be sophisticated, but it doesn't take into account the differentiation between C, C++, and C++/CLI. For example, if you use a construct that's CLI-only, it'll be highlighted as such even if your project has nothing to do with CLI.
If you're developing in C, I'd recommend a different compiler, as VC++ is not a modern C compiler and does not support the C99 standard. If you're on Windows, try MinGW, which basically gets you GCC with access to Windows-y API stuff.
If you're set on using Visual Studio, create your own header file to use instead of stdbool.h:
#pragma once
#define false 0
#define true 1
#define bool int
I found that Visual Studio 2010 complained if I tried to use a typedef instead of a #define to define bool.
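A usage sketch, assuming the header above was saved as mybool.h (a hypothetical file name of my choosing):

#include <stdio.h>
#include "mybool.h"   /* the three #defines shown above */

int main(void)
{
    bool done = false;
    done = true;
    printf("done = %d\n", done);   /* prints 1 */
    return 0;
}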
Concerning the bool type:
In C, any non-zero value is regarded as "true" (and zero is "false"). This comes in handy when, say, checking the value of a pointer:
if ((ptr = malloc(sizeof(foo))) != 0) ...
can be shortened to:
if (ptr = malloc(sizeof(foo))) ...
C was designed to be a "mid-level" language, i.e. in between assembler and traditional "high-level" languages. It was also designed to be compact/concise. So it has a minimalist flavor, exemplified in its support for "shorthand" like the above, and also in the omission of a built-in Boolean data type (up to C99, as others have pointed out).
Many libraries/frameworks (ones that I'm aware of anyway) do something like the following
#define BOOL int
#define FALSE 0
#define TRUE (!FALSE)
This does mean that you should avoid directly comparing values/results to TRUE. Consider the following. Given int a = 2; int b = 3;, then both if (a) and if (b) evaluate to true, but a and b are not equal.
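A sketch of the trap (the is_ready function is hypothetical, invented for illustration):

#include <stdio.h>

#define FALSE 0
#define TRUE (!FALSE)   /* i.e. 1 */

/* Hypothetical: returns a nonzero "true" value that happens not to be 1. */
int is_ready(void) { return 2; }

int main(void)
{
    if (is_ready() == TRUE)   /* WRONG: 2 == 1 is false, so this is skipped */
        printf("compared to TRUE: ready\n");
    if (is_ready())           /* RIGHT: any nonzero value is true */
        printf("tested directly: ready\n");
    return 0;
}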
Concerning syntax highlighting:
C++ does have a bool type, which I'm guessing is why the compiler highlights the word. However, the fact that your source file ends in .c marks it as C code, so the type isn't allowed.
Seems like the syntax highlighting should catch this, though.
Concerning the absence of C components:
If I understand the question correctly: the short answer is, in order to do "managed code" (ie .NET) development -- which is what you'd have to be doing in order to use .NET components -- you need to use a language supported by the .NET runtime, i.e. C#, VB(.NET), F#, or C++.
(C++ is available in both "managed" and "unmanaged" flavors, meaning you can develop either against .NET or the Windows API.)
Are you under some sort of directive to use C as opposed to other languages?