Check if a variable has been set using Fortran 77? - loops

I'm working on some code where a great deal of the variables are named abc1, abc2, abc3, etc. I'm wondering if anyone knows if it's possible to check if the variable has been set so I can loop through a group of them easily, e.g.
do lbl1 i = 1,100
IF (.NOT. NULL(abc&i)) THEN
print*, abc&i
END IF
lbl1..continue
Any info would be great, thanks very much.

There is no way to do this from within Fortran: there is no intrinsic to check that a variable has been defined (other than NULL() and that only works for pointers). You have three real options here:
Get the compiler to complain about the use of undefined variables at compile time. I cannot think of a compiler which does not do this if you turn on its standard warnings. For example, g95 will say "Warning (113): Variable 'a' at (1) is used but not set" when used with -Wall but will just produce code which produces random rubbish if not. The problem with this approach is that not all such cases can be caught at compile time - think about passing an undefined variable into a subroutine when you compile the two procedures separately before linking.
Make all variables "invalid" and check for this in the program. One way would be to do this by hand (in the code), but Pete's second approach using a compiler flag is better. This is easier with reals than with integers because you can set the invalid value of an undefined variable to NaN, which should cause the executable to stop running (and give a useful backtrace) if it's used without being defined. For g95, -freal=NaN and -fpointer=invalid are useful; -finteger=-9999 may help but probably will not give quite as helpful debugging info.
Do the checks at runtime by monitoring how the executable is accessing memory. I've had success with Valgrind's memcheck. All you need to do is compile the code with debugging flags (-g or whatever) and run your program via valgrind with --undef-value-errors=yes --track-origins=yes and you should get a useful report of which variables were used undefined with backtraces for each case. This will be quite slow (all memory access gets tracked and a bitmap of status updated) but it does work, even for Fortran.
In practice 1 and 2 can be combined to catch most cases - and you really want most cases sorted out before trying to wade through a massive valgrind output looking for the difficult cases.

I can think of two related options:
When the program starts up, set all of these variables to an invalid value (-9999). Check the value at run-time.
Some compilers have flags to do just this. For example, the IBM compiler lets you initialize to a specific hex value:
-qinitauto=<hex_value> | -qnoinitauto
Initializes each byte or word of storage for
automatic variables to the specified hexadecimal
value <hex_value>. This generates extra code and
should only be used for error determination.
Default: -qnoinitauto
However, as the man page says, "This generates extra code and should only be used for error determination."


Calculate an arbitrary type's size without executing a program

Given any type in a C program, I would like to know its size, such as one would when executing the following line,
printf("%zu\n", sizeof(myType));
without actually executing the program. The solution must work with arbitrary types whose size can be determined at compile time, such as user-defined structs.
The rationale is that, if the size can be known at compile time, there should be a way to extract that information without having to run the program. Possibly something a bit more elegant than having to parse the resulting assembler source or binary for literal constants, but if that's the only way, I'll take what I can get.
This question doesn't quite work for me, since the OP's solution relies on executing the code, and the most-voted answer relies on the preprocessor's info directive actually expanding macros (apparently my toolchain doesn't).
For background, I'm developing for PIC18 MCUs and using the XC8 compiler.
What I ultimately want is to verify that some structures I've defined take up their expected size in memory.
This is a classic use case for static assertions. If your compiler supports _Static_assert, you can write
_Static_assert(sizeof(mystruct) == expected_size, "Invalid struct size.");
If you use an older compiler that does not support C11 yet, use a common work-around that relies on declaring an array type with negative size:
#define CHECK_SIZE(x,e) if(sizeof(char[2*(sizeof(x)==e)-1])==1);

C dummy operations

I can't imagine what the compiler does when there is no lvalue, for instance like this:
number>>1;
My intuition tells me that the compiler will discard this line from compilation due to optimizations and if the optimization is removed what happens?
Does it use a register to do the manipulation? Or does it behave as if it were a function call, so the parameters are passed on the stack and the memory used is then marked as freed? Or does it transform it into a NOP?
Can I see what is happening using the Visual Studio debugger?
Thank you for your help.
In the example you give, it discards the operation. It knows the operation has no side effects and therefore doesn't need to emit the code to execute the statement in order to produce a correct program. If you disable optimizations, the compiler may still emit code. If you enable optimizations, the compiler may still emit code, too -- it's not perfect.
You can see the code the compiler emits using the /FAsc command line option of the Microsoft compiler. That option creates a listing file which has the object code output of the compiler interspersed with the related source code.
You can also use "view disassembly" in the debugger to see the code generated by the compiler.
Using either "view disassembly" or /FAsc on optimized code, I'd expect to see no emitted code from the compiler.
Assuming that number is a regular variable of integer type (not volatile) then any competent optimizing compiler (Microsoft, Intel, GNU, IBM, etc) will generate exactly NOTHING. Not a nop, no registers are used, etc.
If optimization is disabled (in a "debug build"), then the compiler may well "do what you asked for", because it doesn't realize the code has no side effects. In this case, the value will be loaded into a register and shifted right once; the result is not stored anywhere. The compiler performs "useless code elimination" as one of its optimization steps - I'm not sure which one, but for this sort of relatively simple thing, I expect the compiler to figure it out at fairly basic optimization settings. In some cases, where loops are concerned, the compiler may not optimize away the code until more advanced optimization settings are enabled.
As mentioned in the comments, if the variable is volatile, then the read of the memory represented by number will have to be made, as the compiler MUST read volatile memory.
In Visual studio, if you "view disassembly", it should show you the code that the compiler generated.
Finally, if this was C++, there is also the possibility that the variable is not a regular integer type, the function operator>> is being called when this code is seen by the compiler - this function may have side-effects besides returning a result, so may well have to be performed. But this can't be the case in C, since there is no operator overloading.

Handling null pointers on AIX with GCC C

We have code written in C that sometimes doesn't handle null pointers very well.
The code was originally written on Solaris and such pointers cause a segmentation fault. Not ideal but better than ploughing on.
Our experience is that if you read from a null pointer on AIX you get 0. If you use the xlc compiler you can add an option -qcheck=all to trap these pointers. But we use gcc (and want to continue using that compiler). Does gcc provide such an option?
Does gcc provide such an option?
I'm sheepishly volunteering the answer: no, it doesn't - although all I can point to is the absence of any mention of runtime NULL checks in the gcc documentation.
The problem you're tackling is that you're trying to make undefined behavior a little more defined in a program that's poorly-written.
I recommend that you bite the bullet and either switch to xlc or manually add NULL checks to the code until the bad behavior has been found and removed.
Consider:
Making a macro to null-check a pointer
Adding that macro after pointer assignments
Adding that macro to the entry point of functions that accept pointers
As bugs are removed, you can begin to remove these checks.
Please do us all a favor and add proper NULL checks to your code. Not only will you have a slight gain in performance by checking for NULL only when needed, rather than having the compiler perform the check everywhere, but your code will be more portable to other platforms.
And let's not mention the fact that you will be more likely to print a proper error message rather than have the compiler drop some incomprehensible stack dump/source code location/error code that will not help your users at all.
AIX uses the concept of a NULL page. Essentially, NULL (i.e. virtual address 0x0) is mapped to a location that contains a whole bunch of zeros. This allows string manipulation code e.t.c. to continue despite encountering a NULL pointer.
This is contrary to most other Unix-like systems, but it is not in violation of the C standard, which considers dereferencing NULL an undefined operation. In my opinion, though, this is woefully broken: it takes an application that would crash violently and turns it into one that ignores programming errors silently, potentially producing totally incorrect results.
As far as I know, GCC has no options to work around fundamentally broken code. Even historically supported patterns, such as writable string literals, have been slowly phased out in newer GCC versions.
There might be some support when using memory debugging options such as -fmudflap, but I don't really know - in any case you should not use debugging code in production systems, especially for forcing broken code to work.
Bottom line: I don't think that you can avoid adding explicit NULL checks.
Unfortunately we now come to the basic question: where should the NULL checks be added? I suppose having the compiler add such checks indiscriminately would help, provided that you add an explicit check yourself whenever you discover an issue.
Unfortunately, there is no Valgrind support for AIX. If you have the cash, you might want to have a look at IBM Rational Purify Plus for AIX - it might catch such errors.
It might also be possible to use xlc on a testing system and gcc for everything else, but unfortunately they are not fully compatible.

Harmful C Source File Check?

Is there a way to programmatically check if a single C source file is potentially harmful?
I know that no check will yield 100% accuracy -- but am interested at least to do some basic checks that will raise a red flag if some expressions / keywords are found. Any ideas of what to look for?
Note: the files I will be inspecting are relatively small in size (a few hundred lines at most), implementing numerical analysis functions that all operate in memory. No external libraries (except math.h) shall be used in the code. Also, no I/O should be used (functions will be run on in-memory arrays).
Given the above, are there some programmatic checks I could do to at least try to detect harmful code?
Note: since I don't expect any I/O, if the code does I/O -- it is considered harmful.
Yes, there are programmatic ways to detect the conditions that concern you.
It seems to me you ideally want a static analysis tool to verify that the preprocessed version of the code:
Doesn't call any functions except those it defines and non-I/O functions in the standard library,
Doesn't do any bad stuff with pointers.
By preprocessing, you get rid of the problem of detecting macros, possibly-bad-macro content, and actual use of macros. Besides, you don't want to wade through all the macro definitions in standard C headers; they'll hurt your soul because of all the historical cruft they contain.
If the code only calls its own functions and trusted functions in the standard library, it isn't calling anything nasty. (Note: It might be calling some function through a pointer, so this check either requires a function-points-to analysis or the agreement that indirect function calls are verboten, which is actually probably reasonable for code doing numerical analysis).
The purpose of checking for bad stuff with pointers is so that it doesn't abuse pointers to manufacture nasty code and pass control to it. This first means, "no casts to pointers from ints" because you don't know where the int has been :-}
For the who-does-it-call check, you need to parse the code and name/type resolve every symbol, and then check call sites to see where they go. If you allow pointers/function pointers, you'll need a full points-to analysis.
One of the standard static-analyzer tool companies (Coverity, Klocwork) likely provides some kind of method of restricting which functions a code block may call. If that doesn't work, you'll have to fall back on more general analysis machinery like our DMS Software Reengineering Toolkit with its C Front End. DMS provides customizable machinery to build arbitrary static analyzers for a language description provided to it as a front end. DMS can be configured to do exactly test 1), including the preprocessing step; it also has full points-to and function-points-to analyzers that could be used to do the points-to checking.
For 2) "doesn't use pointers maliciously", again the standard static analysis tool companies provide some pointer checking. However, here they have a much harder problem because they are statically trying to reason about a Turing machine. Their solution is either to miss cases or to report false positives. Our CheckPointer tool is a dynamic analysis, that is, it watches the code as it runs, and if there is any attempt to misuse a pointer CheckPointer will report the offending location immediately. Oh, yes, CheckPointer outlaws casts from ints to pointers :-} So CheckPointer won't provide a static diagnostic "this code can cheat", but you will get a diagnostic if it actually attempts to cheat. CheckPointer has rather high overhead (all that checking costs something), so you probably want to run your code with it for a while to gain some faith that nothing bad is going to happen, and then stop using it.
EDIT: Another poster says There's not a lot you can do about buffer overwrites for statically defined buffers. CheckPointer will do those tests and more.
If you want to make sure it's not calling anything not allowed, then compile the piece of code and examine what it's linking to (say via nm). Since you're hung up on doing this by a "programmatic" method, just use python/perl/bash to compile then scan the name list of the object file.
There's not a lot you can do about buffer overwrites for statically defined buffers, but you could link against an electric-fence type memory allocator to prevent dynamically allocated buffer overruns.
You could also compile and link the C-file in question against a driver which would feed it typical data while running under valgrind which could help detect poorly or maliciously written code.
In the end, however, you're always going to run up against the "does this routine terminate" question, which is famous for being undecidable. A practical way around this would be to compile your program and run it from a driver which would alarm-out after a set period of reasonable time.
EDIT: Example showing use of nm:
Create a C snippet defining function foo which calls fopen:
#include <stdio.h>
void foo(void) {
    FILE *fp = fopen("/etc/passwd", "r");
    (void)fp;    /* snippet only: the pointer is deliberately unused */
}
Compile with -c, and then look at the resulting object file:
$ gcc -c foo.c
$ nm foo.o
0000000000000000 T foo
U fopen
Here you'll see that there are two symbols in the foo.o object file. One is defined: foo, the name of the subroutine we wrote. And one is undefined: fopen, which will be linked to its definition when the object file is linked together with the other C files and the necessary libraries. Using this method, you can see immediately if the compiled object references anything outside of its own definitions and, by your rules, can be considered "bad".
You could do some obvious checks for "bad" function calls like network IO or assembly blocks. Beyond that, I can't think of anything you can do with just a C file.
Given the nature of C you're just about going to have to compile to even get started. Macros and such make static analysis of C code pretty difficult.

C optimization breaks algorithm

I am programming an algorithm that contains 4 nested for loops. The problem is that at each level a pointer is updated. The innermost loop only uses 1 of the pointers. The algorithm does a complicated count. When I include a debugging statement that logs the combination of the indexes and the results of the count, I get the correct answer. When the debugging statement is omitted, the count is incorrect. The program is compiled with the -O3 option on gcc. Why would this happen?
Always put your code through something like valgrind, Purify, etc, before blaming the optimizer. Especially when blaming things related to pointers.
It's not to say the optimizer isn't broken, but more than likely, it's you. I've worked on various C++ compilers and seen my share of seg faults that only happen with optimized code. Quite often, people do things like forget to count the \0 when allocating space for a string, etc. And it's just luck at that point on which pages you're allocated when the program runs with different -O settings.
Also, important questions: are you dealing with restricted pointers at all?
Print out the assembly code generated by the compiler, with optimizations. Compare to an assembly language listing of the code without optimizations.
The compiler may have figured out that some of the variables can be eliminated because they are not used in the computation. You can try to match wits with the compiler and factor out variables that are not used.
The compiler may have substituted a for loop with an equation. In some cases (after removing unused variables), the loop can be replaced by a simple equation. For example, a loop that repeatedly adds a constant to a variable can be replaced by a single multiplication.
You can tell the compiler to leave a variable alone by declaring it volatile. The volatile keyword tells the compiler that the variable's value may be altered by means outside of the program, and that it should neither cache nor eliminate the variable. This is a popular technique in embedded systems programming.
Most likely your program somehow exploits undefined behaviour which works in your favour without optimisation, but with -O3 optimisation it turns against you.
I had a similar experience with one of my projects - it worked fine with -O2 but broke with -O3. I used setjmp()/longjmp() heavily in my code and had to make half of the variables volatile to get it working, so I decided that -O2 was good enough.
Sounds like something is accessing memory that it shouldn't. Debugging symbols are famous for postponing bad news.
Is it pure C or there's any crazy thing like inline assembly?
In any case, run it under valgrind to check whether this might be happening. Also, did you try compiling with different optimization levels? And without debugging and optimizations?
Without code this is difficult, but here's some things that I've seen before.
Debugging print statements often end up being the only user of a value that the compiler knows about. Without the print statement the compiler thinks that it can do away with any operations and memory requirements that would otherwise be required to compute or store that value.
A similar thing happens when you have side effects included within the argument list of your print statement.
printf("%i %i\n", x, y = x - z);
Another type of error can be:
for (i = 0; i < END; i++) {
    int *a = &i;
    foo(a);
}
if (bar) {
    int *a;
    baz(a);
}
This code would likely have the intended result because the compiler would probably choose to store both a variables in the same location, so the second a would have the last value that the first a had.
Inline functions can have some strange behavior, or you may somehow rely on them not being inlined (or sometimes the other way round), which is often the case for unoptimized code.
You should definitely try compiling with warnings turned up to the maximum (-Wall for gcc).
That will often tell you about the risky code.
(edit)
Just thought of another.
If you have more than one way to reference a variable then you can have issues that work right without optimization, but break when optimization is turned up. There are two main ways this can happen.
The first is if a value can be changed by a signal handler or another thread. You need to tell the compiler about that so it will know that any access to assume that the value needs to be reloaded and/or stored. This is done by using the volatile keyword.
The second is aliasing. This is when you create two different ways to access the same memory. Compilers are usually quick to assume that you are aliasing with pointers, but not always. Also, there are optimization flags for some compilers that tell them to be less quick to make those assumptions, as well as ways you could fool the compiler (crazy stuff like while (foo != bar) { foo++; } *foo = x; not being obviously a copy of bar to foo).
