What is the fastest way to find unused enum members?
Commenting values out one by one won't work because I have almost 700 members and want to trim off a few unused ones.
I am not aware of any compiler warning for this, but you could try the splint static analyzer. According to its documentation (emphasis mine):
Splint detects constants, functions, parameters, variables, types,
enumerator members, and structure or union fields that are declared
but never used.
I checked, and it works as intended. Here is example code:
#include <stdio.h>

enum Month { JAN, FEB, MAR };

int main()
{
    enum Month m1 = JAN;
    printf("%d\n", m1);
}
By running the splint command, you will obtain the following messages:
main.c:3:19: Enum member FEB not used
A member of an enum type is never used. (Use -enummemuse to inhibit warning)
main.c:3:24: Enum member MAR not used
Note that »unused« is a relatively dangerous term here.
#include <stdio.h>

typedef enum type_t { VALUE_A, VALUE_B, VALUE_C } type_t;

int main() {
    printf("A = %d, ", VALUE_A);
    printf("C = %d\n", VALUE_C);
    return 0;
}
will print A = 0, C = 2, but removing the »unused« VALUE_B changes the output to A = 0, C = 1.
If you persist such values, do arithmetic on them, or anything in that area, you might end up changing the behavior of your program.
Change the names of all the enums (by, say, adding a _ before their name). Compile. You'll get a lot of errors because it won't find the previous enum names (obviously). A bit of grep-foo and making sure the compiler / build system doesn't stop on the first error - and you'll have a list of all the enums in use!
At least, that's how I'd do it.
Related
Just curious, what actually happens if I define a zero-length array int array[0]; in code? GCC doesn't complain at all.
Sample Program
#include <stdio.h>
int main() {
    int arr[0];
    return 0;
}
Clarification
I'm actually trying to figure out if zero-length arrays initialised this way, instead of being pointed at like the variable length in Darhazer's comments, are optimised out or not.
This is because I have to release some code out into the wild, so I'm trying to figure out if I have to handle cases where the SIZE is defined as 0, which happens in some code with a statically defined int array[SIZE];
I was actually surprised that GCC does not complain, which led to my question. From the answers I've received, I believe the lack of a warning is largely due to supporting old code which has not been updated with the new [] syntax.
Because I was mainly wondering about the error, I am tagging Lundin's answer as correct (Nawaz's was first, but it wasn't as complete) -- the others were pointing out its actual use for tail-padded structures, which, while relevant, isn't exactly what I was looking for.
An array cannot have zero size.
ISO 9899:2011 6.7.6.2:
If the expression is a constant expression, it shall have a value greater than zero.
The above text applies to a plain array (paragraph 1). For a VLA (variable-length array), the behavior is undefined if the expression's value is less than or equal to zero (paragraph 5). This is normative text in the C standard; a compiler is not allowed to implement it differently.
gcc -std=c99 -pedantic gives a warning for the non-VLA case.
As per the standard, it is not allowed.
However, it has been common practice for C compilers to treat such declarations as flexible array member (FAM) declarations:
C99 6.7.2.1, §16: As a special case, the last element of a structure with more than one named member may have an incomplete array type; this is called a flexible array member.
The standard syntax of a FAM is:
struct Array {
    size_t size;
    int content[];
};
The idea is that you would then allocate it so:
void foo(size_t x) {
    /* sizeof(struct Array) accounts for any padding before the flexible member */
    struct Array *array = malloc(sizeof(struct Array) + x * sizeof(int));
    array->size = x;
    for (size_t i = 0; i != x; ++i) {
        array->content[i] = 0;
    }
}
You might also use it statically (gcc extension):
struct Array a = { 3, { 1, 2, 3 } };
This is also known as tail-padded structures (this term predates the publication of the C99 Standard) or struct hack (thanks to Joe Wreschnig for pointing it out).
However, this syntax was standardized (and its effects guaranteed) only as late as C99. Before that, a constant size was necessary:
1 was the portable way to go, though it was rather strange.
0 was better at indicating intent, but not legal as far as the Standard was concerned, and was supported as an extension by some compilers (including gcc).
The tail-padding practice, however, relies on the fact that storage is available (a careful malloc), so it is not suited to stack usage in general.
In Standard C and C++, zero-size arrays are not allowed.
If you're using GCC, compile with the -pedantic option. It will give a warning:
zero.c:3:6: warning: ISO C forbids zero-size array 'a' [-pedantic]
In the case of C++, it gives a similar warning.
It's totally illegal, and always has been, but a lot of compilers
neglect to signal the error. I'm not sure why you want to do this.
The one use I know of is to trigger a compile time error from a boolean:
char someCondition[ condition ];
If condition is false, then I get a compile time error. Because
compilers do allow this, however, I've taken to using:
char someCondition[ 2 * condition - 1 ];
This gives a size of either 1 or -1, and I've never found a compiler
which would accept a size of -1.
Another use of zero-length arrays is for making variable-length objects (pre-C99). Zero-length arrays are different from flexible array members, which are written [] without the 0.
Quoted from gcc doc:
Zero-length arrays are allowed in GNU C. They are very useful as the last element of a structure that is really a header for a variable-length object:
struct line {
    int length;
    char contents[0];
};

struct line *thisline = (struct line *)
    malloc (sizeof (struct line) + this_length);
thisline->length = this_length;
In ISO C99, you would use a flexible array member, which is slightly different in syntax and semantics:
Flexible array members are written as contents[] without the 0.
Flexible array members have incomplete type, and so the sizeof operator may not be applied.
A real-world example is zero-length arrays of struct kdbus_item in kdbus.h (a Linux kernel module).
I'll add that there is a whole page of the online gcc documentation on this topic.
Some quotes:
Zero-length arrays are allowed in GNU C.
In ISO C90, you would have to give contents a length of 1
and
GCC versions before 3.0 allowed zero-length arrays to be statically initialized, as if they were flexible arrays. In addition to those cases that were useful, it also allowed initializations in situations that would corrupt later data.
so you could
int arr[0] = { 1 };
and boom :-)
Zero-size array declarations within structs would be useful if they were allowed, and if the semantics were such that (1) they would force alignment but otherwise not allocate any space, and (2) indexing the array would be considered defined behavior in the case where the resulting pointer would be within the same block of memory as the struct. Such behavior was never permitted by any C standard, but some older compilers allowed it before it became standard for compilers to allow incomplete array declarations with empty brackets.
The struct hack, as commonly implemented using an array of size 1, is dodgy and I don't think there's any requirement that compilers refrain from breaking it. For example, I would expect that if a compiler sees int a[1], it would be within its rights to regard a[i] as a[0]. If someone tries to work around the alignment issues of the struct hack via something like
typedef struct {
    uint32_t size;
    uint8_t data[4]; // Use four, to avoid having padding throw off the size of the struct
} MyStruct;
a compiler might get clever and assume the array size really is four:
; As written
foo = myStruct->data[i];
; As interpreted (assuming little-endian hardware)
foo = ((*(uint32_t*)myStruct->data) >> (i << 3)) & 0xFF;
Such an optimization might be reasonable, especially if myStruct->data could be loaded into a register in the same operation as myStruct->size. I know nothing in the standard that would forbid such optimization, though of course it would break any code which might expect to access stuff beyond the fourth element.
You definitely can't have zero-sized arrays per the standard, but in practice most popular compilers let you do it anyway. So I will try to explain why it can be bad:
#include <cstdio>

int main() {
    struct A {
        A() {
            printf("A()\n");
        }
        ~A() {
            printf("~A()\n");
        }
        int empty[0];
    };
    A vals[3];
}
Like any human, I would expect this output:
A()
A()
A()
~A()
~A()
~A()
Clang prints this:
A()
~A()
GCC prints this:
A()
A()
A()
This is totally strange, so it is a good reason not to use empty arrays in C++ if you can avoid them.
There is also an extension in GNU C that lets you create zero-length arrays, but as I understand it, there should be at least one other member before the array in the structure, or you will get very strange results like the above if you use C++.
I have learned from school-books that a typical definition of an enum is like this:
enum weather {
    sunny,
    windy,
    cloudy,
    rain,
} weather_outside;
and then declare a var like:
enum weather weather_outside = rain;
My question is: if it is possible to use enumerated constants just by writing e.g. rain, which holds the integer 3, what exactly is the use and point of a more complicated type-like declaration such as enum weather weather_outside = rain; to have weather_outside equal to 3 (since enum values can only be compile-time constants)? Why not just use a const or a macro for it? I am a bit confused whether enums are really necessary at all?!
Enumerations in C are largely for convenience, as they are not strongly typed. They were created to express named options, but limitations of language development and the ways people adopted them for other uses led to the current situation where they are little more than named integer values.
Enumerations support situations where we have various distinct options, such as the weather conditions you show, and want to name them. Ideally, enumerations would be strongly typed, so that rain would not be easily convertible to 3 or vice-versa; writing either int x = rain; or enum weather x = 3; would yield a warning or error from the compiler.
However, there are problems doing this. Consider when we want to write code that processes all values in an enumeration, such as:
for (enum weather i = sunny; i <= rain; i = i+1)
    DoSomethingWithWeatherCondition(i);
Take a look at that update expression, i = i+1. It is perfectly natural to an experienced C programmer. (We could write i++, but that is the same thing, and it is spelled out here for illustration.) We know it updates i to the next value. But, when we think about it, what is i+1? i is an enumeration value, and 1 is an int. So we are adding two different things.
To make that work, C treated enumeration values as integers. This allows i+1 to be calculated in the ordinary way as the addition of two integers. Further, then the result is an int, and we have i = some int result, which means we have to allow assigning an int to an enum weather.
Maybe one solution to this would have been to define addition of enumeration values and integers, so that i+1 would not need to treat i as an integer; it would just be defined to return the next value in the enumeration after i. But early C development did not do this. It would have been more work, and new features in programming languages were not developed all at once with foresight about what would be useful or what problems might arise. They were often developed bit-by-bit, trying out new things with a little prototype code.
So, enumeration values were integers. Once they were integers, people started using them for purposes beyond the simple original purpose. Enumerations were useful for defining constants that could be used where the compiler needed constant expressions, including array dimensions and initial values for static objects. const did not exist at the time, but it would not have served because, having defined const int x = 3; or even static const int x = 3;, we could not use that x in float array[x];. (Variable-length arrays did not exist at the time, and even now they are not available for static objects.) We also could not use x in int m = 2*x+3; when the definition of m is outside of a function (so it defines a static object). However, if x were defined as an enumeration value rather than an int, it could be used for these purposes.
This led to enumerations being used in situations where things were not really being enumerated. For example, they are often used for bit-masks of various kinds:
enum
{
    DeviceIsReadable = 1,
    DeviceIsWriteable = 2,
    DeviceSupportsRandomAccess = 4,
    DeviceHasFeatureX = 8,
    …
};
Once people started using enumerations this way, it was too late to make enumerations strongly typed and define arithmetic on them. These bit masks have to be usable with the bitwise operators |, &, and ^, not just +1. And people were using them for arbitrary constants and arithmetic on them. It would have been too difficult to redefine this part of the C language and change existing code.
So enumerations never developed as properly separate types in C.
This is not the correct syntax:
warning: unused variable 'weather_outside'
This example:
enum light {green, yellow, red};
enum weather { sunny, windy, cloudy, rain,};
enum weather wout;
wout = red; // mismatch
gives a warning with -Wextra:
implicit conversion from 'enum light' to 'enum weather'
This can help prevent errors.
const was not there in the beginning and can be a good alternative; but with an enum you do not have to assign numbers - though you can:
enum {
    sunny, windy, cloudy,
    rain = 100, snow
};
This is probably the most compact way to get two separated ranges of values (0, 1, 2, 100, 101).
Your code is invalid. When you write
enum weather {
sunny,
windy,
cloudy,
rain,
} weather_outside;
you have already declared a new type called enum weather and a new variable named weather_outside. Doing enum weather weather_outside = rain; creates a new variable with the same name, so all compilers I've tried emit an error for it.
So the correct way is to remove the first variable definition
enum weather {
// ...
};
enum weather weather_outside = rain;
or use typedef to avoid the use of enum everywhere
typedef enum {
// ...
} weather;
weather weather_outside = rain;
The latter may not be good practice in C due to namespace pollution, and it is prohibited by the Linux kernel coding style.
Back to the main question.
what exactly is the use and point of a more complicated type-like declaration such as enum weather weather_outside = rain; to have weather_outside equal to 3 (since enum values can only be compile-time constants)? Why not just use a const or a macro for it? I am a bit confused whether enums are really necessary at all?!
Semantically, 3 doesn't mean anything, and nothing prevents rain from changing value when a new enum member is inserted before it. A named value is always better than a magic number. Besides, it limits the range of values that weather_outside can accept. If you see or have to do weather_outside = 123, then you know there's something wrong.
And to avoid using magical numbers I could also just use a macro as well #define RAIN 3
ALL CAPS ARE HARDER TO READ, and macros are generally discouraged in favor of inline (for functions) or const (for values). But most importantly:
enum allows the debugger to show the current value as a name, which is super helpful when debugging. No one knows what 123 means, but they surely understand what windy represents.
It may not be as useful in this example, but suppose you have a huge enum of 200 different values: how do you know what the 155th item is without counting? The middle items may also be renumbered, so their values don't correspond to their positions anymore.
I'm sure you won't be able to remember all those 200 values when you have 200 const or #define lines. Having to keep looking at the header file for the values is tedious. And how would you get values like const int my_weather = sunny | windy or #define RAIN (cloudy + floody)? No need to keep track of those with enum. It just works:
enum {
    sunny = X,
    windy = Y,
    my_weather = sunny | windy,
    cloudy,
    floody,
    rain = cloudy + floody
};
enum allows you to use the constant in an array declaration
enum array_items {
    SPRING,
    SUMMER,
    FALL,
    WINTER,
    NUMBER_OF_SEASONS
};

int main(void)
{
    int a[NUMBER_OF_SEASONS] = { 1, 2, 3, 4 };
    // const int MAX_LENGTH = 4;
    // int b[MAX_LENGTH]; /* doesn't work without VLA */
    return a[0];
}
And for an example where the type may be useful, enum text_color { ... }; void set_text_color(enum text_color col); – mediocrevegetable1
I can just as well call set_text_color(2) and get no warning whatsoever from my compiler!
It's a limitation of C and gcc, because enum is just an integer type in C instead of a real type like in C++, so probably gcc can't do a lot of checks for it. But ICC can warn you about that. Clang also has better warnings than gcc. See:
How to make gcc warn about passing wrong enum to a function
Is there a warning for assigning an enum variable with a value out of the range of the enum?
Warn if invalid value for an enum is passed?
Typesafe enums in C?
Sure, a C enum doesn't protect you from shooting yourself in the foot the way a C++ enum class does, but it's much better than macros or constants.
Functionally the two methods are equivalent, but enums allow you to better express that something is one of several named choices. It is easier to understand "Rainy" than "3", or "South" rather than "2". It also puts a limit on which values the enumeration can take*.
Using a typedef can help in making the code less verbose:
typedef enum
{
    SUNNY,
    WINDY,
    CLOUDY,
    RAIN
} Weather;

Weather weather_outside = RAIN;

switch (weather_outside)
{
case SUNNY:
    printf("It's sunny\n");
    break;
case WINDY:
    printf("It's windy\n");
    break;
// ...
}
An additional advantage here is that the compiler may emit a warning if not all enumerated values are handled in the switch, which it wouldn't do if weather_outside were an integer.
Taking a look at function declarations, this:
void takeStep(Direction d)
is more expressive than:
void takeStep(int d)
Of course, you could write int direction, but this is using a variable name to express a type.
[*] It is technically allowed to write Weather weather_outside = 12, as enum values are integer constants, but this should be considered a code smell.
Yes, at one level, using an integer variable and a set of preprocessor #defines is just about completely equivalent to using an enum. You achieve the same things: A small number of distinct values, with no necessary numeric interpretation, encoded compactly as (generally) small integers, but represented in source code by more meaningful symbolic names.
But the preprocessor is, to some extent, a kludge, and modern practice recommends avoiding its use when possible. And enums, since they are known to the compiler proper, have a number of additional advantages:
type safety — the compiler can warn you if you use values that don't belong (e.g. weather = green)
debugging — a debugger can show you the value of an enumeration as its symbolic name, not as an inscrutable number
additional warnings — the compiler can warn you if you do a switch on an enumeration but forget one of the cases
Enums are integers and can be used as constant expressions.
#include <stdio.h>

enum weather {
    sunny,
    windy,
    cloudy,
    rain,
} weather_outside;

int main(void)
{
    int weather = cloudy;
    printf("%d\n", rain);
    printf("`weather`==%d\n", weather);
}
https://godbolt.org/z/939KvreEY
If we initialize the union with two values, I know that it will take the int value, but I really want to know what happens behind the scenes:
#include <stdio.h>
typedef union x
{
    int y;
    char x[6];
};

int main(void)
{
    union x first = {4, "AAAAAA"};
    printf("%d\n", first.y);
    printf("%s\n", first.x);
    return 0;
}
Did this compile? It should not. The allocated memory will be overwritten every time you repurpose the union.
You can define a union with many members, but only one member can contain a value at any given time. Unions provide an efficient way of using the same memory location for multiple purposes.
The code in the question should not compile without at least a diagnostic about 'too many initializers' for the union variable. You might also get a warning about a useless storage class specifier in empty declaration because the typedef doesn't actually define an alias for union x.
Suppose you revised the code to use designated initializers, like this:
#include <stdio.h>
union x
{
    int y;
    char x[6];
};

int main(void)
{
    union x first = { .y = 4, .x = "AAAAA" };
    printf("%d\n", first.y);
    printf("%s\n", first.x);
    return 0;
}
This would compile and run, but with the compiler set to fussy, you might get warnings like warning: initialized field overwritten [-Woverride-init].
Note that there is one less A in the initializer for .x shown above than in the original. That ensures that the value is a (null-terminated) string, not just an array of bytes. In this context, the designated initializer for .x overrides the designated initializer for .y, and therefore the value in .x is fully valid. The output I got, for example, was:
1094795585
AAAAA
The decimal number corresponds to hex 0x41414141 as might be expected.
Note that I removed the pointless typedef. My default compilation rules wouldn't accept the code; I had to drop the -Werror and -Wextra options to get it to compile. The original code compiled with warnings without -Werror converting the warnings into errors. Even adding -pedantic didn't trigger an error for the extra initializer (though the warning was always given, as required by the standard).
First off, typechecking is not exactly the correct term I'm looking for, so I'll explain:
Say I want to use an anonymous union. I make the union declaration in the struct const, so after initialization the values will not change. This should allow statically checking whether the uninitialized member of the union is being accessed. In the example below, the instances are initialized for either a (int) or b (float). After this initialization I would like to not be able to access the other member:
struct Test {
    const union {
        const int a;
        const float b;
    };
};

int main() {
    struct Test intContainer = { .a = 5 };
    struct Test floatContainer = { .b = 3.0 };

    int validInt = intContainer.a;
    int validFloat = floatContainer.b;

    // For these, it could be statically determined that these values
    // are not in use (therefore invalid access)
    int invalidInt = floatContainer.a;
    float invalidFloat = intContainer.b;

    return 0;
}
I'd hope to have the last two assignments to give an error (or at least a warning), but it gives none (using gcc 4.9.2). Is C designed to not check for this, or is it actually a shortcoming of the language/compiler? Or is it just plain stupid to want to use such a pattern?
In my eyes it looks like it has a lot of potential if this was a feature, so can someone explain to me why I can't use this as a way to differentiate between two "sub-types" of a same struct (one for each union value). (Potentially any suggestions how I can still do something like this?)
EDIT:
So apparently it is not in the language standard, and compilers don't check it either. Still, I personally think it would be a good feature to have, since it would eliminate manually checking the union's contents with tagged unions. So I wonder, does anyone have an idea why it is not a feature of the language (or its compilers)?
I'd hope to have the last two assignments to give an error (or at least a warning), but it gives none (using gcc 4.9.2). Is C designed to not check for this, or is it actually a shortcoming of the language/compiler?
This is a correct behavior of the compiler.
float invalidInt = floatContainer.a;
float invalidFloat = intContainer.b;
In the first declaration you are initializing a float object with an int value, and in the second you are initializing a float object with a float value. In C you can assign (or initialize) any arithmetic type from any other arithmetic type without any cast required. So no diagnostic is required.
In your specific case you are also reading union members that are not the member last used to store a value. Assuming the union members are of the same size (e.g., float and int here), this is specified behavior and no diagnostic is required. If the sizes of the union members differ, the behavior is unspecified (but still, no diagnostic is required).
I have a very big constant array that is initialized at compile time.
typedef enum {
VALUE_A, VALUE_B,...,VALUE_GGF
} VALUES;
const int arr[VALUE_GGF+1] = { VALUE_A, VALUE_B, ... ,VALUE_GGF};
I want to verify that the array is initialized properly, something like:
if (arr[VALUE_GGF] != VALUE_GGF) {
    printf("Error occurred. arr[VALUE_GGF]=%d\n", arr[VALUE_GGF]);
    exit(1);
}
My problem is that I want to verify this at compile time. I've read about compile-time asserts in C in this thread: C Compiler asserts. However, the solution offered there suggests defining an array with a negative size to force a compilation error:
#define CASSERT(predicate, file) _impl_CASSERT_LINE(predicate,__LINE__,file)
#define _impl_PASTE(a,b) a##b
#define _impl_CASSERT_LINE(predicate, line, file) \
typedef char _impl_PASTE(assertion_failed_##file##_,line)[2*!!(predicate)-1];
and use:
CASSERT(sizeof(struct foo) == 76, demo_c);
The solution offered doesn't work for me, as I need to verify my constant array's values, and C doesn't allow declaring an array whose size comes from a const variable or an array element:
int main() {
    const int i = 8;
    int b[i];       //OK in C++
    int b[arr[0]];  //C2057 Error in VS2005
}
Is there any way around it? Some other compile-time asserts?
In the code below, see the extra assignments to pointers declared with fixed length in lines 6 and 9.
This will give errors on compile time if the 2 arrays are not initialized for all values of the WORKDAYS enum. Gcc says: test.c:6:67: warning: initialization from incompatible pointer type [enabled by default]
Imagine some manager adding SATURDAY to the work week enum. Without the extra checks the program will compile, but it will crash with segmentation violation when run.
The downside of this approach is that it takes up some extra memory (I have not tested if this is optimized away by the compiler).
It is also a little hackish and probably some comments are required in the code for the next guy...
Please observe that the arrays that are tested should not declare the array size. Setting the array size will ensure that you have reserved the data, but not ensure that it contains something valid.
#include <stdio.h>

typedef enum { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, NOF_WORKDAYS_IN_WEEK } WORKDAYS;

const char * const workday_names[] = { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" };
const char * const (*p_workday_name_test)[NOF_WORKDAYS_IN_WEEK] = &workday_names;

const int workday_efforts[] = { 12, 23, 40, 20, 5 };
const int (*p_workday_effort_test)[NOF_WORKDAYS_IN_WEEK] = &workday_efforts;

int main(void)
{
    WORKDAYS i;
    int total_effort = 0;

    printf("Always give 100 %% at work!\n");

    for (i = MONDAY; i < NOF_WORKDAYS_IN_WEEK; i++)
    {
        printf(" - %d %% %s\n", workday_efforts[i], workday_names[i]);
        total_effort += workday_efforts[i];
    }

    printf(" %d %% in total !\n", total_effort);
}
By the way, the output of the program is:
Always give 100 % at work!
- 12 % Monday
- 23 % Tuesday
- 40 % Wednesday
- 20 % Thursday
- 5 % Friday
100 % in total !
The problem is that in C++ a compile-time constant expression has the following limitations (5.19 Constant expressions):
An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5), non-type template parameters of integral or enumeration types, and sizeof expressions. Floating literals (2.13.3) can appear only if they are cast to integral or enumeration types. Only type conversions to integral or enumeration types can be used. In particular, except in sizeof expressions, functions, class objects, pointers, or references shall not be used, and assignment, increment, decrement, function-call, or comma operators shall not be used.
Remember that an array indexing expression is really just pointer arithmetic in disguise (arr[0] is really *(arr + 0)), and pointers can't be used in constant expressions, even if they're pointers to const data. So I think you're out of luck with a compile-time assertion for checking array contents.
C is even more limited than C++ in where these kinds of expressions can be used at compile time.
But given C++'s complexity, maybe someone can come up with a think-outside-the-box solution.
You can express your assertion as a property to check with a static analyzer and let the analyzer do the check. This has some of the properties of what you want to do:
the property is written in the source code,
it doesn't pollute the generated binary code.
However, it is different from a compile-time assertion because it needs a separate tool to be run on the program for checking. And perhaps it's a sanity check on the compiler you were trying to do, in which case this doesn't help because the static analyzer doesn't check what the compiler does, only what it should do.
ADDED: if it's for QA, then writing "formal" assertions that can be verified statically is all the rage nowadays. The approach below is very similar to .NET contracts that you may have heard about, but it is for C.
You may not think much of static analyzers, but it is loops and function calls that cause them to become imprecise. It's easier for them to get a clear picture of what is going on at initialization time, before any of these have happened.
Some analyzers advertise themselves as "correct", that is, they do not remain silent if the property you write is outside of their capabilities. In this case they complain that they can't prove it. If this happens, after you have convinced yourself that the problem is with the analyzer and not with your array, you'll be left where you are now, looking for another way.
Taking the example of the analyzer I am familiar with:
const int t[3] = {1, 2, 3};
int x;
int main(){
//# assert t[2] == 3 ;
/* more code doing stuff */
}
Run the analyzer:
$ frama-c -val t.i
...
t.i:7: Warning: Assertion got status valid.
Values of globals at initialization
t[0] ∈ {1; }
[1] ∈ {2; }
[2] ∈ {3; }
x ∈ {0; }
...
In the logs of the analyzer, you get:
its version of what it thinks the initial values of globals are,
and its interpretation of the assertion you wrote in the //# comment. Here it goes through the assertion a single time and finds it valid.
People who use this kind of tool build scripts to extract the information they're interested in from the logs automatically.
However, as a negative note, I have to point out that if you are afraid a test could eventually be forgotten, you should also worry about the mandatory static analyzer pass being forgotten after code modifications.
No. Compile-time assertion doesn't work in your case at all, because the array "arr[ARR_SIZE]" won't exist until the linking phase.
EDIT: but sizeof() is different, so at least you could do the following:
typedef enum {VALUE_A, VALUE_B,...,VALUE_GGF} VALUES;
const int arr[] = { VALUE_A, VALUE_B, ... ,VALUE_GGF};
#define MY_ASSERT(expr) {char uname[(expr)?1:-1];uname[0]=0;}
...
// If initialized count of elements is/are not correct,
// the compiler will complain on the below line
MY_ASSERT(sizeof(arr) == sizeof(int) * ARR_SIZE)
I tested the code on my FC8 x86 system and it works.
EDIT: noted that #sbi figured "int arr[]" case out already. thanks
As I'm using a batch file to compile and pack my application, I think that the easiest solution would be to compile another simple program that runs through all of my array and verifies the content is correct.
I can run the test program through the batch file and stop compilation of the rest of the program if the test run fails.
I can't imagine why you'd feel the need to verify this at compile time, but there is one weird/verbose hack that could be used:
typedef enum {
    VALUE_A, VALUE_B, ..., VALUE_GGF
} VALUES;

struct {
    static const VALUES elem0 = VALUE_A;
    static const VALUES elem1 = VALUE_B;
    static const VALUES elem2 = VALUE_C;
    ...
    static const VALUES elem4920 = VALUE_GGF;

    const int operator[](int offset) { return *(&elem0 + offset); }
} arr;

void func() {
    static_assert(arr.elem0 == VALUE_A, "arr has been corrupted!");
    static_assert(arr.elem4920 == VALUE_GGF, "arr has been corrupted!");
}
All of this works at compile time. Very hackish and bad form though.