Writing to a 2D Array via Pointer Notation (C)

I'm having trouble understanding why incrementing the pointers in pnArrCpy below is incorrect. I figured out how to copy the array using pointer notation a different way, but I need to understand what's wrong with this (e.g., (*tgt_s)++; where int (*tgt_s)[cs]), and why tgt_s is a modifiable lvalue (e.g., tgt_s++ is valid) but *tgt_s is not (really) one.
int main(void)
{
    int arr1[2][4] = { {1, 2, 3, 4}, {6, 7, 8, 9} };
    int arr2[2][4];

    pnArrCpy(4, arr1, arr2, arr2+2); // copies the 2D array using pointer notation
                                     // - this is where the problem is
    printarr(2, 4, arr2);            // this just prints the array and works fine - not at issue
    putchar('\n');
    return 0;
}

void pnArrCpy(int cs, int (*src)[cs], int (*tgt_s)[cs], int (*tgt_e)[cs])
{
    while (tgt_s < tgt_e)
    {
        **tgt_s = **src;
        (*tgt_s)++;   // older versions of gcc warn "target of assignment not really
                      // an lvalue"; the latest versions throw an error
        (*src)++;     // but no errors at runtime
    }
    return;
}
// truncated the rest of the program since it's not relevant - just the function for
// printing the array
Under the older gcc, the program compiles and displays the correct results, namely:
1 2 3 4
6 7 8 9
Mac OS 10.8.2: gcc 4.7.2 gave me the error, while gcc 4.2.1 only gave warnings.
Thank you!!
EDIT: The reason I'm using variable length arrays: this function is part of another program, and this one is just a driver I was using to troubleshoot pnArrCpy. In the actual program, the array dimensions and contents are user-defined, hence the use of VLAs.

The thing is:
int (*tgt_s)[cs] is a pointer to an array. Take a few seconds to think about that; it's a bit of an exotic pointer.
*tgt_s is therefore an array.
Arrays are not modifiable lvalues, which is why (*tgt_s)++ is rejected.
What makes this hardest to understand is the C99 feature you're using: passing cs and then using it in the parameter list.
If you want to learn more about VLAs as function arguments, check out this excellent post.
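Not the asker's eventual fix (that isn't shown here), but a minimal sketch of one way to keep the pointer-to-VLA parameters while only incrementing the pointers themselves, never the arrays they point to:

void pnArrCpy(int cs, int (*src)[cs], int (*tgt_s)[cs], int (*tgt_e)[cs])
{
    while (tgt_s < tgt_e)              /* tgt_s itself is a pointer, so       */
    {                                  /* comparing and incrementing it is OK */
        for (int i = 0; i < cs; i++)
            (*tgt_s)[i] = (*src)[i];   /* index into the row instead of trying
                                          to increment the array *tgt_s */
        tgt_s++;
        src++;
    }
}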

Related

What is the use of the second subscript of 2-D array, when it is passed to a function?

In The C Programming Language, it is said that only the first dimension (subscript) of an array parameter may be left unspecified; the second subscript must be given:
If a two-dimensional array is to be passed to a function, the
parameter declaration in the function must include the number of
columns; the number of rows is irrelevant, since what is passed is, as
before, a pointer to an array of rows, where each row is an array of 5
ints. In this particular case, it is a pointer to objects that are
arrays of 5 ints. Thus if the array daytab is to be passed to a
function f, the declaration of f would be:
f(int daytab[][5]) { ... }
More generally, only the first dimension
(subscript) of an array is free; all the others have to be specified.
But in my program, when I change the value of the second subscript (columns), the program still works the same.
For instance, the program
#include"stdio.h"
void arf(int[][2]); // I wrote 2 instead of 5
main() {
int a[][5] = {{1,2,3,4,5}, {9,29,39,49,59}};
arf(a);
}
void arf(int arr[][2]) { //here too, changed the 2nd subscript
size_t i;
printf("%d", arr[0][4]); //a[0][4] is 5
}
prints the output 5, instead of a garbage value (my expectation).
I'm in two minds about whether it's advisable to discuss what is ultimately undefined behaviour. Here's a minimally modified version of the code in the question:
#include <stdio.h>

void arf(int arr[][2]);

int main(void)
{
    int a[][5] = { { 1, 2, 3, 4, 5 }, { 0, 9, 8, 7, 6 } };
    arf(a);
}

void arf(int arr[][2])
{
    printf("%d\n", arr[0][4]);
    printf("%d\n", arr[1][4]);
}
When compiled with GCC 7.3.0 on a Mac, I get:
$ gcc -o arr2d-13 arr2d-13.c
arr2d-13.c: In function ‘main’:
arr2d-13.c:8:9: warning: passing argument 1 of ‘arf’ from incompatible pointer type [-Wincompatible-pointer-types]
arf(a);
^
arr2d-13.c:3:6: note: expected ‘int (*)[2]’ but argument is of type ‘int (*)[5]’
void arf(int arr[][2]);
^~~
$
That alone should be enough to tell you that you're doing it wrong. I had to suppress my usual compilation flags; the code is not acceptable to me because it has that compiler warning. However, when run, the output is:
5
9
Why? Well, it is officially undefined behaviour, but …
The layout of the array in memory looks like:
╔═══╦═══╦═══╦═══╦═══╦═══╦═══╦═══╦═══╦═══╗
║ 1 ║ 2 ║ 3 ║ 4 ║ 5 ║ 0 ║ 9 ║ 8 ║ 7 ║ 6 ║
╚═══╩═══╩═══╩═══╩═══╩═══╩═══╩═══╩═══╩═══╝
When, despite its warnings, the array is passed to arf(), the function thinks that row 0 of the input array starts with the element containing 1, and row 1 of the input array starts with the element containing 3, and so on — because you told the compiler that the function takes an array with 2 elements per row.
When you misuse subscript 4 (the declaration of the argument says that the valid second subscripts are 0 and 1), then it adds 4 to the start address of the row, and ends up printing 5 in the first case; it prints 9 in the second case because the counting starts from the element containing 3 (0, 1, 2, 3, 4 elements later is the 9).
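To make that arithmetic concrete, here is a small stand-alone sketch (not from the original answer) that reproduces the same indexing on a flat array, assuming the usual contiguous row-major layout shown above:

#include <stdio.h>

int main(void)
{
    int flat[10] = { 1, 2, 3, 4, 5, 0, 9, 8, 7, 6 };

    /* with rows declared as int[2], arr[i][j] is read i*2 + j ints from the start */
    printf("%d\n", flat[0 * 2 + 4]);   /* arr[0][4] as arf() sees it -> 5 */
    printf("%d\n", flat[1 * 2 + 4]);   /* arr[1][4] as arf() sees it -> 9 */
    return 0;
}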
If your compiler wasn't warning you about the type mismatch, you need to get a better compiler. If your compiler was warning you about the type mismatch, you need to pay attention to it. At this stage in your programming career, if the compiler deigns to warn you about your code, it has spotted a bug you need to fix. I still regard compiler warnings that way, and I normally compile with options to enforce my rule (gcc -O3 -g -std=c11 -Wall -Wextra -Werror -Wmissing-prototypes -Wstrict-prototypes), but I've only been coding in C for nearly 35 years, so I know there's more for me to learn.
You also need to ensure you're compiling in C11 mode (the current standard), or in C99 (the old standard) — there is no real excuse for compiling in C90 (the archaic standard) or pre-standard modes.
C99 made main() on its own invalid, because implicit int was removed and there is no return type. Always specify the return type explicitly, preferably using int main(void) when you ignore the argument list, or int main() if you insist (there are examples in the standard that use that notation).
The behavior of casting a multidimensional array to another array the same size but with different geometry is not undefined: the standard guarantees that the elements of a multidimensional array will be laid out contiguously, in row-major order.
If you want to get the compiler to change the geometry of the array, you can cast to a pointer of the desired dimensions and then dereference, as in:
int main(void)
{
    const int a[][5] = { { 1, 2, 3, 4, 5 }, { 0, 9, 8, 7, 6 } };

    arf(*(const int (*const)[][2])a);
}
The use of the other subscript, i.e. casting to an array of dimensions [5][2] (or [sizeof(a)/sizeof(int[2])][2] for a bit more resiliency) is to tell the compiler how many rows there are, so it can potentially catch bounds errors at compile time. In this specific example, where the information is just thrown away on the receiving end, that would not do you any good, but C also lets you declare a function prototype like void arf( const ptrdiff_t m, const ptrdiff_t n, const int a[m][n]). C++ does not, but still lets you declare void arf(const int a[rows][cols]), where rows and cols are constexpr values.
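As an illustration of that C99 VLA-parameter style (a separate sketch, not code from this answer), a function can take both dimensions first and then use them in the array parameter, so the declared type matches whatever is passed:

#include <stdio.h>
#include <stddef.h>

/* Both dimensions are passed explicitly, so the function knows the real
   geometry of the array it receives. */
static void print_matrix(ptrdiff_t m, ptrdiff_t n, const int a[m][n])
{
    for (ptrdiff_t i = 0; i < m; i++)
    {
        for (ptrdiff_t j = 0; j < n; j++)
            printf("%d ", a[i][j]);
        putchar('\n');
    }
}

int main(void)
{
    const int a[2][5] = { { 1, 2, 3, 4, 5 }, { 0, 9, 8, 7, 6 } };

    print_matrix(2, 5, a);
    return 0;
}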
Note that you should always, always, always check your array bounds for overflows in C and C++.

Why does this code print addresses?

Why didn't I get a compile time error while accidentally printing only one dimension of a 2D array?
#include <stdio.h>

void main() {
    int i;
    int arr[2][3] = { 1, 2, 3, 4, 5, 6 };   // <- Declared a 2D array

    for (i = 0; i < 6; i++) {
        printf("%d\n", arr[i]);              // <- Accidentally forgot a dimension
    }
}
I should have received a compile time error but instead I got a group of addresses! Why? What did arr[0] mean in this context to the compiler?
An expression with array type evaluates to a pointer to the first array element in most contexts (a notable exception, among others, is the sizeof operator).
In your example, arr[i] has array type (int [3]), so it evaluates to a pointer to its first element, of type int *. That address is what's getting printed. Printing a pointer with %d is undefined behavior, because printf() will read the argument as if it were an int.
Felix Palmen's answer explains the observed behavior.
Regarding your second question: the reason you don't get a warning is that you did not ask for one.
Compilers are notoriously lenient by default and will accept broken code including obvious undefined behavior. This particular one is not obvious because printf() accepts any number of extra arguments after the initial format string.
You can instruct your compiler to emit many useful warnings to avoid silly mistakes and detect non obvious programming errors.
gcc -Wall -Wextra -Werror
clang -Weverything -Werror
option /W3 or /W4 with Microsoft Visual Studio.
gcc and clang will complain about the sloppy initializer for array arr. It should read:
int arr[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };
The print loop is indeed surprising; did you really mean to print the whole 2D array with a single loop?
Note also that the standard prototype for main without arguments is int main(void).
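A corrected sketch (an assumption about what was probably intended, not code from the answer above) that uses the braced initializer and int main(void) suggested here, and prints elements rather than row addresses:

#include <stdio.h>

int main(void)
{
    int arr[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } };

    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 3; j++)
            printf("%d\n", arr[i][j]);   /* an int element, not a decayed pointer */

    return 0;
}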

Initialize an Array Literal Without a Size

I'm curious about the following expression:
int ints[] = { 1, 2, 3 };
This seems to compile fine even in c89 land with clang. Is there documentation about this? I can't seem to figure out the correct terminology to use when searching for it (and I'd rather not go through and read the entire c89 spec again).
Is this an extension? Is the compiler simply inferring the size of the array?
EDIT: I just remembered you guys like chunks of code that actually compile so here it is:
/* clang tst.c -o tst -Wall -Wextra -Werror -std=c89 */
int main(int argc, const char *argv[]) {
    int ints[] = { 1, 2, 3 };
    (void)(ints); (void)(argc); (void)(argv);
    return 0;
}
It's part of standard C since C89:
§3.5.7 Initialization
If an array of unknown size is initialized, its size is determined by the number of initializers provided for its members. At the end of its initializer list, the array no longer has incomplete type.
In fact, there is an almost exact example:
Example:
The declaration
int x[] = { 1, 3, 5 };
defines and initializes x as a one-dimensional array object that has three members, as no size was specified and there are three initializers.
Is this an extension?
No, this is standard behavior in all versions of the C standard.
At the =, the array type is "incomplete"; it is then completed by means of the initialization.
Is the compiler simply inferring the size of the array?
Yes.
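A small follow-on sketch (not part of the answers above): once the initializer completes the type, sizeof sees the inferred size, which is the usual way to recover the element count:

#include <stdio.h>

int main(void)
{
    int ints[] = { 1, 2, 3 };                   /* size inferred from the initializer */
    size_t n = sizeof ints / sizeof ints[0];    /* element count: 3 */

    printf("%zu elements, %zu bytes\n", n, sizeof ints);
    return 0;
}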

passing a two-dimensional array to a function

I am trying to compile the following simple code in Workbench:
1. typedef float matrixType[3][3]
2.
3. void my_func(matrixType matrix)
4. {
5. printf("matrix[0][0] = %g\n",matrix[0][0]);
6. }
7.
8. void main()
9. {
10. matrixType my_matrix = {{0,1,2},{3,4,5},{6,7,8}};
11. matrixType* ptr_matrix = &my_matrix;
12.
13. my_func(*ptr_matrix);
14. }
I receive the following warning:
test.c:13: warning: passing arg 1 of `my_func' from incompatible pointer type
I can't understand what I am doing wrong. The same code compiles in Visual Studio without any warnings, but in Workbench something goes wrong.
Thanks.
With gcc (GCC) 4.5.3 and all warnings turned on, it also compiles fine after making the following changes (a patched version is sketched below the list):
Add a semicolon after the first line.
Add #include <stdio.h> at top.
Change the return type of main to int.
Add return 0; as the last line.
The void main() is not correct C even though it appears in various books, manuals, and web tutorials. On some architectures it will cause strange problems, usually as the program terminates.
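For reference, a sketch of the question's code with those changes applied (using int main(void)); the &my_matrix / *ptr_matrix round trip is kept as posted:

#include <stdio.h>

typedef float matrixType[3][3];

void my_func(matrixType matrix)
{
    printf("matrix[0][0] = %g\n", matrix[0][0]);
}

int main(void)
{
    matrixType my_matrix = {{0,1,2},{3,4,5},{6,7,8}};
    matrixType *ptr_matrix = &my_matrix;

    my_func(*ptr_matrix);
    return 0;
}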
Taking the address of an array type is challenging the Workbench type checker. I'm not going to drag out the C standard to figure out whether the Workbench warning is correct; it's probably a bug.
But I'm pretty sure that if you recode it this way you will see no errors with any compiler:
#include <stdio.h>

typedef float rowType[3];
typedef rowType matrixType[3];

void my_func(matrixType matrix)
{
    printf("matrix[0][0] = %g\n", matrix[0][0]);
}

int main()
{
    matrixType my_matrix = {{0,1,2},{3,4,5},{6,7,8}};
    rowType* ptr_matrix = my_matrix;

    my_func(ptr_matrix);
    return 0;
}
The reason is that my_matrix is automatically converted to a pointer to its first element in the assignment
rowType* ptr_matrix = my_matrix;
This is just as in
char s[] = "hello world!";
char *p = s;
the array name s is converted to a pointer to its first element.
The parameter in void my_func(matrixType matrix) has a type identical to rowType* because all arrays are also passed as pointers to first elements. So all the types in this code must match in a way that's very clearly defined in the C standard. &my_matrix may not be incorrect, but it's an "edge case" more likely to expose type checking bugs.
You are missing a semicolon at the end of line 1.

C Compile-Time assert with constant array

I have a very big constant array that is initialized at compile time.
typedef enum {
VALUE_A, VALUE_B,...,VALUE_GGF
} VALUES;
const int arr[VALUE_GGF+1] = { VALUE_A, VALUE_B, ... ,VALUE_GGF};
I want to verify that the array is initialized properly, something like:
if (arr[VALUE_GGF] != VALUE_GGF) {
    printf("Error occurred. arr[VALUE_GGF]=%d\n", arr[VALUE_GGF]);
    exit(1);
}
My problem is that I want to verify this at compile time. I've read about compile-time assert in C in this thread: C Compiler asserts. However, the solution offered there suggests to define an array using a negative value as size for a compilation error:
#define CASSERT(predicate, file) _impl_CASSERT_LINE(predicate,__LINE__,file)
#define _impl_PASTE(a,b) a##b
#define _impl_CASSERT_LINE(predicate, line, file) \
typedef char _impl_PASTE(assertion_failed_##file##_,line)[2*!!(predicate)-1];
and use:
CASSERT(sizeof(struct foo) == 76, demo_c);
The solution offered doesn't work for me, as I need to verify my constant array's values, and C doesn't allow an array to be declared with a size taken from a constant array element:
int main() {
    const int i = 8;
    int b[i];        // OK in C++
    int b[arr[0]];   // C2057 error in VS2005
}
Is there any way around it? Some other compile-time asserts?
In the code below, see the extra assignments to pointers declared with a fixed array length (the p_workday_name_test and p_workday_effort_test declarations).
These will give a compile-time diagnostic if the two arrays are not initialized for all values of the WORKDAYS enum. gcc says: test.c:6:67: warning: initialization from incompatible pointer type [enabled by default]
Imagine some manager adding SATURDAY to the work week enum. Without the extra checks the program will compile, but it will crash with segmentation violation when run.
The downside of this approach is that it takes up some extra memory (I have not tested if this is optimized away by the compiler).
It is also a little hackish and probably some comments are required in the code for the next guy...
Please observe that the arrays that are tested should not declare the array size. Setting the array size will ensure that you have reserved the data, but not ensure that it contains something valid.
#include <stdio.h>

typedef enum { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, NOF_WORKDAYS_IN_WEEK } WORKDAYS;

const char * const workday_names[] = { "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" };
const char * const (*p_workday_name_test)[NOF_WORKDAYS_IN_WEEK] = &workday_names;

const int workday_efforts[] = { 12, 23, 40, 20, 5 };
const int (*p_workday_effort_test)[NOF_WORKDAYS_IN_WEEK] = &workday_efforts;

int main(void)
{
    WORKDAYS i;
    int total_effort = 0;

    printf("Always give 100 %% at work!\n");

    for (i = MONDAY; i < NOF_WORKDAYS_IN_WEEK; i++)
    {
        printf(" - %d %% %s\n", workday_efforts[i], workday_names[i]);
        total_effort += workday_efforts[i];
    }

    printf(" %d %% in total !\n", total_effort);
    return 0;
}
By the way, the output of the program is:
Always give 100 % at work!
- 12 % Monday
- 23 % Tuesday
- 40 % Wednesday
- 20 % Thursday
- 5 % Friday
100 % in total !
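If a C11 compiler is available, a _Static_assert on the element count gives a similar guarantee without the helper pointers. This is a separate sketch, not part of the answer above, and it checks only the count, not the individual values:

typedef enum { MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, NOF_WORKDAYS_IN_WEEK } WORKDAYS;

const int workday_efforts[] = { 12, 23, 40, 20, 5 };

/* C11: compilation fails if the initializer count drifts out of sync with the enum */
_Static_assert(sizeof(workday_efforts) / sizeof(workday_efforts[0]) == NOF_WORKDAYS_IN_WEEK,
               "workday_efforts must have one entry per workday");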
The problem is that in C++ a compile-time constant expression has the following limitations (5.19 Constant expressions):
An integral constant-expression can involve only literals (2.13), enumerators, const variables or static data members of integral or enumeration types initialized with constant expressions (8.5), non-type template parameters of integral or enumeration types, and sizeof expressions. Floating literals (2.13.3) can appear only if they are cast to integral or enumeration types. Only type conversions to integral or enumeration types can be used. In particular, except in sizeof expressions, functions, class objects, pointers, or references shall not be used, and assignment, increment, decrement, function-call, or comma operators shall not be used.
Remember that an array indexing expression is really just pointer arithmetic in disguise (arr[0] is really arr + 0), and pointers can't be used in constant expressions, even if they're pointers to const data. So I think you're out of luck with a compile time assertion for checking array contents.
C is even more limited than C++ in where these kinds of expressions can be used at compile time.
But given C++'s complexity, maybe someone can come up with a think-outside-the-box solution.
You can express your assertion as a property to check with a static analyzer and let the analyzer do the check. This has some of the properties of what you want to do:
the property is written in the source code,
it doesn't pollute the generated binary code.
However, it is different from a compile-time assertion because it needs a separate tool to be run on the program for checking. And perhaps it's a sanity check on the compiler you were trying to do, in which case this doesn't help because the static analyzer doesn't check what the compiler does, only what it should do.
ADDED: if it's for QA, then writing "formal" assertions that can be verified statically is all the rage nowadays. The approach below is very similar to .NET contracts that you may have heard about, but it is for C.
You may not think much of static analyzers, but it is loops and function calls that cause them to become imprecise. It's easier for them to get a clear picture of what is going on at initialization time, before any of these have happened.
Some analyzers advertise themselves as "correct", that is, they do not remain silent if the property you write is outside of their capabilities. In this case they complain that they can't prove it. If this happens, after you have convinced yourself that the problem is with the analyzer and not with your array, you'll be left where you are now, looking for another way.
Taking the example of the analyzer I am familiar with:
const int t[3] = {1, 2, 3};
int x;

int main(){
    //@ assert t[2] == 3 ;
    /* more code doing stuff */
}
Run the analyzer:
$ frama-c -val t.i
...
t.i:7: Warning: Assertion got status valid.
Values of globals at initialization
t[0] ∈ {1; }
[1] ∈ {2; }
[2] ∈ {3; }
x ∈ {0; }
...
In the logs of the analyzer, you get:
its version of what it thinks the initial values of globals are,
and its interpretation of the assertion you wrote in the //@ comment. Here it goes through the assertion a single time and finds it valid.
People who use this kind of tool build scripts to extract the information they're interested in from the logs automatically.
However, as a negative note, I have to point out that if you are afraid a test could eventually be forgotten, you should also worry about the mandatory static analyzer pass being forgotten after code modifications.
No. Compile-time assertion doesn't work in your case at all, because the array "arr[ARR_SIZE]" won't exist until the linking phase.
EDIT: but sizeof() seems different so at least you could do as the below:
typedef enum {VALUE_A, VALUE_B,...,VALUE_GGF} VALUES;
const int arr[] = { VALUE_A, VALUE_B, ... ,VALUE_GGF};
#define MY_ASSERT(expr) {char uname[(expr)?1:-1];uname[0]=0;}
...
// If initialized count of elements is/are not correct,
// the compiler will complain on the below line
MY_ASSERT(sizeof(arr) == sizeof(int) * ARR_SIZE)
I tested the code on my FC8 x86 system and it works.
EDIT: noted that @sbi already figured out the "int arr[]" case. Thanks.
As I'm using a batch file to compile and pack my application, I think the easiest solution would be to compile another simple program that runs through the whole array and verifies the content is correct.
I can run that test program from the batch file and stop compilation of the rest of the program if the test run fails.
I can't imagine why you'd feel the need to verify this at compile time, but there is one weird/verbose hack that could be used:
typedef enum {
    VALUE_A, VALUE_B,...,VALUE_GGF
} VALUES;

struct {
    static const VALUES elem0 = VALUE_A;
    static const VALUES elem1 = VALUE_B;
    static const VALUES elem2 = VALUE_C;
    ...
    static const VALUES elem4920 = VALUE_GGF;

    const int operator[](int offset) { return *(&elem0 + offset); }
} arr;

void func() {
    static_assert(arr.elem0 == VALUE_A, "arr has been corrupted!");
    static_assert(arr.elem4920 == VALUE_GGF, "arr has been corrupted!");
}
All of this works at compile time. Very hackish and bad form though.
