I'm having some trouble with a pointer declaration that one of my co-workers wants to use because of MISRA C requirements. MISRA (a safety-critical guideline) won't let us mere programmers use pointer arithmetic, but will let us operate on arrays of bytes. He intends to pass a pointer to an array of bytes (so we don't pass the actual array on the stack).
// This is how I would normally do it
//
void Foo(uint8_t* pu8Buffer, uint16_t u16Len)
{
}
// This is how he has done it
//
void Foo(uint8_t (*pu8Buffer)[], uint16_t u16Len)
{
}
The calling function looks something like:
void Bar(void)
{
uint8_t u8Payload[1024];
uint16_t u16PayloadLen;
// ...some code to fill said array...
Foo(u8Payload, u16PayloadLen);
}
But when pu8Buffer is accessed in Foo(), the array contents are wrong; obviously I'm not passing what it is expecting. The array is correct in the calling function, but not inside Foo().
I think he has created an array of pointers to bytes, not a pointer to an array of bytes.
Anyone care to clarify? Foo(&u8Payload, u16PayloadLen); doesn't work either.
In void Foo(uint8_t (*pu8Buffer)[], uint16_t u16Len), pu8Buffer is a pointer to an (incomplete) array of uint8_t. pu8Buffer has an incomplete type; it is a pointer to an array whose size is unknown. It may not be used in expressions where the size is required (such as pointer arithmetic; pu8Buffer+1 is not allowed).
Then *pu8Buffer is an array whose size is unknown. Since it is an array, it is automatically converted in most situations to a pointer to its first element. Thus, *pu8Buffer becomes a pointer to the first uint8_t of the array. The type of the converted *pu8Buffer is complete; it is a pointer to uint8_t, so it may be used in address arithmetic; *(*pu8Buffer + 1), (*pu8Buffer)[1], and 1[*pu8Buffer] are all valid expressions for the uint8_t one beyond *pu8Buffer.
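To make this concrete, here is a minimal sketch (reusing the question's names; the loop body is only an illustration) of a call and access pattern that type-checks: the caller passes &u8Payload, whose type uint8_t (*)[1024] is compatible with uint8_t (*)[], and Foo indexes through the dereferenced array:
#include <stdint.h>
void Foo(uint8_t (*pu8Buffer)[], uint16_t u16Len)
{
    uint16_t i;
    for (i = 0u; i < u16Len; i++)
    {
        (*pu8Buffer)[i] = (uint8_t)i;  /* index through *pu8Buffer, never pu8Buffer itself */
    }
}
void Bar(void)
{
    uint8_t u8Payload[1024];
    Foo(&u8Payload, 16u);              /* note the &: pass a pointer to the whole array */
}
int main(void) { Bar(); return 0; }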
I take it you are referring to MISRA-C:2004 rule 17.4 (or 2012 rule 18.4). Even someone like me who is a fan of MISRA finds this rule to be complete nonsense. The rationale for the rule is this (MISRA-C:2012 18.4):
"Array indexing using the array subscript syntax, ptr[expr], is the
preferred form of pointer arithmetic because it is often clearer and
hence less error prone than pointer manipulation. Any explicitly
calculated pointer value has the potential to access unintended or
invalid memory addresses. Such behavior is also possible with array
indexing, but the subscript syntax may ease the task of manual review.
Pointer arithmetic in C can be confusing to the novice. The expression
ptr+1 may be mistakenly interpreted as the addition of 1 to the
address held in ptr. In fact the new memory address depends on the
size in bytes of the pointer's target. This misunderstanding can lead
to unexpected behaviour if sizeof is applied incorrectly."
So it all boils down to MISRA worrying about beginner programmers confusing ptr+1 with the result of (uint8_t*)ptr + 1. The solution, in my opinion, is to educate the novice programmers, rather than to restrict the professional ones (but then if you hire novice programmers to write safety-critical software with MISRA compliance, understanding pointer arithmetic is probably the least of your problems anyhow).
Solve this by writing a permanent deviation from this rule!
If you, for reasons unknown, don't want to deviate but want to make your current code MISRA compliant, simply rewrite the function as
void Foo(uint8_t pu8Buffer[], uint16_t u16Len)
and then replace all pointer arithmetic with pu8Buffer[something]. Then suddenly the code is 100% MISRA compliant according to the MISRA-C:2004 exemplar suite. And it is also 100% functionally equivalent to what you already have.
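For illustration, a minimal sketch of that rewrite (the XOR loop body is just an example of pointer arithmetic replaced by indexing):
#include <stdint.h>
void Foo(uint8_t pu8Buffer[], uint16_t u16Len)
{
    uint16_t i;
    for (i = 0u; i < u16Len; i++)
    {
        pu8Buffer[i] ^= 0xFFu;   /* pu8Buffer[i] instead of *(pu8Buffer + i) */
    }
}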
Related
This question goes out to the C gurus out there:
In C, it is possible to declare a pointer as follows:
char (* p)[10];
.. which basically states that this pointer points to an array of 10 chars. The neat thing about declaring a pointer like this is that you will get a compile-time error if you try to assign a pointer to an array of a different size to p. It will also give you a compile-time error if you try to assign the value of a simple char pointer to p. I tried this with gcc and it seems to work with ANSI, C89 and C99.
It looks to me like declaring a pointer like this would be very useful - particularly, when passing a pointer to a function. Usually, people would write the prototype of such a function like this:
void foo(char * p, int plen);
If you were expecting a buffer of a specific size, you would simply test the value of plen. However, you cannot be guaranteed that the person who passes p to you will really give you plen valid memory locations in that buffer. You have to trust that the person who called this function is doing the right thing. On the other hand:
void foo(char (*p)[10]);
..would force the caller to give you a buffer of the specified size.
This seems very useful, but I have never seen a pointer declared like this in any code I have ever run across.
My question is: Is there any reason why people do not declare pointers like this? Am I not seeing some obvious pitfall?
What you are saying in your post is absolutely correct. I'd say that every C developer comes to exactly the same discovery and to exactly the same conclusion when (if) they reach a certain level of proficiency with the C language.
When the specifics of your application area call for an array of specific fixed size (array size is a compile-time constant), the only proper way to pass such an array to a function is by using a pointer-to-array parameter
void foo(char (*p)[10]);
(in C++ language this is also done with references
void foo(char (&p)[10]);
).
This will enable language-level type checking, which will make sure that an array of exactly the correct size is supplied as an argument. In fact, in many cases people use this technique implicitly, without even realizing it, by hiding the array type behind a typedef name
typedef int Vector3d[3];
void transform(Vector3d *vector);
/* equivalent to `void transform(int (*vector)[3])` */
...
Vector3d vec;
...
transform(&vec);
Note additionally that the above code is invariant with respect to Vector3d being an array type or a struct type. You can switch the definition of Vector3d at any time from an array to a struct and back, and you won't have to change the function declaration. In either case the functions will receive an aggregate object "by reference" (there are exceptions to this, but within the context of this discussion this is true).
However, you won't see this method of array passing used explicitly too often, simply because too many people get confused by the rather convoluted syntax and are simply not comfortable enough with such features of the C language to use them properly. For this reason, in everyday practice, passing an array as a pointer to its first element is the more popular approach. It just looks "simpler".
But in reality, using the pointer to the first element for array passing is a very niche technique, a trick, which serves a very specific purpose: its one and only purpose is to facilitate passing arrays of different size (i.e. run-time size). If you really need to be able to process arrays of run-time size, then the proper way to pass such an array is by a pointer to its first element with the concrete size supplied by an additional parameter
void foo(char p[], unsigned plen);
Actually, in many cases it is very useful to be able to process arrays of run-time size, which also contributes to the popularity of the method. Many C developers simply never encounter (or never recognize) the need to process a fixed-size array, thus remaining oblivious to the proper fixed-size technique.
Nevertheless, if the array size is fixed, passing it as a pointer to an element
void foo(char p[])
is a major technique-level error, which unfortunately is rather widespread these days. A pointer-to-array technique is a much better approach in such cases.
Another reason that might hinder the adoption of the fixed-size array passing technique is the dominance of a naive approach to typing dynamically allocated arrays. For example, if the program calls for fixed arrays of type char[10] (as in your example), an average developer will malloc such arrays as
char *p = malloc(10 * sizeof *p);
This array cannot be passed to a function declared as
void foo(char (*p)[10]);
which confuses the average developer and makes them abandon the fixed-size parameter declaration without giving it a further thought. In reality though, the root of the problem lies in the naive malloc approach. The malloc format shown above should be reserved for arrays of run-time size. If the array type has compile-time size, a better way to malloc it would look as follows
char (*p)[10] = malloc(sizeof *p);
This, of course, can be easily passed to the above declared foo
foo(p);
and the compiler will perform the proper type checking. But again, this is overly confusing to an unprepared C developer, which is why you won't see it too often in "typical" average everyday code.
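To illustrate the type checking this buys you, here is a small self-contained sketch (the names are mine); the commented-out call is the one the compiler rejects:
#include <stdlib.h>
void foo(char (*p)[10]) { (*p)[0] = 'x'; }
int main(void)
{
    char a[10];
    char b[20];
    foo(&a);        /* OK: &a has type char (*)[10] */
    /* foo(&b); */  /* error: char (*)[20] is incompatible */
    (void)b;        /* b exists only to show the rejected call */
    char (*p)[10] = malloc(sizeof *p);   /* malloc the whole array type */
    if (p != NULL) { foo(p); free(p); }  /* no & needed: p already has the right type */
    return 0;
}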
I would like to add to AndreyT's answer (in case anyone stumbles upon this page looking for more info on this topic):
As I began to play more with these declarations, I realized that there is a major handicap associated with them in C (apparently not in C++). It is fairly common to have a situation where you would like to give a caller a const pointer to a buffer you have written into. Unfortunately, this is not possible when declaring a pointer like this in C. In other words, the C standard (6.7.3, paragraph 8) is at odds with something like this:
int array[9];
const int (* p2)[9] = &array; /* Not legal unless array is const as well */
This constraint does not seem to be present in C++, making these types of declarations far more useful. But in the case of C, it is necessary to fall back to a regular pointer declaration whenever you want a const pointer to the fixed-size buffer (unless the buffer itself was declared const to begin with).
This is a severe constraint in my opinion, and it could be one of the main reasons why people do not usually declare pointers like this in C. The other reason is the fact that most people do not even know that you can declare a pointer like this, as AndreyT has pointed out.
The obvious reason is that this code doesn't compile:
extern void foo(char (*p)[10]);
void bar() {
char p[10];
foo(p);
}
The problem is that in an expression the array p decays to a pointer to its first element (char *), not to a pointer to the whole array, so the argument type doesn't match the parameter.
Also see this question, using foo(&p) should work.
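A minimal sketch of that working call, for completeness:
extern void foo(char (*p)[10]);
void bar(void)
{
    char p[10];
    foo(&p);   /* &p has type char (*)[10]; a bare p would decay to char * */
}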
I also want to use this syntax to enable more type checking.
But I also agree that the syntax and mental model of using pointers is simpler, and easier to remember.
Here are some more obstacles I have come across.
Accessing the array requires using (*p)[]:
void foo(char (*p)[10])
{
char c = (*p)[3];
(*p)[0] = 1;
}
It is tempting to use a local pointer-to-char instead:
void foo(char (*p)[10])
{
char *cp = (char *)p;
char c = cp[3];
cp[0] = 1;
}
But this would partially defeat the purpose of using the correct type.
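As a side note, a cast-free variant is possible, since *p itself decays to char *; this is just a sketch of that alternative:
void foo(char (*p)[10])
{
    char *cp = *p;    /* no cast needed: *p is char[10] and decays to char * */
    char c = cp[3];
    cp[0] = 1;
}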
One has to remember to use the address-of operator when assigning an array's address to a pointer-to-array:
char a[10];
char (*p)[10] = &a;
The address-of operator gets the address of the whole array in &a, with the correct type to assign it to p. Without the operator, a is automatically converted to the address of the first element of the array, same as in &a[0], which has a different type.
Since this automatic conversion is already taking place, I am always puzzled that the & is necessary. It is consistent with the use of & on variables of other types, but I have to remember that an array is special and that I need the & to get the correct type of address, even though the address value is the same.
One reason for my problem may be that I learned K&R C back in the 80s, which did not allow using the & operator on whole arrays yet (although some compilers ignored that or tolerated the syntax). Which, by the way, may be another reason why pointers-to-arrays have a hard time to get adopted: they only work properly since ANSI C, and the & operator limitation may have been another reason to deem them too awkward.
When typedef is not used to create a type for the pointer-to-array (in a common header file), then a global pointer-to-array needs a more complicated extern declaration to share it across files:
fileA:
char (*p)[10];
fileB:
extern char (*p)[10];
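With a typedef in a shared header, the declarations become more readable; a small sketch (the header and type names are mine):
/* common.h */
typedef char Buf10[10];
/* fileA.c */
Buf10 *p;          /* same as: char (*p)[10]; */
/* fileB.c */
extern Buf10 *p;   /* same as: extern char (*p)[10]; */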
Well, simply put, C doesn't do things that way. An array of type T is passed around as a pointer to the first T in the array, and that's all you get.
This allows for some cool and elegant algorithms, such as looping through the array with expressions like
*dst++ = *src++
The downside is that management of the size is up to you. Unfortunately, failure to do this conscientiously has also led to millions of bugs in C coding, and/or opportunities for malevolent exploitation.
What comes close to what you ask for in C is to pass around a struct (by value) or a pointer to one (by reference). As long as the same struct type is used on both sides of this operation, both the code that hands out the reference and the code that uses it are in agreement about the size of the data being handled.
Your struct can contain whatever data you want; it could contain your array of a well-defined size.
Still, nothing prevents you or an incompetent or malevolent coder from using casts to fool the compiler into treating your struct as one of a different size. The almost unshackled ability to do this kind of thing is a part of C's design.
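As a sketch of the struct approach (the type and function names are mine), both sides agree on sizeof(struct Buffer10) at compile time:
#include <string.h>
struct Buffer10 { char data[10]; };
void fill(struct Buffer10 *b)             /* by reference: one pointer is passed */
{
    memset(b->data, 'x', sizeof b->data); /* the size is known from the struct type */
}
void use(struct Buffer10 b);              /* by value: all 10 bytes would be copied */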
You can declare an array of characters a number of ways:
char p[10];
char* p = (char*)malloc(10 * sizeof(char));
The prototype of a function that takes the array via a pointer (C cannot pass the array itself by value) is:
void foo(char* p); // can modify the chars; cannot modify the caller's pointer
or via a pointer to the pointer, so the pointer itself can be reassigned:
void foo(char** p); // can modify p; dereference with (*p)[0] = 'f';
or with array syntax:
void foo(char p[]); // same as char*
I would not recommend this solution
typedef int Vector3d[3];
since it obscures the fact that Vector3d is an array type, something you must know about. Programmers usually don't expect variables of the same type to have different sizes. Consider:
void foo(Vector3d a) {
Vector3d b;
}
where sizeof a != sizeof b (inside foo, the parameter a has decayed to int *, so sizeof a is the size of a pointer).
Maybe I'm missing something, but... since arrays are constant pointers, basically that means that there's no point in passing around pointers to them.
Couldn't you just use void foo(char p[10], int plen); ?
type (*)[N];
// a pointer to an array, e.g.
int (*ptr)[5];
// ptr points to an array of 5 ints;
// taking & of such an array yields an address of this type
type *[N];
// an array of pointers, e.g.
int *ptr[5];
// an array of five pointers to int,
// i.e. it holds 5 addresses
On my compiler (VS2008), char (*p)[10] is treated as an array of character pointers, as if there were no parentheses, even when I compile it as a C file. Does compiler support for this vary? If so, that is a major reason not to use it.
IAR Embedded Workbench for MSP430, with the C standard set to C99.
Hello, I am new to pointers and stuck in one place. Here is part of the code:
void read_SPI_CS_P2(uint8_t read_from, int save_to, uint8_t bytes_to_read)
{
uint8_t * ptr;
ptr = save_to;
...
From what I read about pointers I assume that:
uint8_t * ptr; - here I declare what type of data the pointer points to (I want to save a uint8_t value)
ptr = save_to; - here I assign the address of the memory I would like to write to (it's 0xF900, hence int)
It gives me an Error[Pe513]: a value of type "int" cannot be assigned to an entity of type "uint8_t *"
The question is: why? Can't the size of the data that will be saved (to save_to) and the size of a memory address be different?
From a syntax point of view, you can cast an int directly to a pointer, but that is probably not a good idea, nor something that will help you in this case.
You've to use the compiler and the linker support to instruct them that you want some data at a specific location in memory. Usually you can do this (with IAR toolchains) with #pragma location syntax, using something like:
__no_init volatile uint8_t g_u8SavingLocation @ 0xF900;
You cannot simply set the value of a pointer and start writing at that location. The linker is in charge of deciding where things go in memory, and you can instruct it with #pragma directives and the linker setup files to achieve what you want.
C language has no immediate feature that would allow you to create pointers to arbitrary numerical addresses. In C language pointers are obtained by taking addresses of existing objects (using unary & operator) or by receiving pointers from memory allocation functions like malloc. Additionally, you can create null pointers. In all such cases you never know and never need to know the actual numerical value of the pointer.
If you want to make your pointer to point to a specific address, then formally the abstract C language cannot help you with it. There's simply no such feature in the language. However, in the implementation-dependent portions of the language a specific compiler can guarantee, that if you forcefully convert an integral value (containing a numerical address) to pointer type, the resultant pointer will point to that address. So, if in your case the numerical address is stored in save_to variable, you can force it into a pointer by using an explicit cast
ptr = (uint8_t *) save_to;
The cast is required. Assigning integral values to pointers directly is not allowed in standard C. (It used to be allowed in pre-standard C though).
A better type to store integral addresses would be uintptr_t. int, or any other signed type, is certainly not a good choice for that purpose.
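As a sketch of both points (the function name is mine): take the address as uintptr_t and cast explicitly, with the caveat that where the resulting pointer points is implementation-defined:
#include <stdint.h>
void write_byte(uintptr_t save_to, uint8_t value)
{
    volatile uint8_t *ptr = (volatile uint8_t *)save_to;  /* explicit cast required */
    *ptr = value;  /* e.g. write_byte(0xF900u, 0x42u); meaningful only if the
                      implementation maps this address */
}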
P.S. I'm not sure what point you are trying to make when you talk about sizes. There's no relation between the "size of the data that will be saved" and the size of a memory address. Why would there be?
Hi I'm sure this must be a common question but I can't find the answer when I search for it. My question basically concerns two pointers. I want to compare their addresses and determine if one is bigger than the other. I would expect all addresses to be unsigned during comparison. Is this true, and does it vary between C89, C99 and C++? When I compile with gcc the comparison is unsigned.
If I have two pointers that I'm comparing like this:
char *a = (char *) 0x80000000; //-2147483648 or 2147483648 ?
char *b = (char *) 0x1;
Then a is greater. Is this guaranteed by a standard?
Edit to update on what I am trying to do. I have a situation where I would like to determine that if there's an arithmetic error it will not cause a pointer to go out of bounds. Right now I have the start address of the array and the end address. And if there's an error and the pointer calculation is wrong, and outside of the valid addresses of memory for the array, I would like to make sure no access violation occurs. I believe I can prevent this by comparing the suspect pointer, which has been returned by another function, and determining if it is within the acceptable range of the array. The question of negative and positive addresses has to do with whether I can make the comparisons, as discussed above in my original question.
I appreciate the answers so far. Based on my edit would you say that what I'm doing is undefined behavior in gcc and msvc? This is a program that will run on Microsoft Windows only.
Here's an over simplified example:
char letters[26];
char *do_not_read = &letters[26];
char *suspect = somefunction_i_dont_control(letters,26);
if( (suspect >= letters) && (suspect < do_not_read) )
printf("%c", suspect);
Another edit, after reading AndreyT's answer it appears to be correct. Therefore I will do something like this:
char letters[26];
uintptr_t begin = (uintptr_t)letters;
uintptr_t toofar = begin + sizeof(letters);
char *suspect = somefunction_i_dont_control(letters,26);
if( ((uintptr_t)suspect >= begin) && ((uintptr_t)suspect < toofar ) )
printf("%c", suspect);
Thanks everyone!
Pointer comparisons cannot be signed or unsigned. Pointers are not integers.
C language (as well as C++) defines relative pointer comparisons only for pointers that point into the same aggregate (struct or array). The ordering is natural: the pointer that points to an element with smaller index in an array is smaller. The pointer that points to a struct member declared earlier is smaller. That's it.
You can't legally compare arbitrary pointers in C/C++. The result of such comparison is not defined. If you are interested in comparing the numerical values of the addresses stored in the pointers, it is your responsibility to manually convert the pointers to integer values first. In that case, you will have to decide whether to use a signed or unsigned integer type (intptr_t or uintptr_t). Depending on which type you choose, the comparison will be "signed" or "unsigned".
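A sketch of that manual conversion (the helper name is mine); with uintptr_t the comparison of the numerical address values is unsigned:
#include <stdint.h>
#include <stdbool.h>
bool address_in_range(const char *begin, const char *end, const char *suspect)
{
    uintptr_t b = (uintptr_t)begin;
    uintptr_t e = (uintptr_t)end;
    uintptr_t s = (uintptr_t)suspect;
    return (s >= b) && (s < e);   /* unsigned comparison of the address values */
}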
The integer-to-pointer conversion is wholly implementation defined, so it depends on the implementation you are using.
That said, you are only allowed to relationally compare pointers that point to parts of the same object (basically, to subobjects of the same struct or elements of the same array). You aren't allowed to compare two pointers to arbitrary, wholly unrelated objects.
From a draft C++ Standard 5.9:
If two pointers p and q of the same type point to different objects
that are not members of the same object or elements of the same array
or to different functions, or if only one of them is null, the results
of p<q, p>q, p<=q, and p>=q are unspecified.
So, if you cast numbers to pointers and compare them, C++ gives you unspecified results. If you take the address of elements you can validly compare, the results of comparison operations are specified independently of the signed-ness of the pointer types.
Note unspecified is not undefined: it's quite possible to compare pointers to different objects of the same type that aren't in the same structure or array, and you can expect some self-consistent result (otherwise it'd be impossible to use such pointers as keys in trees, or to sort a vector of such pointers, binary search the vector etc., where a consistent intuitive overall < ordering is needed).
Note that in very old C++ Standards the behaviour was undefined, like in the 2005 WG14/N1124 draft andrewdski links to under James McNellis's answer.
To complement the other answers, comparison between pointers that point to different objects depends on the standard.
In C99 (ISO/IEC 9899:1999 (E)), §6.5.8:
5 [...] In all other cases, the behavior is undefined.
In C++03 (ISO/IEC 14882:2003(E)), §5.9:
-Other pointer comparisons are unspecified.
I know several of the answers here say you cannot compare pointers unless they point to within the same structure, but that's a red herring and I'll try to explain why. One of your pointers points to the start of your array, the other to the end, so they are pointing to the same structure. A language lawyer could say that if your third pointer points outside of the object, the comparison is undefined, so x >= array.start might be true for all x. But this is no issue, since at the point of comparison C++ cannot know if the array isn't embedded in an even bigger structure. Furthermore, if your address space is linear, like it's bound to be these days, your pointer comparison will be implemented as an (un)signed integer comparison, since any other implementation would be slower. Even in the times of segments and offsets, (far) pointer comparison was implemented by first normalising the pointer and then comparing them as integers.
What this all boils down to then, is that if your compiler is okay, comparing the pointers without worrying about the signs should work, if all you care about is that the pointer points within the array, since the compiler should make the pointers signed or unsigned depending on which of the two boundaries a C++ object may straddle.
Different platforms behave differently in this matter, which is why C++ has to leave it up to the platform. There are even platforms in which both addresses near 0 and 80..00h are not mappable or already taken at process start-up. In that case, it doesn't matter, as long as you're consistent about it.
Sometimes this can cause compatibility issues. As an example, in Win32 pointers are unsigned. Now, it used to be the case that of the 4GB address space only the lower half (more precisely 10000h ... 7FFFFFFFh, because of the NULL-Pointer Assignment Partition) was available to applications; high addresses were only available to the kernel. This caused some people to put addresses in signed variables, and their programs would keep working since the high bit was always 0. But then came the /3GB switch, which made almost 3 GB available to applications (more precisely 10000h ... BFFFFFFFh), and such an application would crash or behave erratically.
You explicitly state your program will be Windows-only, which uses unsigned pointers. However, maybe you'll change your mind in the future, and using intptr_t or uintptr_t is bad for portability. I also wonder if you should be doing this at all... if you're indexing into an array it might be safer to compare indices instead. Suppose for example that you have a 1 GB array at 1500000h ... 41500000h, consisting of 16,384 elements of 64 kB each. Suppose you accidentally look up index 80,000 – clearly out of range. The pointer calculation will yield 39D00000h, so your pointer check will allow it, even though it shouldn't.
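A sketch of the index-based check suggested above (names are mine): validate the index before a pointer is ever formed, so wraparound in the pointer calculation cannot fool the range check:
#include <stddef.h>
char element_at(const char *base, size_t count, size_t index, char fallback)
{
    if (index < count)
    {
        return base[index];   /* the pointer is formed only after the index is validated */
    }
    return fallback;          /* out of range: memory is never touched */
}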
With a 32-bit OS, we know that the pointer size is 4 bytes, so sizeof(char*) is 4 and sizeof(int*) is 4, etc. We also know that when you increment a char*, the byte address (offset) changes by sizeof(char); when you increment an int*, the byte address changes by sizeof(int).
My question is:
How does the OS know how much to increment the byte address for sizeof(YourType)?
The compiler only knows how to increment a pointer of type YourType * if it knows the size of YourType, which is the case if and only if the complete definition of YourType is known to the compiler at this point.
For example, if we have:
struct YourType *a;
struct YourOtherType *b;
struct YourType {
int x;
char y;
};
Then you are allowed to do this:
a++;
but you are not allowed to do this:
b++;
..since struct YourType is a complete type, but struct YourOtherType is an incomplete type.
The error given by gcc for the line b++; is:
error: arithmetic on pointer to an incomplete type
The OS doesn't really have anything to do with that - it's the compiler's job (as #zneak mentioned).
The compiler knows because it just compiled that struct or class; the size is, in the struct case, pretty much the sum of the sizes of all the struct's members, plus any padding inserted for alignment.
It is primarily an issue for the C (or C++) compiler, and not primarily an issue for the OS per se.
The compiler knows its alignment rules for the basic types, and applies those rules to any type you create. It can therefore establish the alignment requirement and size of YourType, and it will ensure that it increments any YourType* variable by the correct value. The alignment rules vary by hardware (CPU), and the compiler is responsible for knowing which rules to apply.
One key point is that the size of YourType must be such that when you have an array:
YourType array[20];
then &array[1] == &array[0] + 1. The byte address of &array[1] must be incremented by sizeof(YourType), and (assuming YourType is a structure), each of the elements of array[1] must be properly aligned, just as the elements of array[0] must be properly aligned.
Also remember that types are defined in your compiled code to match the hardware you are working on; it is entirely up to the source code, as compiled, to work this out.
So a C program targeting a low-end 16-bit chipset might need to define its types differently from one targeting a 32-bit system.
The programming language and compiler are what govern your types, not the OS or the hardware.
Although of course trying to stick a 32 bit number into a 16 bit register could be a problem!
C pointers are typed, unlike those in some old languages like PL/1. This not only allows the size of the object to be known, but also lets widening operations and formatting be carried out. For example, when getting the data at *p, is that a float, a double, or a char? The compiler needs to know (think of divisions, for example).
Of course we do have a typeless pointer, void *, which you cannot do any arithmetic with, simply because the compiler has no idea how much to add to the address.
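A short sketch of the scaling in action: incrementing moves by the pointed-to size, and void * arithmetic is rejected by a conforming compiler:
#include <stdio.h>
int main(void)
{
    int arr[4] = {10, 20, 30, 40};
    int *ip = arr;
    char *cp = (char *)arr;
    printf("%p -> %p\n", (void *)ip, (void *)(ip + 1)); /* differs by sizeof(int) */
    printf("%p -> %p\n", (void *)cp, (void *)(cp + 1)); /* differs by 1 */
    /* void *vp = arr; vp + 1;  -- constraint violation in ISO C */
    return 0;
}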