I was doing systems programming in C, writing a server. There was a bug that caused a serious issue: results could not be returned correctly. I fixed it by splitting one line into two, but I did not understand why this solved the problem.
Original code that caused serious issues:
int Bytes, Size = cache[index].len;
New code that solved the issue:
int Bytes = cache[index].len;
Size = Bytes;
What is the difference between my original code and new code? Are they not identical?
They're not identical at all. The first code:
int Bytes, Size = cache[index].len;
Declares two variables, Bytes and Size, both of type int; Size is initialized to the value of cache[index].len and Bytes is uninitialized (its value is indeterminate).
The second code (I'm inserting int in the second line to make it a declaration-with-initializer, since I'm assuming this is what you meant):
int Bytes = cache[index].len;
int Size = Bytes;
Declares the same two variables of the same type; but here, it is Bytes that is initialized to the value of cache[index].len, and the value of Bytes is then copied into Size.
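A minimal sketch of the difference, using hypothetical variables a, b, c, and d:
#include <stdio.h>

int main(void) {
    int a, b = 42;   /* only b is initialized; a has an indeterminate value */

    int c = 42;      /* c is initialized from the expression... */
    int d = c;       /* ...and its value is then copied into d  */

    printf("%d %d\n", b, d);   /* prints: 42 42 */
    /* printf("%d\n", a);  reading a here would be undefined behavior */
    return 0;
}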
I am currently working on a personal AVR project and wrote a function for transmitting a double to my serial LCD screen using UART. For this, I convert a double into a char array and then transmit it char by char, serially.
What confuses me is what the length of the char buffer array has to be. The length of the array does not seem to correspond to the size of a double. Funnily enough, I can set the length of the array to 0, and it will still work, like this:
void USART_Transmit_Double(double doubleNumber) {
char aNumberAsString[0];
dtostrf(doubleNumber, 3, 0, aNumberAsString);
USART_Transmit_String(aNumberAsString);
}
Here I am printing the number 1234 in the main loop and it appears on my LCD just fine. If I change the precision to 1, however, the entire thing goes haywire.
I tested the same function on my Arduino and keep getting completely different results: there, each character seems to correspond to one element of the array, so if I wanted to print 1234.2, I would have to set the buffer length to 6.
So what's going on here?
You are running into undefined behavior. In particular, the phrase from this link, "Examples of undefined behavior are memory accesses outside of array bounds, ...", likely applies to your scenario. It may work a thousand times, as long as the memory you are attempting to access (but do not own) does not happen to overlap another variable's memory space. The first time it does, your program will crash.
The following statement, although accepted by your compiler (zero-length arrays are a GCC extension, not standard C), creates a useless zero-length array and is likely the reason you are seeing odd results:
char aNumberAsString[0];
Addressing your comment:
You may be confusing the size used to declare a buffer containing a single value with the index used to access that value once the variable is created, i.e.
char aNumberAsString[1] = {'A'}; // creates an array of 1 char ([1])
will create a variable that can contain a single char value.
char value = aNumberAsString[0]; // accesses the only value in array ([0])
EXAMPLE:
See an example of how to use the dtostrf function here.
Note the size of the variables this example is using:
static float f_val = 123.6794;
static char outstr[15];
to support the following call:
dtostrf(f_val, 7, 3, outstr);
Which produces the following output:
123.679
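For comparison, here is a hedged sketch of how the transmit function might look with an adequately sized buffer. The size of 20 is an assumption chosen to comfortably hold the formatted digits, sign, decimal point and the terminating '\0'; dtostrf comes from avr-libc's <stdlib.h>, and the signature of USART_Transmit_String (the question's own function) is assumed here:
#include <stdlib.h>                      /* dtostrf() in avr-libc */

void USART_Transmit_String(char *s);     /* from the question; signature assumed */

void USART_Transmit_Double(double doubleNumber) {
    /* 20 bytes: room for the formatted number plus the '\0' terminator */
    char aNumberAsString[20];
    dtostrf(doubleNumber, 3, 1, aNumberAsString);   /* min width 3, 1 decimal place */
    USART_Transmit_String(aNumberAsString);
}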
While freeing some pointers, I get an access violation.
In order to figure out what's going on, I've decided to free the pointers at an earlier stage in the code, even directly after the memory has been allocated, and it still crashes.
It means that something is seriously wrong in the way my structures are handled in memory.
I know that in a previous version of the code there was a keyword before the definition of some variables, but that keyword is lost (it was part of a #define clause I can no longer find).
Does anybody know what's wrong in this piece of code or what the mentioned keyword should be?
typedef unsigned long longword;
typedef struct part_tag { struct part_tag *next;
__int64 fileptr;
word needcount;
byte loadflag,lock;
byte partdat[8192];
} part;
static longword *partptrs;
<keyword> part *freepart;
<keyword> part *firstpart;
void alloc_parts (void) {
part *ps;
int i;
partptrs = (longword*)malloc (number_of_parts * sizeof(longword)); // number... = 50
ps = (part*)&freepart;
for (i=0; i<number_of_parts; i++) {
ps->next = (struct part_tag*)malloc(sizeof(part));
partptrs[i] = (longword)ps->next;
ps = ps->next;
ps->fileptr = 0; ps->loadflag = 0; ps->lock = 0; ps->needcount = 0; // fill in "ps" structure
};
ps->next = nil;
firstpart = nil;
for (i=0; i<number_of_parts; i++) {
ps = (part*)partptrs[i];
free(ps); // <-- here it already crashes at the first occurrence (i=0)
};
}
Thanks in advance
In the comments somebody asks why I'm freeing pointers directly after allocating them. This is not how the program was originally written, but in order to find out what's causing the access violation I've rewritten it in that style.
Originally:
alloc_parts();
<do the whole processing>
free_parts();
In order to analyse the access violation I've adapted the alloc_parts() function into the source code excerpt I've written there. The point is that even directly after allocating memory, the freeing is going wrong. How is that even possible?
In the meantime I've observed another weird phenomenon:
While allocating the memory, the values of ps seem to be "complete" address values. While trying to free the memory, the values of ps only contain the last digits of the memory addresses.
Example of complete address : 0x00000216eeed6150
Example of address in freeing loop : 0x00000000eeed6150 // terminating digits are equal,
// so at least something is right :-)
This problem was caused by the longword type: it seems that this type was too small to hold entire memory addresses. I've replaced this by another type (unsigned long long) but the problem still persists.
Finally, after a long time of misery, the problem is solved:
The program was originally meant to be a 32-bit application, which means that the original type unsigned long was sufficient to hold memory addresses.
However, the program is now compiled as a 64-bit application, so that type is no longer large enough to hold 64-bit memory addresses; another type has been used to solve this issue:
typedef intptr_t longword; // intptr_t, from <stdint.h>, is wide enough to hold a pointer
This solves the issue.
@Andrew Henle: sorry, I didn't realise that your comment contained the actual solution to this problem.
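For what it's worth, the width problem can also be sidestepped entirely by storing the allocations in an array of part * instead of casting them to an integer type. A minimal sketch, reusing the names from the question (part, number_of_parts, freepart, firstpart as declared there; partptrs re-declared as part ** instead of longword *):
#include <stdlib.h>

static part **partptrs;   /* pointers stay pointers: always the right width */

void alloc_parts(void) {
    int i;
    partptrs = malloc(number_of_parts * sizeof *partptrs);
    for (i = 0; i < number_of_parts; i++) {
        partptrs[i] = malloc(sizeof(part));
        partptrs[i]->fileptr = 0;
        partptrs[i]->loadflag = 0;
        partptrs[i]->lock = 0;
        partptrs[i]->needcount = 0;
    }
    /* chain the parts into the free list, as the original code does */
    for (i = 0; i < number_of_parts - 1; i++)
        partptrs[i]->next = partptrs[i + 1];
    partptrs[number_of_parts - 1]->next = NULL;
    freepart = partptrs[0];
    firstpart = NULL;
}

void free_parts(void) {
    int i;
    for (i = 0; i < number_of_parts; i++)
        free(partptrs[i]);   /* exactly the values malloc returned, so free() is safe */
    free(partptrs);
}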
I know there are several questions about this that give good (and working) solutions, but IMHO none of them clearly states the best way to achieve this.
So, suppose we have some 2D array:
int tab1[100][280];
We want to make a pointer that points to this 2D array.
To achieve this, we can do:
int (*pointer)[280]; // pointer creation
pointer = tab1; // assignment
pointer[5][12] = 517; // use
int myint = pointer[5][12]; // use
or, alternatively:
int (*pointer)[100][280]; // pointer creation
pointer = &tab1; // assignment
(*pointer)[5][12] = 517; // use
int myint = (*pointer)[5][12]; // use
OK, both seem to work well. Now I would like to know:
which is the best way, the 1st or the 2nd?
are both equal for the compiler? (speed, performance...)
does one of these solutions use more memory than the other?
which is more frequently used by developers?
//defines an array of 280 pointers (1120 or 2240 bytes)
int *pointer1 [280];
//defines a pointer (4 or 8 bytes depending on 32/64 bits platform)
int (*pointer2)[280]; //pointer to an array of 280 integers
int (*pointer3)[100][280]; //pointer to an 2D array of 100*280 integers
Using pointer2 or pointer3 produces the same binary, except for pointer arithmetic such as ++pointer2, as pointed out by WhozCraig.
I recommend using a typedef (it produces the same binary code as pointer3 above):
typedef int myType[100][280];
myType *pointer3;
Note: Since C++11, you can also use the keyword using instead of typedef:
using myType = int[100][280];
myType *pointer3;
in your example:
myType *pointer; // pointer creation
pointer = &tab1; // assignment
(*pointer)[5][12] = 517; // set (write)
int myint = (*pointer)[5][12]; // get (read)
Note: If the array tab1 is defined within a function body, it will be placed in the call stack memory. But the stack size is limited; using arrays bigger than the available stack space produces a stack overflow crash.
The full snippet is online-compilable at gcc.godbolt.org
int main()
{
//defines an array of 280 pointers (1120 or 2240 bytes)
int *pointer1 [280];
static_assert( sizeof(pointer1) == 2240, "" );
//defines a pointer (4 or 8 bytes depending on 32/64 bits platform)
int (*pointer2)[280]; //pointer to an array of 280 integers
int (*pointer3)[100][280]; //pointer to an 2D array of 100*280 integers
static_assert( sizeof(pointer2) == 8, "" );
static_assert( sizeof(pointer3) == 8, "" );
// Use 'typedef' (or 'using' if you use a modern C++ compiler)
typedef int myType[100][280];
//using myType = int[100][280];
int tab1[100][280];
myType *pointer; // pointer creation
pointer = &tab1; // assignment
(*pointer)[5][12] = 517; // set (write)
int myint = (*pointer)[5][12]; // get (read)
return myint;
}
Both your examples are equivalent. However, the first one is less obvious and more "hacky", while the second one clearly states your intention.
int (*pointer)[280];
pointer = tab1;
pointer points to a 1D array of 280 integers. In your assignment, you actually assign the address of the first row of tab1. This works since an array implicitly converts (decays) to a pointer to its first element.
When you are using pointer[5][12], C treats pointer as an array of arrays (pointer[5] is of type int[280]), so there is another implicit conversion here (at least semantically).
In your second example, you explicitly create a pointer to a 2D array:
int (*pointer)[100][280];
pointer = &tab1;
The semantics are clearer here: *pointer is a 2D array, so you need to access it using (*pointer)[i][j].
Both solutions use the same amount of memory (1 pointer) and will most likely run equally fast. Under the hood, both pointers will even point to the same memory location (the first element of the tab1 array), and it is possible that your compiler will even generate the same code.
The first solution is "more advanced", since one needs quite a deep understanding of how arrays and pointers work in C to understand what is going on. The second one is more explicit.
int *pointer[280]; // Creates an array of 280 pointers to int.
On a 32-bit OS, each pointer is 4 bytes, so 4 * 280 = 1120 bytes.
int (*pointer)[100][280]; // Creates only one pointer, which is used to point to an array of [100][280] ints.
Here, only 4 bytes.
Coming to your question: int (*pointer)[280]; and int (*pointer)[100][280]; are different types, even though both refer to the same 2D array of [100][280] ints.
If an int (*pointer)[280]; is incremented, it points to the next 1D array (the next row), whereas incrementing an int (*pointer)[100][280]; crosses the whole 2D array and points just past it. Accessing that memory may cause problems if it doesn't belong to your process.
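A small sketch of that difference in pointer arithmetic (the printed byte counts assume a 4-byte int):
#include <stdio.h>

int main(void) {
    static int tab1[100][280];

    int (*p2)[280]      = tab1;    /* points to the first row      */
    int (*p3)[100][280] = &tab1;   /* points to the whole 2D array */

    /* incrementing p2 moves to the next row: 280 * sizeof(int) = 1120 bytes */
    printf("%zu\n", (size_t)((char *)(p2 + 1) - (char *)p2));

    /* incrementing p3 moves past the entire array: 100 * 280 * sizeof(int) = 112000 bytes */
    printf("%zu\n", (size_t)((char *)(p3 + 1) - (char *)p3));

    return 0;
}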
OK, these are actually four different questions. I'll address them one by one:
are both equal for the compiler? (speed, performance...)
Yes. The pointer dereference and decay from type int (*)[100][280] to int (*)[280] is always a no-op for your CPU. I wouldn't put it past a bad compiler to generate bogus code anyway, but a good optimizing compiler should compile both examples to the exact same code.
does one of these solutions use more memory than the other?
As a corollary to my first answer, no.
which is more frequently used by developers?
Definitely the variant without the extra (*pointer) dereference. For C programmers it is second nature to assume that any pointer may actually be a pointer to the first element of an array.
which is the best way, the 1st or the 2nd?
That depends on what you optimize for:
Idiomatic code uses variant 1. The declaration is missing the outer dimension, but all uses are exactly as a C programmer expects them to be.
If you want to make it explicit that you are pointing to an array, you can use variant 2. However, many seasoned C programmers will think that there's a third dimension hidden behind the innermost *. Having no array dimension there will feel weird to most programmers.
I was writing some code using malloc and ran into an issue, so I wrote a test program that sums up the whole confusion, shown below:
# include <stdio.h>
# include <stdlib.h>
# include <error.h>
int main()
{
int *p = NULL;
void *t = NULL;
unsigned short *d = NULL;
t = malloc(2);
if(t == NULL) perror("\n ERROR:");
printf("\nSHORT:%d\n",sizeof(short));
d =t;
(*d) = 65536;
p = t;
*p = 65536;
printf("\nP:%p: D:%p:\n",p,d);
printf("\nVAL_P:%d ## VAL_D:%d\n",(*p),(*d));
return 0;
}
Output: abhi@ubuntu:~/Desktop/ad/A1/CC$ ./test
SHORT:2
P:0x9512008: D:0x9512008:
VAL_P:65536 ## VAL_D:0
I am allocating 2 bytes of memory using malloc. malloc returns a void * pointer, which is stored in the void * pointer t.
After that, two more pointers are declared: p of int type and d of short type. Then I assigned t to both of them (p = t and d = t), which means both d and p point to the same memory location on the heap.
On trying to save 65536 (2^16) to (*d), I get a warning that the large int value is truncated, which is as expected.
Then I saved 65536 (2^16) to (*p), which did not cause any warning.
On printing both (*p) and (*d), I got different values (though each is correct for its own pointer type).
My questions are:
Though I have allocated 2 bytes (i.e. 16 bits) of heap memory using malloc, how am I able to save 65536 in those two bytes (by using (*p), which is a pointer to int)?
I have a feeling that the cause of this is the automatic conversion of the void * to an int * pointer (in p = t). So is it that assigning t to p leads to accessing memory regions outside of what was allocated through malloc?
Even so, how can dereferencing the same memory region through (*p) and (*d) print two different answers? (Though this could also be explained if my guess about the cause of question 1 is right.)
Can somebody shed some light on this? It would be really appreciated, especially if someone can explain the reasons behind it.
Many thanks
Answering your second question first:
The explanation is that an int is generally 4 bytes while a short is only 2, and both pointers refer to the same starting address. On a little-endian machine such as yours, an int stores its least significant bytes first: 65536 is 0x00010000, so storing it through *p writes the bytes 00 00 01 00, and the two bytes the short occupies (the first two) are both zero.
Therefore, when the program prints *d, it interprets that location as a short and looks only at those first two bytes, which do not contain the significant part of the value written through *p. Note also that writing *p = 65536; overwrote the earlier *d = 65536; (which had already been truncated to 0), leaving those two low bytes at 0.
Regarding the first question: the program does not squeeze the 65536 for *p into 2 bytes. It simply writes outside the bounds of the memory you've allocated, which is likely to cause a bug at some point.
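As a small illustration of the byte layout, here is a sketch that assumes a little-endian machine (like the x86 box in the question) and uses a properly sized buffer so the demonstration itself stays in bounds:
#include <stdio.h>
#include <string.h>

int main(void) {
    int value = 65536;                    /* 0x00010000 */
    unsigned char bytes[sizeof value];
    unsigned short low;
    size_t i;

    memcpy(bytes, &value, sizeof value);

    /* on a little-endian machine this prints: 00 00 01 00 */
    for (i = 0; i < sizeof value; i++)
        printf("%02x ", bytes[i]);
    printf("\n");

    /* a short sees only the first two bytes, which are both zero */
    memcpy(&low, bytes, sizeof low);
    printf("%u\n", (unsigned)low);        /* prints 0 */
    return 0;
}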
In C there is no protection at all against writing out of bounds of an allocation. Just don't do it; anything can happen. Here it seems to work for you because, by some coincidence, the space behind the two bytes you allocated isn't being used for anything else.
1) The granularity of the OS memory manager is 4K. A small overwrite is unlikely to trigger an AV/segfault, but it may corrupt data in the adjacent location, leading to:
2) Undefined behaviour. This set of behaviours includes 'apparently correct operation' (for now!).
I thought that I couldn't retrieve the length of an allocated memory block like the simple .length function in Java. However, I now know that when malloc() allocates the block, it allocates extra bytes to hold an integer containing the size of the block. This integer is located at the beginning of the block; the address actually returned to the caller points to the location just past this length value. The problem is, I can't access that address to retrieve the block length.
#include <stdlib.h>
#include <stdio.h>
int main(void)
{
char *str;
str = (char*) malloc(sizeof(char)*1000);
int *length;
length = (int *)(str - 4); /* because on a 32-bit system, an int is 4 bytes long */
printf("Length of str:%d\n", *length);
free(str);
}
Edit:
I finally did it. The reason it kept giving 0 as the length on my system is that my Ubuntu is 64-bit. I changed str-4 to str-8, and it works now.
If I change the size to 2000, it produces 2017 as the length. However, when I change it to 3000, it gives 3009. I am using GCC.
You don't have to track it yourself!
size_t malloc_usable_size (void *ptr);
But it returns the real size of the allocated memory block!
Not the size you passed to malloc!
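A minimal usage sketch: malloc_usable_size() is a glibc extension declared in <malloc.h>, and the value it returns may be larger than what you requested:
#include <malloc.h>   /* malloc_usable_size() - glibc extension */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char *str = malloc(1000);
    if (str == NULL)
        return 1;

    /* prints at least 1000, usually slightly more due to rounding */
    printf("usable size: %zu\n", malloc_usable_size(str));

    free(str);
    return 0;
}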
What you're doing is definitely wrong. While it's almost certain that the word just before the allocated block is related to the size, even so it probably contains some additional flags or information in the unused bits. Depending on the implementation, this data might even be in the high bits, which would cause you to read the entirely wrong length. Also it's possible that small allocations (e.g. 1 to 32 bytes) are packed into special small-block pages with no headers, in which case the word before the allocated block is just part of another block and has no meaning whatsoever in relation to the size of the block you're examining.
Just stop this misguided and dangerous pursuit. If you need to know the size of a block obtained by malloc, you're doing something wrong.
I would suggest you create your own malloc wrapper by compiling and linking a file which defines my_malloc(), and then overriding the default as follows:
// my_malloc.c
#include <stdlib.h>

typedef struct {
    size_t size;
} Metadata;

void *my_malloc(size_t sz) {
    size_t size_with_header = sz + sizeof(Metadata);
    void *pointer = malloc(size_with_header);   // the real malloc
    if (pointer == NULL)
        return NULL;
    // store the requested size in the header
    Metadata *header = (Metadata *)pointer;
    header->size = sz;
    // return the address just past the header,
    // since this is what the caller uses
    return (char *)pointer + sizeof(Metadata);
}

// my_malloc.h -- include this from client code only, so that the
// real malloc() call above is not rewritten into a recursive call:
#define malloc(sz) my_malloc(sz)
void *my_malloc(size_t sz);
Then you can always retrieve the size allocated by stepping back sizeof(Metadata) bytes from the pointer, casting that to Metadata * and reading header->size:
Metadata *header = (Metadata *)((char *)ptr - sizeof(Metadata));
printf("Size allocated is: %zu\n", header->size); // size_t prints portably with %zu
You're not supposed to do that. If you want to know how much memory you've allocated, you need to keep track of it yourself.
Looking outside the block of memory returned to you (before the pointer returned by malloc, or after that pointer + the number of bytes you asked for) will result in undefined behavior. It might work in practice for a given malloc implementation, but it's not a good idea to depend on that.
This is not Standard C. However, it is supposed to work on Windows operating systems, and similar functions may be available on other operating systems, such as malloc_usable_size on Linux or malloc_size on macOS.
size_t _msize( void *memblock );
_msize() returns the size of a memory block allocated in the heap.
See this link:
http://msdn.microsoft.com/en-us/library/z2s077bc.aspx
This is implementation dependent.
Every block you're allocating is preceded by a block descriptor. The problem is, it depends on the system architecture.
Try to find the block descriptor size for your own system; take a look at your system's malloc man page.