Would running this code occupy about 4_000_000 bytes of memory?
my uint32 @array;
@array[1_000_000] = 1;
If you assign to element 1_000_000, the array needs elements 0 through 1_000_000, i.e. 1_000_001 elements; at 4 bytes each that is 4_000_004 bytes of memory. So strictly speaking, the answer is "No" :-)
But less pedantically: native arrays are guaranteed to be laid out consecutively in memory, so such an assignment would at least allocate a single block of 4 x 1_000_001 = 4_000_004 bytes of memory. As Christoph stated in his comment, if you want to make sure that is all it will ever allocate, you need to make it a shaped native array. You will get upper-bound checks as a bonus as well.
Related
Let's say I know the max size of my new array would be 8, but in most cases the logical size would be lower, given the program's logic.
In this scenario, which is more memory- and time-efficient?
A. Begin with an array of physical size 1, write to it, and each time it fills up extend it by one element (dynamically allocating a new array and freeing the old one), then continue writing.
B. Begin with an array of the given max size (in our case 8), write every needed slot, and at the end shrink it to the logical size (dynamically allocating a new array and freeing the old one).
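For concreteness, here is roughly what I mean by the two options (just a sketch; the element type, MAX, and function names are made up):

#include <stdlib.h>

#define MAX 8   /* the known upper bound from my example */

/* Option A: start with physical size 1 and grow by one slot per extra write
 * (assumes 1 <= count <= MAX). */
int *grow_one_by_one(const int *values, size_t count)
{
    int *a = malloc(sizeof *a);
    if (a == NULL)
        return NULL;
    for (size_t i = 0; i < count; i++) {
        if (i > 0) {   /* extend by exactly one element, freeing the old block */
            int *bigger = realloc(a, (i + 1) * sizeof *a);
            if (bigger == NULL) { free(a); return NULL; }
            a = bigger;
        }
        a[i] = values[i];
    }
    return a;
}

/* Option B: start at the max size, write, then shrink to the logical size
 * (assumes 1 <= count <= MAX). */
int *start_big_then_shrink(const int *values, size_t count)
{
    int *a = malloc(MAX * sizeof *a);
    if (a == NULL)
        return NULL;
    for (size_t i = 0; i < count; i++)
        a[i] = values[i];
    int *smaller = realloc(a, count * sizeof *a);   /* shrink, freeing the excess */
    return smaller != NULL ? smaller : a;
}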
In the given code, there is a method that
for (i = pos - 1; i < size - 1; i++)
{
    a[i] = a[i+1];
}
but suppose the size is 4 and I want to delete the value at the 4th position of the array.
In this scenario I am not able to understand how this code will work.
It seems you do not quite understand how arrays work in C.
In C, an array is a contiguous sequence of items of the same type, and therefore of the same size.
The system allocates space for the whole array up front.
For example, when you write int a[4], a is of type "array of int" and each entry is of type int.
Typically, an int needs 4 bytes of space, and the [4] means space for four ints will be allocated. That is, 4*4 = 16 bytes of memory.
For example, here is the space allocated for a:
01010101 first byte (start of a[0])
01111110 second byte
00101001 third byte
00000111 fourth byte
01100000 fifth byte (start of a[1])
.....
.....
.....
01010101 16th byte
When it has just been allocated, the byte values are indeterminate. You can initialize the array or assign values to it.
Then, when you use a, you can write a[1] to access the int represented by the 4 bytes ranging from the 5th byte to the 8th byte.
What will happen if you write a[10]?
That space was not allocated by your program! Accessing it is an error. If you are lucky, you might get an error window showing something like "Access violation at memory 0x6463a80" (the number is just an example); if you are unlucky, you read an unknown value without any noticeable warning, which is worse!
From your question, I know that you want to say,
"Oh gosh, I originally have int a[4], but later in the code, I want to shrink it to something like int a[3]!"
The solution is: just ignore a[3] and treat it as if it does not exist. As long as you never use a[3], you are fine.
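For instance, here is a rough, self-contained sketch of a remove function along the lines of the loop in your question (my own example, not your exact code):

#include <stdio.h>

/* Remove the element at position pos (1-based, as in your question) by
 * shifting the following elements left; return the new logical size. */
int remove_at(int a[], int size, int pos)
{
    for (int i = pos - 1; i < size - 1; i++)
        a[i] = a[i + 1];
    return size - 1;   /* the last slot is simply not used any more */
}

int main(void)
{
    int a[4] = {10, 20, 30, 40};
    int size = 4;

    /* Deleting the 4th position: i starts at 3 and 3 < 3 is false, so the
     * loop body never runs; only the logical size goes down. */
    size = remove_at(a, size, 4);

    for (int i = 0; i < size; i++)
        printf("%d ", a[i]);   /* prints: 10 20 30 */
    printf("\n");
    return 0;
}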
If you want a full set of operations like "add", "insert", "remove", etc., a plain array is not suitable. Consider C++: its standard library has the vector type, which is not a raw array and works differently. Search for it or ask me if you want to know more.
You may not completely understand everything I am saying here, but please feel free to ask. I'm almost in the same boat as you and am happy to help.
I'm making a program in C.
I have many arrays, and the size of each array is not small
(more than 10,000 elements in each array).
There are sets of arrays that are frequently accessed and computed together.
For example,
a_1[index] = constant * a_2[index];
b_1[index] = constant * b_2[index];
a_1 is computed with a_2, and b_1 is computed with b_2.
Suppose that I have arrays a_1 through z_1 and a_2 through z_2. In my case,
is there a significant difference in execution speed between the following two ways of allocating the memory?
allocating a_1 through z_1 followed by a_2 through z_2, or
allocating a_1, a_2 followed by b_1, b_2, then c_1, c_2, and so on?
1.
MALLOC(a_1);
MALLOC(b_1);
...
MALLOC(z_1);
MALLOC(a_2);
...
MALLOC(z_2);
2.
MALLOC(a_1);
MALLOC(a_2);
MALLOC(b_1);
MALLOC(b_2);
...
MALLOC(z_1);
MALLOC(z_2);
I think allocating memory the second way will be faster because of the cache hit rate.
Because arrays allocated at similar times will end up at nearby addresses, those arrays will be loaded into the cache (or RAM) at the same time, so the computer does not need to load the arrays several times to compute one line of code.
For example, to compute
a_1[index] = constant * a_2[index];
, a_1 and a_2 would be loaded at the same time, not separately.
(Is that correct?)
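If it matters, this is the kind of rough test harness I imagine using to check it (my own sketch; the sizes, repeat count, and timing method are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N      10000   /* elements per array */
#define PAIRS  26      /* a..z */
#define REPEAT 1000

int main(void)
{
    double *dst[PAIRS], *src[PAIRS];

    /* Allocation order 2: each pair back to back (a_1, a_2, b_1, b_2, ...).
     * Splitting this into two separate loops gives order 1. */
    for (int p = 0; p < PAIRS; p++) {
        dst[p] = malloc(N * sizeof(double));
        src[p] = calloc(N, sizeof(double));
        if (dst[p] == NULL || src[p] == NULL)
            return 1;
    }

    clock_t start = clock();
    for (int r = 0; r < REPEAT; r++)
        for (int p = 0; p < PAIRS; p++)
            for (int i = 0; i < N; i++)
                dst[p][i] = 2.0 * src[p][i];   /* a_1[index] = constant * a_2[index]; */
    printf("elapsed: %f s\n", (double)(clock() - start) / CLOCKS_PER_SEC);

    for (int p = 0; p < PAIRS; p++) {
        free(dst[p]);
        free(src[p]);
    }
    return 0;
}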
However, for me, allocating the first way is much easier in terms of maintenance.
I have AA_a_1 through AA_z_1, AA_a_2 through AA_z_2, BB_a_1 through BB_z_1, CC_a_1 through CC_z_1, and other arrays,
and I can use a macro like the following to allocate the memory efficiently:
#define MALLOC_GROUP(GROUP1,GROUP2) \
    MALLOC(GROUP1##_a_##GROUP2); \
    MALLOC(GROUP1##_b_##GROUP2); \
    ... \
    MALLOC(GROUP1##_z_##GROUP2)
void allocate() {
    MALLOC_GROUP(AA,1);
    MALLOC_GROUP(AA,2);
    MALLOC_GROUP(BB,2);
}
To sum up: does allocating sets of arrays that are computed together at similar times affect the execution speed of the program?
Thank you.
What is the difference between calloc(10,4) and calloc(1,40)?
I see this behavior:
Thing** things = (Thing**)calloc(1, 10 * sizeof(Thing*));
// things[0] != 0
Thing** things = (Thing**)calloc(10, sizeof(Thing*));
// things[0] == 0
I would like to understand why. Edit: I must be losing my mind; both seem to result in zero now... To at least make the question interesting: why doesn't calloc just take a single argument, like malloc?
In practice it's the same. But there's one important feature this gives you.
Say that you're receiving some data from the network and the protocol has a field that specifies how many elements an array will contain that will be sent to you. You do something like:
uint32_t n = read_number_of_array_elements_from_network(conn);
struct element *el = malloc(n * sizeof(*el));
if (el == NULL)
    return -1;
read_array_elements_from_network(conn, el, n);
This looks harmless, doesn't it? Well, not so fast. The other side of the connection was evil and actually sent you a very large number as the number of elements, so that the multiplication wrapped. Say that sizeof(*el) is 4, size_t is 32 bits, and n is read as 2^30 + 1. The multiplication (2^30 + 1) * 4 wraps around and the result becomes 4, and that is what you allocate, while you've told your function to read 2^30 + 1 elements. The read_array_elements_from_network function will quickly overflow your allocated array.
Any decent implementation of calloc will have a check for overflow in that multiplication and will protect against this kind of attack (this error is very common).
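For comparison, here is a sketch of the same snippet using calloc (reusing the hypothetical network functions from above): because the element count and element size are passed separately, a decent calloc can detect the would-be overflow and simply return NULL.

uint32_t n = read_number_of_array_elements_from_network(conn);
struct element *el = calloc(n, sizeof(*el));   /* returns NULL if n * sizeof(*el) would overflow */
if (el == NULL)
    return -1;   /* covers both out-of-memory and the overflow case */
read_array_elements_from_network(conn, el, n);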
It is the same. The allocation multiplies the number of elements by the size of one element to get the total size to allocate.
It does not matter as it will be one block.
It's virtually the same, as the allocation block is contiguous. It allocates number_of_elements * size_of_element, so 10 elements of size 4 or 1 element of size 40 both end up allocating 40 bytes.
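A tiny sketch of my own that shows both calls hand back 40 zero-initialized bytes:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    unsigned char *p = calloc(10, 4);   /* 10 elements of 4 bytes */
    unsigned char *q = calloc(1, 40);   /* 1 element of 40 bytes  */
    if (p == NULL || q == NULL)
        return 1;

    /* Both blocks are 40 bytes of zeros, so they compare equal. */
    printf("%d\n", memcmp(p, q, 40) == 0);   /* prints 1 */

    free(p);
    free(q);
    return 0;
}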
calloc(10,4) will allocate 10 elements where the size of each is 4 bytes, whereas calloc(1,40) will allocate one element with a size of 40 bytes.
Ref : http://www.tutorialspoint.com/c_standard_library/c_function_calloc.htm
By size, I mean the size of each allocated element.
How do they map an index directly to a value without having to iterate though the indices?
If it's quite complex where can I read more?
An array is just a contiguous chunk of memory, starting at some known address. So if the start address is p, and you want to access the i-th element, then you just need to calculate:
p + i * size
where size is the size (in bytes) of each element.
Crudely speaking, accessing an arbitrary memory address takes constant time.
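A minimal C illustration of that calculation (my own sketch, not part of the original answer):

#include <stdio.h>

int main(void)
{
    int a[5] = {10, 20, 30, 40, 50};
    char *p = (char *)a;   /* start address of the array */
    size_t i = 3;

    /* p + i * size gives the address of the i-th element directly. */
    int *computed = (int *)(p + i * sizeof(int));
    printf("a[3] = %d, *computed = %d\n", a[i], *computed);   /* both print 40 */
    return 0;
}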
Essentially, computer memory can be described as a series of addressed slots. To make an array, you set aside a contiguous block of those. So, if you need fifty slots in your array, you set aside 50 slots from memory. In this example, let's say you set aside the slots from 1019 through 1068 for an array called A. Slot 0 in A is slot 1019 in memory. Slot 1 in A is slot 1020 in memory. Slot 2 in A is slot 1021 in memory, and so forth. So, in general, to get the nth slot in an array we would just do 1019+n. All we need to do is remember what the starting slot is and add to it appropriately.
If we want to make sure that we don't write to memory beyond the end of our array, we may also want to store the length of A and check our n against it. It's also the case that not all values we wish to keep track of are the same size, so we may have an array where each item takes up more than one slot. In that case, if s is the size of each item, then we need to set aside s times the number of items in the array, and when we fetch the nth item, we need to add s times n to the start rather than just n. But in practice, this is pretty easy to handle. The only restriction is that each item in the array be the same size.
Wikipedia explains this very well:
http://en.wikipedia.org/wiki/Array_data_structure
Basically, a memory base is chosen. Then the index, multiplied by the element size, is added to the base. Like so:
if base = 2000 and the size of each element is 5 bytes, then:
array[5] is at 2000 + 5*5.
array[i] is at 2000 + 5*i.
Two-dimensional arrays extend this; with a row-major layout you also need to know the row length, like so:
base = 2000, size-of-each = 5 bytes, elements-per-row = 10
array[i][j] is at 2000 + 5*(i*10 + j)
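A small C sketch of the row-major calculation (my own example; the 3x4 shape is arbitrary):

#include <stdio.h>

int main(void)
{
    int m[3][4] = {
        { 0,  1,  2,  3},
        {10, 11, 12, 13},
        {20, 21, 22, 23},
    };
    size_t i = 2, j = 1;

    /* Row-major layout: element [i][j] sits (i * columns + j) elements past
     * the start of the block. */
    int *flat = &m[0][0];
    printf("m[2][1] = %d, computed = %d\n", m[i][j], flat[i * 4 + j]);   /* both print 21 */
    return 0;
}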
And if the elements are of different sizes, more calculation is necessary:
for each index
    slot-in-memory += size-of-element-at-index
So, in this case, it is almost impossible to map directly without iteration.
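For instance, a sketch of that per-index walk in C (my own illustration):

#include <stdio.h>

/* Find the byte offset of element n by summing the sizes of the elements
 * before it (this is the iteration the answer above refers to). */
size_t offset_of(const size_t sizes[], size_t n)
{
    size_t offset = 0;
    for (size_t k = 0; k < n; k++)
        offset += sizes[k];   /* slot-in-memory += size-of-element-at-index */
    return offset;
}

int main(void)
{
    size_t sizes[] = {3, 8, 2, 5};          /* elements of different sizes */
    printf("%zu\n", offset_of(sizes, 3));   /* prints 13 (3 + 8 + 2) */
    return 0;
}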