How can I get the length of an array at a specific index? For example, if I have an array like this
TYPE
  T_PERSON = PACKED RECORD
    Example  : STRING[40];
    Example2 : STRING[10];
    Example3 : STRING[5];
  END;

example: ARRAY [1..30] OF T_PERSON;
and I want to know the length of example[28], or more generally example[x].
Can I get it with LENGTH() or is there another solution?
If it is an array and you want the number of elements:
high(example)-low(example)+1;
AFAIK Length might also work in Free Pascal and Delphi 4+, but don't pin me down on that.
If you need the size in bytes, Abelisto is correct and SizeOf() is what you want; it also works on parts of the record (e.g. SizeOf(example[x].Example)).
The sums might not add up, though, if the record is not PACKED, due to alignment padding bytes.
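To see what such alignment padding does, here is a minimal C sketch (the struct and its field names are mine, purely for illustration); the same effect applies to non-packed Pascal records:

#include <stdio.h>

/* Hypothetical struct with mixed field sizes. */
struct person {
    char name[5]; /* 5 bytes */
    int  age;     /* 4 bytes, usually 4-byte aligned */
};

int main(void)
{
    /* The fields sum to 9 bytes, but the compiler typically inserts
       3 padding bytes after name so that age is 4-byte aligned. */
    printf("sum of fields: %zu\n", sizeof(char[5]) + sizeof(int));  /* 9 */
    printf("sizeof(struct person): %zu\n", sizeof(struct person)); /* typically 12 */
    return 0;
}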
Yes, you can also get it with the Length function:
function Length(A: DynArrayType): Integer;
Every cell/element of your 'example' array takes SizeOf(T_PERSON) bytes.
As it stands, SizeOf(T_PERSON) is 58 bytes.
At first glance you might expect 55 bytes (40 + 10 + 5), but each STRING[N] here is a ShortString, which stores a hidden length byte in front of its characters. So the fields occupy 41 + 11 + 6 = 58 bytes in a PACKED record.
In the given code, there is a method that does the following:
for (i = pos - 1; i < size - 1; i++)
{
    a[i] = a[i + 1];
}
but suppose the size is 4 and I want to delete the value at the 4th position of the array.
In this scenario I am not able to understand how this code will work.
It seems you do not quite understand how arrays work in C.
In C, an array is a contiguous sequence of items of the same type and therefore the same size.
The system allocates space for the array up front.
For example, when you write int a[4], a is of type "array of int" and each entry is of type int.
An int usually needs 4 bytes of space, and the [4] means space for four ints is allocated. That is, 4*4 = 16 bytes are allocated in memory.
For example, here are the bytes allocated for a:
01010101 first byte (start of a[0])
01111110 second byte
00101001 third byte
00000111 fourth byte
01100000 fifth byte (start of a[1])
.....
.....
.....
01010101 16th byte
Right after allocation, the byte values are unknown; you can initialize them or assign values to them.
Then, when you use a, you can write a[1] to access the int represented by the 4 bytes ranging from the 5th byte to the 8th byte.
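A small sketch of this addressing (the variable names are mine, just for illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int a[4] = {10, 20, 30, 40};

    /* a[1] and *(a + 1) name the same element. */
    printf("%d %d\n", a[1], *(a + 1)); /* 20 20 */

    /* That element starts sizeof(int) bytes after the start of a:
       copy its raw bytes back into an int to prove it. */
    int second;
    memcpy(&second, (unsigned char *)a + sizeof(int), sizeof(int));
    printf("%d\n", second); /* 20 */
    return 0;
}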
What will happen if you say a[10]?
That space was never allocated by your program! Accessing it is an error. If you are lucky, you get an error window saying something like "Access violation at memory 0x6463a80" (the number is just an example); if you are unlucky, you silently read an unknown value without any noticeable warning, and that is worse!
From your question, I gather that you want to say:
"Oh gosh, I originally had int a[4], but later in the code I want to shrink it to something like int a[3]!"
The solution is: just ignore a[3] and treat it as if it did not exist. Never use a[3] again and that's okay!
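Here is a minimal sketch of that idea (the helper name delete_at is mine, not from your code). Note what happens in your exact case, deleting the 4th value when size is 4: the loop condition i < size - 1 is false immediately, so nothing is copied; the element is "deleted" purely by shrinking the logical size.

#include <stdio.h>

/* Remove the element at 1-based position pos by shifting the tail
   left by one slot; returns the new logical size. */
int delete_at(int a[], int size, int pos)
{
    int i;
    for (i = pos - 1; i < size - 1; i++)
        a[i] = a[i + 1];
    return size - 1;
}

int main(void)
{
    int a[4] = {1, 2, 3, 4};
    int size = 4;
    int i;

    size = delete_at(a, size, 4); /* delete the 4th value */
    for (i = 0; i < size; i++)
        printf("%d ", a[i]);      /* prints: 1 2 3 */
    printf("\n");
    return 0;
}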
If you want a full set of operations like "add", "insert", "remove", etc., a plain array is not suitable. Consider C++: its standard library has the vector type, which is not a raw array and works differently. Search for it or ask me if you want to know more.
You may not completely understand everything I am saying here, but please feel free to ask. I'm almost in the same boat as you and I am willing to help.
Would running this code occupy about 4_000_000 bytes of memory?
my uint32 @array;
@array[1_000_000] = 1;
If you assign element 1_000_000 and each element is 4 bytes, that would be 4_000_004 bytes of memory. So strictly speaking, the answer is "No" :-)
But less pedantically: native arrays are guaranteed to be laid out consecutively in memory, so such an assignment would at least allocate a single block of 4 x 1_000_001 = 4_000_004 bytes of memory. As Christoph stated in his comment, if you want to make sure it is all it will ever allocate, you need to make it a shaped native array. You will get upper bounds checks as a bonus as well.
I'm new to Go and am trying to understand the language in order to write efficient code. In the following code, the sizes of the two arrays differ by 140%; can someone explain this?
package main

import (
    "fmt"
    "unsafe"
)

func main() {
    ind1 := make([]bool, 10)
    var ind2 [10]bool
    fmt.Println(unsafe.Sizeof(ind1)) // 24
    fmt.Println(len(ind1))           // 10
    fmt.Println(unsafe.Sizeof(ind2)) // 10
    fmt.Println(len(ind2))           // 10
}
The reported sizes stay the same even when the capacity is set explicitly:
ind1 := make([]bool, 10, 10)
Can someone explain this? Is there any additional overhead in using make? If yes, why is it recommended to use make over default initialization?
Arrays and slices in Go are different things.
Your ind1 is a slice, and ind2 is an array. The length of an array is part of the type, so for example [2]bool and [3]bool are 2 different array types.
A slice in Go is a descriptor for a contiguous segment of an underlying array and provides access to a numbered sequence of elements from that array. This slice header is a struct-like data structure represented by the type reflect.SliceHeader:
type SliceHeader struct {
    Data uintptr
    Len  int
    Cap  int
}
It contains a data pointer (to the first element of the represented segment), a length and a capacity.
The unsafe.Sizeof() function returns the size in bytes of a hypothetical variable as if it held the passed value. It does not include any memory that value may reference.
So if you pass a slice value (ind1), it will tell you the size of the above-mentioned slice header. Note that the sizes of the SliceHeader fields are architecture dependent: int, for example, may be 4 bytes on one platform and 8 bytes on another. The size 24 applies to 64-bit architectures.
The Go Playground runs on a 32-bit architecture. Let's see this example:
fmt.Println(unsafe.Sizeof(make([]bool, 10)))
fmt.Println(unsafe.Sizeof(make([]bool, 20)))
fmt.Println(unsafe.Sizeof([10]bool{}))
fmt.Println(unsafe.Sizeof([20]bool{}))
Output (try it on the Go Playground):
12
12
10
20
As you can see, no matter the length of the slice you pass to unsafe.Sizeof(), it always returns 12 on the Go Playground (and 24 on 64-bit architectures).
On the other hand, an array value includes all its elements, and as such, its size depends on its length. Size of [10]bool is 10, and size of [20]bool is 20.
See related questions+answers to learn more about slices, arrays and the difference and relation between them:
How do I find the size of the array in go
Why have arrays in Go?
Why use arrays instead of slices?
Must read blog posts:
Go Slices: usage and internals
Arrays, slices (and strings): The mechanics of 'append'
ind1 is a slice (the type is []bool).
ind2 is an array (the type is [10]bool).
They are not of the same type.
The result of unsafe.Sizeof(ind1) has nothing to do with the arguments passed to make; it is always the size of the slice header.
I am struggling to find the answer to this:
#define BUFLEN 8
unsigned short randombuffer[BUFLEN];
memset(randombuffer, 200, BUFLEN);
printf("%u", randombuffer[0]);
I am getting the answer as 51400 although I was expecting 200.
After debugging I found out that randombuffer is filled with 0xC8 in its first 8 bytes. Hence randombuffer[0] is 0xC8C8, which is 51400. I was, however, expecting 0x00C8 at each index of the array.
What am I doing wrong?
What you are doing wrong is not reading the specification of memset: it sets each byte to the specified value. Your buffer most likely has 8 entries of two bytes each. Since you passed 8 as the length, both bytes of the first four entries are changed and the rest is untouched. That's how memset works.
memset fills bytes, but it looks like you want to fill words. I don't know of a built-in memset-like function for this, so you might have to use repeated memset/memcpy calls instead. Note that if you feel comfortable writing inline assembler you could probably do this fairly efficiently yourself in machine code, although a tight loop using pointers in C is probably close to as fast.
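Such a loop could look like this (a minimal sketch based on the buffer from the question):

#include <stdio.h>

#define BUFLEN 8

int main(void)
{
    unsigned short randombuffer[BUFLEN];
    size_t i;

    /* Fill element-wise rather than byte-wise: each unsigned short
       receives the whole value 200, not 200 in every byte. */
    for (i = 0; i < BUFLEN; i++)
        randombuffer[i] = 200;

    printf("%hu\n", randombuffer[0]); /* prints 200 */
    return 0;
}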
The size calculation in the memset call is not correct. It should be:
memset(randombuffer, 200, BUFLEN * sizeof(*randombuffer));
because the individual elements of randombuffer are two bytes each in your case, and the third parameter of memset is the number of bytes to set, so you have to pass the number of elements times their size in bytes.
The values of the elements will still be 0xC8C8, because memset sets a value per byte, not per element.
To print them out correctly, use the correct specifier for short:
printf("%hu", randombuffer[0]);
or
printf("%hx", randombuffer[0]);
I need to calculate the first-order entropy (of a Markov source, as described on Wikipedia: http://en.wikipedia.org/wiki/Entropy_(information_theory)) of a signal that consists of 16-bit words.
This means I must count how frequently each combination a->b (symbol b appearing after symbol a) occurs in the data stream.
When I did this for just the 4 least significant or the 4 most significant bits, I used a two-dimensional array, where the first dimension was the first symbol and the second dimension was the second symbol.
My algorithm looked like this:
Read current symbol
Array[prev_symbol][curr_symbol]++
prev_symbol=curr_symbol
Move forward 1 symbol
Then Array[a][b] tells how many times symbol b occurred right after symbol a in the stream.
Now, I understand that a 2-D array in C is laid out contiguously and indexed via pointer arithmetic: to get element [3][4] of array[10][10], the pointer to array[0][0] is advanced by (3*10 + 4) times the size of the stored type. I understand that the problem must be that 2^32 elements of type unsigned long simply take too much memory.
But still, is there a way to deal with it?
Or maybe there is another way to accomplish this?
A two-dimensional array of 4-byte integers with 65,536 by 65,536 elements occupies about 16 GB of RAM. Does your machine have that much memory?
Anyhow, out of the more than 4 billion array elements, only very few will have a count different from zero. So it's probably better to go with some sort of sparse storage.
One solution would be to use a dictionary where the tuple (a, b) is the key and the count of occurrences is the value.
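C has no built-in dictionary, so here is a minimal sketch of that idea using a small open-addressing hash table keyed on the packed pair (a << 16) | b (all names and sizes are mine, for illustration only; a real implementation would grow the table when it fills up):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE (1u << 24) /* 16M slots = 128 MB, instead of a 16 GB matrix */

typedef struct {
    uint32_t key;   /* (a << 16) | b */
    uint32_t count; /* count == 0 marks an empty slot */
} slot_t;

static slot_t *table;

static void count_pair(uint16_t a, uint16_t b)
{
    uint32_t key = ((uint32_t)a << 16) | b;
    uint32_t i = (key * 2654435761u) & (TABLE_SIZE - 1); /* multiplicative hash */
    for (;;) {
        if (table[i].count == 0) {      /* empty slot: claim it */
            table[i].key = key;
            table[i].count = 1;
            return;
        }
        if (table[i].key == key) {      /* existing entry: bump it */
            table[i].count++;
            return;
        }
        i = (i + 1) & (TABLE_SIZE - 1); /* linear probing */
    }
}

int main(void)
{
    uint16_t stream[] = {1, 2, 3, 1, 2, 3, 1, 2};
    size_t n = sizeof stream / sizeof stream[0];
    size_t i;
    uint32_t s;

    table = calloc(TABLE_SIZE, sizeof *table);
    if (table == NULL) { perror("calloc"); return 1; }

    for (i = 1; i < n; i++)
        count_pair(stream[i - 1], stream[i]);

    for (s = 0; s < TABLE_SIZE; s++)
        if (table[s].count)
            printf("(%u -> %u): %u\n",
                   table[s].key >> 16, table[s].key & 0xFFFF, table[s].count);
    free(table);
    return 0;
}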
Perhaps you could do multiple passes over the data. The entropy contribution from pairs beginning with symbol X is essentially independent of pairs beginning with any other symbol (aside from the total number of them, of course), so you can calculate the entropy for all such pairs and then throw away the distribution data. At the end, combine the 2^16 partial entropy values to get the total. You don't necessarily have to do 2^16 passes over the data; you can be "interested" in as many leading symbols in a single pass as you have space for.
Alternatively, if your data is smaller than 2^32 samples, then you know for sure that you won't see all possible pairs, so you don't actually need to allocate a count for each one. If the sample is small enough, or the entropy is low enough, then some kind of sparse array would use less memory than your full 16GB matrix.
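A sketch of the multi-pass approach from the answer above (the band size and names are mine, just for illustration): per pass, count successors for one band of leading symbols in a reusable 64 MB buffer, accumulate that band's entropy contribution, then move on:

#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NSYM 65536 /* 16-bit symbols */
#define BAND 256   /* leading symbols handled per pass */

/* Conditional entropy H(next | prev) in bits per symbol:
   H = (1/N) * sum over pairs of count(a,b) * log2(total_a / count(a,b)) */
double markov_entropy(const uint16_t *s, size_t n)
{
    /* BAND x NSYM counts: 256 * 65536 * 4 bytes = 64 MB per pass */
    uint32_t *cnt;
    double h = 0.0;
    unsigned lo, a, b;
    size_t i;

    if (n < 2)
        return 0.0;
    cnt = malloc((size_t)BAND * NSYM * sizeof *cnt);
    if (cnt == NULL) { perror("malloc"); exit(1); }

    for (lo = 0; lo < NSYM; lo += BAND) {
        memset(cnt, 0, (size_t)BAND * NSYM * sizeof *cnt);
        for (i = 1; i < n; i++) /* one pass over the data */
            if (s[i - 1] >= lo && s[i - 1] < lo + BAND)
                cnt[(size_t)(s[i - 1] - lo) * NSYM + s[i]]++;
        for (a = 0; a < BAND; a++) {
            uint64_t total = 0;
            for (b = 0; b < NSYM; b++)
                total += cnt[(size_t)a * NSYM + b];
            if (total == 0)
                continue;
            for (b = 0; b < NSYM; b++) {
                uint32_t c = cnt[(size_t)a * NSYM + b];
                if (c)
                    h += c * log2((double)total / c);
            }
        }
    }
    free(cnt);
    return h / (double)(n - 1);
}

int main(void)
{
    uint16_t s[] = {1, 2, 3, 1, 2, 3, 1, 2};
    /* Every successor is deterministic here, so the result is 0. */
    printf("H = %f bits/symbol\n", markov_entropy(s, sizeof s / sizeof s[0]));
    return 0;
}

(Compile with -lm for log2.)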
Did a quick test on Ubuntu 10.10 x64
gt@thinkpad-T61p:~/test$ uname -a
Linux thinkpad-T61p 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux
gt@thinkpad-T61p:~/test$ cat mtest.c
#include <stdio.h>
#include <stdlib.h>

short *big_array;

int main(void)
{
    /* 4G shorts = 8 GB of virtual address space */
    if ((big_array = malloc(4UL * 1024 * 1024 * 1024 * sizeof(short))) == NULL) {
        perror("malloc");
        return 1;
    }
    /* Touch a few elements spread across the whole range.
       (Fresh pages from the OS read as zero, so ++ yields 1.) */
    big_array[0]++;
    big_array[100]++;
    big_array[1UL * 1024 * 1024 * 1024]++;
    big_array[2UL * 1024 * 1024 * 1024]++;
    big_array[3UL * 1024 * 1024 * 1024]++;
    printf("array[100] = %d\narray[3G] = %d\n",
           big_array[100], big_array[3UL * 1024 * 1024 * 1024]);
    return 0;
}
gt@thinkpad-T61p:~/test$ gcc -Wall mtest.c -o mtest
gt@thinkpad-T61p:~/test$ ./mtest
array[100] = 1
array[3G] = 1
gt@thinkpad-T61p:~/test$
It looks like the virtual memory system on Linux is up to the job, as long as you have enough memory and/or swap.
Have fun!