Possible Duplicate:
Whether variable name in any programming language takes memory space
I was just reading about memory allocation, and I can't help wondering about this:
Do both
int x = 4;
and
int this_is_really_really_long_name_for_an_integer_variable = 4;
occupy the same amount of memory (the total memory occupied by the variable, not just sizeof(int))?
I understand that this question is related to programming languages and compiler construction, but I haven't gotten to study that yet :(
In general they occupy the same amount of space, i.e. sizeof(int). However, one could argue that when building an object file with additional symbols for debugging, the sizes differ. The amount of data which the variable stores does not change, but the debugging symbols occupy more space in the case of the longer variable name. Consider the following example.
$ cat short.c && gcc -c short.c && wc -c short.o
int x = 0;
927 short.o
$ cat long.c && gcc -c long.c && wc -c long.o
int this_is_really_really_long_name_for_an_integer_variable = 0;
981 long.o
The difference in size is exactly the difference in the lengths of the variables' names.
From a run-time efficiency and memory usage point of view it does not matter, though.
In C? Yes, these variables will occupy the same amount of space.
The variable name is used only by the compiler, at compile time.
But there are some languages that keep variable names around at run time.
The length of the variable name has no bearing on the amount of storage reserved for it; in most cases, the variable name isn't preserved in the generated machine code.
32 bits (on a typical platform where int is 32 bits), since the compiler will not store your name. It will handle the variable as an address only.
The int object itself occupies only those 32 bits.
Variable names are only used for address binding at compile time. Variable names are stored in the symbol table during lexical processing, which is one phase of compilation.
Once address binding is done there is no further use for the variable name, so the length of the variable name does not matter. The object still takes only 32 bits.
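To make this concrete, here is a small sketch (my own illustration, not from the answers above): both objects below have exactly the same size; only the identifier recorded in the object file's symbol table differs.

#include <stdio.h>

int x = 4;
int this_is_really_really_long_name_for_an_integer_variable = 4;

int main(void)
{
    /* Both are plain ints, so they occupy the same amount of storage. */
    printf("%zu %zu\n", sizeof x,
           sizeof this_is_really_really_long_name_for_an_integer_variable);
    return 0;
}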
I would like to know if I can choose the storage location of arrays in C. There are a couple of questions already on here with some helpful info, but I'm looking for some extra detail.
I have an embedded system with a soft-core ARM cortex implemented on an FPGA.
Upon start-up, code is loaded from memory and executed by the processor. My code is in assembly and contains some C functions. One particular function is a UART interrupt handler, which I have included below:
void UART_ISR()
{
    int count, n = 1000, t1 = 0, t2 = 1, display = 0, y, z;
    int x[1000];                    // storage array for first 1000 terms of Fibonacci series

    x[0] = t1;                      // store the first two terms (arrays are zero-indexed)
    x[1] = t2;
    printf("\n\nFibonacci Series: \n\n %d \n %d \n ", t1, t2);
    count = 2;                      /* count=2 because first two terms are already displayed. */
    while (count < n)
    {
        display = t1 + t2;
        t1 = t2;
        t2 = display;
        x[count] = t2;
        ++count;
        printf(" %d \n", display);
    }
    printf("\n\n Finished. Sequence written to memory. Reading sequence from memory.....:\n\n");

    for (z = 0; z < 10000; z++) {}  // Delay

    for (y = 0; y < 1000; y++) {    // Read values back from the array
        printf("%d \n", x[y]);
    }
}
So basically the first 1000 values of the Fibonacci series are printed and stored in the array x, and then the values from the array are printed to the screen again after a short delay.
Please correct me if I'm wrong, but the values in the array x are stored on the stack as they are computed in the loop and retrieved from the stack when the array is read back from memory.
Here is the memory map of the system:
0x0000_0000 to 0x0000_0be0 is the code
0x0000_0be0 to 0x0010_0be0 is 1MB heap
0x0010_0be0 to 0x0014_0be0 is 256KB stack
0x0014_0be0 to 0x03F_FFFF is off-chip RAM
Is there a function in C that allows me to store the array x in the off-chip RAM for later retrieval?
Please let me know if you need any more info
Thanks very much for helping
--W
No, not "in C" as in "specified by the language".
The C language doesn't care about where things are stored; it specifies nothing about the existence of RAM at particular addresses.
But, actual implementations in the form of compilers, assemblers and linkers, often care a great deal about this.
With gcc for instance, you can use the section variable attribute to force a variable into a particular section.
You can then control the linker to map that section to a particular memory area.
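For example, a minimal sketch (the section name .offchip_ram is an assumption; it would have to be defined in your linker script and mapped to the external RAM region of your memory map). Note that the section attribute applies to variables with static storage duration, so the array would have to become static or file-scope rather than a local in the ISR:

/* placed into the .offchip_ram output section by the linker */
static int x[1000] __attribute__((section(".offchip_ram")));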
UPDATE:
The other way to do this is manually, by not letting the compiler in on the secret and doing it yourself.
Something like:
int *external_array = (int *)0x00140be0;   /* start of off-chip RAM in your memory map */
memcpy(external_array, x, sizeof x);       /* memcpy() is declared in <string.h> */
will copy the required number of bytes to the external memory. You can then read it back by swapping the two first arguments in the memcpy() call.
Note that this is way more manual, low-level and fragile, compared to letting the compiler/linker dynamic duo Just Make it Work for you.
Also, it seems very unlikely that you want to do all of that work from an ISR.
With the following code, I am trying to fill an array with numbers and then sort them. But if I set a large array size (MAX), the program stops at the last 'randomly' generated number and never gets to the sorting at all. Could anyone please give me a hand with this?
#include <stdio.h>

#define MAX 2000000

int a[MAX];
int rand_seed = 10;

/* from K&R - returns random number between 0 and 62000. */
int rand();
int bubble_sort();

int main()
{
    int i;

    /* fill array */
    for (i = 0; i < MAX; i++)
    {
        a[i] = rand();
        printf(">%d= %d\n", i, a[i]);
    }

    bubble_sort();

    /* print sorted array */
    printf("--------------------\n");
    for (i = 0; i < MAX; i++)
        printf("%d\n", a[i]);

    return 0;
}

int rand()
{
    rand_seed = rand_seed * 1103515245 + 12345;
    return (unsigned int)(rand_seed / 65536) % 62000;
}

int bubble_sort(void)
{
    int t, x, y;

    /* bubble sort the array */
    for (x = 0; x < MAX - 1; x++)
        for (y = 0; y < MAX - x - 1; y++)
            if (a[y] > a[y + 1])
            {
                t = a[y];
                a[y] = a[y + 1];
                a[y + 1] = t;
            }

    return 0;
}
The problem is that you are storing the array in the global section. C doesn't give any guarantee about the maximum size of the global section it can support; this is a function of the OS, the architecture, and the compiler.
So instead of creating a global array, create a global pointer and allocate a large chunk using malloc. Now the memory comes from the heap, which is much bigger and can grow at runtime.
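A minimal sketch of that suggestion (keeping the rest of the program as it was):

#include <stdio.h>
#include <stdlib.h>

#define MAX 2000000

int *a;   /* global pointer instead of a global array */

int main()
{
    a = malloc(MAX * sizeof *a);   /* the 2,000,000 ints now live on the heap */
    if (a == NULL)
    {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    /* ... fill, sort and print a[0..MAX-1] exactly as before ... */
    free(a);
    return 0;
}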
Your array will land in the BSS section for static variables. It will not be part of the image; the program loader will allocate the required space and fill it with zeros before your program starts 'real' execution. You can even control this process if you are using an embedded compiler, and fill your static data with anything you like. This array may occupy 2 GB of your RAM and yet your exe file may be a few kilobytes; I've just managed to use an over-2 GB array this way and my exe was 34 KB. I can believe a compiler may warn you when you approach maybe 2^31 - 1 elements (if your int is 32-bit), but static arrays with 2 million elements are not a problem nowadays (unless it is an embedded system, but I bet it is not).
The problem might be that your bubble sort has 2 nested loops (as all bubble sorts do), so trying to sort this array of 2 million elements causes the inner loop body to execute about 2×10^12 times (an arithmetic series):
inner loop:
1: 1999999 times
2: 1999998 times
...
2000000: 1 time
So the comparison in the inner loop executes
2000000 * (1999999 + 1) / 2 = (4 / 2) * 1000000^2 = 2×10^12 times
(correct me if I am wrong above)
Your program simply stays in the sort routine for a very long time and you are not even aware of that. What you see is just the last random number printed and the program not responding. Even on my really fast PC, sorting a 200K-element array this way took around 1 minute.
It is not related to your OS, compiler, heap etc. Your program is just stuck, because your loop executes 2×10^12 times when you have 2 million elements.
To verify my words, print "sort started" before sorting and "sort finished" after it. I bet the last thing you'll see is "sort started". In addition, you may print the current x value before the inner loop in bubble_sort - you'll see that it is working.
Dynamic array:
int *Array;
Array = malloc(sizeof(int) * Size);   /* malloc() is declared in <stdlib.h> */
The original C standard (ANSI 1989/ISO 1990) required that a compiler successfully translate at least one program containing at least one example of a set of environmental limits. One of those limits was being able to create an object of at least 32,767 bytes.
This minimum limit was raised in the 1999 update to the C standard to be at least 65,535 bytes.
No C implementation is required to provide for objects greater than that size, which means that they don't need to allow for an array of ints greater than
(int)(65535 / sizeof(int)).
In very practical terms, on modern computers, it is not possible to say in advance how large an array can be created. It can depend on things like the amount of physical memory installed in the computer, the amount of virtual memory provided by the OS, and the number of other tasks, drivers, and programs already running and how much memory they are using. So your program may be able to use more or less memory today than it could yesterday or than it will be able to tomorrow.
Many platforms place their strictest limits on automatic objects, that is those defined inside of a function without the use of the 'static' keyword. On some platforms you can create larger arrays if they are static or by dynamic allocation.
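A small hedged sketch contrasting the three kinds of storage mentioned above (the sizes are only illustrative; the actual limits are platform-dependent):

#include <stdlib.h>

static int big_static[1000000];   /* static storage: lives in BSS, not on the stack */

int main(void)
{
    /* int big_auto[1000000]; */  /* automatic storage: may overflow the stack */

    int *big_heap = malloc(1000000 * sizeof *big_heap);   /* dynamic storage: heap */
    if (big_heap == NULL)
        return 1;

    free(big_heap);
    return 0;
}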
Possible Duplicate:
Getting a stack overflow exception when declaring a large array
When I try to declare a large two-dimensional character array, it works perfectly fine. But when I add a simple printf call, it gives me a segmentation fault. Any ideas as to why this is happening?
#include <stdio.h>

int main(void)
{
    printf("!");
    char f[10000][10000];
}
It works fine without the printf call, or even if the printf prints nothing (i.e. ""). If I make it print anything at all, it gives the error.
Any help?
This is probably because you are exceeding the stack. Your definition of f takes 100 MB of stack space (10000 x 10000 bytes), and probably as soon as you actually use the stack, the system finds that there isn't that much room on the stack and segfaults. You'd probably find the same when calling any other function.
Allocations of that size should be done through malloc().
char *f = malloc(10000 * 10000);   /* malloc() is declared in <stdlib.h> */
/* access two-dimensionally (equivalent of f[5][8]) */
char z = f[5 * 10000 + 8];
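As an aside (my own sketch, not part of the original answer), a pointer to an array keeps the two-dimensional indexing while still allocating from the heap:

#include <stdlib.h>

int main(void)
{
    /* one heap block holding 10000 rows of 10000 chars each */
    char (*f)[10000] = malloc(10000 * sizeof *f);
    if (f == NULL)
        return 1;

    f[5][8] = 'x';   /* normal two-dimensional indexing still works */

    free(f);
    return 0;
}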
You are exceeding an unspecified limit of your C implementation, in particular the total size of objects with automatic storage duration (aka "local variables"). The size of f[] is 100,000,000 bytes. Many C implementations keep automatic variables on the stack, which usually is a limited resource. On my Unix system (FreeBSD) I can inspect this limit:
$ ulimit -a
-t: cpu time (seconds) unlimited
-f: file size (blocks) unlimited
-d: data seg size (kbytes) 33554432
-s: stack size (kbytes) 524288
[...]
and if higher powers allow, increase it with ulimit -s number.
I am using gcc version 4.7.2 on Ubuntu 12.10 x86_64.
First of all, these are the sizes of the data types on my machine:
sizeof(char) = 1
sizeof(short) = 2
sizeof(int) = 4
sizeof(long) = 8
sizeof(long long) = 8
sizeof(float) = 4
sizeof(double) = 8
sizeof(long double) = 16
Now please have a look at this code snippet:
#include <stdio.h>

int main(void)
{
    char c = 'a';
    printf("&c = %p\n", &c);
    return 0;
}
If I am not wrong, we can't predict anything about the address of c. But each time, this program gives some hex address ending in f. So the next available location will be some hex value ending in 0.
I observed this pattern for other data types too. For an int the address was some hex value ending in c. For a double it was some hex value ending in 8, and so on.
So I have 2 questions here.
1) Who is governing this kind of memory allocation? Is it gcc or the C standard?
2) Whoever it is, why is it so? Why is the variable stored in such a way that the next available memory location starts at a hex value ending in 0? Any specific benefit?
Now please have a look at this code snippet:
#include <stdio.h>

int main(void)
{
    double a = 10.2;
    int b = 20;
    char c = 30;
    short d = 40;

    printf("&a = %p\n", &a);
    printf("&b = %p\n", &b);
    printf("&c = %p\n", &c);
    printf("&d = %p\n", &d);
    return 0;
}
Now what I observed here was completely new to me. I thought the variables would be stored in the same order they are declared. But no! That's not the case. Here is the sample output of one random run:
&a = 0x7fff8686a698
&b = 0x7fff8686a694
&c = 0x7fff8686a691
&d = 0x7fff8686a692
It seems that the variables get sorted in increasing order of their sizes and are then stored in that sorted order, while still following observation 1, i.e. the last (largest) variable is stored in such a way that the next available memory location is a hex value ending in 0.
Here are my questions:
3) Who is behind this? Is it gcc or the C standard?
4) Why waste time sorting the variables first and then allocating the memory, instead of allocating the memory directly on a 'first come, first served' basis? Any specific benefit to this kind of sorting before allocating?
Now please have a look at this code snippet:
#include <stdio.h>

int main(void)
{
    char array1[] = {1, 2};
    int array2[] = {1, 2, 3};

    printf("&array1[0] = %p\n", &array1[0]);
    printf("&array1[1] = %p\n\n", &array1[1]);

    printf("&array2[0] = %p\n", &array2[0]);
    printf("&array2[1] = %p\n", &array2[1]);
    printf("&array2[2] = %p\n", &array2[2]);
    return 0;
}
Now this also surprised me. What I observed is that an array is always stored at some hex value ending in '0' if it has 2 or more elements, and if it has fewer than 2 elements it gets a memory location following observation 1.
So here are my questions:
5) Who is behind storing an array at some hex value ending in 0? Is it gcc or the C standard?
6) And why waste the memory? I mean, array2 could have been stored immediately after array1 (and array2 would then have a memory location ending in 2). But instead, array2 is stored at the next hex value ending in 0, thereby leaving 14 memory locations unused in between. Any specific benefits?
The address at which the stack and the heap start is given to the process by the operating system. Everything else is decided by the compiler, using offsets that are known at compile time. Some of these things may follow an existing convention followed in your target architecture and some of these do not.
The C standard does not mandate anything regarding the order of the local variables inside the stack frame (as pointed out in a comment, it doesn't even mandate the use of a stack at all). The standard only bothers to define order when it comes to structs and, even then, it does not define specific offsets, only the fact that these offsets must be in increasing order. Usually, compilers try to align the variables in such a way that access to them takes as few CPU instructions as possible - and the standard permits that, without mandating it.
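To see the alignment requirements the compiler works with, here is a small sketch using C11's _Alignof (the printed values are typical for x86_64, not guaranteed by the standard):

#include <stdio.h>

int main(void)
{
    printf("_Alignof(char)   = %zu\n", _Alignof(char));
    printf("_Alignof(short)  = %zu\n", _Alignof(short));
    printf("_Alignof(int)    = %zu\n", _Alignof(int));
    printf("_Alignof(double) = %zu\n", _Alignof(double));
    return 0;
}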
Part of the reason is mandated by the application binary interface (ABI) specification for your system and processor.
See the x86 calling conventions and the SVR4 x86-64 ABI supplement (I'm giving the URL of a recent copy; the latest original is surprisingly hard to find on the Web).
Within a given call frame, the compiler could place variables in arbitrary stack slots. It may try (when optimizing) to reorganize the stack at will, e.g. by decreasing alignment constraints. You should not worry about that.
A compiler tries to put local variables at stack locations with suitable alignment. See the alignof extension of GCC. Where exactly the compiler puts these variables is not important; see my answer here. (If it is important to your code, you really should pack the variables in a single common local struct, since each compiler, version, and set of optimization flags could do different things; so don't depend on the precise behavior of your particular compiler.)
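A hedged sketch of that last suggestion: grouping the locals into one struct makes their relative layout predictable, because the standard guarantees that struct members have increasing offsets in declaration order (padding aside):

#include <stdio.h>

int main(void)
{
    struct {
        double a;
        int    b;
        short  d;
        char   c;
    } locals = { 10.2, 20, 40, 30 };

    /* Members appear in declaration order within the struct. */
    printf("&locals.a = %p\n", (void *)&locals.a);
    printf("&locals.b = %p\n", (void *)&locals.b);
    printf("&locals.d = %p\n", (void *)&locals.d);
    printf("&locals.c = %p\n", (void *)&locals.c);
    return 0;
}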
While I was doing the "Learn C The Hard Way" examples, I thought to myself:
I set int a = 10;, but where does that value 10 actually go? Can I access it manually from the outside while my program is running?
Here's a little C code snippet for demonstration purposes:
int main(int argc, char const *argv[])
{
    int a = 10;
    int b = 5;
    int c = a + b;
    return 0;
}
I opened up the GNU Project Debugger (GDB) and entered:
break main
run
next 2
From what I understood, 0x7fff5bffb04 is the memory address of int c. I then used hexdump -C /dev/mem to dump the entire memory to the terminal.
Now the question is: where do I look for the variable c in this massive hex dump? My hope is that, given the address 0x7fff5bffb04, I can find its value, which is 15. Also, a bonus question: what does each column in the hexdump -C output represent? (I know the last column is the ASCII representation.)
I then used hexdump -C /dev/mem to dump the entire memory to the terminal.
Your hexdump dumped physical memory addresses. The address 0x7fff5bffb04 is a virtual address of the variable in the process you are debugging. It is mapped to some physical address, but you will not be able to find which without examining kernel mapping tables (as Mat already told you in a comment).
To examine virtual address space, use /proc/<pid>/mem (as Barmar already told you in a comment).
But this entire exercise is pointless, because you already can examine the virtual memory in GDB, and you are not going to see anything when you look at virtual memory that GDB didn't already show you much more conveniently [1].
[1] Except you could see GDB-inserted breakpoints, but you are not expected to understand that :-)
Firstly, there is no reason why the values would even exist in RAM. More than likely the machine code for this program simply keeps the values in CPU registers. You would have to use more bytes (try at least 512) and set them to recognizable values, which you could then search for in the memory dump.
You are far better off looking at the assembly code produced by the C compiler.
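A hedged sketch of one way to force the values into memory for such an experiment: qualifying the variables volatile makes the compiler perform real loads and stores, which in practice keeps them on the stack instead of only in registers:

int main(void)
{
    volatile int a = 10;
    volatile int b = 5;
    volatile int c = a + b;   /* c = 15 now occupies a real stack slot */
    (void)c;                  /* suppress unused-variable warnings */
    return 0;
}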