How do you know the exact address of a variable? - c

So I'm looking through my C programming textbook and I see this code:
#include <stdio.h>

int j, k;
int *ptr;

int main(void)
{
    j = 1;
    k = 2;
    ptr = &k;

    printf("\n");
    printf("j has the value %d and is stored at %p\n", j, (void *)&j);
    printf("k has the value %d and is stored at %p\n", k, (void *)&k);
    printf("ptr has the value %p and is stored at %p\n", (void *)ptr, (void *)&ptr);
    printf("The value of the integer pointed to by ptr is %d\n", *ptr);
    return 0;
}
I ran it and the output was:
j has the value 1 and is stored at 0x4030e0
k has the value 2 and is stored at 0x403100
ptr has the value 0x403100 and is stored at 0x4030f0
The value of the integer pointed to by ptr is 2
My question is: if I had not run this through a compiler, how would you know the addresses of those variables just by looking at this code? I'm just not sure how to get the actual address of a variable. Thanks!

Here's my understanding of it:
The absolute addresses of things in memory in C are unspecified; they are not standardised by the language. Because of this, you can't know the locations of things in memory by looking at just the code. (However, if you use the same compiler, code, compiler options, runtime and operating system, the addresses may be consistent.)
When you're developing applications, this is not behaviour you should rely on. You may, however, rely on the difference between the locations of two things in some contexts. For example, you can subtract pointers to two elements of the same array to find how many elements apart they are.
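Here's a minimal sketch of that idea (the array name and indices are just made up for illustration):
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    int arr[10];
    int *p = &arr[2];
    int *q = &arr[7];

    /* Subtracting pointers into the same array gives the element count between
       them, no matter where the array actually ends up in memory. */
    ptrdiff_t distance = q - p;
    printf("q is %td elements past p\n", distance);
    return 0;
}
The difference (5 here) is well-defined by the standard, while the absolute values of the addresses are not.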
By the way, if you are considering using the memory locations of variables to solve a particular problem, you may find it helpful to post a separate question asking how to do so without relying on this behaviour.

There is no other way to "know the exact address" of a variable in Standard C than to print it with "%p". The actual address is determined by many factors not under control of the programmer writing code. It's a matter of OS, the linker, the compiler, options used and probably others.
That said, in the embedded systems world there are ways to express "this variable must reside at this address", for example when registers of external devices are mapped into the address space of a running program. This usually happens in what is called a linker file or map file, or by assigning an integral value to a pointer (with a cast). All of these methods are non-standard.
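As a rough illustration of the cast approach (the address 0x40021000 and the register name are invented for this example; a real address would come from the device datasheet or the linker file):
#include <stdint.h>

/* Hypothetical device register address -- in real code this comes from the datasheet. */
#define DEVICE_STATUS_REG (*(volatile uint32_t *)0x40021000u)

void wait_for_device(void)
{
    /* volatile forces an actual read on every iteration, since the hardware
       can change the value behind the compiler's back. */
    while ((DEVICE_STATUS_REG & 0x1u) == 0u) {
        /* spin until the ready bit is set */
    }
}
Again, this only makes sense on targets where that address is actually mapped to something.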
For everyday garden-variety programs though, part of the point of writing in C is that you need not, and should not, care where your variables are stored.

You can't.
Different compilers can put the variables in different places. On some machines the address is not a simple integer anyway.

The compiler only knows things like "the third integer global variable" and "the four bytes allocated 36 bytes down from the stack pointer." It refers to global vars, pointers to subroutines (functions), subroutine arguments and local vars only in relative terms. (Never mind the extra stuff for polymorphic objects in C++, yikes!) These relative references are saved in the object file (.o or .obj) as special codes and offset values.
The Linker can fill in some details. It may modify some of these sketchy location references when joining several object files. Global variable locations will share a space (the Data Section) when globals from multiple compilation units are merged; the linker decides what order they all go in, but still describing them as relative to the start of the entire set of global vars. The result is an executable file with the final opcodes, but addresses still being sketchy and based on relative offsets.
It's not until the executable is loaded that the Loader replaces all the relative addresses with actual addresses. This is possible now, because the loader (or some part of the operating system it depends on) decides where in the whole virtual address space of the process to store the program's opcodes (Text Section), global variables (BSS, Data Sections) and call stack, and other things. The loader can do the math, and write the actual address into every spot in the executable, typically as part of "load immediate" opcodes and all opcodes involving memory access.
Google "relocation table" for more. See http://www.iecc.com/linker/linker07.html (somewhat old) for a more detailed explanation for particular platforms.
In real life, it's all complicated by the fact that virtual addresses are mapped to physical addresses by a virtual memory system, using segments or some other mechanism to keep each process in a separate address space.
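You can watch these regions land in different places with a small test program (the exact numbers are decided at load time, so your output will differ from anyone else's):
#include <stdio.h>

int initialized_global = 42;   /* Data section */
int uninitialized_global;      /* BSS section  */

int main(void)
{
    int local_var = 7;         /* call stack */

    printf("initialized_global   is at %p\n", (void *)&initialized_global);
    printf("uninitialized_global is at %p\n", (void *)&uninitialized_global);
    printf("local_var            is at %p\n", (void *)&local_var);
    return 0;
}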

I would like to further build upon the answers already provided by pointing out that some toolchains, such as Visual Studio's, support a feature called Address Space Layout Randomization (ASLR), which makes programs begin at a random memory address as a security measure against memory-corruption exploits. Given the addresses that you have in your output, I'd say that you compiled without it (programs without it start at address 0x400000, I think). My source for this information is an answer to this question.
That said, the compiler is what determines the memory addresses at which local variables will be stored. The addresses will most likely change from compiler to compiler, and probably also with each version of the source code.

Every process has its own logical address space starting from zero. The addresses your program can access are all relative to that space. The absolute address of any memory location is decided only after the process is loaded into main memory; modern operating systems do this with dynamic relocation. So every time a process is loaded it may land at a different location, depending on what memory is available. Letting user processes know the exact physical address of their data therefore does not make much sense. What your code is printing is a logical address, not the exact or physical address.

Building on the answers above, do not forget that each process runs in its own virtual address space (process isolation). This ensures that when your program corrupts some memory, other running processes are not affected.
Process Isolation:
http://en.wikipedia.org/wiki/Process_isolation
Inter-Process Communication
http://en.wikipedia.org/wiki/Inter-process_communication

Related

Will memory addresses be the same if I run a program in a VM from two different computers?

Fairly new to C, and I learned that addresses depend on a few things like the operating system and the CPU. I have a lab for one of my C courses that asks us: if we run a program and print out the address of each variable, will they have the same addresses and values as another student's (exact same program)? They are local variables, stored on the stack. Normally I would say no, but all of us are required to ssh to our University's lab and our programs are being run on the same machines with the same specs. This is where I'm confused; I'm pretty sure the values will be the same, however, I don't know what exactly determines these addresses. Here is a piece of code from the program:
int g2(int a, int b)
{
    int c = g1(a + 3, b - 11);
    printf("g2: %d %d %d \n", a, b, c);
    printf("a's address is %p b's address is %p C's address is %p\n", &a, &b, &c);
    return c - b;
}
For me, a's address is 0x7ffe9bce4a0c. Also, I'm not just looking for a homework answer; I'm asking here because none of my teammates have sent me their addresses, which we were allowed to do. I have researched it but can't find an answer that matches this sort of situation. Any help is greatly appreciated, thank you!
"Will memory addresses be the same if I run a program in a VM from two different computers?"
No; they probably won't even be the same when running repeatedly in the same environment on the same machine. There is no guarantee that a variable will get the same address.
A modern-day OS assigns the memory arbitrarily (within certain sections of course).
And this has a good reason: to protect against the exploitation of memory vulnerabilities that an attacker could use to harm the program or even the OS.
This technique is called Address Space Layout Randomization. You can read more about it here.
The variables may happen to have the same address over several executions, but there is no guarantee that this will hold on the next run. In fact, if the OS supports ASLR, it is much more likely that the addresses will differ.
The virtual machine should have no influence on that behaviour. You could check the documentation of your particular virtual machine's guest OS (whether it supports ASLR), but it follows the same principles.
Short answer: no.
The operating system loads the program at a different position every time.
The address that you see is not the actual address in memory; there is an abstraction layer supplied by the operating system. You can read about virtual memory addresses if you like. You will probably learn about them in a course on operating systems.
Whether you get the same address or varying addresses depends on the operating system.
Not too many years ago, if a program printed the address of one of the local variables in its function, that address would be the same every time the program was run, as long as the function was called in the same point in program execution with the same program input and other circumstances. (Which functions are called, including recursive calls, and how much stack space they use could be affected by program input and other factors.) This was true because, when the program was loaded and initialized, its stack was always started at the same memory address.
This behavior was exploited by malicious people—if there were bugs in the program, they might be exploited, and knowing which addresses were used in the program helps some exploits. So common operating systems have changed it. Now, when a program is started, the locations of its stack and other parts of its memory layout are adjusted randomly. This is called Address Space Layout Randomization (ASLR).
So, in common modern operating systems, you will get varying addresses from run to run when printing the address of a local variable. In specialized operating systems, such as for embedded devices, you may get the same address every time.
The title of your question asks about “a VM,” presumably for virtual machine, but this is not mentioned in the body of your question. To the extent that a virtual machine implements a machine properly, it should produce identical behavior. So whether a program is running in a virtual machine or not should be irrelevant to this question.
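If you want to see this for yourself, a quick experiment is enough: compile the snippet below once and run the same binary several times. On a system with ASLR the printed address will usually change between runs; on many embedded or older systems it will stay the same.
#include <stdio.h>

int main(void)
{
    int local = 0;
    /* With ASLR the stack base moves on every execution,
       so this address typically differs from run to run. */
    printf("local is at %p\n", (void *)&local);
    return 0;
}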

char *c="1234". Address stored in c is always the same

This was a question asked by an interviewer:
#include <stdio.h>

int main()
{
    char *c = "123456";
    printf("%d\n", c);
    return 0;
}
This piece of code always prints a fixed number (e.g. 13451392), no matter how many times you execute it. Why?
Your code contains undefined behavior: printing a pointer needs to be done with the %p format specifier, and only after converting it to void*:
printf("%p\n", (void*)c);
This would produce a system-dependent number, which may or may not be the same on different platforms.
The reason that it is fixed on your platform is probably that the operating system always loads your executable into the same spot of virtual memory (which may be mapped to different areas of physical memory, but your program would never know). String literal, which is part of the executable, would end up in the same spot as well, so the printout would be the same all the time.
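Putting that together, a corrected version of the interview snippet would be:
#include <stdio.h>

int main(void)
{
    char *c = "123456";
    /* %p expects a void *, so the cast is needed for strictly correct code. */
    printf("%p\n", (void *)c);
    return 0;
}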
To answer your question, the character string "123456" is a static constant in memory, and when the .exe is loaded, it always goes into the same memory location.
What c is (or rather what it contains) is the memory address of that character string which, as I said, is always at the same location. If you print the address as a decimal number, you see the address, in decimal.
Of course, as @dasblinkenlight said, you should print it as a pointer, because different machines/languages have different conventions about the size of pointers versus the size of ints.
Most executable file formats have a field telling the OS loader at which virtual address to load the executable. For example, the PE format used by Windows has an ImageBase field for this, which is usually set to 0x00400000 for applications.
When the loader first loads the executable, it tries to place it at that address. If that address range is free, which it usually is, the image is loaded there; if it is already in use, the image is loaded at a different address chosen by the system.
The case here is that the offset of your "123456" within the data section is the same, and the OS loads the image at the same base address, so you always get the same virtual address: base + offset.
But this is not always the case. For one, as noted above, the base address may already be in use: a lot of Windows DLLs compiled with MSVC set their base address to 0x10000000, so only one of them (or none) is actually loaded at that address.
Another case is Address Space Layout Randomization (ASLR), a security feature. If it is supported and enabled by the system (MSVC exposes it through the linker option /DYNAMICBASE), the system will ignore the specified image base and give you a different, random address of its own.
Two things to conclude:
You should not depend on this behavior; the system can load your program at any address, and then you will get a different address.
Use %p for printing addresses. On some systems, for example, int is 4 bytes and pointers are 8 bytes, so with %d part of your address will be chopped off.

How to store a variable at a specific memory location?

As I am relatively new to C, I have to do the following for one of my projects:
I must declare some global variables which have to be stored at the same memory address every time the program runs.
I did some reading and I found that if I declare a variable "static" it will be stored at the same memory location.
But my question is: can I tell the program where to store that variable or not?
For example: int a to be stored at 0xff520000. Can this be done or not? I have searched here but did not find any relevant example. If there is some old post regarding this, please be so kind as to share the link.
Thank you all in advance.
Laurentiu
Update: I am using a 32-bit microcontroller.
In your IDE there will be a memory map available through some linker file. It will contain all addresses in the program. Read the MCU manual to see at which addresses there is valid memory for your purpose, then reserve some of that memory for your variable. You have to read the documentation of your specific development platform.
Next, please note that it doesn't make much sense to map variables at specific addresses unless they are either hardware registers or non-volatile variables residing in flash or EEPROM.
If the contents of such a memory location will change during execution, because it is a register, or because your program contains a bootloader/NVM programming algorithm changing NVM memory cells, then the variables must be declared as volatile. Otherwise the compiler will break your code completely upon optimization.
The particular compiler most likely has a non-standard way to allocate variables at specific addresses, such as a #pragma or sometimes the weird, non-standard @ operator. The only sensible way you can allocate a variable at a fixed location in standard C is this:
#define MY_REGISTER (*(volatile uint8_t*)0x12345678u)
where 0x12345678 is the address at which that 1-byte register is located. Once you have a macro declaration like this, you can use it as if it were a variable:
void func (void)
{
    MY_REGISTER = 1;        // write
    int var = MY_REGISTER;  // read
}
Most often you want these kinds of variables to reside in the global namespace, hence the macro. But if you for some reason want the scope of the variable to be reduced, then skip the macro and access the address manually inside the code:
void func (void)
{
    *(volatile uint8_t*)0x12345678u = 1;        // write
    int var = *(volatile uint8_t*)0x12345678u;  // read
}
You can do this kind of thing with linker scripts, which is quite common in embedded programming.
On a Linux system you might never get the same virtual address due to address space randomization (a security feature to avoid exploits that would rely on knowing the exact location of a variable like you describe).
If it's just a repeatable pointer you want, you may be able to map a specific address with mmap, but that's not guaranteed.
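Here's a minimal sketch of that, assuming a Linux/POSIX system; the hint address 0x10000000 is arbitrary, and without MAP_FIXED the kernel treats it only as a hint and may place the mapping elsewhere:
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    /* Request one page near a hint address; the kernel may or may not honour it. */
    void *p = mmap((void *)0x10000000, 4096,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("mapping ended up at %p\n", p);
    return 0;
}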
Like was mentioned in other answers - you can't.
But, you can have a workaround. If it's ok for the globals to be initialized in the main(), you can do something of this kind:
#include <stdint.h>

uintptr_t addr = 0xff520000u;   /* the target address does not fit in a plain int */

int main()
{
    *((int *)addr) = 42;
    /* ... */
    return 0;
}
Note, however, that this is very dependent on your system and if running in protected environment, you'll most likely get a runtime crash. If you're in embedded/non-protected environment, this can work.
No, you cannot explicitly tell it where to store a variable in memory, mostly because on modern systems the system handles many aspects of memory that are out of your control. Address Space Layout Randomization is one thing that comes to mind that would make this very hard.
It depends on your compiler. If you use the XC8 compiler, you can simply write:
int x @ 0x12;
This line places x at memory location 0x12.
Not at the C level. If you work with assembly language, you can directly control the memory layout. But the C compiler does this for you. You can't really mess with it.
Even with assembly, this only controls the relative layout. Virtual memory may place this at any (in)convenient physical location.
You can do this with some compiler extensions, but it's probably not what you want to do. The operating system handles your memory and will put things where it wants. How do you even know that the memory address you want will be mapped in your program? Ignore everything in this paragraph if you're on an embedded platform; in that case you should read the manual for that platform/compiler, or at least mention it here so that people can give a more specific answer.
Also, static variables don't necessarily have the same address when the program runs. Many operating systems use position independent executables and randomize the address space on every execution.
You can declare a pointer to a specific memory address, and use the contents of that pointer as a variable I suppose:
int *myIntPointer = (int *)0xff520000;

What's inside the stack?

If I run a program, just like
#include <stdio.h>
int main(int argc, char *argv[], char *env[]) {
    printf("My references are at %p, %p, %p\n", &argc, &argv, &env);
}
We can see that those regions are actually in the stack.
But what else is there? If we run a loop over the nearby values on Linux 3.5.3 (for example, until a segfault), we see some weird numbers, and roughly two regions separated by a bunch of zeros, maybe to help prevent accidentally overwriting the environment variables.
Anyway, in the first region there must be a lot of numbers, such as all the frames for each function call.
How could we distinguish the end of each frame, where the parameters are, where the canary is (if the compiler added one), the return address, the CPU status and such?
Without some knowledge of the layout, you only see bits, or numbers. While some of the regions are subject to machine specifics, a large number of the details are pretty standard.
If you didn't move too far outside of a nested routine, you are probably looking at the call stack portion of memory. With some generally considered "unsafe" C, you can write up fun functions that access function variables a few "calls" above, even if those variables were not "passed" to the function as written in the source code.
The call stack is a good place to start, as 3rd party libraries must be callable by programs that aren't even written yet. As such, it is fairly standardized.
Stepping outside of your process memory boundaries will give you the dreaded segmentation violation, as memory fencing will detect an attempt by the process to access non-authorized memory. Malloc does a little more than "just" return a pointer: on systems with memory segmentation features, it also "marks" the memory as accessible to that process, and memory accesses are checked so that the process's assignments are not violated.
If you keep following this path, sooner or later, you'll get an interest in either the kernel or the object format. It's much easier to investigate one way of how things are done with Linux, where the source code is available. Having the source code allows you to not reverse-engineer the data structures by looking at their binaries. When starting out, the hard part will be learning how to find the right headers. Later it will be learning how to poke around and possibly change stuff that under non-tinkering conditions you probably shouldn't be changing.
PS. You might consider this memory "the stack" but after a while, you'll see that really it's just a large slab of accessible memory, with one portion of it being considered the stack...
The contents of the stack are basically:
Whatever the OS passes to the program.
Call frames (also called stack frames, activation areas, ...)
What does the OS pass to the program? A typical *nix will pass the environment, arguments to the program, possibly some auxiliary information, and pointers to them to be passed to main().
In Linux, you'll see:
a NULL
the filename for the program.
environment strings
argument strings (including argv[0])
padding full of zeros
the auxv array, used to pass information from the kernel to the program
pointers to environment strings, ended by a NULL pointer
pointers to argument strings, ended by a NULL pointer
argc
Then, below that are stack frames, which contain:
arguments
the return address
possibly the old value of the frame pointer
possibly a canary
local variables
some padding, for alignment purposes
How do you know which is which in each stack frame? The compiler knows, so it just treats each location in the stack frame appropriately. Debuggers can use annotations for each function in the form of debug info, if available. Otherwise, if there is a frame pointer, you can identify things relative to it: local variables are below the frame pointer, arguments are above it. Otherwise, you must use heuristics: things that look like code addresses are probably return addresses, but sometimes this results in incorrect and annoying stack traces.
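As an informal illustration (typical of frame-pointer builds, but nothing here is guaranteed by the C standard), you can print the addresses of arguments and locals in nested calls and watch each new frame appear below the previous one on a downward-growing stack:
#include <stdio.h>

void inner(int arg)
{
    int inner_local = 2;
    printf("inner: arg at %p, local at %p\n", (void *)&arg, (void *)&inner_local);
}

void outer(int arg)
{
    int outer_local = 1;
    printf("outer: arg at %p, local at %p\n", (void *)&arg, (void *)&outer_local);
    inner(arg + 1);
}

int main(void)
{
    outer(0);
    return 0;
}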
The content of the stack will vary depending on the architecture ABI, the compiler, and probably various compiler settings and options.
A good place to start is the published ABI for your target architecture, then check that your particular compiler conforms to that standard. Ultimately you could analyse the assembler output of the compiler or observe the instruction level operation in your debugger.
Remember also that a compiler need not initialise the stack, and will certainly not "clear it down" when it has finished with it. When stack memory is allocated to a process or thread, it might contain any value: even at power-on, SDRAM for example will not contain any specific or predictable value, and if the physical RAM has previously been used by another process since power-on, or even by an earlier called function in the same process, the content will be whatever was left there. So just looking at the raw stack does not tell you much.
Commonly a generic stack frame may contain the address that control will jump to when the function returns, the values of all the parameters passed, and the values of all auto local variables in the function. However the ARM ABI, for example, passes the first four arguments to a function in registers R0 to R3, and keeps the return address of a leaf function in the LR register rather than on the stack, so it is not as simple in all cases as the "typical" implementation I have suggested.
The details are very dependent on your environment. The operating system generally defines an ABI, but that's in fact only enforced for syscalls.
Each language (and each compiler even if they compile the same language) in fact may do some things differently.
However there is some sort of system-wide convention, at least in the sense of interfacing with dynamically loaded libraries.
Yet, details vary a lot.
A very simple "primer" could be http://kernelnewbies.org/ABI
A very detailed and complete specification you could look at to get an idea of the level of complexity and details that are involved in defining an ABI is "System V Application Binary Interface AMD64 Architecture Processor Supplement" http://www.x86-64.org/documentation/abi.pdf

need explanation of how memory address work in this C program

I have a very simple C program where I am (out of my own curiosity) investigating which memory addresses are used to allocate local variables. My program is:
#include <stdio.h>

int main()
{
    char buffer_1[8], buffer_2[8], buffer_3[8];
    printf("address of buffer_1 %p\n", buffer_1);
    printf("address of buffer_2 %p\n", buffer_2);
    printf("address of buffer_3 %p\n", buffer_3);
    return 0;
}
output is as follows:
address of buffer_1 0x7fff5fbfec30
address of buffer_2 0x7fff5fbfec20
address of buffer_3 0x7fff5fbfec10
My question is: why do the addresses seem to be getting smaller? Is there some logic to this? Thank you.
The compiler is allowed to do whatever it wants with your automatic variables. In this case it just looks like it's putting them consecutively on the stack. On most popular systems in use today, stacks grow downwards.
Most compilers allocate stack memory for local variables in one step, at the very beginning of the function. The memory is allocated as a single contiguous block. Under these circumstances, the compiler is obviously free to use absolutely any memory layout for local variables inside that block. It can put them there so that the addresses increase in the order of declaration. Or decrease. Or are arranged randomly. It is an implementation detail, and there's not much logic behind it.
It is quite possible that in your case the compiler tried to "pretend" that the memory for the arrays was allocated on the stack sequentially and independently (even though that was not the case). If on your platform the stack grows downwards (as it does on many platforms), then it is expected that objects declared later will have smaller addresses.
But again, functions don't allocate local objects individually. And on top of that the language makes no guarantees about any relationships between local object addresses. So, there's no real reason to prefer one ordering over the other.
The output of your C program is platform-dependent, compiler-dependent.
There cannot be just one perfect answer because the address arrangements vary based on:
Whether the system is little or big endian.
What kind of OS you are compiling on.
What kind of memory architecture you are compiling for.
What kind of compiler you are using (and compilers might have bugs too).
Whether you are on 64-bit or 32-bit platform.
And so much more.
But most important of all is the type of processor architecture. :)
Here is a list of stack growth strategies per processor:
x86, PDP11: downwards.
System z: in a linked-list fashion, downwards, mostly.
ARM: selectable; can grow either upwards or downwards.
Mostek 6502: downwards (but only 256 bytes).
SPARC: in a circular fashion with a sliding window, a limited-depth stack.
RCA 1802A: subject to the SCRT (Standard Call and Return Technique) implementation.
But in general, your compiler at compile time maps those (relative) addresses into the generated binary file. Then at run time, the binary may occupy (or may appear to occupy) a sequential set of memory addresses. In your case, the addresses printed by your C source show that the stack is growing downward.
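If you want to probe the growth direction on your own machine, an informal experiment like the following works (comparing addresses of locals from different frames is not sanctioned by the C standard, and inlining can defeat it, so compile without optimization and treat the result as a hint only):
#include <stdio.h>
#include <stdint.h>

void callee(uintptr_t caller_local_addr)
{
    char callee_local;
    /* Comparing converted addresses is only an experiment, not portable code. */
    if ((uintptr_t)&callee_local < caller_local_addr)
        printf("stack appears to grow downwards\n");
    else
        printf("stack appears to grow upwards\n");
}

int main(void)
{
    char caller_local;
    callee((uintptr_t)&caller_local);
    return 0;
}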
Basically, the compiler is responsible for allocating memory for all the variables.
The arrays get addresses on the stack, but that alone has nothing to do with the output you are getting.
The point is that the compiler found a contiguous chunk of memory free at that time and allocated it to your program.
