IAR EWARM 6.5: storing a const variable at a certain address - linker bug?

I want to reserve an area of flash memory on an STM32 to store my own config information.
To do this I want to reserve the second sector of the flash memory on the STM32F2/STM32F4 (16 KB at 0x08004000-0x08007FFF).
Checking the internet and Stack Overflow, there are four ways to do this:
1)
#pragma location=0x08004000
__no_init const char ReservedArea[16*1024];
2)
__no_init const char ReservedArea[16*1024] #0x08004000;
3) creating a section + #pragma location=
project icf:
place at address mem: 0x08004000 { readonly section ConfigSection };
c file:
#pragma location="ConfigSection"
__no_init const char ReservedArea[16*1024];
4)
Defining a section in the project .icf file, as described in "IAR define memory region for custom data".
Bug or problem found
Methods 1 to 3 work OK. The linker reserves space for my variable. You can check the generated .bin file with a hex editor, or just debug and see that the variable is at 0x08004000.
The problem with these methods is that the IAR linker leaves more than 12 KB of flash memory unused between 0x08000800 and 0x08003FFF. The best way to verify this is to remove the variable, compile, note the size of the .bin file, and then add the variable back. If you do this you will notice that the .bin file grows by more than 16 KB, when the variable itself is exactly 16 KB.
If you move the address from 0x08004000 to 0x0800C000 without any other change, the file size increases by another 32 KB, and all of the preceding area is set to 0x00 and left unused in the .bin file. This is a big problem for our project because I use the unused area outside the .bin file for firmware updates.
Checking the map file you will see that the area preceding the reserved zone is unused too.
I tried several ways to fix this with no luck, for example defining two variables with addresses, playing for hours, checking linker options and optimizations, trying other #pragma options, etc.
About the 4th method: it stores the variable, but not at the address I wanted. Probably the problem was that both areas shared the same address space.
icf file
define region LANGUAGE_region = mem:[from 0x08004000 to 0x08007FFF];
define region ROM_region = mem:[from __ICFEDIT_region_ROM_start__ to __ICFEDIT_region_ROM_end__];
define region RAM_region = mem:[from __ICFEDIT_region_RAM_start__ to __ICFEDIT_region_RAM_end__];
"LANGUAGE_PLACE":place at start of LANGUAGE_region { section .LANGUAGE_PLACE.noinit };
c code
extern const char ReservedArea[16*1024] #".LANGUAGE_PLACE.noinit";
const char ReservedArea[16*1024];
Is it my mistake? Is it a bug? Any tip is welcome.
Thanks in advance.

This does not sound like a bug to me but rather an issue that you'll need to deal with. The .bin file is a raw memory image where each byte of the file maps to a byte in memory. How do you expect the .bin file to represent a byte located at offset 0x4000 or 0xC000 without also representing all the bytes that come before it? The .bin file must include all the unused bytes between two memory sections in order to maintain the relative offset of the subsequent section.
Is your concern that the unused bytes are 0x00, and therefore they cannot be programmed without first being erased? If so then you can probably configure the linker (or whatever program you're using to create the .bin file) to use 0xFF instead of 0x00 for all unused bytes. Check the linker (or command line) options.
Or is your concern that the .bin file contains a lot of unused memory which will take longer to download and reprogram? Or is the .bin file now too large to fit in the memory region you've reserved for firmware updates? In either case, the solution is to divide the firmware update into two separate portions. For example, the first portion contains just the code starting at 0x08000000 and ending wherever the code ends. The second portion contains the data starting at 0x08004000. The size of the two portioned .bin files will be much less than the combined .bin file because they don't need to include all the unused memory in between. Your firmware update routines will need to be smart enough to recognize each portion and program them to the proper memory address.
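For illustration, a rough sketch of what such a portion header might look like (purely hypothetical; the type and field names are made up, not something IAR or the question defines):
#include <stdint.h>
/* Hypothetical header prepended to each .bin portion by the build scripts,
   read by the firmware update routine before programming the payload. */
typedef struct
{
    uint32_t target_address;  /* where to program this portion, e.g. 0x08000000 or 0x08004000 */
    uint32_t length;          /* number of payload bytes that follow this header              */
} UpdatePortionHeader;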
If you don't want to deal with separate .bin files then you could consider downloading a .hex file as opposed to a .bin file. A .hex file is not a one-to-one mapping of memory bytes and contains coded information that allows unused memory regions to be skipped. However, your embedded firmware update routines will have to be smart enough to decode the hex file before programming the flash.
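As a rough illustration of the decoding step, here is a sketch only (it handles just type-00 data records with their 16-bit offset field, and skips checksum and extended-address handling):
#include <stdint.h>
/* Two hex characters -> one byte. */
static uint8_t hex_nibble(char c)
{
    return (c <= '9') ? (uint8_t)(c - '0') : (uint8_t)((c | 0x20) - 'a' + 10);
}
static uint8_t hex_byte(const char *p)
{
    return (uint8_t)((hex_nibble(p[0]) << 4) | hex_nibble(p[1]));
}
/* Parse one Intel HEX line ":ccaaaatt<data>ss". Returns the number of data
   bytes copied into 'data' (up to 255) for a type-00 record, or -1 for any
   other record type (01 = EOF, 04 = extended linear address, ...).
   'offset' receives the 16-bit address field of the record. */
static int parse_hex_record(const char *line, uint16_t *offset, uint8_t *data)
{
    if (line[0] != ':')
        return -1;
    uint8_t count = hex_byte(&line[1]);
    *offset = (uint16_t)((hex_byte(&line[3]) << 8) | hex_byte(&line[5]));
    if (hex_byte(&line[7]) != 0x00)
        return -1;
    for (uint8_t i = 0; i < count; i++)
        data[i] = hex_byte(&line[9 + 2 * i]);
    /* The checksum byte at &line[9 + 2 * count] should also be verified. */
    return count;
}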

I am trying to place some constants at a known FLASH address, but using the methods discussed above I don't get the result. I tried with #pragma location and got no result at all, and also with the # operator, but IAR complains about it. The variable I want to store is the FW version, so it is a 12-byte value, and I declare it as a global (outside any function, I mean, so it is accessible from all the functions of the .c file):
#pragma location=0x00001FF0
__no_init const uint8_t version[12] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0A,0x0B};
I have also checked IAR documentation such as:
Technical Note 27498
I am using IAR 6.5 if it helps (because I noticed some methods require 6.70 or newer).
EDITED:
Well, now it works doing the following:
In the .icf file:
/* Now I have a read only section in the ROM address 0x00001FF4 */
"ROM":
place at address mem:0x00001FF4 { readonly section .version };
In the .c source file:
#pragma location=".version"
__root const uint8_t version[12] = {0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,0x08,0x09,0x0A,0x0B};
Best regards,
Iván

OK, finally I understood how it works. The linker searches for the largest unused space and places the code there.
Personally I think this is not the best approach; I would prefer the linker to use the first unused area whenever the function or const variable fits in it.
To work around this I just added the following code (example for the STM32F4):
const char unusedarea[128*3*1024] #0x08020000;
#pragma required=unusedarea
This uses the space between 0x08020000 and 0x0807FFFF and forces the linker to use the other areas. And effectively it worked.
This allows me to reserve the space while leaving the area I need free and unused. I can even strip the last 384 KB from the .bin file and upload only the first 128 KB.
Edit: declaring those variables __no_init keeps the .bin file small, and the reserved areas are not overwritten when flashing over JTAG.
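For reference, the __no_init variant from the edit looks like this (same IAR syntax and values as above):
__no_init const char unusedarea[128*3*1024] #0x08020000;
#pragma required=unusedarea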


How can I extract constants' addresses, added by compiler optimization, from an ELF file?

I'm writing a code-size analysis tool for my C program, using the output ELF file.
I'm using readelf --debug-dump=info to generate a DWARF format file.
I've noticed that my compiler adds, as part of the optimization, new consts to the .rodata section that are not in the DWARF file.
So the .rodata section size includes their sizes, but I don't have their sizes in DWARF.
Here is an example from the map file:
*(.rodata)
.rodata 0x10010000 0xc0 /<.o file0 path>
0x10010000 const1
0x10010040 const2
.rodata 0x100100c0 0xa /<.o file1 path>
fill 0x100100ca 0x6
.rodata 0x100100d0 0x6c /<.o file2 path>
0x100100d0 const3
0x100100e0 const4
0x10010100 const5
0x10010120 const6
fill 0x1001013c 0x4
In file1 above, although I didn't declare any const variable, the compiler did; this const takes space in .rodata, yet there is no symbol/name for it.
Here is the code inside some function that generates it:
uint8 arr[3][2] = {{146, 179},
                   {133, 166},
                   {108, 141}};
So the compiler adds some const values to optimize the initialisation of the array.
How can I extract these hidden additions from the data sections?
I want to be able to fully characterize my code: how much space is used by each file, etc.
I am guessing here - it will be linker dependent, but when you have code such as:
uint8 arr[3][2] = {{146, 179},
                   {133, 166},
                   {108, 141}};
arr at run-time exists in r/w memory, but its initialiser will be located in R/O memory to be copied to the R/W memory when the array is initialised. The linker need only provide the address, because the size will be known locally as a compile-time constant embedded as a literal in the initializing code. Consequently the size information does not appear in the map, because the linker discards that information.
The length is, however, implicit in the addresses of adjacent objects and fill space. For example, the size of const1 is equal to const2 - const1, and for const6 it is 0x1001013c - const6.
It is all rather academic however - you have precise control over this in terms of the size of your constant initialisers. They are not magically created data unrelated to your code, and I am not convinced that they are a product of optimisation as you suggest. The non-zero initialisers must exist regardless of optimisation options, and in any case optimisation primarily affects the size and/or speed of code (.text) rather than data. The impact on data sizes is likely to relate only to padding and alignment and, in debug builds, possibly "guard space" for overrun detection.
However there is no need at all for you to guess. You can determine how this data is used by inspecting the disassembly or observing its execution (at the instruction level) in a debugger, to see exactly where initialised variables copy their data from. You could even place a read-access breakpoint at these addresses to determine directly what code is using them.
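For example, with GNU binutils (assuming the object was compiled with debug info so the source can be intermixed with the disassembly):
$ objdump -d --source tst.o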
To get detailed size information from the ELF file (quoting the answer linked below):
"You can use nm and size to get the size of functions and ELF sections.
To get the size of the functions (and objects with static storage duration):
$ nm --print-size --size-sort --radix=d tst.o
The second column shows the size in decimal of function and objects.
To get the size of the sections:
$ size -A -d tst.o
The second column shows the size in decimal of the sections."
Tool to analyze size of ELF sections and symbol

How to reserve a fixed flash section for data?

I need to store some large chunks of data in flash memory, where it will be read often and occasionally be rewritten using SPM. I already figured out how to use pointers to __flash and pgm_read_byte to access it, how not to omit the const (despite my writing to it), how to actually access the array in a loop so that it doesn't get completely optimised away (after inlining), but I don't really understand how to declare my array.
const uint8_t persistent_data[1024] __attribute__(( aligned(SPM_PAGESIZE),
section("mycustomdata") )) = {};
works about fine, except that I do not want to initialise it. When programming my device (an Arduino ATmega328P), I want this section to be kept so that it retains the data previously written by the application. The above does zero-initialise it, and my hex file contains zeroes that the programmer happily uses to overwrite my data.
Using the __flash modifier instead of __attribute__(( section("…") )) does about the same here, except that it places the array elsewhere and I don't have any control over where it is put. It still does this when I use __flash and omit the initialisation (though I get an "uninitialized variable 'persistent_data' put into program memory area [-Wuninitialized]" warning).
Now I am trying to omit the initialiser:
const uint8_t persistent_data[1024] __attribute__(( aligned(SPM_PAGESIZE),
section("mycustomdata") ));
and get rather unexpected results. The sections data from the .lss output shows
Idx Name Size VMA LMA File off Algn
…
1 mycustomdata 00000480 00800480 000055e2 00005700 2**7
CONTENTS, ALLOC, LOAD, DATA
2 .text 00005280 00000000 00000000 000000d4 2**1
CONTENTS, ALLOC, LOAD, READONLY, CODE
This does put all the initialisation zeroes in the hex file at the load memory address 55E2 (instead of omitting them), while the virtual memory address (which the variable persistent_data points to) refers to 0480 - in the middle of the code from the text section!
(I also tried to omit the const, and to omit the const and the initialiser, which both had the same effect as omitting only the initialiser).
I am at a loss. Do I need to use extern maybe? (Any attempt at doing so ended up with an "undefined reference to persistent_data" error.) Do I need to use a linker script?
How do I make persistent_data refer to a location in program memory that is not used by any other data, and have the compiler not emit any initialisation data for that location in the hex file?
You don't seem to realize that you actually need two versions of your hex file: one that is suitable for a "new" installation on a new (or worse: re-used, thus with random flash content) chip, which initializes the flash section to make sure there is no arbitrary data in there that might be misinterpreted, and another one used to update a pre-programmed chip, which omits this section in order to keep data already modified by your users. So you are going to need the version that initializes this section anyhow.
The simplest way to achieve this is like your first example: initialize the data to build the "naked chip" version of your code, and produce the "update" version by simply removing this initialized section from the object file with objcopy (assuming you use a GNU toolchain). See the -R option of that tool.
Also, make sure this data section is located at a fixed address - you don't want it to move every time you change something in your code.
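A sketch of that flow with avr-binutils (the file names are placeholders; the section name is the one from the question):
$ avr-objcopy -R mycustomdata full.elf update.elf
$ avr-objcopy -O ihex update.elf update.hex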
I would rather try and use EEPROM if available than go through the hassle of reprogramming.

Split section into multiple memory regions

I'm developing an application on an ARM Cortex-M microcontroller which has two RAM banks of 64 kB each. The first bank is directly followed by the second bank in the memory map.
The memory banks are currently split into two regions in my linker script. The first region contains the sections .bss and .data. The second bank is used for .heap and .stack, which only take 1 kB each (I'm using a different stack in FreeRTOS, which also manages its own heap).
My problem is that .bss is too large for the first bank. Therefore I'd like to move some of its content to the second bank.
One way to accomplish this would be to create a new section, let's call it .secondbss, which is linked to the second bank. Single variables could then be added to this section using __attribute__((section(".secondbss"))).
The reasons why I am not using this solution are:
I really want to maintain portability of my source code
There might be a whole lot of variables that would require this attribute and I don't want to choose the section for every single variable
Is there a better solution for this? I already thought of treating both memories as one region, but I don't know how to prevent the linker from placing data across the boundary between the two banks.
How can I solve my problem without using __attribute__ flags?
Thank you!
For example, you have two banks at 0x20000000 and 0x20010000, and you want to use Bank2 for the heap and the (main) stack. I assume that your .bss is large because of configTOTAL_HEAP_SIZE in FreeRTOSConfig.h. Now see the heap sources in FreeRTOS/Source/portable/MemMang/. There are 5 implementations of pvPortMalloc() that do memory allocation.
Look at these lines in the heap_X.c that you use:
/* Allocate the memory for the heap. */
#if( configAPPLICATION_ALLOCATED_HEAP == 1 )
/* The application writer has already defined the array used for the RTOS
heap - probably so it can be placed in a special segment or address. */
extern uint8_t ucHeap[ configTOTAL_HEAP_SIZE ];
#else
static uint8_t ucHeap[ configTOTAL_HEAP_SIZE ];
#endif /* configAPPLICATION_ALLOCATED_HEAP */
So you can set configAPPLICATION_ALLOCATED_HEAP to 1 and tell your linker to place ucHeap at 0x20010000.
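A minimal sketch of that approach with a GNU toolchain (the section name .bank2_heap and memory region RAM2 are made-up names; your linker script has to map them to 0x20010000):
/* In application code, with configAPPLICATION_ALLOCATED_HEAP set to 1: */
uint8_t ucHeap[ configTOTAL_HEAP_SIZE ] __attribute__(( section(".bank2_heap") ));
/* In the GNU linker script SECTIONS block, place that section in the second bank: */
.bank2_heap (NOLOAD) : { *(.bank2_heap) } > RAM2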
Another way is to write a header for each device that includes the addresses of the heap and stack, and edit the sources.
For heap_1.c we can make the following changes:
// somewhere in devconfig.h
#define HEAP_ADDR 0x20010000
// in heap_1.c
// remove code related ucHeap
//
// remove static uint8_t *pucAlignedHeap = NULL;
// and paste:
static uint8_t *pucAlignedHeap = (uint8_t *)HEAP_ADDR;
For heap_2.c and heap_4.c, edit the function prvHeapInit() as well.
Pay attention to heap_5.c, which uses vPortDefineHeapRegions().
Now pvPortMalloc() will return pointers to memory in Bank2. pvPortMalloc() is used to allocate task stacks, TCBs, and user variables. Read the sources. The location of the main stack depends on your device/architecture. For STM32 (ARM), see the vector table or how the MSP register is set.

Is there a way to know where global and static variables reside inside the data segment (.data + .bss)?

I want to dump all global and static variables to a file and load them back on the next program invocation. A solution I thought of is to dump the .data segment to a file. But the .data segment sits somewhere in a 32-bit machine's 2^32-byte (4 GB) address space. In which part of this address space do the variables reside? How do I know which part of the .data segment I should dump?
And when loading the dumped file, I guess that since the variables are referenced by offset in the data segment, it will be safe to just memcpy the whole dump to the alleged starting point of the "variables area". Please correct me if I am wrong.
EDIT
A good start is this question.
Your problem is how to find the beginning and the end of the data segment. I am not sure how to do this, but I could give you a couple of ideas.
If all your data is relatively self-contained (declared within the same module, not in separate modules), you might be able to declare it within some kind of structure, so the beginning will be the address of the structure and the end will be some variable that you declare right after the structure. If I remember correctly, MASM had a "RECORD" directive or something like that which you could use to group variables together.
Alternatively, you may be able to declare two additional modules, one with a variable called "beginning" and another with a variable called "end", and make sure that the first gets linked before anything else, and the second gets linked after everything else. This way, these variables might actually end up marking the beginning and the end of the data segment. But I am not sure about this, I am just giving you a pointer.
One thing to remember is that your data will inevitably contain pointers, so saving and loading all your data will only work if the OS under which you are running can guarantee that your program will always be loaded in the same address. If not, forget it. But if you can have this guarantee, then yes, loading the data should work. You should not even need a memcpy, just set the buffer for the read operation to be the beginning of the data segment.
The state of an entire program can be very complicated, and will not only involve variables but values in registers. You'll almost certainly be better off keeping track of what data you want to store and then storing it to a file yourself. This can be relatively painless with the right setup and encapsulation. Then when you resume the application, read in the program state and resume.
Assuming you are using the GNU tools (gcc, binutils), look at the linker scripts that embedded folks use, like GBA developers and microcontroller developers targeting ROMs (yagarto or devkitARM, for example). In the linker script they surround the segments of interest with symbols that they can use elsewhere in their code. For ROM-based software, for example, you specify the data segment with a "ram AT rom" (or "rom AT ram") construct in the linker script, meaning: link as if the data segment is in RAM at this address space, but also place the data itself into ROM at this address space; the boot code then copies the .data segment from ROM to RAM using these symbols. I don't see why you couldn't do the same thing to have the compiler/linker tools tell you where things are, then at run time use those symbols to grab the data from memory and save it somewhere for hibernation or shutdown, and later restore that data from wherever you put it. The symbols you use to perform the restore, of course, should not be part of the .data segment, or you will trash the variables you are using to restore the segment.
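A hedged sketch of that idea with a GNU toolchain (the symbol names __data_start__ and __data_end__ are typical but not universal; check your own linker script for the names it actually defines):
#include <stdio.h>
extern char __data_start__[];   /* defined in the linker script: start of .data */
extern char __data_end__[];     /* defined in the linker script: end of .data   */
/* Dump the whole .data segment to a file. */
static void save_data_segment(const char *path)
{
    FILE *f = fopen(path, "wb");
    if (f != NULL) {
        fwrite(__data_start__, 1, (size_t)(__data_end__ - __data_start__), f);
        fclose(f);
    }
}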
In response to your header question, on Windows, the location and size of the data and bss segments can be obtained from the in-memory PE header. How that is laid out and how to parse it is documented in this specification:
http://msdn.microsoft.com/en-us/windows/hardware/gg463119
I do not believe that there is a guarantee that with every execution you will have the same sequence of variables, hence the offsets may point to the wrong content.

Fixed address variable in C

For embedded applications, it is often necessary to access fixed memory locations for peripheral registers. The standard way I have found to do this is something like the following:
// access register 'foo_reg', which is located at address 0x100
#define foo_reg *(int *)0x100
foo_reg = 1; // write to foo_reg
int x = foo_reg; // read from foo_reg
I understand how that works, but what I don't understand is how the space for foo_reg is allocated (i.e. what keeps the linker from putting another variable at 0x100?). Can the space be reserved at the C level, or does there have to be a linker option that specifies that nothing should be located at 0x100? I'm using the GNU tools (gcc, ld, etc.), so I am mostly interested in the specifics of that toolset at the moment.
Some additional information about my architecture to clarify the question:
My processor interfaces to an FPGA via a set of registers mapped into the regular data space (where variables live) of the processor. So I need to point to those registers and block off the associated address space. In the past, I have used a compiler that had an extension for locating variables from C code. I would group the registers into a struct, then place the struct at the appropriate location:
typedef struct
{
    BYTE reg1;
    BYTE reg2;
    ...
} Registers;

Registers regs _at_ 0x100;
regs.reg1 = 0;
Actually creating a 'Registers' struct reserves the space in the compiler/linker's eyes.
Now, using the GNU tools, I obviously don't have the at extension. Using the pointer method:
#define reg1 (*(BYTE*)0x100)
#define reg2 (*(BYTE*)0x101)
reg1 = 0;
// or
#define regs ((Registers*)0x100)
regs->reg1 = 0;
This is a simple application with no OS and no advanced memory management. Essentially:
void main()
{
    while (1) {
        do_stuff();
    }
}
Your linker and compiler don't know about that (without you telling them anything, of course). It's up to the designer of your platform's ABI to specify that they don't allocate objects at those addresses.
So there is sometimes (the platform I worked on had that) a range in the virtual address space that is mapped directly to physical addresses, and another range that can be used by user-space processes to grow the stack or to allocate heap memory.
You can use the defsym option with GNU ld to allocate some symbol at a fixed address:
--defsym symbol=expression
Or if the expression is more complicated than simple arithmetic, use a custom linker script. That is the place where you can define regions of memory and tell the linker what regions should be given to what sections/objects. See here for an explanation. Though that is usually exactly the job of the writer of the tool-chain you use. They take the spec of the ABI and then write linker scripts and assembler/compiler back-ends that fulfill the requirements of your platform.
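A minimal sketch of the --defsym approach for the foo_reg example from the question (the exact link invocation depends on your toolchain):
/* Declared extern and never defined in C; the linker supplies the address. */
extern volatile int foo_reg;
/* Link with:  ld ... --defsym=foo_reg=0x100
   or via gcc: gcc ... -Wl,--defsym=foo_reg=0x100 */
void write_reg(void)
{
    foo_reg = 1;   /* compiles to a write to address 0x100 */
}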
Incidentally, GCC has an attribute section that you can use to place your struct into a specific section. You could then tell the linker to place that section into the region where your registers live.
Registers regs __attribute__((section("REGS")));
A linker would typically use a linker script to determine where variables would be allocated. This is called the "data" section and of course should point to a RAM location. Therefore it is impossible for a variable to be allocated at an address not in RAM.
You can read more about linker scripts in GCC here.
Your linker handles the placement of data and variables. It knows about your target system through a linker script. The linker script defines regions in a memory layout such as .text (for constant data and code) and .bss (for your global variables and the heap), and also creates a correlation between a virtual and physical address (if one is needed). It is the job of the linker script's maintainer to make sure that the sections usable by the linker do not overlap your IO addresses.
When the embedded operating system loads the application into memory, it will usually load it at some specified location, let's say 0x5000. All the local memory you are using will be relative to that address; that is, int x will be somewhere like 0x5000 + code size + 4... assuming this is a global variable. If it is a local variable, it's located on the stack. When you reference 0x100, you are referencing system memory space, the same space the operating system is responsible for managing, and probably a very specific place that it monitors.
The linker won't place code at specific memory locations, it works in 'relative to where my program code is' memory space.
This breaks down a little bit when you get into virtual memory, but for embedded systems, this tends to hold true.
Cheers!
Getting the GCC toolchain to give you an image suitable for use directly on the hardware without an OS to load it is possible, but involves a couple of steps that aren't normally needed for normal programs.
You will almost certainly need to customize the C run time startup module. This is an assembly module (often named something like crt0.s) that is responsible for initializing the initialized data, clearing the BSS, calling constructors for global objects if C++ modules with global objects are included, etc. Typical customizations include the need to set up your hardware to actually address the RAM (possibly including setting up the DRAM controller as well) so that there is a place to put data and stack. Some CPUs need to have these things done in a specific sequence: e.g. the ColdFire MCF5307 has one chip select that responds to every address after boot, which eventually must be configured to cover just the area of the memory map planned for the attached chip.
Your hardware team (or you with another hat on, possibly) should have a memory map documenting what is at various addresses. ROM at 0x00000000, RAM at 0x10000000, device registers at 0xD0000000, etc. In some processors, the hardware team might only have connected a chip select from the CPU to a device, and leave it up to you to decide what address triggers that select pin.
GNU ld supports a very flexible linker script language that allows the various sections of the executable image to be placed in specific address spaces. For normal programming, you never see the linker script since a stock one is supplied by gcc that is tuned to your OS's assumptions for a normal application.
The output of the linker is in a relocatable format that is intended to be loaded into RAM by an OS. It probably has relocation fixups that need to be completed, and may even dynamically load some libraries. In a ROM system, dynamic loading is (usually) not supported, so you won't be doing that. But you still need a raw binary image (often in a HEX format suitable for a PROM programmer of some form), so you will need to use the objcopy utility from binutils to transform the linker output to a suitable format.
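For example (a sketch; the file names are placeholders):
$ objcopy -O ihex program.elf program.hex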
So, to answer the actual question you asked...
You use a linker script to specify the target addresses of each section of your program's image. In that script, you have several options for dealing with device registers, but all of them involve putting the text, data, bss, stack, and heap segments in address ranges that avoid the hardware registers. There are also mechanisms available that can make sure that ld throws an error if you overfill your ROM or RAM, and you should use those as well.
Actually getting the device addresses into your C code can be done with #define as in your example, or by declaring a symbol directly in the linker script that is resolved to the base address of the registers, with a matching extern declaration in a C header file.
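A hedged sketch of that second approach (the symbol name fpga_regs and the struct are made up; 0xD0000000 is the example register address from above):
/* In the GNU ld linker script:  PROVIDE(fpga_regs = 0xD0000000);  */
/* In a C header: */
typedef struct
{
    volatile unsigned char reg1;
    volatile unsigned char reg2;
} FpgaRegs;
extern FpgaRegs fpga_regs;   /* address resolved by the linker */
/* Usage in C code:  fpga_regs.reg1 = 0;  */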
Although it is possible to use GCC's section attribute to define an instance of an uninitialized struct as being located in a specific section (such as FPGA_REGS), I have found that not to work well in real systems. It can create maintenance issues, and it becomes an expensive way to describe the full register map of the on-chip devices. If you use that technique, the linker script would then be responsible for mapping FPGA_REGS to its correct address.
In any case, you are going to need to get a good understanding of object file concepts such as "sections" (specifically the text, data, and bss sections at minimum), and may need to chase down details that bridge the gap between hardware and software such as the interrupt vector table, interrupt priorities, supervisor vs. user modes (or rings 0 to 3 on x86 variants) and the like.
Typically these addresses are beyond the reach of your process. So, your linker wouldn't dare put stuff there.
If the memory location has a special meaning on your architecture, the compiler should know that and not put any variables there. That would be similar to the IO mapped space on most architectures. It has no knowledge that you're using it to store values, it just knows that normal variables shouldn't go there. Many embedded compilers support language extensions that allow you to declare variables and functions at specific locations, usually using #pragma. Also, generally the way I've seen people implement the sort of memory mapping you're trying to do is to declare an int at the desired memory location, then just treat it as a global variable. Alternately, you could declare a pointer to an int and initialize it to that address. Both of these provide more type safety than a macro.
To expand on litb's answer, you can also use the --just-symbols={symbolfile} option to define several symbols, in case you have more than a couple of memory-mapped devices. The symbol file needs to be in the format
symbolname1 = address;
symbolname2 = address;
...
(The spaces around the equals sign seem to be required.)
Often, for embedded software, you can define within the linker file one area of RAM for linker-assigned variables, and a separate area for variables at absolute locations, which the linker won't touch.
Failing to do this should cause a linker error, as it should spot that it's trying to place a variable at a location already being used by a variable with absolute address.
This depends a bit on what OS you are using. I'm guessing you are using something like DOS or VxWorks. Generally the system will have certain areas of the memory space reserved for hardware, and compilers for that platform will always be smart enough to avoid those areas for their own allocations. Otherwise you'd be continually writing random garbage to disk or line printers when you meant to be accessing variables.
In case something else was confusing you, I should also point out that #define is a preprocessor directive. No code gets generated for that. It just tells the compiler to textually replace any foo_reg it sees in your source file with *(int *)0x100. It is no different than just typing *(int *)0x100 in yourself everywhere you had foo_reg, other than it may look cleaner.
What I'd probably do instead (in a modern C compiler) is:
// access register 'foo_reg', which is located at address 0x100
volatile int * const foo_reg = (volatile int *)0x100; // const pointer to a volatile hardware register
*foo_reg = 1;     // write to foo_reg
int x = *foo_reg; // read from foo_reg
