ARM programming - GD32F1x0 library - timer

I'm using the GD32F130 (a Chinese clone of an ST part).
This ARM chip includes the general-purpose TIMER3.
The library GD32F1x0_Firmware_Library_v3.1.0 does not include TIMER3, only TIMER0, TIMER1, TIMER2, TIMER5, TIMER13, TIMER14, TIMER15 and TIMER16.
Is the library incomplete?

Looking more closely at the GD32F130 datasheet and at the library user manual (GD32F1x0_User_Manual.pdf), I found that:
In the datasheet (page 10), TIMER3 is at memory location 0x4000 0400.
In the library user manual (page 6), TIMER2 is at memory location 0x4000 0400.
So I suppose I should use TIMER2 in the library in order to use the TIMER3 of my chip.
I'm not sure, so I will not mark this answer as the solution until I am able to test it on hardware.
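One way to check the mapping, without relying on the library at all, is to read the timer's counter through a raw volatile pointer at the address the datasheet gives. This is only a sketch: the base address 0x4000 0400 comes from the datasheet page quoted above, while the counter register offset (0x24) is just the usual GD32/STM32 general-purpose timer layout and should be verified against the GD32F1x0 user manual.
#include <stdint.h>

/* Base address of the chip's TIMER3 as given in the GD32F130 datasheet (page 10). */
#define CHIP_TIMER3_BASE  0x40000400UL

/* Counter register offset: 0x24 in the usual GD32/STM32 general-purpose timer
   register map -- verify this against the GD32F1x0 user manual. */
#define TIMERx_CNT(base)  (*(volatile uint32_t *)((base) + 0x24UL))

int timer_is_counting(void)
{
    /* If the library's "TIMER2" has been configured and started, the counter
       behind this raw pointer should be changing, confirming that the
       library's TIMER2 is the datasheet's TIMER3. */
    uint32_t first = TIMERx_CNT(CHIP_TIMER3_BASE);
    for (volatile uint32_t i = 0; i < 10000; i++) { }   /* crude delay */
    return TIMERx_CNT(CHIP_TIMER3_BASE) != first;
}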

How to define memory pointers when programming AVR chips?

Preamble: after working a couple of years as an application developer, the world of software engineering became more obscure to me than it was before. The reason is that the real stuff is hidden under zillions of layers of abstraction: OS, frameworks, etc. The young generation is deprived of the pleasure of working with PDP-like machines where all programming was done by toggling electrical switches. Another problem is the ephemeral nature of modern programming languages. Once there was Python 2.x; now it is deprecated and there is Python 3.x, which in turn will be deprecated in a couple of months. The same goes for other languages. ANSI C looks like the Pyramid of Cheops: it was there in the 70s and I don't doubt it will still be there after the Sun becomes a red giant.
It seems that now the only way to understand the interaction between hardware and software is to play with embedded development. From a pedagogical point of view, physical chips are very handy because they let you tackle the most difficult part of the C language, namely pointers. When coding in an OS environment, the */& notation is still very confusing because it refers to some location somewhere inside virtual memory, and before you can understand what virtual memory is, you have to read a couple of monographs about OS development, etc. You may find it silly, but I really want to know which transistor is holding my bit right now. At least then I can tie a physical pin voltage to the programming abstractions.
Currently I am working with Atmel chips and the WinAVR package because of the numerous textbooks and accessible hardware. Though all the books promise to teach AVR coding in plain C, the reality is that all pointers are hidden behind macros like PORTA, DDRB, etc. All code examples include the header file 'io.h', which in turn pulls in other header files specific to a given chip, like 'iomx8.h'. So far, I cannot find any macro definitions in these headers. The code to raise the voltage on physical pin 14 of an ATmega168 looks like
DDRB = 0x01;
PORTB = 0x01;
Fortunately, the Microchip site provides some basic documents where it is stated, for example, that if I want to raise the voltage on physical pin 14, I need to follow these steps:
unsigned char *ddrB;
ddrB = (unsigned char*)0x24; // the address of ddrB is 0x24
*ddrB |= 0x01; // set up low impedance/ high current state for the transistor 0
unsigned char *portB;
portB = (unsigned char*)0x25;
*portB |= 0x01; // voltage on
*portB &= ~(0x01); // voltage off
Unfortunately, this is the only info I have found after a week of lurking. Now I am going through USART programming, and things get more complicated with all these UBRR0H, UCSR0C registers. Since the provided header files don't contain macro definitions for any register, where else can I find them?
A similar question was asked several years ago: accessing AVR registers with C?. However, the answers provided were somewhat unhelpful, apart from the clue that GCC itself can map some mythical PORTB to a real physical location. Could someone describe the mechanism behind the mapping?
From a memory-mapping standpoint: the general-purpose registers, the special-function/I/O registers, and the SRAM occupy non-overlapping ranges of a single address space, as described in the datasheets for the various processors in the AVR series. All of your pointers reference this memory space, unless annotated as pointers to PROGMEM (which causes different instructions to be emitted). The reference is made without any sort of virtual memory mapping.
For example, the ATtiny25/45/85 datasheet shows this memory map on page 18.
Your linker is aware of this memory map and will place variables accordingly. For example, a global variable declared in one of your compilation units will end up at an address at or above 0x0060 on the example device described above, so that it lands in SRAM.
From an instruction encoding standpoint: although there is one address space, special functionality is reserved for certain important regions. For example, the IN and OUT instructions have six bits in their encoding that directly address one of the 64 locations in the range 0x20–0x5F.
The IN and OUT instructions are notable in their ability to load and store to a fixed address encoded directly in the instruction, whereas the general-purpose load and store instructions take their address from a pointer register (such as 'Z') that has to be loaded first.
As a result, when the compiler sees memory operations on a fixed I/O register, it may generate these more efficient instructions. However, a normal load/store through a pointer has the same effect (though with a different number of clock cycles). For the extended I/O registers that didn't fit into the first 64 (e.g. OSCCAL on an ATmega328P), normal load/store instructions are always generated.
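To make the equivalence concrete, here is a small sketch for an ATmega168 (reusing the 0x24/0x25 addresses quoted in the question) that drives the same pin once through the avr-libc macros and once through a raw volatile pointer. Both forms touch the same data-space address; the compiler is simply free to encode the macro form as SBI/OUT instructions.
#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    /* Via the avr-libc macros: may compile down to SBI/OUT. */
    DDRB  |= 0x01;              /* PB0 as output  */
    PORTB |= 0x01;              /* drive PB0 high */

    /* Via raw pointers into the data address space (0x24 = DDRB, 0x25 = PORTB
       on the ATmega168, as quoted in the question): compiles to a load/store
       through a pointer register, but reaches exactly the same hardware register. */
    volatile uint8_t *ddrb  = (volatile uint8_t *)0x24;
    volatile uint8_t *portb = (volatile uint8_t *)0x25;
    *ddrb  |= 0x01;
    *portb &= (uint8_t)~0x01;   /* drive PB0 low again */

    for (;;) { }
}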
Short answer: hidden away in the headers included from Atmel is a collection of macros that create pointers to the register locations. If you want to see the source, as well as additional useful headers like interrupt.h, they are in WinAVR-20100110/avr/include/.
Here's a brief overview of the process:
Your Makefile defines the device to be used and then passes it to the compiler as a definition (in practice, avr-gcc's -mmcu=$(DEVICE) option defines the matching __AVR_ATmega2560__ macro for you):
DEVICE = atmega2560
...
-D__$(DEVICE)__
You then include avr/io.h, which automatically includes the necessary headers based on your device:
// In main source file
#include <avr/io.h>
// In io.h
#include <avr/sfr_defs.h>
// ...
#elif defined (__AVR_ATmega2560__)
# include <avr/iom2560.h>
// In sfr_defs.h
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
#define __SFR_OFFSET 0x20
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
// In iom2560.h
#include <avr/iomxx0_1.h>
// Other device specific definitions
// In iomxx0_1.h
#define PINA _SFR_IO8(0X00)
// Other device family shared definitions
So if you unroll all of that, what you get is a dereferenced volatile pointer to the register address. Whenever you use PINA in your code, the preprocessor replaces it with the fully expanded macros:
PINA
_SFR_IO8(0X00)
_MMIO_BYTE((0X00) + __SFR_OFFSET)
(*(volatile uint8_t *)((0X00) + 0x20))
This makes PINA an lvalue for the volatile 8-bit register at address 0x20. The internal chip architecture then maps that address to the appropriate peripheral register whenever it is accessed.
Different devices have different register addresses and offsets. If you want to define your own, you'll need to check out the relevant datasheet. For most AVR chips, there is a section towards the end titled "Register Summary" that lists all of the register addresses and names of the individual control bits. In my experience (for AVR, at least), the names of the registers and bits found in the datasheet are exactly what they are defined as in the io.h files.
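For example, since the question mentions the USART registers, here is a sketch of how you could define a couple of them yourself in the same style. The addresses (UCSR0C at 0xC2, UBRR0L/UBRR0H at 0xC4/0xC5) are copied from the ATmega48/88/168 register summary; these are extended I/O registers, so they are used with their data-space address directly (no 0x20 offset). Verify the addresses against your own datasheet before relying on them.
#include <stdint.h>

/* Same pattern as avr-libc's sfr_defs.h */
#define MY_MMIO_BYTE(mem_addr)  (*(volatile uint8_t *)(mem_addr))

/* Extended I/O registers, addresses from the ATmega48/88/168 register summary. */
#define MY_UCSR0C  MY_MMIO_BYTE(0xC2)
#define MY_UBRR0L  MY_MMIO_BYTE(0xC4)
#define MY_UBRR0H  MY_MMIO_BYTE(0xC5)

void usart_set_baud_divisor(uint16_t ubrr)
{
    MY_UBRR0H = (uint8_t)(ubrr >> 8);   /* high byte first, as the datasheet suggests */
    MY_UBRR0L = (uint8_t)ubrr;
}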
Also notice the use of "uint8_t" rather than "char". It's common (and highly encouraged) to use the width-specific types from <stdint.h> to specify signed/unsigned 8/16/32-bit variables wherever appropriate. Since the AVR is 8-bit, any use of 16- or 32-bit (or float) variables costs multiple clock cycles per operation. In this case, stdint.h should contain:
typedef unsigned char uint8_t;

C code modification from stm32F to stm32L

I'm trying to learn and master embedded C, so I was going to test some code that I found on GitHub, which is built for the STM32F4 Discovery board. The board that I'm actually working on is the STM32L152. When I tried to build/run the code (obviously it won't work as-is), the errors I'm getting are mostly of the form "identifier 'function_name' is undefined", i.e. functions not being defined; note that the code includes a library (lib).
Take a look at the code:
https://github.com/TDAbboud/STM32F4_Examples/tree/master/04_PWM_Servo
Generally speaking, what modifications should be made to successfully run the code on the STM32L1?
Thanks
These are two different chips. Just because they are both from ST and both ARM based does not make them the same chip. For starters, the STM32F4 is a Cortex-M4 while the STM32L might not be; 99.9% of your code won't care, just some assembly might. If the STM32L is a Cortex-M0 then you have far fewer instructions, so the assembly will matter; if it is a Cortex-M3 then it won't.
The real issue is the peripherals, not that these are two ST chips or two ARM chips; the peripherals can and will vary. ST has a number of chips that share the same UART or the same GPIO block, but they use more than one UART design across the STM32 line, and more than one GPIO design, and they mix and match as they make new chips. So if you want to port from one to the other you need to go peripheral by peripheral, reading the new and the old documentation to see what, if anything, changed.
It sounds like you are using a library, so the tool may know from the chip you have chosen which peripherals you have and which library you need; it might not be finding those functions because, for that chip, that peripheral and therefore that function and those defines do not apply. Take it one peripheral at a time and port between chips, as in the sketch below.
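As a rough sketch of what that looks like at the source level (assuming both projects use the Standard Peripheral Library; the header names, defines and clock-enable functions below should be checked against the library copies you actually have), you end up selecting the family header and then revisiting every peripheral call against it:
/* Family-specific device/SPL header: pick the one that matches the chip. */
#if defined(STM32F4XX)
  #include "stm32f4xx.h"        /* Cortex-M4, e.g. STM32F4 Discovery */
#elif defined(STM32L1XX_MD)
  #include "stm32l1xx.h"        /* Cortex-M3, e.g. STM32L152         */
#else
  #error "Define the target family so the right peripheral headers are used"
#endif

/* Porting then means revisiting each peripheral call; for example the
   clock-enable call differs between the families (names as found in the
   respective Standard Peripheral Libraries -- verify against your copy). */
void enable_gpiob_clock(void)
{
#if defined(STM32F4XX)
    RCC_AHB1PeriphClockCmd(RCC_AHB1Periph_GPIOB, ENABLE);   /* F4: GPIO sits on AHB1 */
#else
    RCC_AHBPeriphClockCmd(RCC_AHBPeriph_GPIOB, ENABLE);     /* L1: GPIO sits on AHB  */
#endif
}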
The STM32 L series includes internal EEPROM, which matters when the application goes into a sleep mode and wants to recover variables after wake-up; an application that uses sleep modes needs to save its data somewhere. So if you use the STM32 F series, take care: you may need to attach an external EEPROM.

Where to find I bit and how to edit it to enable interrupts in ARM Cortex-M4

In an ARM Cortex-M4F MCU (the TM4C1294NCPDT specifically), one of the steps needed to get (GPIO) interrupts working is to clear the I bit.
I searched a lot but couldn't find any useful information about this. Could anybody please tell me where exactly that bit is and how to modify it, and whether any special procedure is needed?
It would also be great to be told where exactly this information can be found, so that I can learn to answer such questions myself in the future.
CMSIS provides a standard cross-vendor software interface to Cortex-M based devices. It defines a number of functions for interacting with the NVIC and PRIMASK, including the intrinsics __disable_irq()/__enable_irq().
The ARM Cortex-M interrupt system is quite complicated and very well thought out. It consists of CPU registers and a tightly coupled interrupt controller (NVIC). Interrupts are prioritized and vectored. There is no single interrupt-enable flag as on smaller 8/16-bit MCUs.
For each interrupt there are two gates in the ARM core between the event and the CPU. The first is the CPU PRIMASK register (a single bit), which is the closest thing to the classical interrupt-enable flag. The second is a per-interrupt enable bit in the NVIC. For both there is an ARM standard interface in the CMSIS headers: the functions __enable_irq() and __disable_irq() control the PRIMASK bit, while the peripheral interrupt itself is controlled by NVIC_EnableIRQ(IRQn_Type IRQn), where IRQn is the interrupt number as defined in the MCU-specific header file.
Finally, most peripheral modules also have their own interrupt enable bits, as known from smaller MCUs.
Note that for an interrupt to get through, all gates have to be open (all bits set to "enable"). Use the CMSIS functions to manipulate the bits; they very likely will not take more instructions than a hand-crafted version.
Edit:
There is no actual need to fiddle yourself with assembler or the registers. Just use the CMSIS functions, you can very likely not do better yourself, but possibly worse. That's actually the intention of CMSIS.
(end edit)
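As an illustration, a typical enable sequence with CMSIS looks like the sketch below. The interrupt name (GPIOA_IRQn) and the peripheral-side enable are placeholders; take the real names from the TM4C1294 device header and datasheet.
#include "TM4C1294NCPDT.h"   /* device header providing the CMSIS core functions and
                                IRQn names -- the exact file name depends on your pack */

void gpio_interrupt_setup(void)
{
    /* 1) Peripheral gate: enable the interrupt inside the GPIO module itself
          (register and bit names come from the TM4C1294 datasheet), e.g.
          GPIOA->IM |= ...;  -- placeholder, see the GPIO chapter. */

    /* 2) NVIC gate: enable and prioritize the interrupt line.
          GPIOA_IRQn is illustrative -- use the name from the device header. */
    NVIC_SetPriority(GPIOA_IRQn, 3);
    NVIC_EnableIRQ(GPIOA_IRQn);

    /* 3) CPU gate: clear PRIMASK (the "I bit" the question asks about). */
    __enable_irq();
}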
Start by reading the reference manual for the MCU and the vendor's homepage; those should provide references and app notes for the device. You should also read the technical reference manual and the architecture reference manual from ARM. Actually, have a close look at all the documents there that relate to your CPU (the M4 in your case). They are free; some require registering.
For the NVIC, you should not access it directly, but through the CMSIS header files as provided by TI for exactly this MCU (the headers require some device-specific settings). If they are not available, you can get them from ARM, but then you have to provide the device-specific settings yourself (there are only a few, and they are given in the MCU's reference manual).
As the ARM Cortex-M4 has multiple interrupts, you need their symbolic names to enable or disable them. These are defined in the MCU header that also defines all the peripheral modules (there might be multiple such headers). The names end in _IRQn; just search for that.
To use the Cortex-M4 you should read the documents mentioned, or try a good book. However, as this is not a tutorial site, nor is it allowed to recommend books, please search for one yourself.
OK, the easiest answer to my question is:
Use the "CPSID I" or "CPSIE I" inline assembly instructions, which set or clear the PRIMASK (I) bit respectively (of course this only works in privileged mode).
These two instructions are equivalent to the CMSIS functions __disable_irq() and __enable_irq() respectively.
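For completeness, this is how those instructions are typically wrapped in GCC-style inline assembly (a sketch; the CMSIS intrinsics expand to essentially the same thing):
/* Disable interrupts: set PRIMASK, i.e. the "I bit" (privileged mode only). */
static inline void irq_disable(void)
{
    __asm volatile ("cpsid i" : : : "memory");
}

/* Enable interrupts: clear PRIMASK. */
static inline void irq_enable(void)
{
    __asm volatile ("cpsie i" : : : "memory");
}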

C interrupts on Cortex M3

I'm currently trying to implement interrupts on the STM32L152. I'm not using the standard peripheral libraries because I find them very confusing and difficult to get my head around. I'm not too competent with C for microcontrollers yet.
I currently do everything through registers. Is there a way to implement interrupts in C purely through registers? There doesn't seem to be any information out there that actually makes sense. To be honest, I found learning C quite inaccessible in the first place.
Thanks
Of course you can implement interrupts by setting registers.
The register values tell the STM32 how to deal with interrupts: which interrupt is enabled and how the interrupt controller behaves.
You'll also need an interrupt vector table. On the Cortex-M3 the table holds the addresses of your interrupt service routines; when an interrupt occurs, the core loads the program counter from the corresponding table entry and jumps straight to your handler (no assembler jump stub is needed, unlike on some older architectures).
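As a minimal register-level sketch for a Cortex-M3 like the STM32L152: the NVIC set-enable registers sit at the architecturally fixed address 0xE000E100, while the interrupt number, the peripheral-side enable bits and the handler name in the vector table all have to come from the STM32L1 reference manual and your startup file. The EXTI0 example below is only illustrative.
#include <stdint.h>

/* NVIC Interrupt Set-Enable Registers (ISER0..): fixed by the Cortex-M3
   architecture at 0xE000E100, 32 interrupt lines per 32-bit register. */
#define NVIC_ISER(n)  (*(volatile uint32_t *)(0xE000E100UL + 4UL * (n)))

/* Position in the vector table, taken from the reference manual
   (EXTI0 is position 6 on the STM32L1 -- check your own table). */
#define MY_IRQ_NUMBER  6U

void my_irq_init(void)
{
    /* 1) Enable and configure the interrupt source in the peripheral's own
          registers (EXTI/SYSCFG for a GPIO pin -- see the reference manual). */

    /* 2) Enable the line in the NVIC. */
    NVIC_ISER(MY_IRQ_NUMBER / 32U) = 1UL << (MY_IRQ_NUMBER % 32U);
}

/* 3) The handler: its name must match the entry in the vector table defined in
      your startup code (e.g. EXTI0_IRQHandler in most STM32 startup files). */
void EXTI0_IRQHandler(void)
{
    /* clear the peripheral's pending flag here, then do the actual work */
}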
You should read chapter 10 in this reference manual.
Hope this helps.

Want to configure a particular peripheral register in ARM9 based chip

I have a Verilog-based verification environment for an ARM-based chip. I have to write new tests in C++ to verify a peripheral. I have all the ARM GCC tools in place. I do not know how to make a particular peripheral register visible in a C++-based test. I want to write to this register, wait for the interrupt from the peripheral, and then read back the status of another peripheral register.
I would like to know how this can be done, and which documentation from ARM I should refer to. All the documentation I have found so far is aimed at system developers; I need the basic information.
Regards
Manish
You will eventually, if not immediately, want the ARM ARM (yes, ARM twice: once for ARM, the second for Architecture Reference Manual; you can google it and download it for free). Second, you want the TRM, the Technical Reference Manual for the specific core in your chip. ARM doesn't make chips, they make processor cores that other people put in their chips, so the company that makes the chip may or may not include the TRM in their documentation. If you have an ARM core in Verilog then I assume you purchased it, and that means you have the TRM for the specific core that you purchased, plus any add-ons (like a cache, for example).
You can take this with a grain of salt, but I have done what you are doing for many years now (testing in simulation and later on the real chip), and my preference is to write my C code as if it were going to run embedded on the ARM. Well, in this case perhaps it is going to run embedded on the ARM.
Instead of something like this:
#define SOMEREG (*(volatile unsigned int *)0X12345678)
and then in your code
SOMEREG = 0xabc;
or
somevariable = SOMEREG;
somevariable |= 0x10;
SOMEREG = somevariable;
My C code uses external functions.
extern unsigned int GET32 ( unsigned int address );
extern void PUT32 ( unsigned int address, unsigned int data);
somevariable = GET32(0x12345678);
somevariable|=0x10;
PUT32(0x12345678,somevariable);
When running on the chip in or out of simulation:
.globl PUT32
PUT32:
    str r1,[r0]
    bx lr       @ or mov pc,lr depending on architecture

.globl PUT16
PUT16:
    strh r1,[r0]
    bx lr

.globl GET32
GET32:
    ldr r0,[r0] @ I know what the ARM ARM says, this works
    bx lr

.globl GET16
GET16:
    ldrh r0,[r0]
    bx lr
Say you name that file putget.s
arm-something-as putget.s -o putget.o
then link putget.o in with your C/C++ objects.
I have had gcc and pretty much every other compiler fail to get the *volatile approach to work 100%. It usually fails right after you release your code to the manufacturing folks to take your tests and run them on the product; then you have to stop production and re-write or re-tune a bunch of code to un-confuse the compiler. The external-function approach has worked 100% on every compiler; the only drawback is performance when running embedded, but the abstraction it buys you across all interfaces and operating systems pays you back for that.
I assume you are doing one of two things. Either you are running code on the simulated ARM, trying to talk to something tied to the simulated ARM; eventually I assume the code will be doing that, so you will have to get into tool and linker issues, for which there are many examples out there (some of them mine), just like building a gcc cross compiler: trivial once shown the first time. Or this is a peripheral that will eventually be tied to an ARM, but for now sits outside the core yet inside the design, which hopefully means it is memory mapped and tied to the ARM's memory interface (AMBA, AXI, etc.).
For the first case you have to get over the embedded hurdle. You will need to build bootable code, probably ROM/flash based (read only), since that is likely how the ARM/chip will boot, and deal with the linker scripts that separate ROM from RAM. Here is my advice: eventually, if not now, the hardware engineers will want to simulate the ROM timing, which is painfully slow in simulation. So compile your program to run completely from RAM (other than the exception table, which is a separate topic), and compile to a binary format that you are willing to write an ad hoc utility for reading; ELF is easy, so are ihex and srec, though none as easy as a plain old binary .bin. What you ultimately want is a bit of assembler that boots from the virtual PROM/flash, enables the instruction cache (if it is implemented and working in simulation; if not, wait on that step), uses ldm and stm instructions in a loop to copy as many words at a time as you can into RAM, and then branches to RAM (see the C sketch of the copy loop below). I have a host-based utility that takes the .bin file and generates an assembler source containing both that copy-to-RAM loop and the binary itself embedded as .word directives, then assembles and links that program into a format the simulation can use.
Do not let the hardware engineers convince you that you have to re-build the Verilog every time: you can use $readmemh() or something similar in Verilog to read a file at runtime, so you do not have to re-compile the Verilog to change the ARM binary. You will want to write an ad hoc host-based utility to convert your .bin or .elf or whatever into a file the Verilog can read; readmemh is trivial.
I am getting off on a tangent. Use the put/get functions to talk to registers. You have to use the TRM and the ARM ARM to place the interrupt handler code somewhere, and you have to enable the interrupt, most likely in more than one place in the ARM as well as in the peripheral. The beauty of simulating is that you can watch your code execute and see the interrupt leave the peripheral, and debug your code based on what you see. With a real chip you don't know whether your code failed to make the peripheral raise the interrupt, or failed to enable the interrupt, or whether the interrupt worked but you made a mistake in the handler; with a Verilog simulator you can see all of this, and you should strive to learn to read the waveforms rather than rely on the hardware engineers to do it for you. ModelSim or Cadence or whoever can save the waveforms in .vcd format, and you can use a free tool named gtkwave to view them. Don't let them convince you that they don't have any more licenses available for you to look at stuff.
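A sketch of that copy-to-RAM step in C (the answer above does it with an ldm/stm loop in assembler; the __rom_image_start/__ram_start/__ram_end symbols here are hypothetical names that would have to come from your linker script):
#include <stdint.h>

/* Hypothetical linker-script symbols marking the image in ROM and its
   destination and end in RAM -- the real names depend on your linker script. */
extern uint32_t __rom_image_start[];
extern uint32_t __ram_start[];
extern uint32_t __ram_end[];

void copy_image_to_ram(void)
{
    const uint32_t *src = __rom_image_start;
    uint32_t       *dst = __ram_start;

    /* Word-at-a-time copy; the hand-written assembler version would use
       ldm/stm to move several words per iteration for shorter sim times. */
    while (dst < __ram_end) {
        *dst++ = *src++;
    }
    /* ...then branch to the entry point in RAM (done from the assembler). */
}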
All of that is secondary. If this is an off-core but on-chip peripheral, then you probably want to test that logic without the ARM core first. If you don't know Verilog: it is easy, just look at the code and you can figure it out; software engineers can pick it up in a few days or a week if they are already experienced in languages, particularly C. Either way, the hardware engineer likely has a test bench for the peripheral. Create, or have them create, a test bench with a register interface similar to what you will see once connected to the ARM, either directly on the ARM bus or on a test-bench interface that simplifies the ARM bus. Then use VPI, which is ugly but works (google "foreign language interface" as well as VPI), to connect C code running on the host machine to the simulation. Do most of your work in C and Verilog, minimizing the VPI nightmare. Because the VPI code is compiled and linked into the simulation, you do not want to have to re-build the sim every time you change your test program, so use sockets or some other IPC mechanism to separate your test program from the VPI code. Then write some host code that implements put32 and get32 (put8, put16, whatever functions you want). Now you take the same test program that could run on the ARM if compiled that way, and instead compile it on the host, linking it against the put/get/whatever abstraction layer. You can then write programs that for now run on the host but interact with the peripheral in simulation as if it were real hardware, and as if your host programs were embedded programs on the ARM. The interrupt is likely trivial in this environment: all you have to do is either look for it in the waveforms or have the VPI code print something on the console when the signal changes state.
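A sketch of what that host-side abstraction layer might look like, assuming a simple line-based text protocol to a VPI bridge listening on a local TCP socket (the protocol, the port and the bridge itself are hypothetical; the point is only that PUT32/GET32 keep the same signatures as the embedded versions so the test program compiles unchanged):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int sim_fd = -1;   /* connection to the (hypothetical) VPI bridge */

/* Connect once to the bridge the VPI code is assumed to expose on localhost. */
void sim_connect(const char *ip, unsigned short port)
{
    struct sockaddr_in addr;

    sim_fd = socket(AF_INET, SOCK_STREAM, 0);
    if (sim_fd < 0) { perror("socket"); exit(1); }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    addr.sin_addr.s_addr = inet_addr(ip);

    if (connect(sim_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect"); exit(1);
    }
}

/* Same signatures as the embedded versions, so test code compiles unchanged. */
void PUT32(unsigned int address, unsigned int data)
{
    char cmd[64];
    int  n = snprintf(cmd, sizeof(cmd), "w %08x %08x\n", address, data);
    (void)write(sim_fd, cmd, (size_t)n);          /* hypothetical "write" command */
}

unsigned int GET32(unsigned int address)
{
    char cmd[64], reply[64];
    unsigned int data = 0;
    ssize_t got;

    int n = snprintf(cmd, sizeof(cmd), "r %08x\n", address);
    (void)write(sim_fd, cmd, (size_t)n);          /* hypothetical "read" command  */

    got = read(sim_fd, reply, sizeof(reply) - 1); /* bridge replies with hex data */
    if (got > 0) {
        reply[got] = '\0';
        sscanf(reply, "%x", &data);
    }
    return data;
}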
Oh, and the reason for copying from ROM to RAM and then running from RAM is that on average your sim times will be significantly shorter: five or ten minutes instead of hours. Simulating the peripheral by itself without the ARM, using a foreign-language interface to bridge to and from the host, cuts your sim time from minutes to seconds, depending on what you are doing. If you use some sort of abstraction like my put/get, you can write your peripheral code once in one file, and by linking that one file/program/function different ways it can be used with the peripheral alone in simulation for quickly developing your code, then with the ARM in place in simulation, adding the complexity of the ARM exceptions and interrupt system, and later on the real chip just as you ran on the simulated chip. Later still, that code can hopefully be used as-is in a driver or in application space using mmap, etc.
