I am working with the IAR compiler for the FRDM-KL46Z platform.
I want to use the internal clock and set it to 48 MHz (or as high as possible).
So far I have made the following changes in the provided example sysinit.c file, in the sysinit() function.
#define NO_PLL_INIT
#if defined(NO_PLL_INIT)
mcg_clk_hz = 48000000; // It only works at 21000000 Hz; otherwise I get garbage prints on UART0.
SIM_SOPT2 &= ~SIM_SOPT2_PLLFLLSEL_MASK;
uart0_clk_khz = (mcg_clk_hz) / 1000;
#else
....
The example runs in FEI mode; if I switch to FBI or BLPI mode, I get a much lower MCU clock.
I want the MCU clock to be as high as possible while using the internal clock. (According to the data sheet I think this is supported, but I don't know how.)
Can anyone explain this, or point me to a code reference? Much obliged.
Fixed it by doing this:
#define NO_PLL_INIT
#if defined(NO_PLL_INIT)
MCG_C4 |= (MCG_C4_DRST_DRS(1) | MCG_C4_DMX32_MASK); // DMX32 = 1, DRS = 1 selects the 1464x FLL factor: 32.768 kHz * 1464 ~ 47.97 MHz
mcg_clk_hz = 48000000;
SIM_SOPT2 &= ~SIM_SOPT2_PLLFLLSEL_MASK; // keep the FLL (not the PLL) as the PLLFLL peripheral clock source
uart0_clk_khz = (mcg_clk_hz) / 1000;
#else
....
I am currently working with a Rockchip RK3568 processor and its eMMC interface. I have studied the sources of large projects such as Linux and U-Boot for this interface, and I think I have reproduced practically the same steps (in both projects the bare-metal initialization is very similar).
However, unlike those sources, I have a problem with the CLK clock signal.
It seems logical to me that the processor takes a certain frequency (freq_1) and feeds it to the eMMC peripheral as its clock (the source of this frequency is of course selected through the SoC's system registers). Inside the peripheral itself we set up the dividers and enable the CLK output, so at the output we should get the divided CLK frequency we expect (for a standard SDHCI host in divided-clock mode that would be the base clock divided by 2 × divider). I checked the registers: I really do write the divider value there, and the clock-stable bit is set as required. But my problem is that the peripheral seems to ignore these dividers, and I see the same frequency at the output as at the input (freq_1).
Maybe the problem is that I am doing something earlier (or later) than I should, but I don't understand what.
I will be grateful for any help.
I am running some simple code, but the result is always the same.
Example code:
mmcsd_reset(mmcsd_dev_p, SDHCI_RESET_ALL);
mmcsd_gpio_init();
HWREGB(mmcsd_dev_p->reg_base + EMMC_PWR_CTRL) = 0x01; // set bit Bus power ON
cyg_uint16 clk = 0;
cyg_uint16 div = 2; // some divider for output clk
HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL) = clk; // disable clk
delay_us(1000);
// Set divider and input CLK
clk |= (div & SDHCI_DIV_MASK) << SDHCI_DIVIDER_SHIFT;
clk |= ((div & SDHCI_DIV_HI_MASK) >> SDHCI_DIV_MASK_LEN) << SDHCI_DIVIDER_HI_SHIFT;
clk |= SDHCI_CLOCK_INT_EN;
HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL) = clk;
// wait stable input clk
hal_delay_us(500 * 1000);
// Enable output clk
clk = HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL);
clk |= SDHCI_CLOCK_CARD_EN;
HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL) = clk;
I found out experimentally that the CLK line for the eMMC is configured directly through the system registers (CRU_CLKSEL_CON28). In other words, the peripheral only supports the clock rates that are available in this register. Accordingly, U-Boot never touches the dividers in the eMMC peripheral's Clock Control Register, and everything works correctly.
If you force U-Boot to set the divider in that register, the output frequency is still equal to the input frequency.
At the moment, based on these experiments, I conclude that for some reason Rockchip has not implemented (?) the dividers in the eMMC block.
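In code, the takeaway would look roughly like the sketch below. This is only an illustration: the CRU base address and the CLKSEL_CON28 field layout are placeholders (take the real offset and bit positions from the RK3568 TRM), and HWREG/mmcsd_dev_p follow the conventions of the code above.
/* Rockchip CRU registers use the upper 16 bits as a write-enable mask for the
 * lower 16 bits, so a field can be changed without a read-modify-write. */
#define HWREG(x)            (*(volatile cyg_uint32 *)(x))
#define CRU_BASE            0xFDD20000u          /* assumed RK3568 CRU base address */
#define CRU_CLKSEL_CON28    (CRU_BASE + 0x170u)  /* placeholder offset - check the TRM */
#define CCLK_EMMC_SEL_SHIFT 12u                  /* placeholder field position */
#define CCLK_EMMC_SEL_MASK  (0x7u << CCLK_EMMC_SEL_SHIFT)
#define CCLK_EMMC_SEL_VALUE (0x2u << CCLK_EMMC_SEL_SHIFT)  /* one of the fixed rates */
/* 1. Pick the eMMC card clock from the fixed rates offered by the CRU. */
HWREG(CRU_CLKSEL_CON28) = (CCLK_EMMC_SEL_MASK << 16) | CCLK_EMMC_SEL_VALUE;
/* 2. In the SDHCI Clock Control Register, leave the internal divider at 1
 *    (divider bits = 0) and only enable the internal and card clocks. */
cyg_uint16 clk = SDHCI_CLOCK_INT_EN;
HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL) = clk;
clk |= SDHCI_CLOCK_CARD_EN;
HWREGH(mmcsd_dev_p->reg_base + EMMC_CLK_CTRL) = clk;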
I implemented an interrupt on TIMER1 of a PIC16F877A MCU on a PIC-DIP40 development board. I configured the timer prescaler to 1 and the preload value to 55536 so that the interrupt period is 0.01 s. Fosc is 4 MHz, so my calculation is:
interrupt time = (4 / Fosc) * (65536 - 55536) = (4 / 4000000) * (65536 - 55536) = 0.01 s
I then count 100 of these interrupts to generate the 1 s interval.
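The same arithmetic in code form (a sketch; the macro names are only illustrative):
/* Timer1 is clocked from Fosc/4 with a 1:1 prescaler, so one count = 1 us at Fosc = 4 MHz. */
#define FOSC_HZ        4000000UL
#define TMR1_CLK_HZ    (FOSC_HZ / 4UL)                                     /* 1 000 000 Hz */
#define TICK_US        10000UL                                             /* desired period: 0.01 s */
#define TMR1_PRELOAD   (65536UL - (TICK_US * (TMR1_CLK_HZ / 1000000UL)))   /* = 55536 */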
Currently I have no oscilloscope to check the actual 1 s interval, so I blink one LED (LED2) from the timer interrupt and another LED (LED1) at the same 1 s interval using the __delay_ms(1000) function.
I expected the two LEDs to blink synchronously (turning on and off at the same time). For the first few iterations they do, but after a while there is a clear difference between their blink times, and after several minutes the difference is almost 1 s. So the timer interrupt is not working as expected.
Is my calculation of the interrupt time wrong, or am I missing something in the Timer1 configuration?
The overall goal is to generate a 1 s time interval and verify it without an oscilloscope.
Here is my code:
// CONFIG
#pragma config FOSC = HS // Oscillator Selection bits (HS oscillator)
#pragma config WDTE = OFF // Watchdog Timer Enable bit (WDT disabled)
#pragma config PWRTE = OFF // Power-up Timer Enable bit (PWRT disabled)
#pragma config BOREN = OFF // Brown-out Reset Enable bit (BOR disabled)
#pragma config LVP = OFF // Low-Voltage (Single-Supply) In-Circuit Serial Programming Enable bit (RB3 is digital I/O, HV on MCLR must be used for programming)
#pragma config CPD = OFF // Data EEPROM Memory Code Protection bit (Data EEPROM code protection off)
#pragma config WRT = OFF // Flash Program Memory Write Enable bits (Write protection off; all program memory may be written to by EECON control)
#pragma config CP = OFF // Flash Program Memory Code Protection bit (Code protection off)
#include <xc.h>
#include <pic16f877a.h>
#define _XTAL_FREQ 4000000
#define LED1_ON PORTDbits.RD7 = 0
#define LED1_OFF PORTDbits.RD7 = 1
#define LED2_ON PORTDbits.RD6 = 0
#define LED2_OFF PORTDbits.RD6 = 1
#define LED2_TOGGLE PORTDbits.RD6 = ~PORTDbits.RD6
uint16_t preloadValue = 55536 ;
uint16_t counter = 0 ;
uint16_t secCounter1 = 100 ;
void io_config() {
TRISD &= ~((1 << _PORTD_RD7_POSITION) | (1 << _PORTD_RD6_POSITION)) ; //RD7 and RD6 are output LEDs
}
void timer1_init(){
TMR1 = preloadValue ; //loading the preload value
T1CON &= ~((1 << _T1CON_T1CKPS1_POSN) | (1 << _T1CON_T1CKPS0_POSN) | (1 << _T1CON_TMR1CS_POSN)) ; //prescalar is 1 clock is Fosc
T1CONbits.TMR1ON = 1 ; //timer 1 is ON
LED2_ON ;
}
void interrupt_en_configure(){
INTCON |= (1 << _INTCON_GIE_POSITION) | (1 << _INTCON_PEIE_POSITION) ; //global and peripheral interrupt on
PIE1 |= _PIE1_TMR1IE_MASK ; //timer 1 interrupt enable
TMR1IF = 0 ; //clearing interupt flag
}
void __interrupt() ISR(){
if(TMR1IF){
counter ++ ;
if (counter == secCounter1){
counter = 0 ;
LED2_TOGGLE ;
}
TMR1 = preloadValue ;
TMR1IF = 0 ;
}
}
void main(void) {
io_config();
interrupt_en_configure() ;
timer1_init() ;
while (1) {
LED1_ON ;
__delay_ms(1000);
LED1_OFF ;
__delay_ms(1000);
}
}
You should not expect them to stay synchronous, for the following reasons:
First, you do not know how __delay_ms() is implemented or what "promises" of precision it makes - it is certainly not using TIMER1, because you are controlling that. In fact the documentation gives some implementation details, and you really cannot expect precision.
Secondly, even if __delay_ms() were accurate, you are invoking it in a loop with the software overhead of the loop, the function call, and whatever you do to toggle the LED. That adds a few cycles to every iteration of the software loop, while the interrupt interval is locked to the hardware and independent of the software timing.
The issue of the precision of __delay_ms() is in fact addressed in this Microchip support article, which starts:
If an accurate delay is required, or if there are other tasks that can be performed during the delay, then using a timer to generate an interrupt is the best way to proceed.
In this case you should trust your timer code over the library-provided delay, which is intentionally crude (so that it does not use up a valuable hardware timer resource).
__delay_ms() delays by running an empty loop, and it commonly cannot be exact. You would need to look at the actual machine code that runs to calculate the real delay. By the way, this is not rocket science and is a great learning task. (Been there, done that.)
Now the rest of your loop (LED switching, looping) adds to this. Therefore, your purely software-driven blinker is not exact.
However, your interrupt-driven blinker is not exact either. You reload the timer at the end of the ISR, after several clock cycles have already passed. You need to take this into account, and don't forget the interrupt latency. Even worse, depending on the conditional statement, the reload happens at different times after the timer overflow.
Producing exact timing is difficult, especially with such a simple device.
The solution is to avoid software altogether for the reset of the timer. Please read chapter 8 of the data sheet and use the capture/compare/PWM module to reset the timer at the appropriate value.
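A minimal sketch of that approach (untested): CCP1 in compare mode with the special event trigger resets Timer1 in hardware on a match, so the period no longer depends on ISR latency. Register and bit names are from the PIC16F877A data sheet; check in the data sheet whether the period works out to CCPR1 or CCPR1 + 1 Timer1 counts and adjust the compare value accordingly.
void timer1_ccp_init(void){
    T1CON = 0x00 ;            // prescaler 1:1, clock = Fosc/4, timer off
    TMR1H = 0 ; TMR1L = 0 ;   // start counting from zero
    CCPR1H = 10000 >> 8 ;     // compare value: 10000 counts = 10 ms at 1 MHz
    CCPR1L = 10000 & 0xFF ;
    CCP1CON = 0x0B ;          // compare mode, special event trigger (resets TMR1 on match)
    CCP1IF = 0 ;
    CCP1IE = 1 ;              // interrupt on every compare match
    PEIE = 1 ;
    GIE = 1 ;
    T1CONbits.TMR1ON = 1 ;
}
In the ISR, check CCP1IF instead of TMR1IF and simply count matches; no software reload is needed.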
The worst that can still happen is some jitter, because the ISR might have different latencies. But the timer runs as accurately as your system's crystal, and on average your LED will blink correctly.
Anyway, if your timing requirements are not that strict, consider living with some inaccuracy and use the simplest solution you like best.
UPDATE
For anyone interested, here is a step-by-step guide and explanation of how to build a bare-metal USB stack, how to tackle such a project, and what you need to know for each step: STM32USB#GitHub
TLDR:
I have an STM32G441 and want to implement a USB driver without using any HAL libraries, just CMSIS - for the learning experience, to save space, and because what I want to do would require changing the HAL anyway.
But I can't get the device to receive anything. I'm stuck trying to get the device address, which is never handed to my code. The HAL middleware works just fine, so it's not a hardware issue.
What I'm doing
I enable the USB clock (correctly, I assume, because I can see the device sending ACK handshakes on my logic analyzer), power up the USB peripheral as described in the data sheet, enable all the necessary interrupts, and handle the reset event by initializing the BTable and endpoint 0. I then expect to receive a CTR interrupt, which never appears.
Reference Manual
Clock
The μC runs on a 25 MHz HSE clock. The USB peripheral runs on the PLL "Q" clock at ~48 MHz; the RCC settings were verified with the CubeMX clock configurator. The AHB runs at half speed, because I get a bus-error hard fault if I try to run it at full speed, but that's another question. The system clock is set to 143.75 MHz.
RCC->CR |= RCC_CR_HSEON | RCC_CR_HSION;
// Configure PLL (R=143.75, Q=47.92)
RCC->CR &= ~RCC_CR_PLLON;
while (RCC->CR & RCC_CR_PLLRDY) {
}
RCC->PLLCFGR |= RCC_PLLCFGR_PLLSRC_HSE | RCC_PLLCFGR_PLLM_0 | (23 << RCC_PLLCFGR_PLLN_Pos) | RCC_PLLCFGR_PLLQ_1;
RCC->PLLCFGR |= RCC_PLLCFGR_PLLREN | RCC_PLLCFGR_PLLQEN;
RCC->CR |= RCC_CR_PLLON;
// Select PLL as main clock, AHB/2 > otherwise Bus Error Hard Fault
RCC->CFGR |= RCC_CFGR_HPRE_3 | RCC_CFGR_SW_PLL;
// Select & Enable IO Clocks (PLL > USB, ADC; HSI16 > UART)
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | RCC_CCIPR_ADC12SEL_1 | RCC_CCIPR_USART1SEL_1 | RCC_CCIPR_USART2SEL_1 | RCC_CCIPR_USART3SEL_1 | RCC_CCIPR_UART4SEL_1;
RCC->AHB2ENR |= RCC_AHB2ENR_ADC12EN | RCC_AHB2ENR_GPIOAEN | RCC_AHB2ENR_GPIOBEN | RCC_AHB2ENR_GPIOCEN;
RCC->APB1ENR1 |= RCC_APB1ENR1_USBEN | RCC_APB1ENR1_UART4EN | RCC_APB1ENR1_USART3EN | RCC_APB1ENR1_USART2EN;
RCC->APB2ENR |= RCC_APB2ENR_USART1EN;
// Enable DMAMUX & DMA1 Clock
RCC->AHB1ENR |= RCC_AHB1ENR_DMAMUX1EN | RCC_AHB1ENR_DMA1EN;
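For reference, the PLL arithmetic implied by the PLLCFGR value above (using the G4 field encodings: PLLM_0 -> /2, PLLN = 23, PLLR left at its reset value 00 -> /2, PLLQ_1 -> /6):
// f_VCO  = 25 MHz / 2 * 23 = 287.5 MHz
// f_PLLR = 287.5 MHz / 2   = 143.75 MHz  (system clock)
// f_PLLQ = 287.5 MHz / 6   = ~47.92 MHz  (USB clock)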
USB Memory
As far as I know, the USB BTable and endpoint buffers need to be placed in the USB SRAM, not in regular SRAM. I've added some linker directives to create a section for that, which seems to work just fine according to the memory analyzer. __MEM2USB just converts an absolute address into an offset relative to the start of the USB SRAM.
#define __USB_MEM __attribute__((section(".usbbuf")))
#define __USBBUF_BEGIN 0x40006000
#define __MEM2USB(X) (((int)(X) - __USBBUF_BEGIN))
First question: accesses to this memory are only allowed to be 16 bits wide. But, contrary to e.g. the STM32F103, there seems to be no need for padding. The memory tool has some problems displaying this region, because the peripheral only handles word (16-bit) access while the tool uses double-word access, but copying the memory allocated by the HAL word by word also shows no padding. Is that correct? So I should be able to use all 1024 bytes, rather than seeing 1024 but only having 512. This is also the reason why __MEM2USB does not divide the address by 2.
Then I create some structures for the BTable and endpoint zero. The BTable ends up at 0x40006000 by default. Endpoint 0 has an RX and a TX buffer of at most 64 bytes each, as per the USB spec. The alignments are taken from the reference manual. The memory is not automatically zeroed out.
typedef struct {
unsigned short ADDR_TX;
unsigned short COUNT_TX;
unsigned short ADDR_RX;
unsigned short COUNT_RX;
} USB_BTABLE_ENTRY;
__ALIGNED(8)
__USB_MEM
static USB_BTABLE_ENTRY BTable[8] = {0};
__ALIGNED(2)
__USB_MEM
static char EP0_Buf[2][64] = {0};
Initialization
I enable the NVIC, then power up the peripheral, wait 1 μs until the clock is stable as per the data sheet, then clear the reset state, clear pending interrupts, enable the interrupts, and finally enable the internal pull-up to start enumeration.
NVIC_SetPriority(USB_HP_IRQn, 0);
NVIC_SetPriority(USB_LP_IRQn, 0);
NVIC_SetPriority(USBWakeUp_IRQn, 0);
NVIC_EnableIRQ(USB_HP_IRQn);
NVIC_EnableIRQ(USB_LP_IRQn);
NVIC_EnableIRQ(USBWakeUp_IRQn);
USB->CNTR &= ~USB_CNTR_PDWN;
// Wait 1μs until clock is stable
SysTick->LOAD = 100;
SysTick->VAL = 0;
SysTick->CTRL = 1;
while ((SysTick->CTRL & SysTick_CTRL_COUNTFLAG_Msk) == 0) {
}
SysTick->CTRL = 0;
USB->CNTR &= ~USB_CNTR_FRES;
USB->ISTR = 0;
USB->CNTR |= USB_CNTR_RESETM | USB_CNTR_CTRM | USB_CNTR_WKUPM | USB_CNTR_SUSPM | USB_CNTR_ESOFM;
USB->BCDR |= USB_BCDR_DPPU;
USB Reset
Now the host sends a reset signal, and the reset interrupt is triggered correctly. During the reset handling I initialize the BTable and EP0. I set EP0 to ACK on RX and NAK on TX requests, as other bare-metal USB examples and the HAL do (the STAT bits are toggle bits, not write bits, but the register is in a known state of 0x00 because the hardware clears it on a reset). Lastly I put the USB peripheral into enable mode and reset the device address to 0.
if ((USB->ISTR & USB_ISTR_RESET) != 0) {
USB->ISTR = ~USB_ISTR_RESET;
// Enable EP0
USB->BTABLE = __MEM2USB(BTable);
BTable[0].ADDR_TX = __MEM2USB(EP0_Buf[0]);
BTable[0].COUNT_TX = 0;
BTable[0].ADDR_RX = __MEM2USB(EP0_Buf[1]);
BTable[0].COUNT_RX = (1 << 15) | (1 << 10); // BL_SIZE = 1, NUM_BLOCK = 1 -> 64-byte RX buffer
USB->EP0R = USB_EP_CONTROL | (2 << 4) | (3 << 12); // EP type = control, STAT_TX = NAK, STAT_RX = VALID
USB->CNTR = USB_CNTR_CTRM | USB_CNTR_RESETM;
USB->DADDR = USB_DADDR_EF;
}
Debugging shows that the BTable is indeed at 0x40006000 and that the buffer addresses are (I assume) written correctly. The EP0 register was compared to a working HAL implementation, and they are the same at that point.
Here I'm stuck
I expect the host to send the device address next (it doesn't - it sends a suspend and a wakeup and then another reset first), which should trigger the CTR interrupt (whose mask bit is set). The point is, it never does, and I don't know why. The host sends the request just fine and the device ACKs that request just fine (per the logic analyzer), but CTR is never triggered. Any ideas what else I can try or where to look?
Update
I've now compared the messages from my implementation with the HAL ones. The interrupt handler now handles exactly the same messages in exactly the same order, and the USB registers also contain exactly the same values for every request. I've changed the BTable and USB-SRAM layout to contain the exact same values as the HAL after the reset interrupt.
I had to implement SUSP and WKUP handling for this to work, which was probably one of the missing pieces. Now both implementations behave exactly the same. It turns out the problem is that I never receive a proper SOF packet. The HAL gets its first SOF directly after the second reset (HW reset > 2x ESOF > SUSP > WKUP > RESET > (optionally 1 ESOF) > SOF), while mine gets an ERR instead of the SOF.
It looks like the error is not related to the USB registers or the USB SRAM. The next step will be to compare all registers I can think of as relevant between the two implementations. Maybe I forgot a clock?
I spent almost a week, just to figure out that I had misconfigured my 48 MHz clock source...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_0 | ...
This sets CLK48SEL to Reserved (01), not to the PLL "Q" clock (10)...
RCC->CCIPR = RCC_CCIPR_CLK48SEL_1 | ...
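For completeness, the corrected selection line from the clock setup above would then read (the other selections unchanged):
RCC->CCIPR = RCC_CCIPR_CLK48SEL_1 | RCC_CCIPR_ADC12SEL_1 | RCC_CCIPR_USART1SEL_1 | RCC_CCIPR_USART2SEL_1 | RCC_CCIPR_USART3SEL_1 | RCC_CCIPR_UART4SEL_1; // CLK48SEL = 10b: PLL "Q" clock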
Now I get the SOF packets and the CTR interrupt just fine. May this question serve as a bare-metal USB reference in the future.
I was trying to read an analog voltage on pin RC3 of a PIC16F15325. I have 3.23 V across a potentiometer, and its wiper is at roughly 1.65 V, which goes to pin RC3 of the PIC microcontroller. For configuration and libraries I used MPLAB Code Configurator. The code is as follows:
#include "mcc_generated_files/mcc.h"
void main(void)
{
// initialize the device
SYSTEM_Initialize();
EUSART1_Initialize();
ADC_Initialize();
adc_result_t val1 = 0;
// When using interrupts, you need to set the Global and Peripheral Interrupt Enable bits
// Use the following macros to:
// Enable the Global Interrupts
//INTERRUPT_GlobalInterruptEnable();
// Enable the Peripheral Interrupts
//INTERRUPT_PeripheralInterruptEnable();
// Disable the Global Interrupts
//INTERRUPT_GlobalInterruptDisable();
// Disable the Peripheral Interrupts
//INTERRUPT_PeripheralInterruptDisable();
while (1)
{
// Add your application code
val1 = ADC_GetConversion(19); // selected channel RC3
printf("Value - %hu \n",val1);
DELAY_milliseconds(1000);
}
}
For the voltage I mentioned above, I expected a value near 511. Anything beyond 1023 [i.e. (2^10) - 1] is strange, as the PIC has a 10-bit ADC. However, the output I get is:
Kindly help me in solving this issue.
Actually, the output was correct; it only looked strange because of the result format. I just looked into the data sheet, which describes two ways of formatting the ADC result.
PIC16(L)F15325/45 Datasheet page-228
Inside ADC_Initialize(), ADCON1 = 0x10, which means ADFM = 0 and the result is left-justified. I just did val1 = val1 >> 6; after ADC_GetConversion(19); and it worked as expected.
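For reference, the fix in context (channel 19 = RC3, as in the question). Alternatively, selecting a right-justified result (ADFM) in MCC's ADC configuration removes the need for the shift.
val1 = ADC_GetConversion(19); // channel 19 = RC3
val1 >>= 6;                   // ADCON1 has ADFM = 0: the 10-bit result is left-justified in 16 bits
printf("Value - %hu \n", val1);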
I'm trying to program a PIC12C508A for a simple LED learning circuit. I've read some examples, the Microchip data sheet, pic12c508a.h, and pic12c508a.inc. I've tried to set the TRIS register from both a C program and an ASM program, but it does not take. Using MPLAB X, the XC8 compiler, and the built-in simulator to check the SFRs, I can see that TRIS is not updating even when WREG holds the correct value. If anyone has experience with this, please check my code and see whether I am doing something wrong.
#include <xc.h>
// -- CONFIG
#pragma config MCLRE = ON // RA5/MCLR/VPP Pin Function Select bit (RA5/MCLR/VPP pin function is digital input, MCLR internally tied to VDD)
#pragma config WDT = OFF // Turn Watchdog Timer Off.
#pragma config CP = OFF // Flash Program Memory Code Protection bit (Code protection off)
#pragma config OSC = IntRC // Internal RC Oscillator
// -- Internal Frequency
#define _XTAL_FREQ 400000
int main()
{
TRIS = 0b111010; // 0x3A
//---0-0 Set GP0 and GP2 as outputs
GPIO = 0b000100; // 0x04
//---1-0 Set GP2 as HIGH and GP0 as LOW
for(int countdown = 10; countdown > 0; --countdown) {
__delay_ms(60000); // Delay 1 minute.
}
GPIO = 0b000001; // 0x01
//---0-1 Set GP2 as LOW and GP0 as HIGH
while(1)
NOP();
}
I also tried it in assembly, which is pretty much identical to the Gooligum tutorials for baseline PIC models.
list p=12c508a
#include <p12c508a.inc>
__CONFIG _MCLRE_ON & _CP_OFF & _WDT_OFF & _IntRC_OSC
RCCAL CODE 0x0FF ; Processor Reset Vector
res 1 ; Hold internal RC cal value, as a movlw k
RESET CODE 0x000 ; RESET VECTOR
movwf OSCCAL ; Factory Calibration
start
movlw b'111010' ; Configure GP0/GP2 as outputs
tris GPIO ;
movlw b'000100' ; Set GP2 HIGH - GREEN LED
movwf GPIO
goto $ ; loop forever
END
This all seems pretty straightforward, but when I use breakpoints and examine the SFRs in the simulator I can see that the GPIO and TRIS registers are never changed even though WREG holds the correct values. I've examined the ASM output that the XC8 compiler generates, and when it comes to setting the registers it is almost identical to the ASM I wrote.
I've also tried using HEX values and straight integer values and the results are the same.
The answer is that the oscillator frequency defined at the top of the program was far off from the real value:
#define _XTAL_FREQ 400000 //that's 400KHz INTOSC, impossible
Instead it should be
#define _XTAL_FREQ 4000000 //That's 4MHz INTOSC
@Justin pointed this out in a comment below the original post.
First, in order to use GP2 as an output, do you need to clear T0CS in the OPTION register? (A sketch follows at the end of this answer.)
Second, I observe this in the manual:
Note: A read of the ports reads the pins, not the output data latches.
That is, if an output driver on a pin is enabled and driven high, but
the external system is holding it low, a read of the port will
indicate that the pin is low.
but I guess the simulator will assume the external system is not holding down the pin.
Third, BCF and BSF instructions look like a better way of waggling GP2 and GP0 independently of whatever else is going on in GPIO.
I'm sorry, but other than that I don't know what to suggest.
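Regarding the first point, a minimal sketch, assuming the XC8 baseline header exposes the (not memory-mapped) OPTION register the same way it exposes TRIS; if it does not, load W and execute the option instruction in assembly instead:
OPTION = 0b11011111; // clear T0CS (bit 5) so GP2 is driven by the port latch instead of T0CKI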
You can try a different GPIO pin, because according to the documentation GP2 may be controlled by the OPTION register.