Why does my ESP32 keep on resetting after startup? - c

I have wired up my ESP32 and after I power it, it keeps on restarting with the message:
rst:0x10 (RTCWDT_RTC_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:812
load:0xffffffff,len:-1
ets Jun 8 2016 00:22:57
rst:0x10 (RTCWDT_RTC_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
configsip: 0, SPIWP:0xee
clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
mode:DIO, clock div:1
load:0x3fff0018,len:4
load:0x3fff001c,len:812
load:0xffffffff,len:-1
ets Jun 8 2016 00:22:57
I have connected a number of devices: a 4x4 keypad on GPIOs [6, 7, 8, 15, 4, 16, 17, 15], and both an RTC and an LCD serial adapter on pins [21, 22]. All devices are powered from the ESP32's 5V pin.
Now, strangely, while it is resetting, if I press number 1 on the keypad or disconnect it, the resetting stops and everything works as expected, even after reconnecting it. The same happens with the LCD serial adapter. None of the other devices affect the ESP32.
Any insight as to what may be causing this peculiar behavior will be greatly appreciated.

You can safely pull only about 12 mA of current from the ESP32. You probably have too many devices powered from its 5V pin.
Consider using an additional power source. Don't forget to interconnect the grounds if you do.

Your logs look very similar to what I was getting for over a month. Flashing a project without secure boot enabled was fine, but building and flashing the exact same project with secure boot enabled (under menuconfig) gave me the 'bootloop' with
"load:0xffffffff,len:-1"
It could be that your bootloader.bin size exceeds the default limit (0x7000). This would make it overlap the default offset (0x8000) of the partition table.
In my case, the bootloader.bin size was about 0x9000 (I did a hexdump).
To overcome this, I changed the partition table offset from 0x8000 to 0x10000 under menuconfig. (This shifted the offset of the app image from 0x10000 to 0x20000.) Then flash accordingly with 'esptool.py write_flash...', or use 'idf.py flash'.
Another way is to reduce the size of bootloader.bin, for example by reducing the bootloader log verbosity to 'Warning' or 'Error'.
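For reference, after these changes the relevant lines in sdkconfig look roughly like this (option names as in recent ESP-IDF versions; shown only to illustrate the menuconfig settings described above):

CONFIG_PARTITION_TABLE_OFFSET=0x10000
CONFIG_BOOTLOADER_LOG_LEVEL_WARN=y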

The first thing to always try is the process of elimination: remove one device at a time to see which attached device is causing the problem.
But looking at the data I have, GPIOs 6-11 are used by the flash memory (you might double-check me on that). I have a document from Andreas Spiess with the pinouts of several boards, and it marks those GPIOs as unusable because they are used by the flash memory.

Related

Can't read from VEML6030 (or only get 0x0000) via I²C using Renesas R5F104GK

My problem:
I have connected a VEML6030 (ambient light sensor from Vishay) to my µC.
When I try to read this sensor, I only get 0x0000 as the answer.
I'm programming in C on a Renesas R5F104GK.
I used Applilet as a code generator.
I have the data sheet and an application note as documentation. I have also already spent days searching online, unfortunately without success so far.
I also have a LIS3DH sensor on my PCB, which is connected to the same I²C bus.
I can separate both components from the bus with a jumper.
What I have already achieved:
Depending on the level of the ADDR pin, I see an ACK on the bus.
(scope screenshot: identifying the VEML6030 on the bus)
The communication with the LIS3DH works (read & write).
I get protocol-compliant ACK/NACK from the sensor.
The system is operated at 3 V.
If I try to read the output, I only get 0x0000:
(screenshot of the output)
During the tests I am sure that only the VEML6030 is being addressed.
I would be very happy if someone here could share their experience with the VEML6030 and, if necessary, give me a tip on what I'm doing wrong.
In the end, I'm sure that the problem is in front of the monitor ;)
Update (@Lundin)
How can I move my question to electronics.stackexchange.com?
Unfortunately I can only publish a part of the schematic.
2.1) SDA & SCL have 10k pull-ups
2.2) SDA is connected to µC pin 18 (P14/RxD2/SI20/SDA20/TRDIOD0/(SCLA0))
2.3) SCL is connected to µC pin 17 (P15/PCLBUZ1/SCK20/SCL20/TRDIOB0/(SDAA0))
2.4) INT is connected to µC pin 36 (P140/PCLBUZ0/INTP6)
The connector at ADDR is just used to get the correct footprint on the PCB. In reality it is a 3-pin jumper.
P.S.: Solved Communication
(scope screenshot of the working communication)
I hope this is the correct way now.
My problem is solved.
I only have to send a 'Start'.
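A minimal C sketch of the read sequence, assuming the fix means a repeated Start (no Stop) between the command-code write and the read. The i2c_* helpers are hypothetical stand-ins for the Applilet-generated IICA routines; the 7-bit address 0x10 (ADDR low) and the ALS result register command code 0x04 are from the VEML6030 data sheet:

#include <stdint.h>

/* prototypes for the hypothetical bus helpers */
void    i2c_start(void);
void    i2c_stop(void);
void    i2c_write(uint8_t byte);
uint8_t i2c_read(uint8_t ack);

#define VEML6030_ADDR  0x10u   /* 0x48 if the ADDR pin is high   */
#define VEML6030_ALS   0x04u   /* ambient light result register  */

uint16_t veml6030_read_als(void)
{
    uint8_t lsb, msb;

    i2c_start();                              /* Start                               */
    i2c_write((VEML6030_ADDR << 1) | 0u);     /* slave address + write               */
    i2c_write(VEML6030_ALS);                  /* command code                        */
    i2c_start();                              /* repeated Start - no Stop in between */
    i2c_write((VEML6030_ADDR << 1) | 1u);     /* slave address + read                */
    lsb = i2c_read(1u);                       /* data low byte, ACK                  */
    msb = i2c_read(0u);                       /* data high byte, NACK                */
    i2c_stop();

    return (uint16_t)((msb << 8) | lsb);      /* VEML6030 sends the LSB first        */
}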

STM32F4 FSMC/FMC SRAM as Heap/Stack results in random hardfaults

We are currently evaluating the use of an external SRAM for C/C++ heap storage on our platform, which uses an STM32F439BI microcontroller.
The problem
Using the SRAM as storage for the heap results in random hardfaults raised from bus errors / imprecise bus errors.
Without placing the heap on the SRAM, memory tests run successfully on the whole SRAM (8 bit/16 bit and 32 bit accesses).
Connecting a debugger, I can sometimes observe these errors before a hardfault occurs. Most often a word is read from the SRAM and the CPU register fills with addresses of the following format: 0x-1F3-1F3 ('-' is most often '0', sometimes 'A' or '6'). The pattern '1F3' persists. If the same address is read again a few lines further down, the correct value is read (some other address in the 0x60000000 space).
If I stop the program on a breakpoint at some point early in the program and step a few lines, I get these errors more frequently.
Further details
The SRAM is connected using the FMC/FSMC peripheral on FMC bank 1 and SRAM bank 1 and is therefore memory-mapped to address 0x60000000.
All settings for GPIO pins and FMC configuration are set from the startup file before main() executes or static objects are created.
The SRAM is the following: CY7C1041GN30
We connect all 16 data pins, all 18 address pins, BHE, BLE, OE, WE and CE to our controller. All pins are configured as push-pull alternate function, pull-up, AF12 (FMC), very high speed. We enable the clocks for all necessary pins and the clock for the FMC. Note: we initially started out without pull-up/down, with the same symptoms.
The controller runs with a clock speed of 168 MHz
As stated above, a memory test runs successfully
We use DMA for SPI, I2C and ADC data transfers
We frequently use interrupts, including external (pin) interrupts
We use the following timing settings:
AddressSetupTime: 2
AddressHoldTime: 4
DataSetupTime: 4
BusTurnAroundDuration: 1
CLKDivision: 2
DataLatency: 2
We configure the FMC as follows:
NSBank FMC_NORSRAM_BANK1,
DataAddressMux FMC_DATA_ADDRESS_MUX_DISABLE,
MemoryType FMC_MEMORY_TYPE_SRAM,
MemoryDataWidth FMC_NORSRAM_MEM_BUS_WIDTH_16,
BurstAccessMode FMC_BURST_ACCESS_MODE_DISABLE,
WaitSignalPolarity FMC_WAIT_SIGNAL_POLARITY_LOW,
WrapMode FMC_WRAP_MODE_DISABLE,
WaitSignalActive FMC_WAIT_TIMING_BEFORE_WS,
WriteOperation FMC_WRITE_OPERATION_ENABLE,
WaitSignal FMC_WAIT_SIGNAL_DISABLE,
ExtendedMode FMC_EXTENDED_MODE_DISABLE,
AsynchronousWait FMC_ASYNCHRONOUS_WAIT_DISABLE,
WriteBurst FMC_WRITE_BURST_DISABLE,
ContinuousClock FMC_CONTINUOUS_CLOCK_SYNC_ASYNC,
WriteFifo 0,
PageSize 0
We spent a lot of time experimenting with longer timings and compared all the settings to examples, including this one: Using STM32L476/486 FSMC peripheral to drive external memories (although it is written for the STM32L4, I am fairly certain it applies to this controller as well).
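For readers who want the same settings in code: below is a sketch of the configuration above expressed with the STM32CubeF4 HAL (the question does it in the startup file instead). The values are copied from the two lists; AccessMode is not listed in the question, so FMC_ACCESS_MODE_A is an assumption, and the exact struct members (WrapMode, WriteFifo, PageSize) depend on the HAL version:

#include "stm32f4xx_hal.h"

SRAM_HandleTypeDef hsram1;

void external_sram_init(void)
{
    FMC_NORSRAM_TimingTypeDef timing = {0};

    hsram1.Instance = FMC_NORSRAM_DEVICE;
    hsram1.Extended = FMC_NORSRAM_EXTENDED_DEVICE;

    hsram1.Init.NSBank             = FMC_NORSRAM_BANK1;
    hsram1.Init.DataAddressMux     = FMC_DATA_ADDRESS_MUX_DISABLE;
    hsram1.Init.MemoryType         = FMC_MEMORY_TYPE_SRAM;
    hsram1.Init.MemoryDataWidth    = FMC_NORSRAM_MEM_BUS_WIDTH_16;
    hsram1.Init.BurstAccessMode    = FMC_BURST_ACCESS_MODE_DISABLE;
    hsram1.Init.WaitSignalPolarity = FMC_WAIT_SIGNAL_POLARITY_LOW;
    hsram1.Init.WrapMode           = FMC_WRAP_MODE_DISABLE;
    hsram1.Init.WaitSignalActive   = FMC_WAIT_TIMING_BEFORE_WS;
    hsram1.Init.WriteOperation     = FMC_WRITE_OPERATION_ENABLE;
    hsram1.Init.WaitSignal         = FMC_WAIT_SIGNAL_DISABLE;
    hsram1.Init.ExtendedMode       = FMC_EXTENDED_MODE_DISABLE;
    hsram1.Init.AsynchronousWait   = FMC_ASYNCHRONOUS_WAIT_DISABLE;
    hsram1.Init.WriteBurst         = FMC_WRITE_BURST_DISABLE;
    hsram1.Init.ContinuousClock    = FMC_CONTINUOUS_CLOCK_SYNC_ASYNC;
    hsram1.Init.WriteFifo          = 0;   /* as listed in the question */
    hsram1.Init.PageSize           = 0;   /* as listed in the question */

    timing.AddressSetupTime      = 2;
    timing.AddressHoldTime       = 4;
    timing.DataSetupTime         = 4;
    timing.BusTurnAroundDuration = 1;
    timing.CLKDivision           = 2;
    timing.DataLatency           = 2;
    timing.AccessMode            = FMC_ACCESS_MODE_A;   /* assumption, not in the question */

    /* GPIO and clock setup (HAL_SRAM_MspInit) omitted */
    HAL_SRAM_Init(&hsram1, &timing, NULL);
}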
Findings on similar problems
The problem sounds very similar to this errata sheet entry: "2.3.4 Corruption of data read from the FMC", but the errata also says the error is fixed in our revision of the controller (3).
I hope someone out there has seen this strange behaviour before and can help us. After over one week of debugging, we suspect some kind of error in the controller when interrupt/DMA accesses occur while the CPU accesses the SRAM (when we use it as heap, it is accessed very frequently). Hopefully you can shed some light on this topic.
Sorry for not getting back to you, internet.
Yes, we found out what the issue was (at least in our case). The problem was that the J-Link debugger we use causes problems if its cable hangs above the power electronics on our PCB (the board is mounted vertically). If we guide the ribbon cable out at the top (over digital electronics only), the error disappears. So our guess is that some noise from the power electronics was picked up by the cable and injected directly into the JTAG port, which caused failures inside the MCU.
Just got confirmation from ST that there is a bug in the STM32F469 FMC that might cause incorrect values if the write FIFO is disabled. The workaround is to keep the FIFO enabled. It is the same issue as in this F7 processor: https://www.st.com/resource/en/errata_sheet/dm00145382.pdf

PWM DMA to a whole GPIO

I've got an STM32F4, and I want to PWM a GPIO port that's been OR'd with a mask.
So, maybe we want to PWM 0b00100010 for a while at 200 kHz, but then, 10 kHz later, we now want to PWM 0b00010001... then, 10 kHz later, we want to PWM some other mask on the same GPIO.
My question is, how do you do this with DMA? I'm trying to trigger a DMA transfer that will set all the bits on a rising edge, and then another DMA transfer that will clear all the bits on a falling edge.
I haven't found a good way to do this (at least with CubeMX and my limited experience with C and STM32s), as it looks like I only get a chance to do something on a rising edge.
One of my primary concerns is CPU time, because although I mention hundreds of kilohertz in the above example, I'd like to make this framework robust in so far as it isn't wasteful of CPU resources. That's why I like the DMA idea: it's dedicated hardware doing the mindless moving of a word from here to there, and the CPU can do other things, like crunch numbers for a PID or something.
Edit
For clarity: I have a set of 6 values that I could write to a GPIO. These are stored in an array.
What I'm trying to do is set up a PWM timer so that the GPIO holds the value during the high portion of the PWM period, and then I want the GPIO to be set to 0b00000000 during the low portion of the PWM period.
So I need to see when the rising edge is, quickly write to the GPIO, then see when the falling edge is, and write 0 to the GPIO.
Limited solution without DMA
STM32F4 controllers have 12 timers with up to 4 PWM channels each, 32 in total. Some of them can be synchronized to start together, e.g. you can have TIM1 starting TIM2, TIM3, TIM4 and TIM8 simultaneously. That's 20 synchronized PWM outputs. If it's not enough, you can form chains where a slave timer is a master to another, but it'd be quite tricky to keep all of them perfectly synchronized. Not so tricky, if an offset of a few clock cycles is acceptable.
There are several examples in the STM32CubeF4 library example projects section, from which you can puzzle together your setup, look in Projects/*_EVAL/Examples/TIM/*Synchro*.
General solution
A general purpose or an advanced timer (that's all of them except TIM6 and TIM7) can trigger a DMA transfer when the counter reaches the reload value (update event) and when the counter equals any of the compare values (capture/compare event).
The idea is to let DMA write the desired bit pattern to the low (set) half of BSRR on a compare event, and the same bits to the high (reset) half of BSRR on an update event.
There is a problem, though: DMA1 cannot access the AHB bus, to which the GPIO registers are connected, at all (see Fig. 1 or 2 in the Reference Manual). Therefore we must use DMA2, and that leaves us with the advanced timers TIM1 or TIM8. Things are further complicated because DMA requests caused by update and compare events from these timers end up on different DMA streams (see Table 43 in the RM). To make it somewhat simpler, we can use DMA2, Stream 6 or Stream 2, Channel 0, which combine events from 3 timer channels. Instead of using the update event, we can set the compare register of one timer channel to 0.
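To illustrate what those two 16-bit transfers do, here is the effect written out by hand (a sketch only; GPIOD and the pattern value are placeholders):

#include "stm32f4xx.h"

void bsrr_halves_demo(uint16_t pattern)
{
    /* writing the pattern to the low (set) half of BSRR drives the selected pins high */
    *(volatile uint16_t *)&GPIOD->BSRR = pattern;

    /* writing the same bits to the high (reset) half of BSRR drives them low again */
    *((volatile uint16_t *)&GPIOD->BSRR + 1) = pattern;
}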
Set up the DMA stream of the selected timer to
channel 0
single transfer (no burst)
memory data size 16 bit
peripheral data size 16 bit
no memory increment
peripheral address increment
circular mode
memory to peripheral
peripheral flow controller: I don't know, experiment
number of data items 2
peripheral address GPIOx->BSRR
memory address points to the output bit pattern
direct mode
at last, enable the channel.
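A register-level sketch of that DMA setup (CMSIS style, DMA2 Stream 6 / Channel 0 for TIM1; GPIOD and the pattern value are placeholders, and GPIO pin configuration is omitted):

#include "stm32f4xx.h"

static uint16_t pattern = 0x0022;                 /* bits to set, then reset                 */

void pattern_dma_init(void)
{
    RCC->AHB1ENR |= RCC_AHB1ENR_DMA2EN;

    DMA2_Stream6->PAR  = (uint32_t)&GPIOD->BSRR;  /* 1st transfer: set half, 2nd: reset half */
    DMA2_Stream6->M0AR = (uint32_t)&pattern;      /* fixed memory address                    */
    DMA2_Stream6->NDTR = 2;                       /* two 16-bit transfers per PWM period     */
    DMA2_Stream6->CR   = DMA_SxCR_MSIZE_0         /* memory data size: 16 bit                */
                       | DMA_SxCR_PSIZE_0         /* peripheral data size: 16 bit            */
                       | DMA_SxCR_PINC            /* peripheral address increment            */
                       | DMA_SxCR_CIRC            /* circular mode                           */
                       | DMA_SxCR_DIR_0;          /* memory to peripheral                    */
    /* channel select bits left at 0 = Channel 0; FIFO control left at reset = direct mode   */
    DMA2_Stream6->CR  |= DMA_SxCR_EN;             /* at last, enable the stream              */
}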
Now, set up the timer
set the prescaler and generate an update event if required
set the auto reload value to achieve the required frequency
set the compare value of Channel 1 to 0
set the compare value of Channel 2 to the required duty cycle
enable DMA request for both channels
enable compare output on both channels
enable the counter
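Continuing the sketch, the timer side could look roughly like this (TIM1 at register level; prescaler, period and duty values are placeholders):

#include "stm32f4xx.h"

void pattern_timer_init(uint16_t period, uint16_t duty)
{
    RCC->APB2ENR |= RCC_APB2ENR_TIM1EN;

    TIM1->PSC  = 0;                                 /* prescaler as required                    */
    TIM1->ARR  = period - 1;                        /* auto-reload value sets the PWM frequency */
    TIM1->CCR1 = 0;                                 /* channel 1: "set" transfer at counter = 0 */
    TIM1->CCR2 = duty;                              /* channel 2: "reset" transfer at the duty  */
    TIM1->DIER |= TIM_DIER_CC1DE | TIM_DIER_CC2DE;  /* DMA request on both compare events       */
    TIM1->CCER |= TIM_CCER_CC1E | TIM_CCER_CC2E;    /* compare enabled on both channels         */
    TIM1->CR1  |= TIM_CR1_CEN;                      /* enable the counter                       */
}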
This way you can control 16 pins with each timer, 32 if using both of them in master-slave mode.
To control even more pins (up to 64) at once, configure the additional DMA streams for channel 4 compare and timer update events, set the number of data items to 1, and use ((uint32_t)&GPIOx->BSRR)+2 as the peripheral address for the update stream.
Channels 2 and 4 can be used as regular PWM outputs, giving you 4 more pins. Maybe Channel 3 too.
You can still use TIM2, TIM3, TIM4, and TIM5 (each can be slaved to TIM1 or TIM8) for 16 more PWM outputs as described in the first part of my post. Maybe TIM9 and TIM12 too, for 4 more, if you can find a suitable master-slave setup.
That's 90 pins toggling at once. Watch out for total current limits.
What does PWM 0b00100010 mean? PWM is a square wave with some duty ratio. It will be very difficult to achieve using DMA alone; you will need a table with precalculated values. For example, to get a 2 kHz PWM with a 10% duty ratio you need 10 samples: one with the bit set, nine with the bit cleared. You configure the timer to trigger a mem-to-mem (GPIO has to be done this way) DMA transfer 20 000 times per second in circular mode. On the pin you will get a 2 kHz, 10% wave. The PWM resolution will be 10%. If you want to make it 0.5%, you will need a 200-sample table and DMA triggered 400 000 times per second.
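A minimal sketch of such a sample table, assuming pin 5 of some port and samples written to the port's output data register (ODR):

#include <stdint.h>

/* one sample with the bit set, nine with the bit cleared: 10% duty at 1/10 of the trigger rate */
static const uint16_t pwm_table[10] = {
    (1u << 5),
    0, 0, 0, 0, 0, 0, 0, 0, 0
};
/* a timer-triggered circular DMA copying this table to GPIOx->ODR at 20 000 transfers
   per second produces a 2 kHz, 10% duty-cycle waveform on that pin */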
IMO it is better to use a timer and DMA to load new values into it (read about the DMA burst mode in the timer chapter of the Reference Manual).

Can I disable Interrupts on a BBB for a short duration (0.5ms)?

I am trying to write a small driver program on a BeagleBone Black that needs to send a signal with very specific timing (timing diagram omitted here).
I need to send 360 bits of information. I'm wondering if I can turn off all interrupts on the board for a duration of 500 µs while I send the signal. I have no idea if I can just turn off all the interrupts like that, and searches have been unkind to me so far. Any ideas how I might achieve this? I do have some prototypes in assembly language for the signal, but I'm pretty sure it's being broken by interrupts.
So for example, I'm hoping I could have something like this:
disable_irq();
/* asm code to send my bytes */
reenable_irq();
What would the bodies of disable_irq() and reenable_irq() look like?
The calls you would want to use are local_irq_disable() and local_irq_enable() to disable & enable IRQs locally on the current CPU. This also has the effect of disabling all preemption on the CPU.
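In kernel context that would look roughly like this (a minimal sketch; send_bits() is a hypothetical stand-in for the assembly routine, and local_irq_save()/local_irq_restore() is the variant to use if you also need to preserve the previous IRQ state):

#include <linux/irqflags.h>

extern void send_bits(void);   /* hypothetical time-critical bit-bang routine */

static void send_signal(void)
{
    local_irq_disable();   /* mask IRQs on the current CPU          */
    send_bits();           /* must finish within the ~500 us budget */
    local_irq_enable();    /* unmask IRQs again                     */
}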
Now let's talk about your general approach. If I understand you correctly, you'd like to bit-bang your protocol over a GPIO with timing accurate to < 1/3 µs.
This will be a challenge. Tests show that the BeagleBone Black GPIO toggle frequency maxes out at ~2.78 MHz when writing directly to the SoC IO registers in kernel mode (~0.18 µs minimum pulse width).
So, although this might be achievable by the thinnest of margins by writing atomic code in kernel space, I propose another concept:
Implement your custom serial protocol on the SPI bus.
Why?
The SPI bus can be clocked up to 48 MHz on the BeagleBone Black, it's buffered, and it can be used with the DMA engine. Therefore, you don't have to worry about disabling interrupts and monopolizing your CPU for this one interface. With a timing resolution of ~0.021 µs (at 48 MHz), you should be able to achieve your timing needs with an acceptable margin of error.
With the bus configured for Single Channel Continuous Transfer Transmit-Only Master mode and 30-bit word length (2 30-bit words for each bit of your protocol):
To write a '0' with your protocol, you'd write the 2-word sequence - 17 '1's followed by 43 '0's - on SPI (at 48 MHz).
To write a '1' with your protocol, you'd write the 2-word sequence - 43 '1's followed by 17 '0's - on SPI (at 48 MHz).
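A small sketch of that encoding, assuming MSB-first transmission (the 60 SPI bits per protocol bit are split into two 30-bit words; all names are illustrative):

#include <stdint.h>

static void encode_bit(int bit, uint32_t out[2])
{
    int ones = bit ? 43 : 17;                       /* number of leading '1' SPI bits     */
    uint64_t frame = ((uint64_t)1 << ones) - 1;     /* that many set bits, LSB-aligned    */

    frame <<= (60 - ones);                          /* left-align within the 60-bit frame */
    out[0] = (uint32_t)(frame >> 30) & 0x3FFFFFFFu; /* first 30-bit word (sent first)     */
    out[1] = (uint32_t)frame & 0x3FFFFFFFu;         /* second 30-bit word                 */
}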
From your signal timings it's easy to see that SPI or another serial peripheral cannot meet your requirement. In your timings, the encoding is based on the width of the pulse. So let's get to the point:
Q1: Could you turn off all interrupts for a duration of 500 µs?
A: 0.5 ms is quite a long time in an embedded system. ISRs exist to enable concurrent multitasking and to improve real-time behaviour. You should keep in mind that ISRs and context switches (on some chip architectures) are all affected by the global interrupt.
But if your top priority is to meet the timings, and the real-time windows of the other tasks can tolerate it, of course you can disable the global interrupt for that duration, or even longer. If not, don't perform an atomic operation over such a long time.
Q2: How?
A: For a given chip there are certainly assembly instructions to enable/disable the global interrupt. Find those instructions, or the APIs provided by your OS, and do the 3 steps below (pseudocode):
state_t tState = get_interrupt_status( );
disable_interrupt( );
... /*your operation here*/
resume_interrupt( tState );

AVR/Arduino: reading timer toggled port pin

I've configured Timer 2 in CTC mode to toggle the port pin on compare match (TCCR2A=0x42, TCCR2B=0x02, OCR2A=0x20) and have set DDB3 to output. Hence, according to the ATmega328P documentation (pages 158-163), OC2A (aka PB3) should toggle on each compare match. Unfortunately, I can't read the pin state at PORTB. Is this expected? I assumed that even if a port is configured as output I can read back the set value.
There were two problems:
In AVR Studio 4.18 I must not use Simulator 1, because it has a bug affecting Timer 2 and hence doesn't toggle the port pin correctly. I needed to use Simulator 2 or AVR Studio 5.
I needed to read PINB instead of PORTB (though the toggling is an output operation).
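A minimal sketch of the setup from the question with that fix applied (ATmega328P, avr-gcc; the register values are the ones given above):

#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    DDRB  |= (1 << DDB3);   /* PB3 / OC2A as output                   */
    TCCR2A = 0x42;          /* CTC mode, toggle OC2A on compare match */
    TCCR2B = 0x02;          /* clk/8 prescaler                        */
    OCR2A  = 0x20;          /* compare value                          */

    for (;;) {
        uint8_t level = (PINB >> PINB3) & 1u;   /* read the actual pin level via PINB, not PORTB */
        (void)level;                            /* use the value as needed                       */
    }
}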
I don't know about that specific microcontroller, but in some architectures you need at least a NOP between changing the port pin and the latch being updated (so you can read the change).
Also, there is a maximum frequency at which a pin can be toggled (many times slower than the microcontroller's CPU clock). Be sure not to exceed that frequency.
