I'm trying to implement a GOCR algorithm on the 32F429IDISCOVERY board. GOCR itself works very well on a PC, but on the Discovery board I'm still having issues that make it unstable and unusable. Sometimes the algorithm works fine, everything goes well and the result is great; but sometimes the processor gets stuck in the hard fault/default handler. I cannot tell what causes the crash or what I am supposed to do about it, but I believe the stack or heap is overflowing.
Below I attached an image that shows the state of the processor before and after the crash.
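For anyone digging into the same kind of fault: a common CMSIS-style trick (a sketch only, assuming GCC and the stm32f4xx CMSIS headers; hard_fault_c is just an illustrative helper name) is to replace the default handler with one that dumps the stacked exception frame, so the faulting PC can be read in the debugger:

#include "stm32f4xx.h"

/* Called from the naked handler below with a pointer to the stacked frame. */
void hard_fault_c(uint32_t *frame)
{
    volatile uint32_t stacked_pc = frame[6];  /* PC at the moment of the fault */
    volatile uint32_t stacked_lr = frame[5];  /* LR at the moment of the fault */
    volatile uint32_t cfsr = SCB->CFSR;       /* Configurable Fault Status Register */
    (void)stacked_pc; (void)stacked_lr; (void)cfsr;
    for (;;) { }                              /* park here and inspect in the debugger */
}

/* Overrides the default HardFault_Handler from the startup file. */
__attribute__((naked)) void HardFault_Handler(void)
{
    __asm volatile(
        "tst lr, #4      \n"  /* EXC_RETURN bit 2: which stack was in use? */
        "ite eq          \n"
        "mrseq r0, msp   \n"  /* main stack pointer */
        "mrsne r0, psp   \n"  /* process stack pointer */
        "b hard_fault_c  \n");
}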
I am currently developing a Windows kernel driver that implements its own networking stack. While testing some basic functionality of the stack, I noticed that replies to pings would sometimes take noticeably longer than usual. Investigating this further, I found out that KeAcquireSpinLock sporadically has an execution time of up to 20 ms (instead of a few µs), even when the lock is not held by another core (I confirmed this by printing the lock value before calling KeAcquireSpinLock).
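A simplified sketch of how such a measurement can be made (illustrative only; KeQueryPerformanceCounter is the documented WDM timing API, and m_lock is the same PKSPIN_LOCK as in the snippet below — the actual instrumentation in my driver may differ):

LARGE_INTEGER freq;
LARGE_INTEGER t0 = KeQueryPerformanceCounter(&freq);

KIRQL irql;
KeAcquireSpinLock(m_lock, &irql);   // the call that sporadically takes ~20 ms
KeReleaseSpinLock(m_lock, irql);

LARGE_INTEGER t1 = KeQueryPerformanceCounter(NULL);
DbgPrint("acquire+release took %I64d us\n",
         ((t1.QuadPart - t0.QuadPart) * 1000000) / freq.QuadPart);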
Since I had no clue why KeAcquireSpinLock was taking so long, I implemented a different approach with KeAcquireSpinLockAtDpcLevel, manually raising the IRQL if required:
KIRQL oldIrql = KeGetCurrentIrql();
if (oldIrql < DISPATCH_LEVEL)
{
    KeRaiseIrql(DISPATCH_LEVEL, &oldIrql);  // oldIrql receives the pre-raise IRQL
}

KeAcquireSpinLockAtDpcLevel(m_lock);
// ... DO STH WITH SHARED RESOURCES ...
KeReleaseSpinLockFromDpcLevel(m_lock);

if (oldIrql < DISPATCH_LEVEL)
{
    KeLowerIrql(oldIrql);
}
I expected the above code to be functionally equivalent to KeAcquireSpinLock. However, it turned out that the runtime issue I had with KeAcquireSpinLock is gone and performance is fine with this approach.
I have searched the internet for similar problems with KeAcquireSpinLock, but it seems like I am alone with this issue. Maybe I have a bug in other sections of the driver? Can someone explain this behavior?
Note that I am not talking about deadlocks, since KeAcquireSpinLock always returns at some point, and the implementation with KeAcquireSpinLockAtDpcLevel uses the same architecture and the same lock object.
I am currently encountering a problem with a custom STM32L151 board, which I will try to explain here.
The program I am testing runs properly for some time; I get debug messages in PuTTY as intended, but at some point the program seems to become "blocked".
It is pretty weird behaviour, because the function which prints over UART is still called (I put a breakpoint there to see if I reach this point), but I get no output on the terminal.
So I was wondering what the issue could be, in case someone has an idea, because I have honestly run out of ideas. I will assume there is no hardware issue; that is still possible, but I do not think it is the reason.
Also, the program is meant to receive an FSK message and answer it, and I seem to get the same behaviour with the radio chip: I receive the message and send a response (the TxDone callback is called, which indicates that the FSK message has been sent normally, but the device which waits for this response does not receive it).
So to sum up a bit: the program runs properly for a while, then "blocks" and I do not get any output anymore (debug or radio communication), although it keeps running (the functions are effectively called); after a while, the program "unblocks" itself and runs properly again (debug messages work).
The device I work on is STM32L151-based and I work with Keil. The UART configuration is 19200 baud, 8 data bits, 1 stop bit, no parity, XON/XOFF flow control, and the radio chip I use is an SX1272.
If someone has any idea or any trail I can investigate, I would appreciate it. I am not sure my description of the problem is accurate enough, so if you need any further details, just ask; any help is appreciated.
I'm doing an assignment where I have to code parts of the ARM single-cycle processor, such as the ALU and the control unit. All other modules and inputs are already given to me; all I have to do is write the Verilog code for the blank modules.
I've managed to code the modules, and the timing simulation in Quartus seems to be correct, as the PC advances in the order it is meant to. However, when I try to implement the design on the FPGA (a Cyclone IV), the HEX outputs on the board (which are supposed to show the PC, and were already assigned before the files were given to me) won't budge from 0.
I have no clue why the code works in timing simulation but not in hardware, because as far as I know, timing simulation takes propagation delay into consideration. I looked in the file given to me (which is not to be changed) and in the RTL viewer, and I found out that the clk I'm given to use is derived from CLOCK_27.
The clock cycle time shown in the timing simulation is 50 ns, and everything works well under that condition. However, this project is the first time I've touched an FPGA, and I'm not entirely sure what CLOCK_27 is or whether it differs between the timing simulation and the hardware.
If the clock cycle time difference is the cause of the trouble, I believe I have to shorten the propagation delay of my design. If it is not, I now have no clue.
You say that the timing simulation seems to be correct. Did you check the simulation results both before and after synthesis? If not, I would suggest doing that: in a post-synthesis simulation you actually check that the timing constraints are met. You can find some information about the difference between pre- and post-synthesis simulation here.
Hope this helps.
So I was happily programming my board (which has an ATSAM4S8B) using an Atmel-ICE debugger, when suddenly I was assaulted by this error message any time I tried to debug or deploy to the board:
Failed to launch program
Error: unexpected chip identifier 0x00000000
This error also sometimes gets shouted at me:
Could not activate interface, but found DAP with ID 0x2ba01477.
How rude of it! I tried reasoning with it but it is not having any of it.
But seriously: it was fine one moment, and the next this error stopped me from any further development. What does it mean, and how do I fix it?
EDIT:
This error only seems to occur on my machine; it works on my colleagues' machines. I tried reinstalling the Atmel USB drivers and Atmel Studio 6.2, but no luck :(
EDIT:
Some screenshots of what I'm shown in Tools -> Device Programming when trying to read the device's signature:
EDIT:
I also seem to get this error sometimes instead:
I've had this problem too and I have found a couple of solutions that I would like to share.
My PCB uses an ATSAM4E processor (which had never been programmed) with a Cortex debug header. I got the error message with either method (SWD or JTAG).
Note: I was able to read the Device ID for a very short window after powering the PCB on or after pressing the reset button (credit to Yaro and Yarooo). Often I had to try multiple times to hit that short window. This confirmed to me that my wiring of the Cortex debug header was correct.
jrb114 notes in his post that there is an erratum in the SAM3S datasheet that requires:
an external crystal or ceramic resonator on XIN/XOUT, or use the Main oscillator in bypass mode (applying a clock on XIN).
...
So what I did to make these boards work was to provide a 1 MHz clock to XIN using a signal generator: apply power to the PCB, then connect using the Atmel-ICE. This connects fine. After that, I set GPNVM bit 1 so that the device boots from flash rather than SAM-BA, programmed the device, and it works fine.
My PCBs had an external crystal, so I was a bit confused about why my boards didn't work. I put an oscilloscope on the XIN line and found that the crystal was not generating a waveform.
It turned out that on most of my boards there was a short between one of the crystal's load capacitors and ground. No wonder my clock wasn't running.
On the other boards, the inductor between VDD_OUT and VDD_PLL was not soldered correctly to the PCB, leaving it open circuit.
Overall, it appears that this error is the result of not having a clock signal on XIN, whether because of incorrect wiring or because no external crystal/resonator is fitted.
Is there a way I can use the delay() command and have something else running in the background?
Kinda, if you use interrupts; delay() itself uses them. But it's not as elegant as a multi-threaded solution (which is probably what you're looking for). There is a multi-threading library for Arduino, but I'm not sure how well, or even if, it works.
The Arduino is only capable of running a single thread at a time, meaning it can only do one thing at a time. You can use interrupts to literally interrupt the normal flow of your code, but the interrupt handler is still not technically executing at the same time as the main code. The library I linked to attempts to implement what you might call a crude "hyper-threaded" solution: two threads executing in tandem on a single physical processing core.
If you need other code to execute, you need to learn how to program with millis(). This involves converting your code from "step by step" execution to a time-based state machine.
For example, if you want an LED to flash, you have two states for that LED: on and off. You change the state when enough time has elapsed.
Here are a series of examples of how to convert delay()-based code into millis()-based code:
http://www.cmiyc.com/blog/2011/01/06/millis-tutorial/
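As a minimal illustration of that pattern (the classic "blink without delay" idea; the pin and interval here are arbitrary):

const int ledPin = 13;               // built-in LED on an Uno
const unsigned long interval = 500;  // ms between state changes

unsigned long previousMillis = 0;
int ledState = LOW;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  unsigned long currentMillis = millis();
  if (currentMillis - previousMillis >= interval) {
    previousMillis = currentMillis;            // remember when we toggled
    ledState = (ledState == LOW) ? HIGH : LOW; // flip the state
    digitalWrite(ledPin, ledState);
  }
  // nothing here blocks, so other work can run on every pass of loop()
}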
Usually all you need is a timer and an ISR (interrupt service routine); you won't manage to live without interrupts. :P You can find a good explanation of this here.
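As an illustration (a sketch only, with register names specific to the ATmega328P: Timer1 in CTC mode fires an ISR once per second while loop() stays free for other work):

#include <avr/interrupt.h>

void setup() {
  pinMode(13, OUTPUT);
  noInterrupts();
  TCCR1A = 0;                                        // normal port operation
  TCCR1B = (1 << WGM12) | (1 << CS12) | (1 << CS10); // CTC mode, /1024 prescaler
  OCR1A = 15624;                                     // 16 MHz / 1024 / 15625 = 1 Hz
  TIMSK1 = (1 << OCIE1A);                            // enable compare-match interrupt
  interrupts();
}

ISR(TIMER1_COMPA_vect) {
  digitalWrite(13, !digitalRead(13));  // toggles the LED once per second
}

void loop() {
  // free for other work; the ISR runs in the background
}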
I agree with JamesC4S: a state machine is probably the right formalism to use in your case. You could, for example, try the ThingML language (which uses components, state machines, etc.), which compiles to Arduino code. A simple example can be found here.