How to properly set up DMA on a dsPIC33F (C)

I have code written by someone else (for a dsPIC33FJ128MC706A) which initializes UART1 and the PWMs, uses a busy wait in the U1RX interrupt to collect 14 bytes, and uses Timer1 interrupts to update the PWM registers when new data arrives. (Debugged: it enters _U1RXInterrupt when data is sent.)
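For context, the original receive path is shaped roughly like this (my reconstruction from the description above, not the actual code; the frame length of 14 matches the description):

volatile unsigned char frame[14];
volatile int frame_ready = 0;

void __attribute__((interrupt, no_auto_psv)) _U1RXInterrupt(void)
{
    int i;
    for (i = 0; i < 14; i++)
    {
        while (!U1STAbits.URXDA);  /* busy-wait until a byte is available */
        frame[i] = U1RXREG;        /* read it from the receive register */
    }
    frame_ready = 1;               /* the Timer1 ISR checks this flag */
    IFS0bits.U1RXIF = 0;           /* clear the UART1 RX interrupt flag */
}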
I tried modifying it to use DMA instead of the busy wait. I changed nothing in the initialization, but called the following function after the UART initialization:
unsigned char received_data1[0x0f] __attribute__((space(dma)));
unsigned char received_data2[0x0f] __attribute__((space(dma)));

void dma_init(void)
{
    DMA0REQ = 0x000b;                  /* IRQSEL = 0b0001011 (UART1RX) */
    DMA0PAD = (volatile unsigned int) &U1RXREG;
    DMA0STA = __builtin_dmaoffset(received_data1);
    DMA0STB = __builtin_dmaoffset(received_data2);
    DMA0CNT = 0xe;                     /* 15 transfers in (DMAxCNT = N - 1) */
    DMA0CON = 0x0002;                  /* continuous ping-pong + post-increment */
    IFS0bits.DMA0IF = 0;
    IEC0bits.DMA0IE = 1;
    DMA0CONbits.CHEN = 1;
}
This is based on Example 22-10 in the FRM with a few slight changes (UART1 receiver instead of UART2). I can't seem to find other examples that don't use library functions (which I didn't even know existed until I came across those examples).
(Also, to be consistent with the code I got, I used the _ISR macro after void instead of writing out the interrupt attribute, but that didn't change anything.)
In the example there was no interrupt handler for the UART1 receiver, but according to the FRM the UART1 RX interrupt should be enabled. It made sense to me that I don't need to clear that interrupt flag myself (needing the CPU for that would rather defeat the purpose of DMA).
And yet, when I send data, the breakpoint on _DMA0Interrupt doesn't trigger, but the one on the default interrupt does, and I see PWMIF = 1 and U1RXIF = 1. I also tried commenting out the default ISR and adding a _U1ErrInterrupt, even though when examining the flags I didn't notice any error flags being raised, just the normal U1RXIF.
I don't understand why this happens. I haven't changed the PWM code at all, and the PWM flag isn't raised in the original code when I place a breakpoint in _U1RXInterrupt. I don't even understand how a DMA setup could cause a PWM error.
The U1RXIF flag seems to be telling me the DMA didn't handle the incoming byte (I assume it's the DMA controller that clears that flag when it services the request), which again comes back to the question: what did I do wrong in this DMA setup?
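For reference, the DMA0 ISR I set the breakpoint on looks essentially like this sketch (the ping-pong toggle and the process_packet() consumer are illustrative, not the exact code):

unsigned char current_buffer = 0;        /* which ping-pong buffer filled last */

void __attribute__((interrupt, no_auto_psv)) _DMA0Interrupt(void)
{
    if (current_buffer == 0)
        process_packet(received_data1);  /* hypothetical consumer of the frame */
    else
        process_packet(received_data2);
    current_buffer ^= 1;                 /* DMA alternates the STA/STB buffers */
    IFS0bits.DMA0IF = 0;                 /* clear the DMA channel 0 flag */
}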
UART initialization code:
In a general initialization function (called by main()):
TRISFbits.TRISF2 = 1;   /* U1RX */
TRISFbits.TRISF3 = 0;   /* U1TX */
Also called by main():
void UART1_init_USB(void)
{
    /* Fcy = 7.3728 MHz */
    U1MODE = 0x8400;       /* UART1 mode (status and control) register */
    U1STA  = 0x2040;       /* Receive status and control register, transmit disabled */
    U1BRG  = 0x000f;       /* Baud rate control register, 115200 bps */
    IPC2bits.U1RXIP = 3;   /* Receiver interrupt priority bits */
    IEC0bits.U1RXIE = 1;   /* Enable RS232 interrupts */
    IFS0bits.U1RXIF = 0;
}
(comments by original code author)
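As a sanity check on that last register, the standard-speed (BRGH = 0) baud-rate formula from the FRM can be written as compile-time constants (the macro names are mine, not from the original code). With Fcy = 7.3728 MHz and 115200 bps it evaluates to 3, so U1BRG = 0x000F would actually select 28800 bps unless BRGH is set or Fcy differs from the comments:

/* Standard-speed baud formula: U1BRG = Fcy / (16 * baud) - 1 */
#define FCY       7372800UL
#define BAUDRATE  115200UL
#define BRGVAL    ((FCY / (16UL * BAUDRATE)) - 1)   /* = 3 for these values */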

Related

ATSAML21 Hardware Timer

I've been trying to configure and run a hardware timer on a SAML21 MCU to generate a 100 ms delay, i.e. the ISR is supposed to hit every 100 ms. But after starting the timer, the ISR hits every 10 µs, and changing the prescaler and compare register values makes no difference to the 10 µs interval. Please review my code and let me know where I'm going wrong.
I'm trying to configure Timer1 (TC1) in 16-bit mode, using GCLK_GENERATOR_1 as its clock source running at 8 MHz (CPU main clock: 16 MHz). The timer is expected to raise an overflow interrupt every 100 ms (8 MHz / 64 = 125 kHz, and 12500 counts at 125 kHz = 100 ms).
TcCount16 *tc_hw1 = NULL;  /* Pointer to TC1 hardware registers, initialized later */

void init_timer1(void)
{
    static struct tc_module tc_inst1;  /* static so the driver instance outlives this function */
    struct tc_config conf_tc1;
    tc_get_config_defaults(&conf_tc1);
    conf_tc1.clock_source = GCLK_GENERATOR_1;
    conf_tc1.clock_prescaler = TC_CLOCK_PRESCALER_DIV64;  /* 8 MHz / 64 = 125 kHz */
    conf_tc1.reload_action = TC_RELOAD_ACTION_GCLK;
    conf_tc1.counter_size = TC_COUNTER_SIZE_16BIT;
    conf_tc1.count_direction = TC_COUNT_DIRECTION_UP;
    conf_tc1.counter_16_bit.value = 0x0000;
    /* Rest of the settings are used as defaults */
    while (tc_init(&tc_inst1, TC1, &conf_tc1) != STATUS_OK) {
    }
    tc_set_top_value(&tc_inst1, 12500);      /* Counter top value: 12500 ticks at 125 kHz = 100 ms */
    /* Enable interrupt & set priority */
    tc_hw1 = &(tc_inst1.hw->COUNT16);        /* Initialize pointer to the TC1 hardware registers */
    tc_hw1->INTENSET.reg |= TC_INTFLAG_OVF;  /* Enable the overflow interrupt */
    NVIC_SetPriority(TC1_IRQn, 2);
    NVIC_EnableIRQ(TC1_IRQn);
    tc_enable(&tc_inst1);                    /* Start the timer */
}
void TC1_Handler(void)
{
    if ((tc_hw1->INTFLAG.reg) & (TC_INTFLAG_OVF))
    {
        port_pin_toggle_output_level(PIN_PB03);
    }
    system_interrupt_clear_pending(SYSTEM_INTERRUPT_MODULE_TC1);
}
Debugger information: I can see that the timer registers are configured correctly, but the COUNT register never seems to increment; every time I pause to capture the debug info it reads 0x0000.
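(A side note on that observation: on the SAM L21 the TC COUNT register sits in a separately clocked domain, so a plain read, from code or from a debugger, can return a stale 0x0000 unless a read-synchronization command is issued first. A minimal sketch of a synchronized read, assuming the raw register names from the device header:)

uint16_t read_tc1_count(void)
{
    tc_hw1->CTRLBSET.reg = TC_CTRLBSET_CMD_READSYNC;  /* request a COUNT read sync */
    while (tc_hw1->SYNCBUSY.reg & (TC_SYNCBUSY_CTRLB | TC_SYNCBUSY_COUNT)) {
        /* wait for the command and the COUNT copy to complete */
    }
    return tc_hw1->COUNT.reg;
}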
Please help. Thanks!
I resolved the issue: it turns out it's mandatory to clear the timer's OVF bit in the INTFLAG register. So the interrupt handler should have been like this:
void TC1_Handler(void)
{
    if ((tc_hw1->INTFLAG.reg) & (TC_INTFLAG_OVF))
    {
        tc_hw1->INTFLAG.reg = TC_INTFLAG_OVF;  /* Clear the flag by writing 1 to it */
        port_pin_toggle_output_level(PIN_PB03);
    }
    system_interrupt_clear_pending(SYSTEM_INTERRUPT_MODULE_TC1);  /* Not strictly needed */
}

Strange return value when calling the HAL CAN library function for STM32 on timeout

I'm using the HAL CAN library for an STM32L476VG MCU. I'm debugging my CAN transmission app and I can observe something apparently wrong in the HAL_CAN_Transmit() execution sequence and its return value, specifically when it should return HAL_TIMEOUT.
Main code:
#include "main.h"
#include "Pm03_SystemConfig.h"
#include "App_Task_CAN.h"
uint8_t charBuffer[10]={0x00};
int main(void)
{
/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();
/* Configure the system clock */
Pm03_SystemClock_Config();
App_Task_CAN_reset();
App_Task_CAN_init();
charBuffer[0]=0xCA;
charBuffer[1]=0xFE;
while (1)
{
App_Task_CAN_sendData(&charBuffer[0]);
}
}
My higher-level TX function, which calls the HAL CAN transmit and init functions for CAN configuration, in App_Task_CAN.c:
uint8_t App_Task_CAN_sendData(uint8_t* cBuffer)
{
    for (uint8_t index = 0; index < DATABTXLONG; index++)
    {
        HCAN_Struct.pTxMsg->Data[index] = cBuffer[index];
    }
    if (HAL_CAN_Transmit(&HCAN_Struct, TIMEOUT) != HAL_OK)
    {
        TaskCan_Error_Handler();
        return 1;  /* transmission failed */
    }
    return 0;      /* transmission succeeded */
}
void App_Task_CAN_init(void)
{
    static CanTxMsgTypeDef TxMessage;
    static CanRxMsgTypeDef RxMessage;
    /* Timing configuration to obtain 500 kbit/s */
    HCAN_Struct.Instance = CAN1;
    HCAN_Struct.pTxMsg = &TxMessage;
    HCAN_Struct.pRxMsg = &RxMessage;
    HCAN_Struct.Init.Prescaler = 1;
    HCAN_Struct.Init.Mode = CAN_MODE_NORMAL;  /* values defined in the HAL headers */
    HCAN_Struct.Init.SJW = CAN_SJW_1TQ;
    HCAN_Struct.Init.BS1 = CAN_BS1_6TQ;       /* sample point at 87.5% */
    HCAN_Struct.Init.BS2 = CAN_BS2_1TQ;
    HCAN_Struct.Init.TTCM = DISABLE;
    HCAN_Struct.Init.ABOM = DISABLE;
    HCAN_Struct.Init.AWUM = DISABLE;
    HCAN_Struct.Init.NART = DISABLE;
    HCAN_Struct.Init.RFLM = DISABLE;          /* FIFO locked mode disabled */
    HCAN_Struct.Init.TXFP = DISABLE;          /* TX priority by ID (lower ID = higher priority) */
    if (HAL_CAN_Init(&HCAN_Struct) != HAL_OK)
    {
        TaskCan_Error_Handler();
    }
    /* TX message defaults */
    HCAN_Struct.pTxMsg->StdId = 0x321;
    HCAN_Struct.pTxMsg->ExtId = 0x01;         /* 29-bit extended ID (unused with CAN_ID_STD) */
    HCAN_Struct.pTxMsg->IDE = CAN_ID_STD;     /* values defined in the HAL headers */
    HCAN_Struct.pTxMsg->RTR = CAN_RTR_DATA;
    HCAN_Struct.pTxMsg->DLC = DATABTXLONG;    /* 0-8 data bytes */
}
Some defines used above, declared in App_Task_CAN.h:

/* Timeout for CAN operations, in milliseconds (HAL tick units) */
#define TIMEOUT 100
/* Parameter for configuring message frames */
#define DATABTXLONG 2
I was debugging, stepping through the whole HAL transmit sequence (the relevant piece of code is below), and I can see what happens after the first TXRQ. This is part of the HAL_CAN_Transmit() function in stm32l4xx_hal_can.c:
HAL_StatusTypeDef HAL_CAN_Transmit(CAN_HandleTypeDef* hcan, uint32_t Timeout)
{
    ...
    /* Request transmission */
    hcan->Instance->sTxMailBox[transmitmailbox].TIR |= CAN_TI0R_TXRQ;
    ...
    /* Check End of transmission flag */
    while (!(__HAL_CAN_TRANSMIT_STATUS(hcan, transmitmailbox)))
    {
        /* Check for the Timeout */
        if (Timeout != HAL_MAX_DELAY)
        {
            if ((Timeout == 0) || ((HAL_GetTick() - tickstart) > Timeout))
            {
                hcan->State = HAL_CAN_STATE_TIMEOUT;
                /* Process unlocked */
                __HAL_UNLOCK(hcan);   /* line at which the thread breaks */
                return HAL_TIMEOUT;   /* line skipped during debug */
            }
        }
    }
    ...
}
(Note: the code above belongs to the HAL library provided by ST.)
Once the thread passes through the transmit call above, after the TXRQ line this happens:
the thread waits for the end-of-transmission flag, and
after the timeout expires, the thread reaches the line that sets the timeout state;
it gets as far as the __HAL_UNLOCK() macro line;
after this, it exits/breaks execution and returns to the line where the function was called, ignoring the next line, return HAL_TIMEOUT.
The thread breaks just before reaching that return line, and my transmit call receives HAL_OK! I confirmed this by mounting a switch on the returned value to see what it was receiving instead of HAL_TIMEOUT.
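That check was shaped like this (a reconstructed sketch, not my exact code):

HAL_StatusTypeDef status = HAL_CAN_Transmit(&HCAN_Struct, TIMEOUT);
switch (status)
{
case HAL_OK:       /* what I actually receive after the timeout path */
    break;
case HAL_TIMEOUT:  /* what I expected */
    break;
case HAL_ERROR:
case HAL_BUSY:
default:
    break;
}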
It happens from the first time transmit is called. I don't know if this is normal behavior. Obviously it's a HAL CAN code issue, and I can't do anything from user code to fix it.
Has anyone else who has worked with HAL CAN for STM32 faced this situation?
My TX does not work, and I was looking for possible causes of the failure through debugging, which is how I saw this. Maybe there is more than this code problem to solve, but I think my code is too simple to fail: I only call send-data in an infinite loop (after CAN init, of course). By debugging I can see the data arriving in the mailboxes correctly, but the TXOK flag never sets.
So maybe this strange HAL CAN behavior is part of the cause of my transmission failure.
It seems to be a third-party bug. Has anyone dealt with the same situation? Any suggestions about my user code? Is there something to improve?

MSP430 I2C slave holding clock line low

I'm more of a high-level software guy but have been working on some embedded projects lately, so I'm sure there's something obvious I'm missing here, though I have spent over a week trying to debug this and every MSP-related link in Google is purple at this point...
I currently have an MSP430F5529 set up as an I2C slave device whose only responsibility is to receive packets from a master device. The master uses industry-grade I2C and has been heavily tested and ruled out as the source of my problem here. I'm using Code Composer as my IDE with the TI v15.12.3.LTS compiler.
What currently happens is that the master queries how many packets (of size 62 bytes) the slave can hold, then sends over a few packets, which the MSP currently just discards. This happens every 100 ms on the master side, and for the minimal example below the MSP always answers 63 when asked how many packets it can hold. I have tested the master with a Total Phase Aardvark and everything works fine with that, so I'm sure it's a problem on the MSP side. The problem is as follows:
The program works for 15-20 minutes, sending over tens of thousands of packets. At some point the slave starts to hold the clock line low, and when paused in debug mode it is shown to be stuck in the start interrupt. The same sequence of events happens every single time to cause this:
1) The master queries how many packets the MSP can hold.
2) A packet is sent successfully.
3) Another packet is attempted, but fewer than 62 bytes are received by the MSP (counted by logging the number of RX interrupts). No stop condition is sent, so the master times out.
4) Another packet is attempted. A single byte is sent before the stop condition arrives.
5) Another packet is attempted. A start interrupt fires, then a TX interrupt, and the device hangs.
Ignoring the fact that I'm not handling the timeout errors on the master side, something very strange is happening to cause that sequence of events, and it happens every single time.
Below is the minimal working example that reproduces the problem. My particular concern is with the SetUpRx and SetUpTx functions. The examples in the Code Composer Resource Explorer only cover Rx or Tx on their own, so I'm not sure I'm combining them correctly. I also tried removing SetUpRx completely, putting the device into transmit mode and replacing all calls to SetUpTx/SetUpRx with mode = TX_MODE/RX_MODE, which did work but still eventually holds the clock line low. Ultimately I'm not 100% sure how to set this up to handle both Rx and Tx requests.
#include "driverlib.h"
#define SLAVE_ADDRESS (0x48)
// During main loop, set mode to either RX_MODE or TX_MODE
// When I2C is finished, OR mode with I2C_DONE, hence upon exit mdoe will be one of I2C_RX_DONE or I2C_TX_DONE
#define RX_MODE (0x01)
#define TX_MODE (0x02)
#define I2C_DONE (0x04)
#define I2C_RX_DONE (RX_MODE | I2C_DONE)
#define I2C_TX_DONE (TX_MODE | I2C_DONE)
/**
* I2C message ids
*/
#define MESSAGE_ADD_PACKET (3)
#define MESSAGE_GET_NUM_SLOTS (5)
static volatile uint8_t mode = RX_MODE; // current mode, TX or RX
static volatile uint8_t rx_buff[64] = {0}; // where to write rx data
static volatile uint8_t* rx_data = rx_buff; // used in rx interrupt
static volatile uint8_t tx_len = 0; // number of bytes to reply with
static inline void SetUpRx(void) {
    // Specify receive mode
    USCI_B_I2C_setMode(USCI_B0_BASE, USCI_B_I2C_RECEIVE_MODE);
    // Enable the I2C module to start operations
    USCI_B_I2C_enable(USCI_B0_BASE);
    // Enable interrupts
    USCI_B_I2C_clearInterrupt(USCI_B0_BASE, USCI_B_I2C_TRANSMIT_INTERRUPT);
    USCI_B_I2C_enableInterrupt(USCI_B0_BASE, USCI_B_I2C_START_INTERRUPT + USCI_B_I2C_RECEIVE_INTERRUPT + USCI_B_I2C_STOP_INTERRUPT);
    mode = RX_MODE;
}

static inline void SetUpTx(void) {
    // Set transmit mode
    USCI_B_I2C_setMode(USCI_B0_BASE, USCI_B_I2C_TRANSMIT_MODE);
    // Enable the I2C module to start operations
    USCI_B_I2C_enable(USCI_B0_BASE);
    // Enable the transmit interrupt
    USCI_B_I2C_clearInterrupt(USCI_B0_BASE, USCI_B_I2C_RECEIVE_INTERRUPT);
    USCI_B_I2C_enableInterrupt(USCI_B0_BASE, USCI_B_I2C_START_INTERRUPT + USCI_B_I2C_TRANSMIT_INTERRUPT + USCI_B_I2C_STOP_INTERRUPT);
    mode = TX_MODE;
}
/**
 * Parse the incoming message and set up the tx_data pointer and tx_len for the I2C reply.
 *
 * In most cases, tx_buff is filled with data, as the replies that require it either
 * aren't used frequently or use few bytes. Straight pointer assignment is likely better,
 * but that means everything would have to be volatile, which seems overkill for this.
 */
static void DecodeRx(void) {
    static uint8_t message_id = 0;
    message_id = (*rx_buff);
    rx_data = rx_buff;
    switch (message_id) {
    case MESSAGE_ADD_PACKET:     // Add some data...
        // do nothing for now
        tx_len = 0;
        break;
    case MESSAGE_GET_NUM_SLOTS:  // How many packets can we send to the device?
        tx_len = 1;
        break;
    default:
        tx_len = 0;
        break;
    }
}
void main(void) {
    // Stop WDT
    WDT_A_hold(WDT_A_BASE);
    // Assign I2C pins to USCI_B0
    GPIO_setAsPeripheralModuleFunctionInputPin(GPIO_PORT_P3, GPIO_PIN0 + GPIO_PIN1);
    // Initialize I2C as a slave device
    USCI_B_I2C_initSlave(USCI_B0_BASE, SLAVE_ADDRESS);
    // Go into listening mode
    SetUpRx();
    while (1) {
        __bis_SR_register(LPM4_bits + GIE);
        // Message received over I2C, check if we have anything to transmit
        switch (mode) {
        case I2C_RX_DONE:
            DecodeRx();
            if (tx_len > 0) {
                // start a reply
                SetUpTx();
            } else {
                // nothing to do, back to listening
                mode = RX_MODE;
            }
            break;
        case I2C_TX_DONE:
            // go back to listening
            SetUpRx();
            break;
        default:
            break;
        }
    }
}
/**
 * I2C interrupt routine
 */
#pragma vector=USCI_B0_VECTOR
__interrupt void USCI_B0_ISR(void) {
    switch (__even_in_range(UCB0IV, 12)) {
    case USCI_I2C_UCSTTIFG:   // start condition received
        break;
    case USCI_I2C_UCRXIFG:    // byte received
        *rx_data = USCI_B_I2C_slaveGetData(USCI_B0_BASE);
        ++rx_data;
        break;
    case USCI_I2C_UCTXIFG:    // byte requested by the master
        if (tx_len > 0) {
            USCI_B_I2C_slavePutData(USCI_B0_BASE, 63);
            --tx_len;
        }
        break;
    case USCI_I2C_UCSTPIFG:   // stop condition received
        // OR'ing mode lets the main loop see the transfer finished
        mode |= I2C_DONE;
        __bic_SR_register_on_exit(LPM4_bits);
        break;
    }
}
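For what it's worth, one variant I considered but haven't ruled in or out: enabling both data-direction interrupts up front and branching purely on UCB0IV, rather than switching interrupt enables in SetUpRx/SetUpTx. A sketch, using the same driverlib calls as above:

static inline void SetUpI2C(void) {
    USCI_B_I2C_setMode(USCI_B0_BASE, USCI_B_I2C_RECEIVE_MODE);
    USCI_B_I2C_enable(USCI_B0_BASE);
    // Enable every interrupt source once; the ISR decides what to do
    USCI_B_I2C_enableInterrupt(USCI_B0_BASE,
            USCI_B_I2C_START_INTERRUPT +
            USCI_B_I2C_RECEIVE_INTERRUPT +
            USCI_B_I2C_TRANSMIT_INTERRUPT +
            USCI_B_I2C_STOP_INTERRUPT);
    mode = RX_MODE;
}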
Any help on this would be much appreciated!
Thank you!

STM8 interrupt serial receive

I am new to the STM8, trying to use an STM8S103F3 with IAR Embedded Workbench.
Using C, I like to use the registers directly.
I need serial at 14400 baud, 8N2. Getting the UART to transmit is easy, as there are numerous good tutorials and examples on the net.
The need is to have the UART receive on interrupt; nothing else will do.
That is the problem.
According to iostm8s103f3.h (IAR), there are 5 interrupts on the 0x14 vector:
UART1_R_IDLE, UART1_R_LBDF, UART1_R_OR, UART1_R_PE, UART1_R_RXNE
According to Silverlight Developer: Registers on the STM8,
Vector 19 (0x13) = UART_RX
According to ST Microelectronics STM8S.h
#define UART1_BaseAddress 0x5230
#define UART1_SR_RXNE ((u8)0x20) /*!< Read Data Register Not Empty mask */
#if defined(STM8S208) ||defined(STM8S207) ||defined(STM8S103) ||defined(STM8S903)
#define UART1 ((UART1_TypeDef *) UART1_BaseAddress)
#endif /* (STM8S208) ||(STM8S207) || (STM8S103) || (STM8S903) */
According to the STM8S reference manual (RM0016):
The RXNE flag (Rx buffer not empty) is set on the last sampling clock edge,
when the data is transferred from the shift register to the Rx buffer.
It indicates that a data is ready to be read from the SPI_DR register.
Rx buffer not empty (RXNE)
When set, this flag indicates that there is a valid received data in the Rx buffer.
This flag is reset when SPI_DR is read.
Then I wrote:

#pragma vector = UART1_R_RXNE_vector   // as iostm8s103f3.h is used, that means 0x14
__interrupt void UART1_IRQHandler(void)
{
    unsigned char recd;
    recd = UART1_SR;
    if (1 == UART1_SR_RXNE) recd = UART1_DR;
    // etc.
}
No good. I continually get interrupts, UART1_SR_RXNE is set, but UART1_DR is empty and no UART receive has happened. I have disabled all other interrupts I can see that could vector here, and still no good.
The SPI also sets this flag; presumably the UART and SPI cannot be used together.
I sorely need to get this serial receive interrupt going. Please help. Thank you.
The problem was one bit incorrectly set in the UART1 setup.
The complete setup for UART1 on the STM8S103F3 is now (IAR):
void InitialiseUART(void)
{
    unsigned char tmp = UART1_SR;  // Clear any pending status by reading SR, then DR.
    tmp = UART1_DR;

    // Reset the UART registers to the reset values.
    UART1_CR1 = 0;
    UART1_CR2 = 0;
    UART1_CR4 = 0;
    UART1_CR3 = 0;
    UART1_CR5 = 0;
    UART1_GTR = 0;
    UART1_PSCR = 0;

    // Set up the port to 14400,n,8,2.
    UART1_CR1_M = 0;       // 8 data bits.
    UART1_CR1_PCEN = 0;    // Disable parity.
    UART1_CR3 = 0x20;      // 2 stop bits.
    UART1_BRR2 = 0x07;     // Set the baud rate registers to 14400
    UART1_BRR1 = 0x45;     // based upon a 16 MHz system clock.

    // Disable the transmitter and receiver.
    UART1_CR2_TEN = 0;     // Disable transmit.
    UART1_CR2_REN = 0;     // Disable receive.

    // Set the clock polarity, clock phase and last bit clock pulse.
    UART1_CR3_CPOL = 0;
    UART1_CR3_CPHA = 0;
    UART1_CR3_LBCL = 0;

    // Set up the receive interrupt (RM0016 p358, p362).
    UART1_CR2_TIEN = 0;    // Transmitter interrupt enable
    UART1_CR2_TCIEN = 0;   // Transmission complete interrupt enable
    UART1_CR2_RIEN = 1;    // Receiver interrupt enable
    UART1_CR2_ILIEN = 0;   // IDLE line interrupt enable

    // Turn on the UART transmit, receive and the UART clock.
    UART1_CR2_TEN = 1;
    UART1_CR2_REN = 1;
    UART1_CR1_PIEN = 0;
    UART1_CR4_LBDIEN = 0;
}
//-----------------------------
#pragma vector = UART1_R_RXNE_vector
__interrupt void UART1_IRQHandler(void)
{
    byte recd;
    recd = UART1_DR;
    // send the byte to the circular buffer
}
You forgot to enable the global interrupt flag:

asm("rim");  // Enable global interrupts
This happens on non-isolated connections, whenever you connect your board's ground to another supply's ground (a USB-to-TTL converter connected to a PC, etc.). In that case the microcontroller picks up noise, e.g. through the high-value Y capacitor of the SMPS.
Simply connect your RX and TX lines via 1 kΩ resistors, and put 1 nF capacitors (which can be decreased for high speeds) between these lines and ground (on the microcontroller side) to suppress the noise.

DMA interrupt for SPI

I'm trying to recreate a project that writes to an SD card (using FatFs) on a dsPIC33FJ128GP802 microcontroller.
Currently, to collect the data from the SPI, I have a do/while that loops 512 times, writing a dummy value to the SPI buffer, waiting for the SPI flag, then reading the SPI value, like so:
int c = 512;
do {
    SPI1BUF = 0xFF;     /* clock out a dummy byte */
    while (!_SPIRBF);   /* wait for the receive flag */
    *p++ = SPI1BUF;     /* store the received byte */
} while (--c);
I'm trying to recreate this using the DMA interrupt, but it's not working as I had hoped. I'm using one DMA channel; SPI is in 8-bit mode for the time being, so the DMA is in byte mode; it's also in null-write mode, and continuous without ping-pong. My buffers are one-member arrays, and the DMA count matches.
DMA2CONbits.CHEN  = 0;   // Disable the channel while configuring
DMA2CONbits.SIZE  = 1;   // Byte-sized (8-bit) transfers
DMA2CONbits.DIR   = 0;   // Read from the peripheral (SPI) to DMA RAM
DMA2CONbits.HALF  = 0;   // Interrupt on the full block
DMA2CONbits.NULLW = 1;   // Null data write mode on
DMA2CONbits.AMODE = 0;   // Register indirect with post-increment
DMA2CONbits.MODE  = 0;   // Continuous mode without ping-pong
DMA2REQbits.IRQSEL = 10; // Transfer done (SPI1)
DMA2STA = __builtin_dmaoffset(SPIBuffA);  // receive buffer
DMA2PAD = (volatile unsigned int) &SPI1BUF;
DMA2CNT = 0;             // transfer count = 1 (DMAxCNT = N - 1)
IFS1bits.DMA2IF = 0;     // Clear the DMA interrupt flag
IEC1bits.DMA2IE = 1;     // Enable the DMA interrupt
From what I understand of null-write mode, the DMA writes a null value to the peripheral every time a read is performed. However, the DMA won't start until an initial write is performed by the CPU, so I've used the manual/force method to start the DMA:
DMA2CONbits.CHEN = 1;    // Enable the DMA channel
DMA2REQbits.FORCE = 1;   // Manual (forced) start
The interrupt now fires and runs without error. However, the code later shows that the collected data is incorrect.
My interrupt is simple: all I'm doing is placing the collected data (which I assume is placed in my DMA buffer as allocated above) into a buffer which is used throughout my program.
void __attribute__((interrupt, no_auto_psv)) _DMA2Interrupt(void) {
    if (RxDmaBuffer == 513) {
        DMA2CONbits.CHEN = 0;              // stop the channel after the block
        rxFlag = 1;
    } else {
        buffer[RxDmaBuffer] = SPI1BUF;     // note: reads SPI1BUF directly, not SPIBuffA
        RxDmaBuffer++;
    }
    IFS1bits.DMA2IF = 0;                   // Clear the DMA channel 2 interrupt flag
}
When the interrupt has run 512 times, I stop the DMA and raise a flag.
What am I missing? How is this not the same as the non-DMA method? Is it perhaps the lack of the while loop that waits for the completion of the SPI transmission (while (!_SPIRBF);)? Unfortunately, with null-write mode automatically sending and receiving the SPI data, I can't manually put any sort of wait in.
I've also tried using two DMA channels, one to write and one to read, but that didn't work either (plus I need that channel later for the proper writing to the SD card).
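A variant worth noting: DMAxCNT holds the transfer count minus one, so the channel can be left to collect a whole 512-byte block and interrupt once (an untested sketch, reusing the setup above):

unsigned char SPIBuffA[512] __attribute__((space(dma)));   /* full-block DMA buffer */

/* ... same DMA2 configuration as above, except: */
DMA2CNT = 511;   /* 512 transfers per block (DMAxCNT = N - 1) */

void __attribute__((interrupt, no_auto_psv)) _DMA2Interrupt(void)
{
    /* The whole block is now in SPIBuffA (not SPI1BUF); hand it off here. */
    rxFlag = 1;
    IFS1bits.DMA2IF = 0;   /* Clear the DMA channel 2 interrupt flag */
}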
Any help would be great!
