I am trying to write device code for the C8051F340 to get data from the host (PC) via USB. I have an example from Silicon Labs and the code looks like this:
void Receive_File(void)
{
    ReadStageLength = ((BytesToRead - BytesRead) > MAX_BLOCK_SIZE_READ) ? MAX_BLOCK_SIZE_READ : (BytesToRead - BytesRead);

    BytesRead += Block_Read((U8*)(&TempStorage[BlockIndex]), ReadStageLength);   // Read Block
    BlockIndex++;

    // If device has received as many bytes as fit on one FLASH page, disable interrupts,
    // write page to flash, reset packet index, enable interrupts
    // Send handshake packet 0xFF to host after FLASH write
    if ((BlockIndex == (BLOCKS_PR_PAGE)) || (BytesRead == BytesToRead))
    {
        Page_Erase((U8*)(PageIndices[PageIndex]));
        Page_Write((U8*)(PageIndices[PageIndex]));
        PageIndex++;
        Led1 = !Led1;
        BlockIndex = 0;

        Buffer[0] = 0xFF;
        Block_Write(Buffer, 1);   // Send handshake Acknowledge to host
    }

    // Go to Idle state if last packet has been received
    if (BytesRead == BytesToRead) { M_State = ST_IDLE_DEV; Led1 = 0; }
}
// Startup code for SDCC to disable the WDT before initializing variables so that
// a reset does not occur
#if defined SDCC
void _sdcc_external_startup (void)
{
    PCA0MD &= ~0x40;   // Disable Watchdog timer
}
#endif
I have some questions I want to ask:
1. Where does the data go? Into Buffer[0]?
2. If the host transfers a hex value, can I just read Buffer[0] to get it?
Sorry, I am a newbie.
Thank you.
Your received data is stored in the array TempStorage.
You used Buffer[0] (set to the value 0xFF) to send data back to the host.
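To make that concrete, here is a minimal sketch of my own (not from the Silicon Labs example) showing where you would look for the first byte the host sent; the indexing assumes TempStorage is laid out as in the example project, so check its declaration there.
// Hedged sketch reusing the example's names: the host's bytes end up in
// TempStorage, while Buffer[0] only carries the 0xFF handshake back out.
U8 *received  = (U8 *)&TempStorage[0];   // first block the host transferred
U8 first_byte = received[0];             // e.g. the first hex value you sent from the PC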
I'm trying to understand how the STM32F091VB manages sending data over a serial protocol with the function HAL_UART_Transmit_IT().
At the moment I have a function, called from main(), that creates the packet and sends it over serial; it is something like this:
tx1[0] = STX;
tx1[1] = 0xFF;
tx1[2] = 0x80;
tx1[3] = 0x80;
DE_TAST_HIGH;
HAL_UART_Transmit_IT(&huart3, tx1, 8);
Now, the data I'm sending is quite small, so the code runs pretty fast, and I'm trying to understand what happens if I try to send a huge packet over the serial protocol.
For instance, if my tx1[] is 100 bytes, does HAL_UART_Transmit_IT() block the CPU while the full packet is sent out of the serial port, or does it work more like a separate process where I tell the micro to send that packet and, while it is being sent, the micro also processes the remaining part of my code/main function?
I've tried searching the micro's datasheet to see if there was something about this process, but I had no luck. I've read stm32f0xx_hal_uart.c and it confirms that the data is sent via interrupt in a non-blocking mode, but I would like some more in-depth documentation about it.
First of all, you need to understand how HAL_UART_Transmit_IT is meant to be used. We can get some help from the STM FAQ.
The function is "non-blocking" because when you call it, it only does some configuration of the interrupts and then returns. The buffer is not transmitted during the call to your function; instead, the heavy lifting is deferred to a later stage.
We can also have a look at the source code to back up what I said (note: I kept only the juicy parts).
Blocking
HAL_StatusTypeDef HAL_UART_Transmit(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size, uint32_t Timeout)
{
    uint16_t* tmp;
    uint32_t tickstart = 0U;
    // [ ... ]
    huart->TxXferSize = Size;
    huart->TxXferCount = Size;
    while(huart->TxXferCount > 0U)
    {
        // [ ... ]
        // This is where the actual HW regs are accessed, starting the transfer
        huart->Instance->DR = (*pData++ & (uint8_t)0xFF);
    }
    // [ ... ]
    return HAL_OK;
}
Non Blocking
HAL_StatusTypeDef HAL_UART_Transmit_IT(UART_HandleTypeDef *huart, uint8_t *pData, uint16_t Size)
{
    huart->pTxBuffPtr = pData;
    huart->TxXferSize = Size;
    huart->TxXferCount = Size;

    /* Enable the UART Transmit data register empty Interrupt */
    // This is the only part where HW regs are accessed. What is happening here??
    SET_BIT(huart->Instance->CR1, USART_CR1_TXEIE);

    return HAL_OK;
}
The _IT function only enables one interrupt, TXE (transmit data register empty), which is also described in the reference manual.
This means we will receive an interrupt whenever the TX buffer is free. Who is actually sending the data then?
With the help of the FAQs and reading the source code, we find that void HAL_UART_IRQHandler(UART_HandleTypeDef *huart) does something like this:
/* UART in mode Transmitter ------------------------------------------------*/
if(((isrflags & USART_SR_TXE) != RESET) && ((cr1its & USART_CR1_TXEIE) != RESET))
{
    UART_Transmit_IT(huart);
    return;
}
This in turn calls UART_Transmit_IT:
static HAL_StatusTypeDef UART_Transmit_IT(UART_HandleTypeDef *huart)
{
    uint16_t* tmp;

    /* Check that a Tx process is ongoing */
    if(huart->gState == HAL_UART_STATE_BUSY_TX)
    {
        huart->Instance->DR = (uint8_t)(*huart->pTxBuffPtr++ & (uint8_t)0x00FF);

        if(--huart->TxXferCount == 0U)
        {
            /* Disable the UART Transmit data register empty Interrupt */
            CLEAR_BIT(huart->Instance->CR1, USART_CR1_TXEIE);

            /* Enable the UART Transmit Complete Interrupt */
            SET_BIT(huart->Instance->CR1, USART_CR1_TCIE);
        }
        return HAL_OK;
    }
    else
    {
        return HAL_BUSY;
    }
}
This function transmits only one byte! It then decrements the transfer counter (remember, all the information was stored in the UART handle) and, when the counter reaches 0, the transmit-complete interrupt is finally enabled.
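As a usage sketch of my own (reusing huart3 and tx1 from the question, sizes illustrative): you start the transfer, the CPU stays free, and the HAL tells you it is done through the transmit-complete callback.
// Hedged usage sketch, not from the FAQ. The buffer must stay valid until the
// callback fires, because the HAL does not copy it.
static uint8_t tx1[4] = { 0x02, 0xFF, 0x80, 0x80 };   // 0x02 standing in for STX

void send_packet(void)
{
    if (HAL_UART_Transmit_IT(&huart3, tx1, sizeof(tx1)) == HAL_OK)
    {
        // Returns immediately; bytes leave one TXE interrupt at a time.
    }
}

// Called from HAL_UART_IRQHandler() once TxXferCount reached 0 and TC fired.
void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart3)
    {
        // Safe to reuse tx1 or start the next packet here.
    }
}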
Interrupts
Note that STM32CubeMX does the peripheral initialization and interrupt linking for you, but if you program from scratch you need to remember to write and register the UART IRQ handler yourself.
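A minimal sketch of what that looks like, assuming USART3 and the huart3 handle from the question:
// Hedged sketch: huart3 is assumed to be defined and initialized elsewhere.
extern UART_HandleTypeDef huart3;

// The vector just forwards to the HAL, which then runs UART_Transmit_IT as above.
void USART3_IRQHandler(void)
{
    HAL_UART_IRQHandler(&huart3);
}

// ...and during init, enable the interrupt in the NVIC:
// HAL_NVIC_SetPriority(USART3_IRQn, 0, 0);
// HAL_NVIC_EnableIRQ(USART3_IRQn);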
You can use the code navigator here to review my snippets and investigate further.
I'm struggling with, probably, a very simple problem.
I have a Cypress CY8 controller acting as SPI master, which should communicate with a PIC32mx in slave mode to exchange data packets.
However, I cannot even get a simple transmission of multiple bytes from the master to the slave working. I've set up the Cypress to transmit a char of increasing value (0-255) with a pause (and slave-select toggle) in between. The PIC should read the incoming byte and then print it over UART to my PC (the UART connection works).
But the PIC only prints the first character it receives, over and over, instead of the value being updated.
If I check my logic sniffer, the Cypress does send incrementing values, and the PIC relays them back over the MISO line (it looks like the shift buffer isn't cleared).
What could this be?
The Cypress without the PIC attached gives proper output:
https://dl.dropboxusercontent.com/u/3264324/Schermafdruk%202015-07-28%2015.43.28.png
With the PIC attached, it relays the data back over MISO:
https://dl.dropboxusercontent.com/u/3264324/Schermafdruk%202015-07-28%2015.43.45.png
And this is my (now) extremely basic code to test it:
TRISBbits.TRISB2 = 1;   // make RB2 an input (SDI2)
TRISBbits.TRISB5 = 0;   // make RB5 an output (SDO2)
TRISBbits.TRISB15 = 1;  // make RB15 an input (SCK must be an input in slave mode)
ANSELA = 0; // all ports digital
ANSELB = 0; // all ports digital
SYSKEY = 0x00000000;
SYSKEY = 0xAA996655;
SYSKEY = 0x556699AA;
CFGCONbits.IOLOCK=0; // unlock configuration
CFGCONbits.PMDLOCK=0;
SDI2R = 0b0100; //SDI2 on pin RB2
SS2R = 0b0011; //SS2 on pin rb10
RPB5R = 0b0100; //SDO2 on pin RB5
// SCLK is connected to pin RB14 (SCK) by default
SYSKEY = 0x00000000;
SPI2CON = 0; // Stops and resets the SPI1.
rData=SPI2BUF; // clears the receive buffer
SPI2BRG=207; // use FPB/4 clock frequency <-- not important in slave mode right?
SPI2STATCLR=0x40; // clear the Overflo
SPI2CON=0x8180;
unsigned char t;
while(1){
    t = SpiChnReadC(2);
    //t = SPI2BUF;      // <== I've tried this also
    sendData(t);        // <== UART routine
}
As I do receive a character, and the SPI data is constantly relayed back to the Cypress, I think something goes wrong with reading/clearing the SPI data structure in the PIC. But I can't figure out why.
As I read in the datasheet, reading from SPI2BUF gives me the received data and clears the read flags so new data can be received, but it looks like that doesn't happen...
Can someone shine a light on this for me?
Thanks in advance
Timberleek
You should try making your SPI handler ISR-driven to keep from constantly polling; it can also help with debugging, since you'll only get notifications when the SPI is actually transacting.
NOTE: I'm bringing this from my FreeRTOS implementation, so my ISR definition is not exactly XC32 syntax...
/* Open SPI */
SPI1CON = 0;
spi_flags = SPICON_MODE32 | SPICON_ON;
SpiChnOpen(1,spi_flags,BRG_VAL);
SpiChnGetRov(1,TRUE);
mSPI1ClearAllIntFlags();
mSPI1SetIntPriority(priority + 1);
mSPI1SetIntSubPriority(0);
mSPI1RXIntEnable(1);
void vSPI1InterruptHandler(void)
{
    unsigned long data;

    if (IFS0bits.SPI1EIF == 1)
    {
        mSPI1EClearIntFlag();
    }

    if (IFS0bits.SPI1RXIF == 1)
    {
        data = SPI1BUF;
        //sendData(data);
    }

    mSPI1RXClearIntFlag();
}
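If it helps, here is a rough adaptation of that idea to your SPI2/XC32 setup, without the FreeRTOS wrapper. Treat it as a sketch: the vector name, flag bits and priority level are assumptions based on generic PIC32MX naming and must be checked against your exact part's datasheet (and you still have to enable the RX interrupt and set its priority during init).
#include <xc.h>
#include <sys/attribs.h>

// Hedged sketch only: _SPI_2_VECTOR, IFS1bits.SPI2xxIF and IPL3SOFT are assumed
// names; verify them (and the matching IPC/IEC setup) for your PIC32MX variant.
void __ISR(_SPI_2_VECTOR, IPL3SOFT) Spi2Handler(void)
{
    if (IFS1bits.SPI2RXIF)              // a byte arrived from the master
    {
        unsigned char t = SPI2BUF;      // reading SPI2BUF frees the receive buffer
        sendData(t);                    // forward it over UART, as in the question
        IFS1bits.SPI2RXIF = 0;          // clear the RX flag after servicing
    }
    if (IFS1bits.SPI2EIF)               // receive overflow or other error
    {
        SPI2STATCLR = 0x40;             // clear SPIROV, as in the question's init code
        IFS1bits.SPI2EIF = 0;
    }
}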
I am new to embedded programming and am trying to get my first I2C project working. I am using the PIC32MX795F512L. I am pretty much following the microchip datasheet for I2C on PIC32. The problem I'm having is the S bit is never cleared from the I2C1STAT register. It is actually unclear to me whether I have to do this or if it is done automatically at the conclusion of the Start event, but right now I'm trying to manually clear it. However, nothing that I do seems to have an effect. If more information is needed to make it easier to understand what is happening let me know. I am using a PICKIT3 so I can get debugging information as well. I know that the Master interrupt occurs, the S bit gets set, I exit the interrupt code and hang on the while statement checking the I2C1STATbits.S.
Edit: I'm editing this post to show my new code instead of the old code. I am now using a 20 MHz peripheral clock; just one of the many things I tried today that did not work. Delay is just a 256 ms delay. Super long, I know, but it was quick.
main()
{
    //Setup I2C1CON
    I2C1CONbits.SIDL = 0;       //Continue to run while in Idle
    I2C1CONbits.SCLREL = 1;     //Release the clock (Unsure of this)
    I2C1CONbits.A10M = 0;       //Using a 7 bit slave address
    I2C1CONbits.DISSLW = 1;     //Slew rate control disabled because running at 100 KHZ
    I2C1ADD = 0x1E;             //Slave address without read or write bit
    I2C1BRG = 0x060;            //Set the BRG clock rate - Based on Page 24-19
    I2C1CONbits.ON = 1;         //Turn on the I2C module
    delay();

    I2C1CONbits.SEN = 1;        //Initiate a start event
    while(I2C1CONbits.SEN == 1);        //Wait until Start event is done

    I2C1TRN = 0x3C;             //Load the address into the Transmit register
    while(I2C1STATbits.TRSTAT == 1);
    while(I2C1STATbits.ACKSTAT == 0);   //Wait for a ACK from the device

    I2C1TRN = 0x00;
    while(I2C1STATbits.TRSTAT == 1);
    while(I2C1STATbits.ACKSTAT == 0);

    I2C1TRN = 0x70;
    while(I2C1STATbits.TRSTAT == 1);
    while(I2C1STATbits.ACKSTAT == 0);

    while(1);
}
Thanks for any help.
I'm also just beginning with the PIC32MZ family, setting up the I2C to talk to various memory chips.
I used your code and modified it so that it would work properly. Since I am using the PIC32MZ family, I believe the I2C registers should be essentially the same.
I2C configuration:
I2C1CONbits.SIDL = 0; // Stop in Idle Mode bit -> Continue module operation when the device enters Idle mode
I2C1CONbits.A10M = 0; // 10-bit Slave Address Flag bit -> I2CxADD register is a 7-bit slave address
I2C1CONbits.DISSLW = 1; // Slew Rate Control Disable bit -> Slew rate control disabled for Standard Speed mode (100 kHz)
I2C1CONbits.ACKDT = 0; // Acknowledge Data bit -> ACK is sent
I2C1BRG = 0x0F3; // Baud Rate Generator set to provide 100KHz for SCL with 50 MHz xtal.
I followed the transmission steps provided in the I2C datasheet, so it is easy to follow along with the PDF and my comments in the code.
I2C Data Transmission:
// 1. Turn on the I2C module by setting the ON bit (I2CxCON<15>) to ‘1’.
I2C1CONbits.ON = 1; // I2C Enable bit -> Enables the I2C module and configures the SDAx and SCLx pins as serial port pins
//------------- WRITE begins here ------------
// 2. Assert a Start condition on SDAx and SCLx.
I2C1CONbits.PEN = 0; // Stop Condition Enable Bit -> Stop Condition Idle
I2C1CONbits.SEN = 1; // Start Condition Enable bit -> Initiate Start condition on SDAx and SCLx pins; cleared by module
while(I2C1CONbits.SEN == 1); // SEN is to be cleared when I2C Start procedure has been completed
// 3. Load the Data on the bus
I2C1TRN = 0b10100000 ; // Write the slave address to the transmit register for I2C WRITE
while(I2C1STATbits.TRSTAT == 1); // MASTER Transmit still in progress
while(I2C1STATbits.ACKSTAT == 1); // Master should receive the ACK from Slave, which will clear the I2C1STAT<ACKSTAT> bit.
I2C1TRN = 0xCE; // Register Address
while(I2C1STATbits.TRSTAT == 1);
while(I2C1STATbits.ACKSTAT == 1);
I2C1TRN = 0xCF; // Register Value
while(I2C1STATbits.TRSTAT == 1);
while(I2C1STATbits.ACKSTAT == 1);
I2C1CONbits.PEN = 1; // Stop Condition Enable Bit -> Initiate Stop condition on SDAx and SCLx pins; cleared by module
//-------------- WRITE ends here -------------
This code works well; I toggled a couple of LEDs to indicate a successful write procedure.
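As a small aside (my own addition, not part of the original sequence): the two waits repeated after every byte above can be wrapped in a helper so the write sequence stays readable.
// Hedged helper sketch, using the same status bits as the sequence above.
static void i2c1_wait_ack(void)
{
    while (I2C1STATbits.TRSTAT == 1);    // master still clocking the byte out
    while (I2C1STATbits.ACKSTAT == 1);   // wait for the slave's ACK (bit clears to 0)
}
Each I2C1TRN load above would then simply be followed by i2c1_wait_ack().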
The while(!(I2C1STATbits.ACKSTAT == 0)); is the right way, I guess. I2C1ADD is used to set the slave address of the current PIC microcontroller if you are using its I2C module as a slave. If any master issues a start with that slave address, the PIC will respond to it as a slave.
Read this PDF; early on it specifies that the PIC's I2C module is configured as both master and slave, and that a master request is matched by the same PIC against the slave address held in I2C1ADD.
http://ww1.microchip.com/downloads/en/DeviceDoc/61116F.pdf
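In other words, a hedged sketch of the distinction (reusing the 0x1E address from the question; the 0x23 value is made up): as a master you address the target through I2C1TRN together with the R/W bit, while I2C1ADD only sets the address this PIC itself answers to as a slave.
I2C1TRN = (0x1E << 1) | 0;   // master: target address 0x1E plus write bit = 0x3C, as in the question
I2C1ADD = 0x23;              // hypothetical: only the address this PIC responds to in slave mode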
I'm trying to recreate a project of writing to an SD card (using FatFS) for a dsPIC33FJ128GP802 microcontroller.
Currently, to collect the data from the SPI, I have a do/while that loops 512 times and writes a dummy value to the SPI buffer, waits for the SPI flag, then reads the SPI value, like so:
int c = 512;
do {
    SPI1BUF = 0xFF;
    while (!_SPIRBF);
    *p++ = SPI1BUF;
} while (--c);
I'm trying to recreate this using the DMA interrupts, but it's not working like I had hoped. I'm using one DMA channel; the SPI is in 8-bit mode for the time being, so the DMA is in byte mode; it's also in 'null write' mode, and continuous without ping-pong. My buffers are only one-member arrays, and the DMA is matched to them.
DMA2CONbits.CHEN = 0; //Disable DMA
DMA2CONbits.SIZE = 1; //Receive bytes (8 bits)
DMA2CONbits.DIR = 0; //Receive from SPI to DMA
DMA2CONbits.HALF = 0; //Receive full blocks
DMA2CONbits.NULLW = 1; //null write mode on
DMA2CONbits.AMODE = 0; //Register indirect with post-increment
DMA2CONbits.MODE = 0; //continuous mode without ping-pong
DMA2REQbits.IRQSEL = 10; //Transfer done (SPI)
DMA2STA = __builtin_dmaoffset(SPIBuffA); //receive buffer
DMA2PAD = (volatile unsigned int) &SPI1BUF;
DMA2CNT = 0; //transfer count = 1
IFS1bits.DMA2IF = 0; //Clear DMA interrupt
IEC1bits.DMA2IE = 1; //Enable DMA interrupt
From what I understand of null write mode, the DMA will write a null value to the SPI every time a read is performed. However, the DMA won't start until an initial write is performed by the CPU, so I've used the manual/force method to start the DMA.
DMA1CONbits.CHEN = 1; //Enable DMA
DMA1REQbits.FORCE = 1; //Manual write
The interrupt now starts and runs without error. However, the code later shows that the collected data is incorrect.
My interrupt is simple: all I'm doing is placing the collected data (which I assume is placed in my DMA buffer as allocated above) into a buffer, via a pointer, which is used throughout my program.
void __attribute__((interrupt, no_auto_psv)) _DMA2Interrupt(void)
{
    if (RxDmaBuffer == 513)
    {
        DMA2CONbits.CHEN = 0;
        rxFlag = 1;
    }
    else
    {
        buffer[RxDmaBuffer] = SPI1BUF;
        RxDmaBuffer++;
    }

    IFS1bits.DMA2IF = 0;   // Clear the DMA2 Interrupt Flag
}
When the interrupt has run 512 times, I stop the DMA and throw a flag.
What am I missing? How is this not the same as the non-DMA method? Is it perhaps the lack of the while loop which waits for the completion of the SPI transmission (while (!_SPIRBF);)? Unfortunately, with the null write mode automatically sending and receiving the SPI data, I can't manually put any sort of wait in.
I've also tried using two DMA channels, one to write and one to read, but this also didn't work (plus I need that channel later for when I come to proper writing to the SD card).
Any help would be great!
The IRQ handler below drains the USART3 of 32 incoming bytes of data. The first IRQ TC (transfer complete) event reads the first 6 bytes, then reprograms the DMA engine to read in the last 24 bytes. The second TC reprograms it to read the header again. This method should allow for variable-length messages using DMA. However, I cannot seem to be able to change the DMA start address. I'd like to store each message in a separate buffer, but it just overwrites the first buffer upon receiving each message. What am I doing wrong? The microcontroller is an STM32F103ZE (Cortex-M3).
void DMA1_Channel3_IRQHandler(void)
{
    uint32_t tmpreg = 0;

    /* Test on DMA1 Channel3 Transfer Complete interrupt */
    if(DMA_GetITStatus(DMA1_IT_TC3))
    {
        if (rx3_dma_state == RECIEVE_HEADER)
        {
            DMA_Cmd(DMA1_Channel3, DISABLE);   /* Disable DMA1_Channel3 transfer */
            // Now that we have received the header, configure the DMA engine to receive
            // the data portion of the message
            DMA_SetCurrDataCounter(DMA1_Channel3, 26);
            DMA_Cmd(DMA1_Channel3, ENABLE);
        }
        else if (rx3_dma_state == RECIEVE_MSG)
        {
            //CurrDataCounterEnd3 = DMA_GetCurrDataCounter(DMA1_Channel3);
            DMA_Cmd(DMA1_Channel3, DISABLE);
            /* Get Current Data Counter value after complete transfer */
            USART3_Rx_DMA_Channel_Struct->CMAR = (uint32_t) RxBuffer3LookUp[rx_buffer_index];
            DMA_SetCurrDataCounter(DMA1_Channel3, 6);
            DMA_Cmd(DMA1_Channel3, ENABLE);

            // Set up DMA to write into the next buffer slot (1 of 8)
            ++rx_buffer_index;
            rx_buffer_index %= 8;
        }

        /* Clear DMA1 Channel3 Half Transfer, Transfer Complete and Global interrupt pending bits */
        DMA_ClearITPendingBit(DMA1_IT_GL3);
    }

    // Update the state to read the fake header of 6 bytes
    // or the fake data section of the message (26 bytes)
    ++rx3_dma_state;
    rx3_dma_state %= 2;

    isr_sem_send(dma_usart3_rx_sem);
}
You say:
...reprograms the DMA engine to read in the last 24 bytes.
but your code says:
DMA_SetCurrDataCounter(DMA1_Channel3,26);
Is that right? Is it 24 or 26?