AVR ATmega128 serial communication - C

I have a problem connecting the ATmega128 serial port to a computer via a USB-serial converter. The converter itself is verified: I have used it to connect the computer to a CDMA modem. However, when I try to connect it to the ATmega128 I can't figure out the problem. I have connected the ATmega128 to a serial LCD (CLCD) and it works fine, and even in simulation with a virtual terminal there is no problem. I would like to know if I have missed anything related to the serial port. I have already checked the baud rate in the hardware options and in the virtual terminal.
Here is the code.
#include <avr/io.h>
#include <util/delay.h>

char str1[] = "AT\r\n";
char str2[] = "AT+CMGF=1\r\n";
char str3[] = "AT+CMGS=\"01068685673\"\r\n";
char str4[] = "hello\x1A\r\n";   /* 0x1A = Ctrl-Z terminates the SMS body */
int i;

void TX_CHAR(char ch)
{
    while (!(UCSR1A & 0x20));   /* poll bit 5, UDRE1: wait for data register empty */
    UDR1 = ch;
}
int main()
{
    UBRR1H = 0; UBRR1L = 103;   /* 9600 baud, assuming a 16 MHz clock (U2X1 = 0) */
    UCSR1B = 0x08;              /* TXEN1 only: transmitter enabled, receiver disabled */
    UCSR1C = 0b00000110;        /* async, 8 data bits, no parity, 1 stop bit */
    while (1)
    {
        i = 0; while (str1[i]) TX_CHAR(str1[i++]);
        _delay_ms(200);
        i = 0; while (str2[i]) TX_CHAR(str2[i++]);
        _delay_ms(200);
        i = 0; while (str3[i]) TX_CHAR(str3[i++]);
        _delay_ms(200);
        i = 0; while (str4[i]) TX_CHAR(str4[i++]);
        _delay_ms(3000);
    }
}

Things to check:
hardware - wiring
value of the M103C fuse (ATmega103 compatibility mode)
XTAL frequency and prescalers, since the baud rate depends on them: BAUD = Fosc / (16 * (UBRR + 1)); e.g. a 16 MHz crystal with UBRR = 103 gives 16000000 / (16 * 104) ≈ 9615, i.e. 9600 baud
USART double-speed flag (U2X1 in UCSR1A)
frame format
UDREn flag set
You may also get better insight into your code if you use the predefined symbolic values, e.g.
/* Enable receiver and transmitter */
UCSRB = (1<<RXEN)|(1<<TXEN);
see the examples in the data sheet (pg 176ff)
On the frame format, I understand you are set to async, 8-bit (UCSR1B bit 2 = 0, UCSR1C bits 2:1 = 11), parity disabled, 1 stop bit.
In void TX_CHAR(char ch), the mask 0x20 selects bit 5 (UDRE1, data register empty), which is the correct flag to poll before writing UDR1 - not RXC1 (bit 7). Note also that you don't have RX enabled (RXEN1, i.e. UCSR1B bit 4, is 0), so you transmit AT commands but can never receive the modem's responses. Again ... using the symbolic values would make the code much easier to verify.
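For example, a minimal sketch of the same init and transmit using the symbolic names from <avr/io.h> (the 16 MHz clock implied by UBRR = 103 at 9600 baud is my assumption):
#include <avr/io.h>

static void usart1_init(void)
{
    UBRR1H = 0;
    UBRR1L = 103;                            /* 9600 baud at an assumed 16 MHz, U2X1 = 0 */
    UCSR1B = (1 << TXEN1);                   /* transmitter only, as in the original */
    UCSR1C = (1 << UCSZ11) | (1 << UCSZ10);  /* async, 8 data bits, no parity, 1 stop bit */
}

static void usart1_tx(char ch)
{
    loop_until_bit_is_set(UCSR1A, UDRE1);    /* wait until the data register is empty */
    UDR1 = ch;
}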
Hope this helps ...


Error occurs when I transmit data with USART

I am stuck on the following problem. Consider this code:
int main(void)
{
SysTickInit();
USART_GPIOInits();
USART_Inits();
char data[] = "hello\n";
for(uint8_t i=0; i<10; i++)
{
HAL_UART_Transmit(&Usart1, (uint8_t*)data, strlen(data), 1000);
}
while(1){}
}
I am trying to send hello\n to Hercules 10 times, but Hercules does not receive what I sent.
Instead, Hercules shows a þ character the first time after each MCU reset. But when I run under the debugger, it does not get any error.
The transmit function and the init function are below. I ultimately want to communicate with a fingerprint module, but because of this problem I can't.
Change the order of initialization.
From
USART_GPIOInits();
USART_Inits();
to
USART_Inits();
USART_GPIOInits();
A UART's idle line state is logic high; logic low (a start bit) launches a new transfer.
When the GPIO is initialized first, with the corresponding peripheral module still disabled, you will most likely get a logic low level on the TX pin, because there is nothing to drive it high (the UART is still disabled). When the UART is then initialized, it sets the TX line to logic high (the stop-bit level), and the terminal application receives that transition as a broken byte.
Check your schematics
During and after reset, CPU outputs are tri-stated. Most likely they will stay at a low level until the configuration code does its job, leading to the same issue: a garbage byte received after each reset.
To prevent this, the voltage levels on the interface pins must be defined during the reset phase with external pull-up resistors, e.g. 10 kΩ from the TX and RX pins to VCC.
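On the software side, a related mitigation (my sketch, not part of the answer above; the PA9/USART1/AF7 mapping is an assumption about the board) is to enable the pin's internal pull-up when configuring the GPIO, so the TX line sits at its idle level even before the UART is enabled:
GPIO_InitTypeDef gpio = {0};

gpio.Pin       = GPIO_PIN_9;            /* assumed TX pin: PA9 / USART1 */
gpio.Mode      = GPIO_MODE_AF_PP;       /* alternate function, push-pull */
gpio.Pull      = GPIO_PULLUP;           /* holds the line high during init */
gpio.Speed     = GPIO_SPEED_FREQ_HIGH;
gpio.Alternate = GPIO_AF7_USART1;       /* assumed STM32F4-style AF mapping */
HAL_GPIO_Init(GPIOA, &gpio);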

Mixed voltage reading from different AD7606 channels

Please help! I am using the FSMC to connect an STM32F407 MCU to an AD7606 to sample voltages. The MCU sends the sampled values to a PC over the USB HS port after every 1024 conversions. But when I inspect the values on the PC, I find that readings from channel 0 occasionally contain data from other channels. For example, if I connect channel 0 to 5 V, channel 8 to 3.3 V, and the other channels to ground, then the values printed for channel 0 contain 5 V, 0 V and 3.3 V. The basic setup is as follows:
A 200 kHz PWM signal generated by TIM10 acts as the CONVST signal for the AD7606.
The AD7606 then issues a BUSY signal, which I use as an external interrupt source.
In the interrupt handler, a DMA request is issued to read 8 16-bit values from the
FSMC address space into memory. The TIM10 PWM is stopped once 1024
conversions have been done.
In the DMA XFER_CPLT callback, if 1024 conversions have been done, the converted
data is sent out over the USB HS port and the TIM10 PWM is enabled again.
Some code blocks:
void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
if(GPIO_Pin == GPIO_PIN_7)
{
// DMA data from FSMC to memory
HAL_DMA_Start_IT(&hdma_memtomem_dma2_stream0, 0x6C000000, (uint32_t)(adc_data + adc_data_idx) , 8);
adc_data_idx += 8;
if (adc_data_idx >= ADC_DATA_SIZE)
HAL_TIM_PWM_Stop(&htim10, TIM_CHANNEL_1);
}
}
void dma_done(DMA_HandleTypeDef *_hdma)
{
int i;
int ret;
// adc_data[adc_data_idx] should always contain data from
// channel 1; led1 wouldn't light if everything were fine.
if (adc_data[adc_data_idx] < 0x7f00 )
HAL_GPIO_WritePin(led1_GPIO_Port, led1_Pin, GPIO_PIN_SET);
if (adc_data_idx >= ADC_DATA_SIZE)
{
if(hUsbDeviceHS.dev_state == USBD_STATE_CONFIGURED)
{
// if I don't call CDC_Transmit_HS, everything is fine.
ret = CDC_Transmit_HS((uint8_t *)(adc_data), ADC_DATA_SIZE * 2 );
if (ret != USBD_OK)
{
HAL_GPIO_WritePin(led1_GPIO_Port, led2_Pin, GPIO_PIN_SET);
}
}
adc_data_idx = 0;
HAL_TIM_PWM_Start(&htim10, TIM_CHANNEL_1);
}
}
It seems that a single USB transaction takes longer than 5 µs (one conversion time), so I stop the PWM signal to pause conversions...
If I send only the second half of the data buffer, there is no data mixing. It's very strange.
Based on your description, I think the processing is correct and the problem is in CDC_Transmit_HS(). I have met this problem with CDC_Transmit_FS(), which could not transmit more than 64 bytes of data with the originally generated code and needed some modifications, otherwise errors occurred. Did you check that the number of received bytes is correct?
Reference:
I can't receive more than 64 bytes on custom USB CDC class based STM32 device
I don't know what your ADC_DATA_SIZE is; if the transfer is larger than 64 bytes, maybe you can reduce it to below 64 bytes, try again, and check whether the data is then correct. I am not sure this is the cause, but I think you can give it a try.
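To test that idea without changing ADC_DATA_SIZE, a rough sketch (mine, not from the question; the 64-byte chunk size and the busy-wait are assumptions, and busy-waiting inside a DMA callback is only acceptable for a quick experiment) is to split the transfer and wait out USBD_BUSY between packets:
uint8_t  *p    = (uint8_t *)adc_data;
uint32_t  left = ADC_DATA_SIZE * 2;             /* total bytes to send */

while (left > 0)
{
    uint16_t chunk = (left > 64) ? 64 : left;   /* stay at or below 64 bytes */
    while (CDC_Transmit_HS(p, chunk) == USBD_BUSY)
        ;                                       /* wait for the previous packet to finish */
    p    += chunk;
    left -= chunk;
}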
On the other hand, it may also be necessary to ground the unused analog input pins of the AD7606 to avoid crosstalk between channels.
Or you can try another interface (I²C, SPI, UART, etc.) to send the data.
If the other interfaces work without problems, there is a high chance that it is a problem with CDC_Transmit_HS(). If they show the same problem, you should check whether the ADC conversion time and the transmission time conflict.

Problem utilizing the SPI module of the BeagleBone Black

I have the following weird problem.
I have set up the BBB to activate the SPI1 module. The module is connected to an F-RAM chip (FM25CL64B). I have done all the necessary configuration: /dev/spidev1.0 is present, and I wrote a small program that writes to and reads from the chip by opening /dev/spidev1.0 and using ioctl with the SPI_IOC_MESSAGE macro command.
Using that program I managed to successfully write 32 bytes of text into the F-RAM chip. Reading also seemed to be successful... How do I know they both were successful? I used a logic analyzer with an SPI decoder to see what is actually going on on all four SPI lines. Monitoring them, I could see that the write and read operations generate the correct signals with the correct timing, and all signals are in sync: CS enables the chip during the transaction, CLK clocks every byte in 8-bit words (as configured), and the data lines show the correct values, which I could verify thanks to the SPI decoder showing the byte value right above each 8-bit signal sequence on both the MOSI and MISO lines.
The problem is that even though I can see the correct information being sent over the MISO line during the read operation, the buffer I pass to ioctl(iSPIR, SPI_IOC_MESSAGE(2), xfer) is filled with zeros.
I intentionally initialized that buffer with other values so I could see whether ioctl writes into it at all. It does. Zeros.
Now, the fact that I can see all the bytes on the MISO line during the read operation proves that the earlier write operation not only looked right on the analyzer but actually stored the intended data.
I checked several times whether the MISO line is configured correctly in the dts file (which I can rebuild and reinstall on demand): it is the correct pin, and it is configured as an input. Everything seemed to be configured correctly.
I ran the program as root in case there were permission issues - no difference.
I also implemented the SPI communication in GPIO mode, i.e. with the SPI module disabled and all lines configured as GPIO: CE, CLK and MOSI as outputs, MISO as an input. This way I implemented the entire protocol in software and had full control over the lines. Doing that, I was able to fill a buffer with the correct data from the F-RAM chip: the sequential read went fine all the way from the chip to my user-space buffer, and I could print the data to the console. However, it was far too slow, and it is inefficient to bit-bang SPI in software when there is a hardware module available.
In order to write my sample program I used the open-source spi_test.c example available online.
I also built and ran spi_test.c itself with no modification; same result.
Here is a listing of my program (relevant snippets):
// SPI config ...
int InitSPIReadMode(const char* pstrDeviceF)
{
int file;
__u8 wr_mode = SPI_MODE_0, rd_mode = SPI_MODE_0, lsb = 0, bits = 8;
__u32 speed = CLOCK_FREQ_HZ; // 500kHz
if((file = open(pstrDeviceF, O_RDWR)) < 0)
{
printf("Failed to open the bus.");
/* ERROR HANDLING; you can check errno to see what went wrong */
exit(1);
}
if(ioctl(file, SPI_IOC_RD_MODE, &rd_mode) < 0)
{
printf("SPI rd_mode\n");
return -1;
}
if(ioctl(file, SPI_IOC_RD_LSB_FIRST, &lsb) < 0)
{
printf("SPI rd_lsb_fist\n");
return -1;
}
if(ioctl(file, SPI_IOC_RD_BITS_PER_WORD, &bits) < 0)
{
printf("SPI bits_per_word\n");
return -1;
}
if(ioctl(file, SPI_IOC_RD_MAX_SPEED_HZ, &speed) < 0)
{
printf("SPI max_speed_hz\n");
return -1;
}
printf("%s: spi wr-mode=%d, spi rd-mode=%d, %d bits per word, %s, %d Hz max\n", pstrDeviceF, wr_mode, rd_mode, bits, lsb ? "(lsb first) " : "(msb first)", speed);
xfer[0].cs_change = 0; /* Keep CS activated */
xfer[0].delay_usecs = 0; //delay in us
xfer[0].speed_hz = CLOCK_FREQ_HZ; //speed
xfer[0].bits_per_word = 8; // 8 bits per word
xfer[1].cs_change = 0; /* Keep CS activated */
xfer[1].delay_usecs = 0;
xfer[1].speed_hz = CLOCK_FREQ_HZ;
xfer[1].bits_per_word = 8;
return file;
}
In the main function (as the logic analyzer shows, this code correctly sends the command and address, then clocks the 32 bytes of data):
int iSPIR = InitSPIReadMode("/dev/spidev1.0"); //open("/dev/spidev1.0", O_RDWR | O_SYNC);
char arrInstruct[3] = { OPCO_READ, 0x00, 0x00 };
char arrFRamData[512];
for(int pos = 0; pos < 512; pos++) arrFRamData[pos] = pos;
xfer[0].tx_buf = (unsigned long)arrInstruct;
xfer[0].len = 3;
xfer[1].rx_buf = (unsigned long)arrFRamData;
xfer[1].len = 32;
if(ioctl(iSPIR, SPI_IOC_MESSAGE(2), xfer) < 0) printf("ioctl write error %s.\n", strerror(errno));
// hex dumping of the arrFRamData buffer.
xfer is a global variable defined as:
struct spi_ioc_transfer xfer[2];
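One more observation, for what it's worth: InitSPIReadMode only queries the driver's current settings with the SPI_IOC_RD_* ioctls; it never sets anything. If the driver defaults ever differ from what the chip needs, the write variants set them explicitly (a sketch reusing the names from the listing above):
__u8  mode  = SPI_MODE_0;
__u8  bits  = 8;
__u32 speed = CLOCK_FREQ_HZ;                           /* 500 kHz, as above */

if (ioctl(file, SPI_IOC_WR_MODE, &mode) < 0)           /* set the SPI mode */
    perror("SPI_IOC_WR_MODE");
if (ioctl(file, SPI_IOC_WR_BITS_PER_WORD, &bits) < 0)  /* set the word size */
    perror("SPI_IOC_WR_BITS_PER_WORD");
if (ioctl(file, SPI_IOC_WR_MAX_SPEED_HZ, &speed) < 0)  /* set the max clock */
    perror("SPI_IOC_WR_MAX_SPEED_HZ");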
Thanks a lot in advance! :)
I found what the problem was with my setup. But even though the whole thing works now, the solution I applied raises more questions than answers.
I found the solution in an article on the BeagleBone wiki (https://elinux.org/BeagleBone_Black_Enable_SPIDEV). There I noticed that in the device-tree overlay they set CLK as an input. Reading the entire article turned up nothing about why the clock on the BBB's side has to be an input even though it's the master... The article only explains how to build and install the new DTBO in order to activate the SPI1 module.
So, even though it didn't make any sense to me, I thought it wouldn't hurt to change the CLK line in my DTBO file from output to input and see what happens... And it worked! :O
Now, why is it so strange that the setup works like this? The BBB is supposed to be the SPI master, so its clock line should be an output in order to drive this synchronous communication. That means the FM25CL64B chip must act as the SPI slave, and its datasheet confirms that CLK on the chip's side is an input. So CLK on the BBB must be an output, which is how I had originally configured it. And it didn't work.
This is why I'm so puzzled: it works even though both ends of the CLK line are inputs?! Looking at the logic analyzer's output, I can clearly see that the CLK line is being driven correctly, but if both master and slave had inputs on that line, there should be nothing to generate those pulses - and there is nothing else connected to that line... (A likely explanation: on the AM335x, the pinmux "input" flag merely enables the pad's receiver without disabling its driver, and the SPI controller needs the receiver enabled on SCLK to clock its own receive logic, so the pin still acts as a driven output.)

Async USART io_read always null on SAMD20J16

I'm fairly new to the Atmel Software Framework; I usually use MPLAB/PIC devices, but for this project I needed to build a UART router capable of connecting 6 UART devices together, so I chose a SAMD20J16. So far I've configured the drivers in Atmel START with the USART HAL_ASYNC drivers for all USART modules.
I've gotten the device to write to all 6 USART modules and gotten the read interrupt to trigger, but I am unable to read from the USART buffer. For now I've stripped it down to the minimal code possible to try to find out why it's not working; it's probably a simple oversight on my part, but I've been searching the internet for 2 days and all the documentation I can find on the HAL_ASYNC drivers in ASF4 is flaky at best.
#include <atmel_start.h>
int main(void)
{
int nRead=0;
uint8_t rxPData;
/* Initializes MCU, drivers and middleware */
atmel_start_init();
// Declare UART structures
struct io_descriptor *uartD1;
usart_async_get_io_descriptor(&USART_0, &uartD1);
usart_async_enable(&USART_0);
struct io_descriptor *uartP;
usart_async_get_io_descriptor(&USART_1, &uartP);
usart_async_enable(&USART_1);
struct io_descriptor *uartD3;
usart_async_get_io_descriptor(&USART_2, &uartD3);
usart_async_enable(&USART_2);
struct io_descriptor *uartD4;
usart_async_get_io_descriptor(&USART_3, &uartD4);
usart_async_enable(&USART_3);
struct io_descriptor *uartS;
usart_async_get_io_descriptor(&USART_4, &uartS);
usart_async_enable(&USART_4);
struct io_descriptor *uartD2;
io_write(uartP, (uint8_t *) "UART P\r\n", 8);
usart_async_get_io_descriptor(&USART_5, &uartD2);
usart_async_enable(&USART_5);
void uartP_callback(const struct usart_async_descriptor *const io_descriptor){
if(usart_async_is_rx_not_empty(uartP)){
io_write(uartP, (uint8_t *) "TEST P\r\n", 8);
nRead=io_read(uartP,&rxPData,6);
io_write(uartP,&rxPData,nRead);
}
}
usart_async_register_callback(&USART_1, USART_ASYNC_RXC_CB, uartP_callback);
while (1) {
}
}
Right now the observed behavior is that any time I send a UART packet it triggers the interrupt, passes the usart_async_is_rx_not_empty() check, and sends back "TEST P\r\n". I added that write after some testing just to see that the interrupt was triggering.
However, no matter what input size I send, it never transmits back any of the data that was sent to it.
Right now I'm simply trying to verify that I can receive and transmit on all USART ports; the final code won't have transmits inside the receive interrupts.
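For comparison, a minimal echo version of the callback as I would try it (assumptions of mine: uartP is moved to file scope so the callback can see it without relying on a nested function, the ASF4 RXC callback fires once per received byte, and io_read() returns the number of bytes actually taken from the driver's ring buffer):
static struct io_descriptor *uartP;   /* filled in by usart_async_get_io_descriptor() */

static void uartP_callback(const struct usart_async_descriptor *const descr)
{
    uint8_t byte;

    /* drain whatever is available, one byte at a time, and echo it back */
    while (io_read(uartP, &byte, 1) == 1) {
        io_write(uartP, &byte, 1);
    }
}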

Writing Device Drivers for Microcontrollers, where to define IO Port pins?

I always seem to encounter this dilemma when writing low-level code for MCUs: I never know where to declare pin definitions so as to make the code as reusable as possible.
In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC. I'm unsure how and where I should declare the pin definitions for the CS (chip select) and LDAC (data latch) pins of the DAC. At the moment they're declared in the driver's header file.
I've done a lot of research trying to figure out the best approach but haven't really found anything. I basically want to know the best practices; if there are books worth reading, online information, examples, etc., any recommendations would be welcome.
Just a snippet of the driver so you get the idea:
/**
 * @brief Writes a 16-bit word to DAC B: 12 data bits plus 4 configuration bits.
 * @param dac_data A 12-bit word
 * @param ip_buf_unbuf_select Input buffered/unbuffered select bit. Buffered = 1; unbuffered = 0
 * @param gain_select Output gain selection bit. 1 = 1x (VOUT = VREF * D/4096); 0 = 2x (VOUT = 2 * VREF * D/4096)
 */
void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select)
{
unsigned char low_byte=0, high_byte=0;
CS = 0; /**Select the chip*/
high_byte |= ((0x01 << 7) | (0x01 << 4)); /**Select DAC B (A/B bit = 1) and set SHDN high for active operation*/
if(ip_buf_unbuf_select) high_byte |= (0x01 << 6);
if(gain_select) high_byte |= (0x01 << 5);
high_byte |= ((dac_data >> 8) & 0x0F);
low_byte |= dac_data;
SPI_master_byte(high_byte);
SPI_master_byte(low_byte);
CS = 1;
LDAC = 0; /**Latch the Data*/
LDAC = 1;
}
This is what I did in a similar case; this example is for an I²C driver:
// Structure holding information about an I²C bus
struct IIC_BUS
{
int pin_index_sclk;
int pin_index_sdat;
};
// Initialize I²C bus structure with pin indices
void iic_init_bus( struct IIC_BUS* iic, int idx_sclk, int idx_sdat );
// Write data to an I²C bus, toggling the bits
void iic_write( struct IIC_BUS* iic, uint8_t iicAddress, uint8_t* data, uint8_t length );
All pin indices are declared in an application-dependent header file to allow a quick overview, e.g.:
// ...
#define MY_IIC_BUS_SCLK_PIN 12
#define MY_IIC_BUS_SDAT_PIN 13
#define OTHER_PIN 14
// ...
In this example, the I²C bus implementation is completely portable. It only depends on an API that can write to the chip's pins by index.
Edit:
This driver is used like this:
// main.c
#include "iic.h"
#include "pin-declarations.h"
int main(void)
{
struct IIC_BUS mybus;
iic_init_bus( &mybus, MY_IIC_BUS_SCLK_PIN, MY_IIC_BUS_SDAT_PIN );
// ...
iic_write( &mybus, 0x42, some_data_buffer, buffer_length );
}
In one shop I worked at, the pin definitions were put into a processor-specific header file. At another shop, I broke the header files into themes associated with modules in the processor, such as DAC, DMA and USB. A master include file for the processor included all of these themed header files. We could model different varieties of the same processor by including different module header files in the processor file.
You could create an implementation header file. This file would define the I/O pins in terms of the processor header file, which gives you one layer of abstraction between your application and the hardware. The idea is to couple the application to the hardware as loosely as possible; a sketch of this layering follows.
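A minimal sketch of that implementation header (every name here is invented for illustration):
/* io_map.h - application-specific pin mapping */
#include "mcu_pins.h"          /* hypothetical processor-specific header */

#define DAC_CS    MCU_P1_4     /* chip select for the MCP4922 */
#define DAC_LDAC  MCU_P1_5     /* data latch strobe for the MCP4922 */
The driver and application then refer only to DAC_CS and DAC_LDAC, so a port to new hardware touches only this one file.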
If only the driver needs to know about the CS pin, then the declaration should not appear in the header but within the driver module itself. Code reuse is best served by hiding data at the most restrictive scope possible.
In the event that an external module needs to control CS, add an access function to the device-driver module so that you have single-point control. This is useful if, during debugging, you need to know where and when an I/O pin is being asserted; you have only one point at which to apply instrumentation or breakpoints, as sketched below.
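For instance (the function name is mine; CS is the driver's pin as in the snippet above):
/* in the driver's .c file - CS itself stays private to the module */
void mcp4922_set_cs(unsigned char level)
{
    CS = level;   /* single point of control: one breakpoint catches every change */
}
External modules call mcp4922_set_cs() instead of touching the pin directly, so the pin binding can change without touching the callers.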
The answer with the run-time configuration will work for a decent CPU like an ARM or PowerPC, but the author is running an 8051 here, so #define is probably the best way to go. Here's how I would break it down:
blah.h:
#define CSN_LOW() CS = 0
#define CSN_HI() CS = 1
#define LATCH_STROBE() \
do { LDAC = 0; LDAC = 1; } while (0)
blah.c:
#include "blah.h"
void blah_update( U8 high, U8 low )
{
CSN_LOW();
SPI_master_byte(high);
SPI_master_byte(low);
CSN_HI();
LATCH_STROBE();
}
If you need to change the pin definitions or move to a different CPU, it should be obvious where you need to update the code. It also helps when you have to adjust the timing on the bus (i.e. insert a delay here and there), as you don't need to make changes all over the place. Hope it helps.
