I'm making a remote-controlled machine using a Pi Pico to drive the motors and read some sensors, and a Raspberry Pi 4 to send commands to the Pico via serial and host the web interface.
I'm working on sending and receiving commands from the Raspberry Pi, and for now I'm stuck with this code:
#include <string.h>
#include "pico/stdlib.h"
#include "hardware/uart.h"
#include "hardware/irq.h"

#define UART_ID uart0
#define BAUD_RATE 19200
#define DATA_BITS 8
#define STOP_BITS 1
#define PARITY UART_PARITY_NONE

#define UART_TX_PIN 0
#define UART_RX_PIN 1

static int chars_rxed = 0;
char uCommand[32] = {0, 0};

void on_uart_rx() {
    char tmp_string[] = {0, 0};
    while (uart_is_readable(UART_ID)) {
        uint8_t ch = uart_getc(UART_ID);
        tmp_string[0] = ch;
        strcat(uCommand, tmp_string);
        if (uart_is_writable(UART_ID)) {
            uart_putc(UART_ID, '-');
            uart_puts(UART_ID, uCommand);
            uart_putc(UART_ID, '-');
        }
        chars_rxed++;
    }
}

int main() {
    uart_init(UART_ID, BAUD_RATE);
    gpio_set_function(UART_TX_PIN, GPIO_FUNC_UART);
    gpio_set_function(UART_RX_PIN, GPIO_FUNC_UART);
    uart_set_hw_flow(UART_ID, false, false);
    uart_set_format(UART_ID, DATA_BITS, STOP_BITS, PARITY);
    uart_set_fifo_enabled(UART_ID, false);

    int UART_IRQ = UART_ID == uart0 ? UART0_IRQ : UART1_IRQ;
    irq_set_exclusive_handler(UART_IRQ, on_uart_rx);
    irq_set_enabled(UART_IRQ, true);
    uart_set_irq_enables(UART_ID, true, false);

    uart_puts(UART_ID, "\nOK\n");

    while (1) {
        tight_loop_contents();
        if (uCommand[0] != 0) {
            uart_putc(UART_ID, '/');
            uart_puts(UART_ID, uCommand);
            uart_putc(UART_ID, '/');
        }
    }
}
My idea was to take the command sent via serial during the interrupt and store it in a char array, then parse it and execute it outside the handler.
Trying it, I notice that it never enters the if inside the while loop, and uCommand never fills completely: it only holds a few characters compared to the ones sent.
I hope my question is not off topic.
You should declare uCommand (and any other shared objects) volatile.
The while-loop is waiting for uCommand[0] != 0 to become true, but the main thread does not modify uCommand[0] so the compiler is free to "optimise" away the entire block of code if it "knows" it can never be true.
Similarly chars_rxed may get optimised away because you write but never read it.
Declaring it volatile will also prevent that.
The optimisation of apparently redundant accesses usually only occurs at -O1 or higher, but you should use volatile in such circumstances regardless.
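For instance, a minimal sketch of just the shared declarations (note that strcat() takes a plain char*, so with a volatile buffer you would append characters by hand; the helper below is hypothetical):

// Shared between the ISR and main(): volatile forces the compiler to
// re-read these on every access instead of caching a stale copy.
static volatile int chars_rxed = 0;
static volatile char uCommand[32] = {0};

// strcat() won't accept a volatile char*, so append by hand:
static void uCommand_append(char c) {
    int i = 0;
    while (i < 31 && uCommand[i] != '\0') i++; // find the current end
    if (i < 31) {
        uCommand[i] = c;
        uCommand[i + 1] = '\0';
    }
}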
Related
I'm using MikroC for PIC v7.2, to program a PIC18f67k40.
Within functii.h, I have the following variable declaration:
extern volatile unsigned char byte_count;
Within main.c, the following code:
#include <functii.h>
// ...
volatile unsigned char byte_count = 0;
// ...
void interrupt() {
    if (RC1IF_bit) {
        uart_rx = Uart1_read();
        uart_string[byte_count] = uart_rx;
        byte_count++;
    }
    // ...
}
Then, within command.c, I have the following code:
#include <functii.h>
void how_many_bytes() {
    // ...
    uart1_write(byte_count);
    // ...
}
In main.c, I process data coming through the UART, using an interrupt. Once the end-of-transmission character is received, I call how_many_bytes(), which sends back the length of the message that was received (plus the data bytes themselves; I haven't included that code here, but it all works).
The problem is that on the uart1_write() call, byte_count is always 0, instead of having been incremented in the interrupt sequence.
Probably you need some synchronization between the interrupt handler and the main processing.
If you do something like this
if (byte_count != 0) {
    uart1_write(byte_count);
    byte_count = 0;
}
the interrupt can occur anywhere, for example:
1. between if (byte_count != 0) and uart1_write(byte_count);, or
2. during the processing of uart1_write(byte_count);, which uses a copy of the old value while the value gets changed, or
3. between uart1_write(byte_count); and byte_count = 0;.
With the code above, case 1 is no problem, but cases 2 and 3 are: you would lose all characters received after byte_count was read for the function call.
Maybe you can disable/enable interrupts at certain points.
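With MikroC on PIC18 that could look roughly like this (a sketch; GIE_bit is assumed to be the global interrupt enable bit, so check your compiler's register names):

unsigned char count_snapshot;

GIE_bit = 0;                 // mask interrupts only for the copy
count_snapshot = byte_count; // consistent snapshot of the counter
byte_count = 0;
GIE_bit = 1;                 // the UART interrupt can fire again

if (count_snapshot != 0) {
    uart1_write(count_snapshot);
}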
A better solution might be to not reset byte_count outside of interrupt() but instead implement a ring buffer with separate read and write index. The read index would be modified by how_many_bytes() (or uart1_write()) only and the write index by interrupt() only.
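A sketch of that idea (names and the buffer size are made up here; overflow checking is omitted for brevity):

#define RX_BUF_SIZE 64

volatile unsigned char rx_buf[RX_BUF_SIZE];
volatile unsigned char rx_write_index = 0; // modified only by interrupt()
volatile unsigned char rx_read_index = 0;  // modified only by the consumer

// Inside interrupt(), instead of uart_string[byte_count++] = uart_rx:
void rx_store(unsigned char uart_rx)
{
    rx_buf[rx_write_index] = uart_rx;
    rx_write_index = (rx_write_index + 1) % RX_BUF_SIZE;
}

// Consuming side: how many bytes are waiting; no reset needed.
unsigned char rx_available(void)
{
    if (rx_write_index >= rx_read_index)
        return rx_write_index - rx_read_index;
    return RX_BUF_SIZE - rx_read_index + rx_write_index;
}

unsigned char rx_get(void)
{
    unsigned char c = rx_buf[rx_read_index];
    rx_read_index = (rx_read_index + 1) % RX_BUF_SIZE;
    return c;
}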
Using Atmel Studio 7, with an STK600 and a 32UC3C MCU.
I'm pulling my hair out over this.
I'm sending strings of variable size over UART once every 5 seconds. The string consists of one letter as an opcode, followed by two chars that give the length of the following data string (without a terminating zero; there is never a zero at the end of any of these strings). In most cases the string will be 3 chars in size, because it carries no data ("p00").
After investigating, I found out that what was supposed to be "p00" was in fact "0p0" or "00p" (only on the first try after restarting the micro was it "p00"). I looked it up in the memory view of the debugger. Then I started hTerm and confirmed that the data sent was in fact "p00". So after a while hTerm showed me "p00p00p00p00p00p00p00..." while the memory of my circular UART buffer reads "p000p000p0p000p0p000p0p0..."
edit: Actually "0p0" and "00p" are alternating.
The baud rate is 9600. In the past I was only sending single letters, so everything was running well.
This is the code of the Receiver Interrupt:
I tried different variations of the code that all did the same thing in different ways, but all of them showed the exact same behavior.
lastWebCMDWritePtr is a uint8_t* type and so is lastWebCMDRingstartPtr.
lastWebCMDRingRXLen is a uint8_t type.
__attribute__((__interrupt__))
void UartISR_forWebserver()
{
    *(lastWebCMDWritePtr++) = (uint8_t)((&AVR32_USART0)->rhr & 0x1ff);
    lastWebCMDRingRXLen++;

    if (lastWebCMDWritePtr - lastWebCMDRingstartPtr > lastWebCMDRingBufferSIZE)
    {
        lastWebCMDWritePtr = lastWebCMDRingstartPtr;
    }

    // Variation 2:
    // advanceFifo((uint8_t)((&AVR32_USART0)->rhr & 0x1ff));

    // Variation 3:
    // if(usart_read_char(&AVR32_USART0, getReadPointer()) == USART_RX_ERROR)
    // {
    //     usart_reset_status(&AVR32_USART0);
    // }
};
I welcome any of your ideas and advice.
Regards, Someo
P.S. I put the Atmel studio tag in case this has something to do with the myriad of debugger bugs of AS.
For a complete picture you would have to show where and how lastWebCMDWritePtr, lastWebCMDRingRXLen, lastWebCMDRingstartPtr and lastWebCMDRingBufferSIZE are used elsewhere (on the consuming side).
Also, I would first try a simpler ISR with no dependencies on other software modules, to rule out a hardware or register-handling problem.
Approach:
#define USART_DEBUG
#define DEBUG_BUF_SIZE 30

__attribute__((__interrupt__))
void UartISR_forWebserver()
{
    uint8_t rec_byte;
#ifdef USART_DEBUG
    static volatile uint8_t usart_debug_buf[DEBUG_BUF_SIZE]; // circular buffer for debugging
    static volatile int usart_debug_buf_index = 0;
#endif

    rec_byte = (uint8_t)((&AVR32_USART0)->rhr & 0x1ff);

#ifdef USART_DEBUG
    usart_debug_buf_index = usart_debug_buf_index % DEBUG_BUF_SIZE;
    usart_debug_buf[usart_debug_buf_index] = rec_byte;
    usart_debug_buf_index++;
    if (!(usart_debug_buf_index < DEBUG_BUF_SIZE)) {
        usart_debug_buf_index = 0; // candidate for a breakpoint to see what happened in the past
    }
#endif

    //uart_recfifo_enqueue(rec_byte);
};
I'm writing a driver for a GSM modem running on an ARM Cortex M0. The only UART on the system is in use for talking to the modem, so the best I can do for logging the UART conversation with the modem is to build up a string in memory and watch it with GDB.
Here are my UART logging functions.
// Max number of characters used in the UART log, when in use.
#define GSM_MAX_UART_LOG_CHARS (2048)

static char m_gsm_uart_log[GSM_MAX_UART_LOG_CHARS] = "";
static uint16_t m_gsm_uart_log_index = 0;

// Write a character to the in-memory log of all UART messages.
static void gsm_uart_log_char(const char value)
{
    m_gsm_uart_log_index++;
    if (m_gsm_uart_log_index > GSM_MAX_UART_LOG_CHARS)
    {
        // Clear and restart log.
        memset(&m_gsm_uart_log, 0, GSM_MAX_UART_LOG_CHARS); // <-- Breakpoint here
        m_gsm_uart_log_index = 0;
    }
    m_gsm_uart_log[m_gsm_uart_log_index] = value;
}

// Write a string to the in-memory log of all UART messages.
static void gsm_uart_log_string(const char *value)
{
    uint16_t i = 0;
    char ch = value[i++];
    while (ch != '\0')
    {
        gsm_uart_log_char(ch);
        ch = value[i++];
    }
}
If I set a breakpoint on the line shown above, the first time it's reached, m_gsm_uart_log_index is already well over 2048. I've seen 2154 and a bunch of other values between 2048 and 2200 or so.
How is this possible? There's no other code that touches m_gsm_uart_log_index anywhere.
You have a buffer overflow happening which can trample m_gsm_uart_log_index.
The check for the end of the buffer should be:
if (m_gsm_uart_log_index >= GSM_MAX_UART_LOG_CHARS) {
    ...
}
As it stands, m_gsm_uart_log_index can reach 2048, and writing m_gsm_uart_log[2048] is likely to land on the location where m_gsm_uart_log_index is stored.
You are writing to the buffer when m_gsm_uart_log_index == GSM_MAX_UART_LOG_CHARS, which means that you are overrunning the buffer by 1 character. This writes into the first byte of m_gsm_uart_log_index and corrupts it.
Change:
if (m_gsm_uart_log_index > GSM_MAX_UART_LOG_CHARS)
to:
if (m_gsm_uart_log_index >= GSM_MAX_UART_LOG_CHARS)
I'm trying to access a SPI sensor using the SPIDEV driver but my code gets stuck on IOCTL.
I'm running embedded Linux on the SAM9X5EK (mounting AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:
static struct spi_board_info spidev_board_info[] = {
    {
        .modalias = "spidev",
        .max_speed_hz = 1000000,
        .bus_num = 0,
        .chip_select = 0,
        .mode = SPI_MODE_3,
    },
    ...
};

spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));
1 MHz is the maximum accepted by the sensor; I tried 500 kHz but I get an error during Linux boot (too slow, apparently). .bus_num and .chip_select should be correct (I also tried all the other combinations). As for SPI_MODE_3, I checked the datasheet for it.
I get no error while booting and devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).
#define MY_SPIDEV_DELAY_USECS 100
// #define MY_SPIDEV_SPEED_HZ 1000000
#define MY_SPIDEV_BITS_PER_WORD 8

int spidevReadRegister(int fd,
                       unsigned int num_out_bytes,
                       unsigned char *out_buffer,
                       unsigned int num_in_bytes,
                       unsigned char *in_buffer)
{
    struct spi_ioc_transfer mesg[2] = { {0}, };
    uint8_t num_tr = 0;
    int ret;

    // Write data
    mesg[0].tx_buf = (unsigned long)out_buffer;
    mesg[0].rx_buf = (unsigned long)NULL;
    mesg[0].len = num_out_bytes;
    // mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[0].cs_change = 0;
    num_tr++;

    // Read data
    mesg[1].tx_buf = (unsigned long)NULL;
    mesg[1].rx_buf = (unsigned long)in_buffer;
    mesg[1].len = num_in_bytes;
    // mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[1].cs_change = 1;
    num_tr++;

    // Do the actual transmission
    if (num_tr > 0)
    {
        ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
        if (ret == -1)
        {
            printf("Error: %d\n", errno);
            return -1;
        }
    }
    return 0;
}
Then I'm using this function:
#define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
...
int fd;
fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
if (fd <= 0) {
    printf("Device not found\n");
    exit(1);
}

uint8_t buffer1[1] = {0x3a};
uint8_t buffer2[1] = {0};
spidevReadRegister(fd, 1, buffer1, 1, buffer2);
When I run it, the code gets stuck on the ioctl!
I did it this way because, in order to read a register on the sensor, I need to send a byte with its address in it and then get the answer back without changing CS (however, when I tried using the write() and read() functions while learning, I got the same result: stuck on them).
I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented that part out.
Why does it get stuck? I thought it might be that the device is created but doesn't actually "feel" any hardware. As I wasn't sure whether hardware SPI0 corresponds to bus_num 0 or 1, I tried both, but still no success (btw, which one is it?).
UPDATE: I managed to get the SPI half working! MOSI is transmitting the right data, but CLK doesn't start... any idea?
When I'm working with SPI I always use an oscilloscope to see the output of the I/Os. If you have a 4-channel scope you can easily debug the issue and find out whether you're accessing the right I/Os, using the right speed, etc. I usually compare the signal I get to the diagram in the datasheet.
I think there are several issues here. First of all, SPI is bidirectional: if you want to send something over the bus, you also get something back. Therefore you always have to provide a valid buffer to both rx_buf and tx_buf.
Second, all members of the struct spi_ioc_transfer have to be initialized with a valid value. Otherwise they just contain arbitrary data and the underlying driver may act on it, leading to unknown behavior.
Third, why do you use a for loop with ioctl? You already tell ioctl you have an array of spi_ioc_transfer structs, so all the defined transfers will be performed with one ioctl call.
Fourth, ioctl needs a pointer to your struct array, so the call should look like this:
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), &mesg);
You see there is room for improvement in your code.
This is how I do it in a C++ library for the Raspberry Pi. The whole library will soon be on GitHub; I'll update my answer when it is done.
void SPIBus::spiReadWrite(std::vector<std::vector<uint8_t> > &data, uint32_t speed,
                          uint16_t delay, uint8_t bitsPerWord, uint8_t cs_change)
{
    struct spi_ioc_transfer transfer[data.size()];
    memset(transfer, 0, sizeof(transfer)); // every member starts with a defined value (see point two)
    int i = 0;
    for (std::vector<uint8_t> &d : data)
    {
        // see <linux/spi/spidev.h> for details!
        transfer[i].tx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].rx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].len = d.size(); // number of bytes in vector
        transfer[i].speed_hz = speed;
        transfer[i].delay_usecs = delay;
        transfer[i].bits_per_word = bitsPerWord;
        transfer[i].cs_change = cs_change;
        i++;
    }
    int status = ioctl(this->fileDescriptor, SPI_IOC_MESSAGE(data.size()), &transfer);
    if (status < 0)
    {
        std::string errMessage(strerror(errno));
        throw std::runtime_error("Failed to do full duplex read/write operation "
                                 "on SPI Bus " + this->deviceNode + ". Error message: " +
                                 errMessage);
    }
}
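Translated back to the C code from the question, the same advice boils down to zero-initializing the transfer struct and doing the address write plus the reply read as a single full-duplex transfer, so CS stays asserted throughout. A sketch (whether the reply really arrives on the second clocked byte depends on the sensor's protocol):

#include <string.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

// Read one register: clock out the address byte plus a dummy byte,
// and capture what comes back during the same transfer.
static int spi_read_reg(int fd, uint8_t reg, uint8_t *value)
{
    uint8_t tx[2] = { reg, 0 }; // second byte just clocks in the reply
    uint8_t rx[2] = { 0, 0 };
    struct spi_ioc_transfer tr;

    memset(&tr, 0, sizeof(tr)); // every member gets a defined value
    tr.tx_buf = (unsigned long)tx;
    tr.rx_buf = (unsigned long)rx;
    tr.len = 2;
    tr.bits_per_word = 8;

    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
        return -1;

    *value = rx[1]; // the reply arrives while the dummy byte goes out
    return 0;
}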
First time posting so there's probably gonna be more info than necessary but I wanna be thorough:
One of our exercises in C was to create sender and receiver programs that would exchange data via RS232 serial communication with a null modem. We used a virtual port program (I used the trial version of Virtual Serial Port by Eltima Software, if you want to test). We were required to do 4 versions:
1) Using a predetermined library, created by a previous student, that had premade sender, receiver, etc. functions
2) Using the inportb and outportb functions
3) Using OS interrupt int86 and giving register values through the REGS union
4) Using inline assembly
Compiler: DevCPP (Bloodshed).
All worked, but now we are required to compare the different versions based on the CPU time spent to send and receive a character. It specifically says that we have to find the following:
average, standard deviation, min, max and 99.5%
Nothing was explained in class, so I'm a little lost here... I'm guessing those are statistics over many trials of a normal distribution? But even then, how do I actually measure CPU cycles for this? I'll keep searching, but I'm posting here in the meantime 'cause the deadline is in 3 days :D.
Code sample of the int86 version:
#include <stdio.h>
#include <stdlib.h>
#include <dos.h>

#define RS232_INIT_FUNCTION 0
#define RS232_SEND_FUNCTION 1
#define RS232_GET_FUNCTION 2
#define RS232_STATUS_FUNCTION 3
#define DATA_READY 0x01
#define PARAM 0xEF
#define COM1 0
#define COM2 1

void rs232init(int port, unsigned init_code)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_INIT_FUNCTION;
    inregs.h.al = init_code;
    int86(0x14, &inregs, &inregs);
}

unsigned char rs232transmit(int port, char ch)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_SEND_FUNCTION;
    inregs.h.al = ch;
    int86(0x14, &inregs, &inregs);
    return (inregs.h.ah);
}

unsigned char rs232status(int port)
{
    union REGS inregs;
    inregs.x.dx = port;
    inregs.h.ah = RS232_STATUS_FUNCTION;
    int86(0x14, &inregs, &inregs);
    return (inregs.h.ah); // Because we want the second byte of ax
}

unsigned char rs232receive(int port)
{
    union REGS inregs;
    while (!(rs232status(port) & DATA_READY))
    {
        if (kbhit()) {
            getch();
            exit(1);
        }
    }
    inregs.x.dx = port;
    inregs.h.ah = RS232_GET_FUNCTION;
    int86(0x14, &inregs, &inregs);
    if (inregs.h.ah & 0x80)
    {
        printf("ERROR");
        return -1;
    }
    return (inregs.h.al);
}

int main()
{
    unsigned char ch;
    int d, i;
    do {
        puts("What would you like to do?");
        puts("1.Send data");
        puts("2.Receive data");
        puts("0.Exit");
        scanf("%d", &i);
        getchar();
        if (i == 1) {
            rs232init(COM1, PARAM);
            puts("Which char would you like to send?");
            scanf("%c", &ch);
            getchar();
            while (!rs232status(COM1));
            d = rs232transmit(COM1, ch);
            if (d & 0x80) puts("ERROR"); // Checks bit 7 of ah for error
        }
        else if (i == 2) {
            rs232init(COM1, PARAM);
            puts("Receiving character...");
            ch = rs232receive(COM1);
            printf("%c\n", ch);
        }
    } while (i != 0);
    system("pause");
    return 0;
}
There is some guesswork required here because the question is a little under-defined.
You've listed four different methods for sending/receiving a character. What I suspect your lecturer is looking for is the time from when you call the given method (or enter your inline assembly code) to the time when you return from it (leave the inline code). You will need to grab a time just before the call and just after it, and take the difference.
Less ambiguous is CPU time. The clock() function is the most straightforward way to measure this, though it may not be what the lecturer is looking for.
Finally there are the statistics, which are straightforward: do a bunch of runs and compute the statistics on the recorded times.
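For example, with clock() (a rough sketch; CLOCKS_PER_SEC resolution on DOS-era compilers is coarse, so in practice you may want to time a burst of sends per sample rather than a single character):

#include <stdio.h>
#include <time.h>
#include <math.h>

#define RUNS 1000

int main(void)
{
    double samples[RUNS];
    double sum = 0.0, var = 0.0, mean, min, max;
    int i;

    for (i = 0; i < RUNS; i++) {
        clock_t start = clock();
        /* rs232transmit(COM1, 'x');   <-- the call being measured */
        clock_t end = clock();
        samples[i] = (double)(end - start) / CLOCKS_PER_SEC;
        sum += samples[i];
    }

    mean = sum / RUNS;
    min = max = samples[0];
    for (i = 0; i < RUNS; i++) {
        if (samples[i] < min) min = samples[i];
        if (samples[i] > max) max = samples[i];
        var += (samples[i] - mean) * (samples[i] - mean);
    }

    /* For the 99.5% figure, sort the samples ascending and take
       samples[(int)(0.995 * RUNS) - 1]. */
    printf("avg %g s, stddev %g s, min %g s, max %g s\n",
           mean, sqrt(var / RUNS), min, max);
    return 0;
}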