Issue with SPI (Serial Peripheral Interface), stuck on ioctl() - C

I'm trying to access a SPI sensor using the SPIDEV driver but my code gets stuck on IOCTL.
I'm running embedded Linux on the SAM9X5EK (mounting AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:
static struct spi_board_info spidev_board_info[] = {
    {
        .modalias     = "spidev",
        .max_speed_hz = 1000000,
        .bus_num      = 0,
        .chip_select  = 0,
        .mode         = SPI_MODE_3,
    },
    ...
};
spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));
1 MHz is the maximum accepted by the sensor; I tried 500 kHz, but then I get an error during Linux boot (too slow, apparently). .bus_num and .chip_select should be correct (I also tried all the other combinations). SPI_MODE_3 matches what the sensor's datasheet specifies.
I get no error while booting and devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).
#define MY_SPIDEV_DELAY_USECS 100
// #define MY_SPIDEV_SPEED_HZ 1000000
#define MY_SPIDEV_BITS_PER_WORD 8
int spidevReadRegister(int fd,
unsigned int num_out_bytes,
unsigned char *out_buffer,
unsigned int num_in_bytes,
unsigned char *in_buffer)
{
struct spi_ioc_transfer mesg[2] = { {0}, };
uint8_t num_tr = 0;
int ret;
// Write data
mesg[0].tx_buf = (unsigned long)out_buffer;
mesg[0].rx_buf = (unsigned long)NULL;
mesg[0].len = num_out_bytes;
// mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS,
// mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
mesg[0].cs_change = 0;
num_tr++;
// Read data
mesg[1].tx_buf = (unsigned long)NULL;
mesg[1].rx_buf = (unsigned long)in_buffer;
mesg[1].len = num_in_bytes;
// mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS,
// mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
mesg[1].cs_change = 1;
num_tr++;
// Do the actual transmission
if(num_tr > 0)
{
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
if(ret == -1)
{
printf("Error: %d\n", errno);
return -1;
}
}
return 0;
}
Then I'm using this function:
#define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
...
int fd;
fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
if (fd<=0) {
printf("Device not found\n");
exit(1);
}
uint8_t buffer1[1] = {0x3a};
uint8_t buffer2[1] = {0};
spidevReadRegister(fd, 1, buffer1, 1, buffer2);
When I run it, the code gets stuck on ioctl()!
I did it this way because, in order to read a register on the sensor, I need to send a byte with its address in it and then get the answer back without toggling CS (however, when I tried the write() and read() functions while learning, I got the same result: stuck on them).
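For completeness, another pattern I've seen in spidev examples is a single full-duplex transfer: the address byte plus a dummy byte are clocked out while the reply is clocked in, so CS never toggles in between. A sketch, assuming the sensor replies in the byte right after the address:
uint8_t tx[2] = { 0x3a, 0x00 };   /* register address, then a dummy byte */
uint8_t rx[2] = { 0, 0 };
struct spi_ioc_transfer tr;
memset(&tr, 0, sizeof tr);        /* make sure every member has a defined value */
tr.tx_buf = (unsigned long)tx;
tr.rx_buf = (unsigned long)rx;
tr.len = 2;
tr.bits_per_word = 8;
if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
    printf("Error: %d\n", errno);
/* rx[1] should now hold the register value */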
I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented that part out.
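As far as I understand, the fixed parameters can instead be set once on the file descriptor with the SPI_IOC_WR_* ioctls; a minimal sketch with my values:
uint8_t mode = SPI_MODE_3;
uint8_t bits = 8;
uint32_t speed = 1000000;
if (ioctl(fd, SPI_IOC_WR_MODE, &mode) < 0)
    perror("SPI_IOC_WR_MODE");
if (ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits) < 0)
    perror("SPI_IOC_WR_BITS_PER_WORD");
if (ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed) < 0)
    perror("SPI_IOC_WR_MAX_SPEED_HZ");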
Why does it get stuck? I thought it might be that the device node is created but doesn't actually "feel" any hardware behind it. As I wasn't sure whether hardware SPI0 corresponds to bus_num 0 or 1, I tried both, but still no success (by the way, which one is it?).
UPDATE: I managed to get the SPI half working: MOSI is transmitting the right data, but CLK doesn't start... any idea?

When I'm working with SPI I always use an oscilloscope to look at the I/O pins. If you have a 4-channel scope you can easily debug the issue and find out whether you're accessing the right pins, using the right speed, etc. I usually compare the signal I get against the timing diagram in the datasheet.

I think there are several issues here. First of all, SPI is bidirectional: if you want to send something over the bus, you also receive something. Therefore you should always provide valid buffers for both rx_buf and tx_buf.
Second, all members of struct spi_ioc_transfer have to be initialized with valid values. Otherwise they just contain whatever happened to be in memory, and the underlying driver acts on arbitrary data, leading to undefined behavior.
Third, why do you use a for loop with ioctl? You already tell ioctl that you have an array of spi_ioc_transfer structs, so all the defined transfers will be performed with one ioctl call.
Fourth, ioctl needs a pointer to your struct array. So the ioctl call should look like this:
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), &mesg);
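Putting those points together, a minimal fully-initialized two-transfer sketch might look like this (the 0x3a register address and the one-byte lengths are just taken from your example):
uint8_t tx0[1] = { 0x3a }, rx0[1] = { 0 };   /* transfer 0: send the register address */
uint8_t tx1[1] = { 0x00 }, rx1[1] = { 0 };   /* transfer 1: clock in the reply */
struct spi_ioc_transfer tr[2];
memset(tr, 0, sizeof tr);                    /* every member gets a defined value */
tr[0].tx_buf = (unsigned long)tx0;
tr[0].rx_buf = (unsigned long)rx0;           /* valid rx buffer even for the "write" */
tr[0].len = sizeof tx0;
tr[0].bits_per_word = 8;
tr[1].tx_buf = (unsigned long)tx1;
tr[1].rx_buf = (unsigned long)rx1;
tr[1].len = sizeof rx1;
tr[1].bits_per_word = 8;
tr[1].cs_change = 1;
ret = ioctl(fd, SPI_IOC_MESSAGE(2), tr);     /* one call performs both transfers */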
You see there is room for improvement in your code.
This is how I do it in a C++ library for the Raspberry Pi. The whole library will soon be on GitHub; I'll update my answer when it is done.
void SPIBus::spiReadWrite(std::vector<std::vector<uint8_t> > &data, uint32_t speed,
                          uint16_t delay, uint8_t bitsPerWord, uint8_t cs_change)
{
    struct spi_ioc_transfer transfer[data.size()];
    // zero the whole array so every member has a defined value (see the second point above)
    memset(transfer, 0, sizeof(transfer));
    int i = 0;
    for (std::vector<uint8_t> &d : data)
    {
        // see <linux/spi/spidev.h> for details!
        transfer[i].tx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].rx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].len = d.size(); // number of bytes in vector
        transfer[i].speed_hz = speed;
        transfer[i].delay_usecs = delay;
        transfer[i].bits_per_word = bitsPerWord;
        transfer[i].cs_change = cs_change;
        i++;
    }
    int status = ioctl(this->fileDescriptor, SPI_IOC_MESSAGE(data.size()), &transfer);
    if (status < 0)
    {
        std::string errMessage(strerror(errno));
        throw std::runtime_error("Failed to do full duplex read/write operation "
                                 "on SPI Bus " + this->deviceNode + ". Error message: " +
                                 errMessage);
    }
}

Related

Sensor value through UART in ESP32 correction

I am using the ESP-IDF SDK to develop a small project that takes sensor data in through UART. I am following the data sheet provided by the manufacturer to parse the frames and calculate the values of the different parameters, but the output on serial is not correct, and every time I get a different (wrong) result.
Code:
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_system.h"
#include "esp_log.h"
#include "driver/uart.h"
#include "string.h"
#include "driver/gpio.h"
static const int RX_BUF_SIZE = 1024;
#define TXD_PIN (GPIO_NUM_4)
#define RXD_PIN (GPIO_NUM_5)
#define DELAY_IN_MS(t) (((portTickType)t*configTICK_RATE_HZ)/(portTickType)1000)
void init(void) {
const uart_config_t uart_config = {
.baud_rate = 4800,
.data_bits = UART_DATA_8_BITS,
.parity = UART_PARITY_DISABLE,
.stop_bits = UART_STOP_BITS_1,
.flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
.source_clk = UART_SCLK_APB,
};
uart_driver_install(UART_NUM_1, RX_BUF_SIZE * 2, 129, 0, NULL, 0);
uart_param_config(UART_NUM_1, &uart_config);
uart_set_pin(UART_NUM_1, TXD_PIN, RXD_PIN, UART_PIN_NO_CHANGE, UART_PIN_NO_CHANGE);
}
static void rx_task(void *arg)
{
static const char *RX_TASK_TAG = "RX_TASK";
esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);
uint8_t data[7] = {0};
uint8_t PR=0,spo2=0,temprature=0;
while (1) {
const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);
if (rxBytes == 7) {
printf("The rxbytes %d and %s\n",rxBytes,data);
PR = (data[3] & 0x7F) + ((data[2] & 0x40)<<1);
spo2 = (data[4] & 0x7F);
temprature = (data[5] & 0x7F);
printf("PR is %d , Spo2 is %d , temperature is %d \n",PR,spo2,temprature);
ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);
memset(data,0,7);
}
}
}
void app_main(void)
{
init();
xTaskCreate(rx_task, "uart_rx_task", 1024*2, NULL, configMAX_PRIORITIES, NULL);
}
The output of the program on the serial monitor is shown in the attached screenshot.
The data sheet provided by the manufacturer is here:
https://drive.google.com/file/d/1lPATxeXXreVZkg9Ufg9BnyCrl4EsbJAj/view?usp=sharing
Correct me if I have misinterpreted the data format used to calculate the values.
Assembling messages from an unreliable channel (serial) means you can't rely on them always arriving in the chunks and order you expect, so you have to take precautions so that you don't process junk.
The code assumes that it will always receive these 7-byte messages in 7-byte chunks, and it doesn't always work that way: line noise or timeouts can cause a proper message to be received in multiple chunks (say, 4 bytes then 3 bytes), or can cause bytes to be lost.
To see if this is part of the problem, add logging on every read, not just on the ones that you expect:
static void rx_task(void *arg)
{
...
while (1) {
const int rxBytes = uart_read_bytes(UART_NUM_1, data, 7, 1);
// Log ALL reads, not just the ones you expect
ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, data);
ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, data, rxBytes, ESP_LOG_INFO);
if (rxBytes == 7) {
///
}
}
}
This will probably confirm my hunch.
In any case, you can't ever rely on the fixed-size messages because if it gets out of sync once, it won't ever recover. This means you have to build in your own protections.
Reading the data sheet for the sensor, it says that the first byte of every 7-byte message has the high bit set, so this is perfect for resynchronization: you ignore everything until you get that start byte, then read 6 more bytes, then you have a full message.
So you end up needing two buffers: one for the message you're assembling, and one for doing raw I/O from the sensor, copying to the real message buffer as you verify the sync.
A quick-and-dirty method would look like this:
static void rx_task(void *arg)
{
static const char *RX_TASK_TAG = "RX_TASK";
esp_log_level_set(RX_TASK_TAG, ESP_LOG_INFO);
// sensor message we're trying to build
uint8_t message[7] = {0};
uint8_t *msgnext = message;
while (1) {
uint8_t inbuf[7];
const int rxBytes = uart_read_bytes(UART_NUM_1, inbuf, sizeof inbuf, 1);
ESP_LOGI(RX_TASK_TAG, "Read %d bytes: '%s'", rxBytes, inbuf);
ESP_LOG_BUFFER_HEXDUMP(RX_TASK_TAG, inbuf, rxBytes, ESP_LOG_INFO);
// error/timeout? do something?
if (rxBytes <= 0) continue;
for (int i = 0; i < rxBytes; i++)
{
const uint8_t b = inbuf[i];
if (b & 0x80)
{
// First byte of a message, reset the buffer
msgnext = message;
*msgnext++ = b;
}
else if (msgnext == message)
{
// not synced yet, ignore this byte
continue;
}
else
{
*msgnext++ = b;
if ((msgnext - message) == sizeof message)
{
// WE FOUND A FULL MESSAGE
uint8_t PR = (message[3] & 0x7F) + ((message[2] & 0x40)<<1);
uint8_t spo2 = (message[4] & 0x7F);
uint8_t temperature = (message[5] & 0x7F);
printf("PR is %d, Spo2 is %d, temperature is %d\n",
PR,spo2,temperature);
msgnext = message; // reset to empty the buffer
}
}
}
}
}
The idea is your raw I/O is done into inbuf, and it starts by looking for the sync byte (with the high bit set), and that tells you to start copying data to the real sensor buffer message. Once you get 7 bytes, it shows the result and resets the buffer.
And even if you have a few bytes of valid message data, if another SYNC byte comes in, it assumes the previous message was messed up, so it throws it away and starts a new fresh buffer.
You can add more here, such as support for timeouts, or detecting/logging when a partial message is discarded, but in no case can you avoid this data-framing layer.
Also, it's not necessary for the I/O buffer inbuf to be the same size as a message, and it might make sense to read from the UART in one-byte chunks; in a multi-tasking operating system I probably wouldn't do this, but in the ESP environment it might make sense - dunno. That would simplify the looping some.
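Just to illustrate that variant, a rough byte-at-a-time sketch (the timeout value and the decoding step are assumed from the earlier code):
static void rx_task_bytewise(void *arg)
{
    uint8_t message[7];
    uint8_t *msgnext = message;
    while (1) {
        uint8_t b;
        if (uart_read_bytes(UART_NUM_1, &b, 1, 1) != 1)
            continue;                      // timeout: nothing arrived
        if (b & 0x80)
            msgnext = message;             // SYNC byte: start a new message
        else if (msgnext == message)
            continue;                      // not synced yet: discard this byte
        *msgnext++ = b;
        if ((msgnext - message) == sizeof message) {
            // full 7-byte message: decode PR/SpO2/temperature as shown above
            msgnext = message;
        }
    }
}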
EDIT Looking at your actual data dumps, it's clear that your messages are not framed properly because even though you have 7 bytes, the SYNC byte (with the high bit set) is found somewhere in the middle, but not the same place each time. Clearly this is a framing issue.

AT91 ARM EMAC polling issue

I am using Atmel's lwIP example. Interfacing with the PHY is OK: it can link and even auto-negotiate, and the netif comes up. But when I start polling the netif, nothing happens. I've narrowed the problem down to EMAC_Poll:
unsigned char EMAC_Poll(unsigned char *pFrame, unsigned int frameSize, unsigned int *pRcvSize)
{
unsigned short bufferLength;
unsigned int tmpFrameSize=0;
unsigned char *pTmpFrame=0;
unsigned int tmpIdx = rxTd.idx;
volatile EmacRxTDescriptor *pRxTd = rxTd.td + rxTd.idx;
ASSERT(pFrame, "F: EMAC_Poll\n\r");
char isFrame = 0;
// Set the default return value
*pRcvSize = 0;
// Process received RxTd
while ((pRxTd->addr & EMAC_RX_OWNERSHIP_BIT) == EMAC_RX_OWNERSHIP_BIT) {
// Never got there.
...
}
return EMAC_RX_NO_DATA;
}
typedef struct {
volatile EmacRxTDescriptor td[RX_BUFFERS];
EMAC_RxCallback rxCb; /// Callback function to be invoked once a frame has been received
unsigned short idx;
} RxTd;
/// Describes the type and attribute of Receive Transfer descriptor.
typedef struct _EmacRxTDescriptor {
unsigned int addr;
unsigned int status;
} __attribute__((packed, aligned(8))) EmacRxTDescriptor, *PEmacRxTDescriptor;
There is a while loop, but the condition never becomes true.
I have only a very vague idea of what RxTd is and what exactly this condition means, and I cannot see how the RxTd would ever change so that the condition passes. All references to RxTd lead to the same emac.c module; most of them are in that polling function and the rest are in the EMAC_ResetRx function.
static void EMAC_ResetRx(void)
{
unsigned int Index;
unsigned int Address;
// Disable RX
AT91C_BASE_EMAC->EMAC_NCR &= ~AT91C_EMAC_RE;
// Setup the RX descriptors.
rxTd.idx = 0;
for(Index = 0; Index < RX_BUFFERS; Index++) {
Address = (unsigned int)(&(pRxBuffer[Index * EMAC_RX_UNITSIZE]));
// Remove EMAC_RX_OWNERSHIP_BIT and EMAC_RX_WRAP_BIT
rxTd.td[Index].addr = Address & EMAC_ADDRESS_MASK;
rxTd.td[Index].status = 0;
}
rxTd.td[RX_BUFFERS - 1].addr |= EMAC_RX_WRAP_BIT;
// Receive Buffer Queue Pointer Register
AT91C_BASE_EMAC->EMAC_RBQP = (unsigned int) (rxTd.td);
}
I do not really understand the last line, but it looks like rxTd is filled in by the AT91 itself. If that is so, there may be a packing/alignment problem, but Atmel added __attribute__((packed, aligned(8))) to the descriptor structure definition. Anyway, can someone describe the mechanism of data input, or tell me where the problem might be?
By the way, I am using GCC, if that matters.
UPD:
I've checked the RSR and noticed that it starts at 0, then goes to 2 after a second. 2 means new data was captured.
UPD:
So I've read about the EMAC's operation in the datasheet for my chip, and I was right: the RBQP register must point to an array of descriptors, and each descriptor consists of an address field and a status field. The datasheet states that "bit zero of address field is written to one to show the buffer has been used", after which the next RX descriptor from the array is used. I guess by "has been used" they mean that the buffer is filled with frame data and ready to be processed. This must mean the data is just not going into that buffer, but it must be there, because REC goes high. Additionally, I've checked that RE in the NCR is set and the MI is enabled. I have no idea what is wrong.
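For reference, my understanding of how the polling loop is supposed to consume a used descriptor is roughly this (the length mask is my assumption, not taken from the headers):
/* Sketch only: consume one used descriptor and hand it back to the EMAC */
volatile EmacRxTDescriptor *pRxTd = rxTd.td + rxTd.idx;
while (pRxTd->addr & EMAC_RX_OWNERSHIP_BIT) {
    unsigned char *pData = (unsigned char *)(pRxTd->addr & EMAC_ADDRESS_MASK);
    unsigned int frameLen = pRxTd->status & 0xFFF;   /* assumed length field */
    /* ... copy pData/frameLen out to the caller's frame buffer here ... */
    pRxTd->addr &= ~EMAC_RX_OWNERSHIP_BIT;           /* give the buffer back */
    rxTd.idx = (rxTd.idx + 1) % RX_BUFFERS;          /* advance with wrap */
    pRxTd = rxTd.td + rxTd.idx;
}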
I've spent a whole week solving it. The funny thing is that when I dumped memory and looked at all those addresses, the data was there the whole time! So the key was to disable the I and D caches and the MMU itself. Hope it will help someone.

Ruby, ioctl, and complex structures

I have a piece of hardware that I'm trying to control via my computer's built-in SPI driver. The SPI driver is controlled via ioctl.
I can successfully drive the hardware from a small C program; but when I try to duplicate the C program in Ruby I run into problems.
Using IO#ioctl to set basic registers (with u32 and u8 ints) works fine (I know because I can also use ioctl to read back the values I set); but as soon as I try to set a complex struct, the program fails with
small.rb:51:in 'ioctl': Connection timed out # rb_ioctl - /dev/spidev32766.0 (Errno::ETIMEDOUT)
I might be running into trouble because the spi_ioc_transfer struct has two pointers to byte buffers but the pointers are typed as unsigned 64-bit ints even on 32-bit platforms -- necessitating a cast to (unsigned long) in C. I'm trying to replicate that in Ruby but am quite unsure of myself.
Below are the C program which works and the Ruby port which doesn't work. The do_latch functions are necessary so I can see the result in my hardware; but are probably not germane to this problem.
C (which works):
#include <stdint.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
int do_latch() {
int fd = open("/sys/class/gpio/gpio1014/value", O_RDWR);
write(fd, "1", 1);
write(fd, "0", 1);
close(fd);
}
int do_transfer(int fd, uint8_t *bytes, size_t len) {
uint8_t *rx_bytes = malloc(sizeof(uint8_t) * len);
struct spi_ioc_transfer transfer = {
.tx_buf = (unsigned long)bytes,
.rx_buf = (unsigned long)rx_bytes,
.len = len,
.speed_hz = 100000,
.delay_usecs = 0,
.bits_per_word = 8,
.cs_change = 0,
.tx_nbits = 0,
.rx_nbits = 0,
.pad = 0
};
if(ioctl(fd, SPI_IOC_MESSAGE(1), &transfer) < 1) {
perror("Could not send SPI message");
exit(1);
}
free(rx_bytes);
}
int main() {
int fd = open("/dev/spidev32766.0", O_RDWR);
uint8_t mode = 0;
ioctl(fd, SPI_IOC_WR_MODE, &mode);
uint8_t lsb_first = 0;
ioctl(fd, SPI_IOC_WR_LSB_FIRST, &lsb_first);
uint32_t speed_hz = 100000;
ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed_hz);
size_t data_len = 36;
uint8_t *tx_data = malloc(sizeof(uint8_t) * data_len);
memset(tx_data, 0xFF, data_len);
do_transfer(fd, tx_data, data_len);
do_latch();
sleep(2);
memset(tx_data, 0x00, data_len);
do_transfer(fd, tx_data, data_len);
do_latch();
free(tx_data);
close(fd);
return 0;
}
Ruby (which fails on the ioctl line in do_transfer):
SPI_IOC_WR_MODE = 0x40016b01
SPI_IOC_WR_LSB_FIRST = 0x40016b02
SPI_IOC_WR_BITS_PER_WORD = 0x40016b03
SPI_IOC_WR_MAX_SPEED_HZ = 0x40046b04
SPI_IOC_WR_MODE32 = 0x40046b05
SPI_IOC_MESSAGE_1 = 0x40206b00
def do_latch()
File.open("/sys/class/gpio/gpio1014/value", File::RDWR) do |file|
file.write("1")
file.write("0")
end
end
def do_transfer(file, bytes)
##########################################################################################
#begin spi_ioc_transfer struct (cat /usr/include/linux/spi/spidev.h)
#pack bytes into a buffer; create a new buffer (filled with zeroes) for the rx
tx_buff = bytes.pack("C*")
rx_buff = (Array.new(bytes.size) { 0 }).pack("C*")
#on 32-bit, the struct uses a zero-extended pointer for the buffers (so it's the same
#byte layout on 64-bit as well) -- so do some trickery to get the buffer addresses
#as 64-bit strings even though this is running on a 32-bit computer
tx_buff_pointer = [tx_buff].pack("P").unpack("L!")[0] #u64 (zero-extended pointer)
rx_buff_pointer = [rx_buff].pack("P").unpack("L!")[0] #u64 (zero-extended pointer)
buff_len = bytes.size #u32
speed_hz = 100000 #u32
delay_usecs = 0 #u16
bits_per_word = 8 #u8
cs_change = 0 #u8
tx_nbits = 0 #u8
rx_nbits = 0 #u8
pad = 0 #u16
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
#in C, I pass a pointer to the structure; so mimic that here
struct_pointer_packed = [struct_packed].pack("P")
#end spi_ioc_transfer struct
##########################################################################################
file.ioctl(SPI_IOC_MESSAGE_1, struct_pointer_packed)
end
File.open("/dev/spidev32766.0", File::RDWR) do |file|
file.ioctl(SPI_IOC_WR_MODE, [0].pack("C"));
file.ioctl(SPI_IOC_WR_LSB_FIRST, [0].pack("C"));
file.ioctl(SPI_IOC_WR_MAX_SPEED_HZ, [0].pack("L"));
data_bytes = Array.new(36) { 0x00 }
do_transfer(file, data_bytes)
do_latch()
sleep(2)
data_bytes = []
data_bytes = Array.new(36) { 0xFF }
do_transfer(file, data_bytes)
do_latch()
end
I pulled the magic number constants out by having C print them (they're macros in C). I can validate that most of them work; I'm a little unsure about the ioctl message that fails (SPI_IOC_MESSAGE_1) since that doesn't work and it's a complicated macro. Still, I have no reason to think that it's incorrect and it's always the same when I look at it from C.
When I print out the structure in C and then print it out in Ruby, the only differences are in the buffer addresses, so if something's going wrong, that feels like the right place to look. But I've run out of things to try.
I can also print out the addresses in both versions and they look like what I would expect, 32 bits extended to 64 bits, and match the values in the structure (although the structure is little-endian -- this is an ARM).
Structure in C (that works):
60200200 00000000 a8200200 00000000 24000000 40420f00 00000800 00000000
Structure in Ruby (that fails):
a85da27f 00000000 08399b7f 00000000 24000000 40420f00 00000800 00000000
Is there an obvious mistake that I'm making when I lay out the struct in Ruby? Is there something else that I'm missing?
My next step is to write a library in C and use FFI to access it from Ruby. But that seems like giving up; and using the native ioctl function feels like the better approach if I can ever make it work.
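If I do end up going that route, the C side would probably be just a thin wrapper around the transfer; a sketch (the function name, fixed 8 bits per word, and single-transfer layout are all provisional):
/* spi_wrapper.c - build with: gcc -shared -fPIC -o libspiwrap.so spi_wrapper.c */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>
/* Returns 0 on success, -1 on failure; meant to be attached from Ruby via FFI. */
int spi_transfer(int fd, const uint8_t *tx, uint8_t *rx, uint32_t len, uint32_t speed_hz)
{
    struct spi_ioc_transfer transfer;
    memset(&transfer, 0, sizeof transfer);
    transfer.tx_buf = (unsigned long)tx;
    transfer.rx_buf = (unsigned long)rx;
    transfer.len = len;
    transfer.speed_hz = speed_hz;
    transfer.bits_per_word = 8;
    return ioctl(fd, SPI_IOC_MESSAGE(1), &transfer) < 1 ? -1 : 0;
}
The Ruby side would then just attach that function with the FFI gem and pass plain byte strings.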
Update
Above, I'm doing
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
#in C, I pass a pointer to the structure; so mimic that here
struct_pointer_packed = [struct_packed].pack("P")
file.ioctl(SPI_IOC_MESSAGE_1, struct_pointer_packed)
because I have to pass a pointer to the struct in C. But that's what's causing the error!
Instead, it needs to be
struct_array = [tx_buff_pointer, rx_buff_pointer, buff_len, speed_hz, delay_usecs, bits_per_word, cs_change, tx_nbits, rx_nbits, pad]
struct_packed = struct_array.pack("QQLLSCCCCS")
file.ioctl(SPI_IOC_MESSAGE_1, struct_packed)
I guess Ruby is automatically making it an array when it marshals it over?
Unfortunately, now it only intermittently works. The second call never works and the first call doesn't work if I pass in all zeros. It's very mysterious.
It is a common issue not to flush the buffer; you could check that and give it a try.
Flush:
Flushes any buffered data within ios to the underlying operating system (note that this is Ruby internal buffering only; the OS may buffer the data as well).
rb_io_flush(VALUE io)
{
return rb_io_flush_raw(io, 1);
}

Calculating the delay between write and read on I2C in Linux

I am currently working with I2C on Arch Linux ARM and am not quite sure how to calculate the absolute minimum delay required between a write and a read. If I don't have this delay, the read naturally does not come through. I have just applied usleep(1000) between the two commands, which works, but that value was found empirically and should be tuned to the real minimum (somehow). But how?
Here is my code sample for the write_and_read function I am using:
int write_and_read(int handler, char *buffer, const int bytesToWrite, const int bytesToRead) {
write(handler, buffer, bytesToWrite);
usleep(1000);
int r = read(handler, buffer, bytesToRead);
if(r != bytesToRead) {
return -1;
}
return 0;
}
Normally there's no need to wait. If your writing and reading functions are somehow threaded in the background (why would you do that???), then synchronizing them is mandatory.
I2C is a very simple linear communication, and all the devices I have used were able to produce the output data within microseconds.
Are you using 100 kHz, 400 kHz or 1 MHz I2C?
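If you are going through /dev/i2c-N, one option (just a sketch, the 0x40 slave address is an example) is to do the write and the read as a single combined I2C_RDWR transaction with a repeated start, so there is no gap to tune at all:
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
/* Write one register address, then read bytesToRead bytes in the same
   transaction (repeated start, no STOP in between). 0x40 is a placeholder address. */
int write_then_read(int handler, uint8_t reg, uint8_t *rxbuf, uint16_t bytesToRead)
{
    struct i2c_msg msgs[2] = {
        { .addr = 0x40, .flags = 0,        .len = 1,           .buf = &reg  },
        { .addr = 0x40, .flags = I2C_M_RD, .len = bytesToRead, .buf = rxbuf },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };
    return ioctl(handler, I2C_RDWR, &xfer) < 0 ? -1 : 0;
}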
Edited:
After some discussion, I suggest you try this:
void dataRequest() {
Wire.write(0x76);
x = 0;
}
void dataReceive(int numBytes)
{
x = numBytes;
for (int i = 0; i < numBytes; i++) {
Wire.read();
}
}
Here x is a global variable declared in the header and set to 0 in setup(). You may try adding a simple if condition in the main loop, e.g. if x > 0, print something with Serial.print() as a debug message, then reset x to 0.
With this you are not blocking the I2C operation with the serial traffic.

C on embedded system w/ linux kernel - mysterious adc read issue

I'm developing on an AD Blackfin BF537 DSP running uClinux. I have a total of 32MB SD-RAM available. I have an ADC attached, which I can access using a simple, blocking call to read().
The most interesting part of my code is below. Running the program seems to work just fine, I get a nice data package that I can fetch from the SD-card and plot. However, if I comment out the float calculation part (as noted in the code), I get only zeroes in the ft_all.raw file. The same occurs if I change optimization level from -O3 to -O0.
I've tried countless combinations of all sorts of things, and sometimes it works, sometimes it does not - earlier (with minor modifications to below), the code would only work when optimization was disabled. It may also break if I add something else further down in the file.
My suspicion is that the data transferred by the read() call may not have been transferred fully (is that possible, even though it returns the correct number of bytes?). This is also the first time I initialize pointers using direct memory addresses, and I have no idea how the compiler reacts to that - perhaps I missed something here?
I've spent days on this issue now, and I'm getting desperate - I would really appreciate some help on this one! Thanks in advance.
// Clear the top 16M memory for data processing
memset((int *)0x01000000,0x0000,(size_t)SIZE_16M);
/* Prep some pointers for data processing */
int16_t *buffer;
int16_t *buf16I, *buf16Q;
buffer = (int16_t *)(0x1000000);
buf16I = (int16_t *)(0x1600000);
buf16Q = (int16_t *)(0x1680000);
/* Read data from ADC */
int rbytes = read(Sportfd, (int16_t*)buffer, 0x200000);
if (rbytes != 0x200000) {
printf("could not sample data! %X\n",rbytes);
goto end;
} else {
printf("Read %X bytes\n",rbytes);
}
FILE *outfd;
int wbytes;
/* Commenting this region results in all zeroes in ft_all.raw */
float a,b;
int c;
b = 0;
for (c = 0; c < 1000; c++) {
a = c;
b = b+pow(a,3);
}
printf("b is %.2f\n",b);
/* Only 12 LSBs of each 32-bit word is actual data.
* First 20 bits of nothing, then 12 bits I, then 20 bits
* nothing, then 12 bits Q, etc...
* Below, the I and Q parts are scaled with a factor of 16
* and extracted to buf16I and buf16Q.
* */
int32_t *buf32;
buf32 = (int32_t *)buffer;
uint32_t i = 0;
uint32_t n = 0;
while (n < 0x80000) {
buf16I[i] = buf32[n] << 4;
n++;
buf16Q[i] = buf32[n] << 4;
i++;
n++;
}
printf("Saving to /mnt/sd/d/ft_all.raw...");
outfd = fopen("/mnt/sd/d/ft_all.raw", "w+");
if (outfd == NULL) {
printf("Could not open file.\n");
}
wbytes = fwrite((int*)0x1600000, 1, 0x100000, outfd);
fclose(outfd);
if (wbytes < 0x100000) {
printf("wbytes not correct (= %d) \n", (int)wbytes);
}
printf(" done.\n");
Edit: The code seems to work perfectly well if I use read() to read data from a simple file rather than the ADC. This leads me to believe that the rather hacky-looking code when extracting the I and Q parts of the input is working as intended. Inspecting the assembly generated by the compiler confirms this.
I'm trying to get in touch with the developer of the ADC driver to see if he has an explanation of this behaviour.
The ADC is connected through a SPORT, and is opened as such:
sportfd = open("/dev/sport1", O_RDWR);
ioctl(sportfd, SPORT_IOC_CONFIG, spconf);
And here are the options used when configuring the SPORT:
spconf->int_clk = 1;
spconf->word_len = 32;
spconf->serial_clk = SPORT_CLK;
spconf->fsync_clk = SPORT_CLK/34;
spconf->fsync = 1;
spconf->late_fsync = 1;
spconf->act_low = 1;
spconf->dma_enabled = 1;
spconf->tckfe = 0;
spconf->rckfe = 1;
spconf->txse = 0;
spconf->rxse = 1;
A bfin_sport.h file from Analog Devices is also included: https://gist.github.com/tausen/5516954
Update
After a long night of debugging with the previous developer on the project, it turned out the issue was not related to the code shown above at all. As Chris suggested, it was indeed an issue with the SPORT driver and the ADC configuration.
While debugging, this error message appeared whenever the data was "broken": bfin_sport: sport ffc00900 status error: TUVF. While this doesn't make much sense in the application, it was clear from printing the data that something was out of sync: whenever the status error was shown, the data in buffer was of the form 0x12000000, 0x34000000, ... rather than 0x00000012, 0x00000034, .... It seems clear, then, why buf16I and buf16Q contained only zeroes (since I am extracting the 12 LSBs).
Putting in a few calls to usleep() between stages of ADC initialization and configuration seems to have fixed the issue - I'm hoping it stays that way!
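For reference, a sketch of what that amounts to (the delay values here are placeholders, not necessarily the ones we settled on):
sportfd = open("/dev/sport1", O_RDWR);
usleep(10000);                             /* placeholder settling delay */
ioctl(sportfd, SPORT_IOC_CONFIG, spconf);
usleep(10000);                             /* placeholder delay before reading the ADC */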
