C on embedded system w/ Linux kernel - mysterious ADC read issue

I'm developing on an Analog Devices Blackfin BF537 DSP running uClinux, with a total of 32 MB of SDRAM available. I have an ADC attached, which I can access using a simple, blocking call to read().
The most interesting part of my code is below. Running the program seems to work just fine, I get a nice data package that I can fetch from the SD-card and plot. However, if I comment out the float calculation part (as noted in the code), I get only zeroes in the ft_all.raw file. The same occurs if I change optimization level from -O3 to -O0.
I've tried countless combinations of all sorts of things, and sometimes it works, sometimes it does not - earlier (with minor modifications to below), the code would only work when optimization was disabled. It may also break if I add something else further down in the file.
My suspicion is that the data transferred by the read() call may not have been transferred fully (is that possible, even though it returns the correct number of bytes?). This is also the first time I initialize pointers using direct memory addresses, and I have no idea how the compiler reacts to this - perhaps I missed something here?
I've spent days on this issue now, and I'm getting desperate - I would really appreciate some help on this one! Thanks in advance.
// Clear the top 16M memory for data processing
memset((int *)0x01000000,0x0000,(size_t)SIZE_16M);
/* Prep some pointers for data processing */
int16_t *buffer;
int16_t *buf16I, *buf16Q;
buffer = (int16_t *)(0x1000000);
buf16I = (int16_t *)(0x1600000);
buf16Q = (int16_t *)(0x1680000);
/* Read data from ADC */
int rbytes = read(Sportfd, (int16_t*)buffer, 0x200000);
if (rbytes != 0x200000) {
    printf("could not sample data! %X\n",rbytes);
    goto end;
} else {
    printf("Read %X bytes\n",rbytes);
}
FILE *outfd;
int wbytes;
/* Commenting this region results in all zeroes in ft_all.raw */
float a,b;
int c;
b = 0;
for (c = 0; c < 1000; c++) {
    a = c;
    b = b+pow(a,3);
}
printf("b is %.2f\n",b);
/* Only 12 LSBs of each 32-bit word is actual data.
* First 20 bits of nothing, then 12 bits I, then 20 bits
* nothing, then 12 bits Q, etc...
* Below, the I and Q parts are scaled with a factor of 16
* and extracted to buf16I and buf16Q.
* */
int32_t *buf32;
buf32 = (int32_t *)buffer;
uint32_t i = 0;
uint32_t n = 0;
while (n < 0x80000) {
    buf16I[i] = buf32[n] << 4;
    n++;
    buf16Q[i] = buf32[n] << 4;
    i++;
    n++;
}
printf("Saving to /mnt/sd/d/ft_all.raw...");
outfd = fopen("/mnt/sd/d/ft_all.raw", "w+");
if (outfd == NULL) {
    printf("Could not open file.\n");
}
wbytes = fwrite((int*)0x1600000, 1, 0x100000, outfd);
fclose(outfd);
if (wbytes < 0x100000) {
    printf("wbytes not correct (= %d) \n", (int)wbytes);
}
printf(" done.\n");
Edit: The code seems to work perfectly well if I use read() to read data from a simple file rather than from the ADC. This leads me to believe that the rather hacky-looking code extracting the I and Q parts of the input is working as intended. Inspecting the assembly generated by the compiler confirms this.
I'm trying to get in touch with the developer of the ADC driver to see if he has an explanation of this behaviour.
The ADC is connected through a SPORT, and is opened as such:
sportfd = open("/dev/sport1", O_RDWR);
ioctl(sportfd, SPORT_IOC_CONFIG, spconf);
And here are the options used when configuring the SPORT:
spconf->int_clk = 1;
spconf->word_len = 32;
spconf->serial_clk = SPORT_CLK;
spconf->fsync_clk = SPORT_CLK/34;
spconf->fsync = 1;
spconf->late_fsync = 1;
spconf->act_low = 1;
spconf->dma_enabled = 1;
spconf->tckfe = 0;
spconf->rckfe = 1;
spconf->txse = 0;
spconf->rxse = 1;
A bfin_sport.h file from Analog Devices is also included: https://gist.github.com/tausen/5516954
Update
After a long night of debugging with the previous developer on the project, it turned out the issue was not related to the code shown above at all. As Chris suggested, it was indeed an issue with the SPORT driver and the ADC configuration.
While debugging, this error message appeared whenever the data was "broken": bfin_sport: sport ffc00900 status error: TUVF. While this doesn't make much sense in the application, it was clear from printing the data that something was out of sync: whenever the status error was shown, the data in buffer was of the form 0x12000000, 0x34000000, ... rather than 0x00000012, 0x00000034, ... It is then clear why buf16I and buf16Q contained only zeroes (since I am extracting the 12 LSBs).
Putting in a few calls to usleep() between stages of ADC initialization and configuration seems to have fixed the issue - I'm hoping it stays that way!
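For anyone hitting the same TUVF error, the workaround amounts to something like the sketch below. The delay lengths and the adc_configure() placeholder are assumptions for illustration; the project's real initialization code is not shown here.
/* Sketch of the workaround, not the project's actual init code: the delay
 * lengths and the adc_configure() placeholder are assumptions. */
sportfd = open("/dev/sport1", O_RDWR);
usleep(10000);                            /* let the SPORT settle before configuring */

ioctl(sportfd, SPORT_IOC_CONFIG, spconf);
usleep(10000);                            /* give the SPORT time to apply the config */

adc_configure();                          /* however the ADC itself is set up */
usleep(10000);                            /* let ADC and SPORT sync up before the first read() */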

Related

Decoding TIFF LZW codes not yet in the dictionary

I made a decoder of LZW-compressed TIFF images, and all the parts work: it can decode large images at various bit depths, with or without horizontal prediction, except in one case. While it decodes files written by most programs (like Photoshop and Krita with various encoding options) fine, there's something very strange about the files created by ImageMagick's convert: it produces LZW codes that aren't yet in the dictionary, and I don't know how to handle them.
Most of the time the 9 to 12-bit code in the LZW stream that isn't yet in the dictionary is the next one that my decoding algorithm will try to put in the dictionary (which I'm not sure should be a problem although my algorithm fails on an image that contains such cases), but at times it can even be hundreds of codes into the future. In one case the first code after the clear code (256) is 364, which seems quite impossible given that the clear code clears my dictionary of all codes 258 and above, in another case the code is 501 when my dictionary only goes up to 317!
I have no idea how to deal with it, but it seems that I'm the only one with this problem, the decoders in other programs load such images fine. So how do they do it?
Here's the core of my decoding algorithm, obviously due to how much code is involved I can't provide complete compilable code in a compact manner, but since this is a matter of algorithmic logic this should be enough. It follows closely the algorithm described in the official TIFF specification (page 61), in fact most of the spec's pseudo code is in the comments.
void tiff_lzw_decode(uint8_t *coded, buffer_t *dec)
{
    buffer_t word={0}, outstring={0};
    size_t coded_pos; // position in bits
    int i, new_index, code, maxcode, bpc;
    buffer_t *dict={0};
    size_t dict_as=0;

    bpc = 9; // starts with 9 bits per code, increases later
    tiff_lzw_calc_maxcode(bpc, &maxcode);
    new_index = 258; // index at which new dict entries begin
    coded_pos = 0; // bit position
    lzw_dict_init(&dict, &dict_as);

    while ((code = get_bits_in_stream(coded, coded_pos, bpc)) != 257) // while ((Code = GetNextCode()) != EoiCode)
    {
        coded_pos += bpc;

        if (code >= new_index)
            printf("Out of range code %d (new_index %d)\n", code, new_index);

        if (code == 256) // if (Code == ClearCode)
        {
            lzw_dict_init(&dict, &dict_as); // InitializeTable();
            bpc = 9;
            tiff_lzw_calc_maxcode(bpc, &maxcode);
            new_index = 258;

            code = get_bits_in_stream(coded, coded_pos, bpc); // Code = GetNextCode();
            coded_pos += bpc;

            if (code == 257) // if (Code == EoiCode)
                break;

            append_buf(dec, &dict[code]); // WriteString(StringFromCode(Code));

            clear_buf(&word);
            append_buf(&word, &dict[code]); // OldCode = Code;
        }
        else if (code < 4096)
        {
            if (dict[code].len) // if (IsInTable(Code))
            {
                append_buf(dec, &dict[code]); // WriteString(StringFromCode(Code));

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, word.buf, word.len, &bpc);
                lzw_add_to_dict(&dict, &dict_as, new_index, 1, dict[code].buf, 1, &bpc); // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]); // OldCode = Code;
            }
            else
            {
                clear_buf(&outstring);
                append_buf(&outstring, &word);
                bufwrite(&outstring, word.buf, 1); // OutString = StringFromCode(OldCode) + FirstChar(StringFromCode(OldCode));

                append_buf(dec, &outstring); // WriteString(OutString);

                lzw_add_to_dict(&dict, &dict_as, new_index, 0, outstring.buf, outstring.len, &bpc); // AddStringToTable
                new_index++;
                tiff_lzw_calc_bpc(new_index, &bpc, &maxcode);

                clear_buf(&word);
                append_buf(&word, &dict[code]); // OldCode = Code;
            }
        }
    }

    free_buf(&word);
    free_buf(&outstring);
    for (i=0; i < dict_as; i++)
        free_buf(&dict[i]);
    free(dict);
}
As for the results that my code produces in such situations it's quite clear from how it looks that it's only those few codes that are badly decoded, everything before and after is properly decoded, but obviously in most cases the subsequent image after one of these mystery future codes is ruined by virtue of shifting the rest of the decoded bytes by a few places. That means that my reading of the 9 to 12-bit code stream is correct, so this really means that I see a 364 code right after a 256 dictionary-clearing code.
Edit: Here's an example file that contains such weird codes. I've also found a small TIFF LZW loading library that suffers from the same problem, it crashes where my loader finds the first weird code in this image (code 3073 when the dictionary only goes up to 2051). The good thing is that since it's a small library you can test it with the following code:
#include "loadtiff.h"
#include "loadtiff.c"
void loadtiff_test(char *path)
{
    int width, height, format;
    floadtiff(fopen(path, "rb"), &width, &height, &format);
}
And if anyone insists on diving into my code (which should be unnecessary, and it's a big library) here's where to start.
The bogus codes come from trying to decode more than we're supposed to. The problem is that an LZW strip may sometimes not end with an End-of-Information (257) code, so the decoding loop has to stop once a certain number of decoded bytes has been output. That number of bytes per strip is given by the TIFF tags as ROWSPERSTRIP * IMAGEWIDTH * BITSPERSAMPLE / 8, and if PLANARCONFIG is 1 (which means interleaved channels as opposed to planar), it is multiplied by SAMPLESPERPIXEL. So on top of stopping the decoding loop when a 257 code is encountered, the loop must also be stopped after that count of decoded bytes has been reached.
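A minimal sketch of that byte budget, assuming the relevant tag values (ROWSPERSTRIP, IMAGEWIDTH, BITSPERSAMPLE, SAMPLESPERPIXEL, PLANARCONFIG) have already been read from the IFD; the variable and helper names here are made up for illustration:
#include <stddef.h>
#include <stdint.h>

/* Sketch: how many decoded bytes one strip is expected to produce.
 * The parameter names are assumptions standing in for the TIFF tag values. */
static size_t tiff_strip_decoded_bytes(uint32_t rows_per_strip, uint32_t image_width,
                                       uint32_t bits_per_sample, uint32_t samples_per_pixel,
                                       uint32_t planar_config)
{
    size_t n = (size_t)rows_per_strip * image_width * bits_per_sample / 8;
    if (planar_config == 1)            /* interleaved channels */
        n *= samples_per_pixel;
    return n;
}
The decoding loop then stops either on the EOI code or once dec has grown by that many bytes for the current strip, whichever comes first.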

--fill command on PIC32MX boot flash memory

I have been trying for the last few weeks to find out why this isn't working. I have tried reading all the documentation I could find on my PIC32 MCU (PIC32MX795F512L) and the XC32 compiler I am using (v1.34) but with no success as of yet.
I need a special constant value written to the physical boot flash address 0x1FC02FEC (Virtual address: 0x9FC02FEC). This constant is 0x3BDFED92.
I have managed to successfully do this in my main program (if I program my PIC32 directly using Real ICE) by means of the following command line (which I put in xc32-ld under "Additional options" in the Project Properties):
--fill=0x3bdfed92#0x9FC02FEC
I am then able to check (inside my main program) whether this address indeed has the correct value stored in it, and this works too. I use the following code for that:
if(*(int *)(0x9fc02fec) == 0x3bdfed92)
My problem is the following. I do not want my main program hex file to write the constant into that location. I want my bootloader hex file to do this, and my main program must just be able to read that location and see if the constant is there. If I use the --fill command in the xc32-ld options of my bootloader program, it successfully writes the constant just like the main program did (I have tested this by running my bootloader program with the same --fill command in debug mode and checking the 0x1FC02FEC address for the constant). The problem is, when my bootloader reads in a new main program via the MicroSD and then jumps to the new main program, nothing works. It seems like, before it jumps to the new main program, something bad happens and everything crashes. Almost as if writing a value to the 0x1FC02FEC location is a problem when the program jumps from bootloader to main program.
Is there a reason for this? I hope my explanation is ok, if not then please let me know and I will try reword it in a more understandable way.
I am using the example code provided by Microchip to do the bootloader using the MicroSD card. The following is the code:
int main(void)
{
    volatile UINT i;
    volatile BYTE led = 0;

    // Setup configuration
    (void)SYSTEMConfig(SYS_FREQ, SYS_CFG_WAIT_STATES | SYS_CFG_PCACHE);

    InitLED();

    TRISBbits.TRISB14 = 0;
    LATBbits.LATB14 = 0;

    ClrWdt();

    // Create a startup delay to resolve trigger switch bouncing issues
    unsigned char x;
    WORD ms = 500;
    DWORD dwCount = 25;
    while(ms--)
    {
        ClrWdt();
        x=4;
        while(x--)
        {
            volatile DWORD _dcnt;
            _dcnt = dwCount*((DWORD)(0.00001/(1.0/GetInstructionClock())/10));
            while(_dcnt--)
            {
                #if defined(__C32__)
                    Nop();
                    Nop();
                    Nop();
                #endif
            }
        }
    }

    if(!CheckTrigger() && ValidAppPresent())
    {
        // This means the switch is not pressed. Jump
        // directly to the application
        JumpToApp();
    }
    else if(CheckTrigger() && ValidAppPresent()){
        if(MDD_MediaDetect()){
            if(FSInit()){
                myFile = FSfopen("image.hex","r");
                if(myFile == NULL){
                    JumpToApp();
                }
            }
            else{
                JumpToApp();
            }
        }
        else{
            JumpToApp();
        }
    }

    //Initialize the media
    while (!MDD_MediaDetect())
    {
        // Waiting for media to be inserted.
        BlinkLED();
    }

    // Initialize the File System
    if(!FSInit())
    {
        //Indicate error and stay in while loop.
        Error();
        while(1);
    }

    myFile = FSfopen("image.hex","r");
    if(myFile == NULL)// Make sure the file is present.
    {
        //Indicate error and stay in while loop.
        Error();
        while(1);
    }

    // Erase Flash (Block Erase the program Flash)
    EraseFlash();

    // Initialize the state-machine to read the records.
    record.status = REC_NOT_FOUND;

    while(1)
    {
        ClrWdt();

        // For a faster read, read 512 bytes at a time and buffer it.
        readBytes = FSfread((void *)&asciiBuffer[pointer],1,512,myFile);

        if(readBytes == 0)
        {
            // Nothing to read. Come out of this loop
            // break;
            FSfclose(myFile);
            // Something fishy. The hex file has ended abruptly, looks like there was no "end of hex record".
            //Indicate error and stay in while loop.
            Error();
            while(1);
        }

        for(i = 0; i < (readBytes + pointer); i ++)
        {
            // This state machine separates out the valid hex records from the read 512 bytes.
            switch(record.status)
            {
                case REC_FLASHED:
                case REC_NOT_FOUND:
                    if(asciiBuffer[i] == ':')
                    {
                        // We have a record found in the 512 bytes of data in the buffer.
                        record.start = &asciiBuffer[i];
                        record.len = 0;
                        record.status = REC_FOUND_BUT_NOT_FLASHED;
                    }
                    break;
                case REC_FOUND_BUT_NOT_FLASHED:
                    if((asciiBuffer[i] == 0x0A) || (asciiBuffer[i] == 0xFF))
                    {
                        // We have got a complete record. (0x0A is new line feed and 0xFF is End of file)
                        // Start the hex conversion from element
                        // 1. This will discard the ':' which is
                        // the start of the hex record.
                        ConvertAsciiToHex(&record.start[1],hexRec);
                        WriteHexRecord2Flash(hexRec);
                        record.status = REC_FLASHED;
                    }
                    break;
            }
            // Move to next byte in the buffer.
            record.len ++;
        }

        if(record.status == REC_FOUND_BUT_NOT_FLASHED)
        {
            // We still have a half read record in the buffer. The next half part of the record is read
            // when we read 512 bytes of data from the next file read.
            memcpy(asciiBuffer, record.start, record.len);
            pointer = record.len;
            record.status = REC_NOT_FOUND;
        }
        else
        {
            pointer = 0;
        }

        // Blink LED at Faster rate to indicate programming is in progress.
        led += 3;
        mLED = ((led & 0x80) == 0);
    }//while(1)

    return 0;
}
If I remember well (it has been a very long time since I used PIC32) you can add this to your linker script:
MEMORY
{
    /* ... other stuff ... */
    signature (RX) : ORIGIN = 0x9FC02FEC, LENGTH = 0x4
}
then
SECTIONS
{
    .signature_section :
    {
        BYTE(0x3b);
        BYTE(0xdf);
        BYTE(0xed);
        BYTE(0x92);
    } > signature
}
Googling around, I also found that you might be able to do it directly in your source code, I hope...
const int __attribute__((space(prog), address(0x9FC02FEC))) signature = 0x3bdfed92;
In my program I use an attribute to place a certain value at a certain location in memory space. My bootloader and App can read this location. This may be a simpler way for you to do this. This is using xc16 and a smaller part but I've done it on a PIC32 as well.
#define CHECK_SUM 0x45FB
#define CH_HIGH ((CHECK_SUM & 0xFF00) >> 8)
#define CH_LOW ((CHECK_SUM & 0x00FF) >> 0)
const char __attribute__((space(prog), address(APP_CS_LOC))) CHKSUM[2] = {CH_LOW,CH_HIGH};
Just a note, when you read it, it will be in the right format: HIGH->LOW
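For completeness, reading it back from the application side is just a matter of dereferencing that address. This is only a sketch; APP_CS_LOC and CHECK_SUM are the same values used above, and the error handling is left as a placeholder:
/* Sketch: verify the checksum the bootloader placed at APP_CS_LOC. */
const volatile unsigned char *cs = (const volatile unsigned char *)APP_CS_LOC;
unsigned int stored = ((unsigned int)cs[1] << 8) | cs[0];   /* bytes stored LOW, HIGH */
if (stored != CHECK_SUM)
{
    /* signature missing or wrong: stay in the bootloader / flag an error */
}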

Looking for the cause of a race condition on a multicore system

I am using a simple software queue based on a write index and a read index.
Introductory details: Language: C. Compiler: GCC. Optimization: -O3 with extra parameters. Architecture: ARMv7-A. CPU: multicore, 2x Cortex-A15. L2 cache: shared and enabled. L1 cache: per CPU, enabled. The architecture is supposed to be cache coherent.
CPU 1 does the writing stuff and CPU 2 does the reading stuff. Below is the very simplified example code. You can assume the initial values of the indexes are zero.
COMMON:
#define QUE_LEN 4
unsigned int my_que_write_index = 0; //memory
unsigned int my_que_read_index = 0; //memory
struct my_que_struct{
    unsigned int param1;
    unsigned int param2;
};
struct my_que_struct my_que[QUE_LEN]; //memory
CPU 1 runs:
void que_writer()
{
    unsigned int write_index_local;

    write_index_local = my_que_write_index; //my_que_write_index is in memory

    my_que[write_index_local].param1 = 16; //my_que is my queue and stored in memory also
    my_que[write_index_local].param2 = 32;
    //similar writing stuff

    ++write_index_local;
    if(write_index_local == QUE_LEN) write_index_local = 0;

    my_que_write_index = write_index_local;
}
CPU 2 runs:
void que_reader()
{
    unsigned int read_index_local, param1, param2;

    read_index_local = my_que_read_index; //also in memory

    while(read_index_local != my_que_write_index)
    {
        param1 = my_que[read_index_local].param1;
        if(param1 == 0) FATAL_ERROR;

        param2 = my_que[read_index_local].param2;
        //similar reading stuff

        my_que[read_index_local].param1 = 0;

        ++read_index_local;
        if(read_index_local == QUE_LEN) read_index_local = 0;
    }

    my_que_read_index = read_index_local;
}
Okay, in the normal case the fatal error should never occur, because param1 of a queue entry is always stored with the constant value 16. But somehow param1 sometimes turns out to be 0 and the fatal error occurs.
It is clearly some kind of race condition, but I can't figure out how it is happening. The indexes are updated separately by the two CPUs.
I don't want to fill my code with memory barriers without understanding the core of the problem. Do you have any idea how this is happening?
Details: This is a baremetal system, these codes are interrupt-disabled, and there is no preemption or task switching.
The compiler and the CPU are allowed to rearrange stores and loads as they see fit (i.e. as long as a single-threaded program would not be able to observe a difference). Of course, for multi-threaded programs these effects are very much observable.
For example, this code
write_index_local = my_que_write_index;
my_que[write_index_local].param1 = 16;
my_que[write_index_local].param2 = 32;
++write_index_local;
if(write_index_local == QUE_LEN) write_index_local = 0;
my_que_write_index = write_index_local;
could be reordered like this
a = my_que_write_index;
my_que_write_index = a == QUE_LEN - 1 ? 0 : a + 1;
my_que[a].param1 = 16;
my_que[a].param2 = 32;
Getting this stuff right requires atomics and barriers that avoid these kinds of reorderings. Check out Preshing's excellent series of blog posts to learn about atomics, this one is probably a good start: http://preshing.com/20120612/an-introduction-to-lock-free-programming/ but check out the following ones as well.
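For what it's worth, on ARMv7-A with GCC one common shape of the fix is to publish the write index with a release store and consume it with an acquire load (the __atomic builtins emit the required dmb barriers). This is only a sketch of the idea against the declarations from the question, not a drop-in replacement for your code:
/* Sketch only: the index the *other* CPU reads is published with release
 * semantics and consumed with acquire semantics, so the slot contents can
 * never be observed "after" an index that already covers them. */

void que_writer(void)
{
    unsigned int w = my_que_write_index;            /* only CPU 1 writes this */

    my_que[w].param1 = 16;
    my_que[w].param2 = 32;

    ++w;
    if (w == QUE_LEN) w = 0;

    /* release: the stores to my_que[] above become visible before the new index */
    __atomic_store_n(&my_que_write_index, w, __ATOMIC_RELEASE);
}

void que_reader(void)
{
    unsigned int r = my_que_read_index;             /* only CPU 2 writes this */

    /* acquire: if we see the new index, we also see the slot it covers */
    while (r != __atomic_load_n(&my_que_write_index, __ATOMIC_ACQUIRE))
    {
        unsigned int param1 = my_que[r].param1;
        unsigned int param2 = my_que[r].param2;
        (void)param1; (void)param2;                 /* "similar reading stuff" */

        my_que[r].param1 = 0;

        ++r;
        if (r == QUE_LEN) r = 0;
    }

    __atomic_store_n(&my_que_read_index, r, __ATOMIC_RELEASE);
}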

Getting raw data using libusb

I'm reverse engineering an ultrasound probe on the Linux side. I want to capture raw data from the probe. I'm programming in C and using the libusb API.
The device has two BULK IN endpoints (2 and 6). It sends 2048 bytes of data, but it sends them as four 512-byte blocks.
This picture shows the data flow on the Windows side, which I want to reproduce on the Linux side. You see four data blocks on endpoint 02 and after that four data blocks on endpoint 06.
But there is a timing problem: the first data block of endpoint 02 and the first data block of endpoint 06 are close to each other in time, yet in the data flow they are not shown in that sequence.
I see that the computer reads the first data block of endpoint 02 and of endpoint 06, and after that it reads the other three data blocks of each endpoint. But in the USB analyzer the data flow is displayed grouped by endpoint number, so the sequence shown is different from the order in time.
On the Linux side, I write code like this:
int index = 0;
imageBuffer2 = (unsigned char *) malloc(2048);
imageBuffer6 = (unsigned char *) malloc(2048);
while (1) {
    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6, 2048, &actual6, 0);

    //Delay
    for(index = 0; index <= 10000000; index ++)
    {
    }
}
So the result is as in the picture below.
In other words, in my code all data is read in sequence, ordered both by time and by endpoint number. My result is different from the data flow on the Windows side.
In brief, I have two BULK IN endpoints, and they start reading data close together in time. How is that possible?
It's not clear to me whether you're using a different method for getting the data on Windows or not; I'm going to assume that you are.
I'm not an expert on libusb by any means, but my guess would be that you are overwriting your data with each call, since you're using the same buffer each time. Try giving your buffer a fixed value before using the transfer method, and then evaluate the result.
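For example, something along these lines would make it obvious which bytes a call actually overwrote (0xAA is just an arbitrary marker value, and this is only a sketch using the names from your code):
/* Sketch: pre-fill the buffer with a known pattern before each transfer,
 * so any bytes still reading 0xAA afterwards were never overwritten. */
memset(imageBuffer2, 0xAA, 2048);
libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2, 2048, &actual2, 0);
printf("endpoint 2: %d bytes reported, first byte 0x%02x\n", actual2, imageBuffer2[0]);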
If it is the case, I believe something along the lines of the following would also work in C:
imageBuffer2 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer2P = imageBuffer2;
imageBuffer6 = (unsigned char *) malloc(2048);
unsigned char *imageBuffer6P = imageBuffer6;
int dataRead2 = 0;
int dataRead6 = 0;

while(dataRead2 < 2048 || dataRead6 < 2048)
{
    int actual2 = 0;
    int actual6 = 0;

    libusb_bulk_transfer(devh, BULK_EP_2, imageBuffer2P, 2048-dataRead2, &actual2, 200);
    libusb_bulk_transfer(devh, BULK_EP_6, imageBuffer6P, 2048-dataRead6, &actual6, 200);

    dataRead2 += actual2;
    dataRead6 += actual6;
    imageBuffer2P += actual2;
    imageBuffer6P += actual6;

    usleep(1);
}

Issue with SPI (Serial Peripheral Interface), stuck on ioctl()

I'm trying to access a SPI sensor using the SPIDEV driver but my code gets stuck on IOCTL.
I'm running embedded Linux on the SAM9X5EK (mounting AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:
static struct spi_board_info spidev_board_info[] = {
    {
        .modalias = "spidev",
        .max_speed_hz = 1000000,
        .bus_num = 0,
        .chips_select = 0,
        .mode = SPI_MODE_3,
    },
    ...
};
spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));
1 MHz is the maximum accepted by the sensor; I tried 500 kHz but I get an error during Linux boot (too slow, apparently). .bus_num and .chips_select should be correct (I also tried all the other combinations). As for SPI_MODE_3, I checked the datasheet for it.
I get no error while booting and devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).
#define MY_SPIDEV_DELAY_USECS 100
// #define MY_SPIDEV_SPEED_HZ 1000000
#define MY_SPIDEV_BITS_PER_WORD 8
int spidevReadRegister(int fd,
                       unsigned int num_out_bytes,
                       unsigned char *out_buffer,
                       unsigned int num_in_bytes,
                       unsigned char *in_buffer)
{
    struct spi_ioc_transfer mesg[2] = { {0}, };
    uint8_t num_tr = 0;
    int ret;

    // Write data
    mesg[0].tx_buf = (unsigned long)out_buffer;
    mesg[0].rx_buf = (unsigned long)NULL;
    mesg[0].len = num_out_bytes;
    // mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[0].cs_change = 0;
    num_tr++;

    // Read data
    mesg[1].tx_buf = (unsigned long)NULL;
    mesg[1].rx_buf = (unsigned long)in_buffer;
    mesg[1].len = num_in_bytes;
    // mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS,
    // mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[1].cs_change = 1;
    num_tr++;

    // Do the actual transmission
    if(num_tr > 0)
    {
        ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
        if(ret == -1)
        {
            printf("Error: %d\n", errno);
            return -1;
        }
    }
    return 0;
}
Then I'm using this function:
#define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
...
int fd;
fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
if (fd<=0) {
printf("Device not found\n");
exit(1);
}
uint8_t buffer1[1] = {0x3a};
uint8_t buffer2[1] = {0};
spidevReadRegister(fd, 1, buffer1, 1, buffer2);
When I run it, the code gets stuck on the ioctl()!
I did it this way because, in order to read a register on the sensor, I need to send a byte with its address in it and then get the answer back without toggling CS (however, when I tried using the write() and read() functions while learning, I got the same result: stuck on them).
I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented out that part.
Why does it get stuck? I thought it might be that the device is created but doesn't actually "feel" any hardware. As I wasn't sure whether hardware SPI0 corresponds to bus_num 0 or 1, I tried both, but still no success (btw, which one is it?).
UPDATE: I managed to get the SPI working! Well, half of it: MOSI is transmitting the right data, but CLK doesn't start... any idea?
When I'm working with SPI I always use an oscilloscope to look at the I/O outputs. If you have a 4-channel scope you can easily debug the issue and find out whether you're accessing the right I/Os, using the right speed, etc. I usually compare the signal I get to the datasheet diagram.
I think there are several issues here. First of all, SPI is bidirectional, so if you want to send something over the bus you also get something back. Therefore you always have to provide valid buffers for rx_buf and tx_buf.
Second, all members of the struct spi_ioc_transfer have to be initialized with a valid value. Otherwise they just point to some memory address and the underlying process is accessing arbitrary data, thus leading to unknown behavior.
Third, why do you use a for loop with ioctl? You already tell ioctl you have an array of spi_ioc_transfer structs, so all the defined transfers will be performed with one ioctl call.
Fourth, ioctl needs a pointer to your struct array, so the call should look like this:
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), &mesg);
You see there is room for improvement in your code.
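In plain C, the points above boil down to something like the sketch below (assuming, as in your example, a one-byte register address and a one-byte reply; this is untested on your hardware and only illustrates the idea):
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>

/* Sketch only: both transfers have a valid tx_buf AND rx_buf, and every
 * member is zeroed first so nothing is left uninitialized. */
int spidev_read_reg(int fd, uint8_t reg, uint8_t *value)
{
    uint8_t tx0 = reg, rx0 = 0, tx1 = 0;
    struct spi_ioc_transfer tr[2];
    memset(tr, 0, sizeof(tr));

    tr[0].tx_buf = (unsigned long)&tx0;     /* register address going out */
    tr[0].rx_buf = (unsigned long)&rx0;     /* SPI is full duplex: catch the dummy byte too */
    tr[0].len = 1;
    tr[0].bits_per_word = 8;
    tr[0].cs_change = 0;                    /* keep CS asserted between the two transfers */

    tr[1].tx_buf = (unsigned long)&tx1;     /* clock out a dummy byte while reading */
    tr[1].rx_buf = (unsigned long)value;
    tr[1].len = 1;
    tr[1].bits_per_word = 8;
    tr[1].cs_change = 1;

    return ioctl(fd, SPI_IOC_MESSAGE(2), tr) < 0 ? -1 : 0;
}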
This is how I do it in a C++ library for the Raspberry Pi. The whole library will soon be on GitHub; I'll update my answer when it is done.
void SPIBus::spiReadWrite(std::vector<std::vector<uint8_t> > &data, uint32_t speed,
                          uint16_t delay, uint8_t bitsPerWord, uint8_t cs_change)
{
    struct spi_ioc_transfer transfer[data.size()];
    int i = 0;
    for (std::vector<uint8_t> &d : data)
    {
        //see <linux/spi/spidev.h> for details!
        transfer[i].tx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].rx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].len = d.size(); //number of bytes in vector
        transfer[i].speed_hz = speed;
        transfer[i].delay_usecs = delay;
        transfer[i].bits_per_word = bitsPerWord;
        transfer[i].cs_change = cs_change;
        i++;
    }

    int status = ioctl(this->fileDescriptor, SPI_IOC_MESSAGE(data.size()), &transfer);
    if (status < 0)
    {
        std::string errMessage(strerror(errno));
        throw std::runtime_error("Failed to do full duplex read/write operation "
                                 "on SPI Bus " + this->deviceNode + ". Error message: " +
                                 errMessage);
    }
}
