Calculating the delay between write and read on I2C in Linux

I am currently working with I2C on Arch Linux ARM and am not quite sure how to calculate the absolute minimum delay required between a write and a read. If I don't have this delay, the read naturally does not come through. I have simply applied usleep(1000) between the two commands, which works, but that value was found empirically and should be optimized to the real value (somehow). But how?
Here is a code sample of the write_and_read function I am using:
int write_and_read(int handler, char *buffer, const int bytesToWrite, const int bytesToRead) {
    if (write(handler, buffer, bytesToWrite) != bytesToWrite) {
        return -1;
    }
    usleep(1000); /* empirically chosen delay - how should it be sized? */
    int r = read(handler, buffer, bytesToRead);
    if (r != bytesToRead) {
        return -1;
    }
    return 0;
}

Normally there's no need to wait. If your writing and reading functions are somehow threaded in the background (why would you do that???), then synchronizing them is mandatory.
I2C is a very simple linear communication, and all the devices I have used were able to produce their output data within microseconds.
Are you using 100 kHz, 400 kHz or 1 MHz I2C?
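If the delay you are fighting is the gap between the address write and the read, the cleanest fix on Linux is to let the kernel issue both as one combined transaction with a repeated start, so no sleep is needed at all. A minimal sketch using the i2c-dev I2C_RDWR ioctl (assuming your adapter driver supports combined transfers and slave_addr is your device's 7-bit address):

#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>

/* Write out_buffer, then read into in_buffer with a repeated start in
 * between; the kernel performs both messages back to back. */
int write_then_read(int handler, int slave_addr,
                    unsigned char *out_buffer, int bytesToWrite,
                    unsigned char *in_buffer, int bytesToRead)
{
    struct i2c_msg msgs[2] = {
        { .addr = slave_addr, .flags = 0,        .len = bytesToWrite, .buf = out_buffer },
        { .addr = slave_addr, .flags = I2C_M_RD, .len = bytesToRead,  .buf = in_buffer  },
    };
    struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

    /* ioctl returns the number of messages transferred on success */
    return (ioctl(handler, I2C_RDWR, &xfer) == 2) ? 0 : -1;
}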
Edited:
After some discussion I suggest you try this:
void dataRequest() {
    Wire.write(0x76);
    x = 0;
}

void dataReceive(int numBytes)
{
    x = numBytes;
    for (int i = 0; i < numBytes; i++) {
        Wire.read();
    }
}
Here x is a global variable defined in the header and assigned 0 in setup(). You may try adding a simple if condition to the main loop, e.g. if x > 0, send something via Serial.print() as a debug message, then reset x to 0.
With this you are not blocking the I2C operation with the serial traffic.
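The main-loop side of that suggestion might look like this (a sketch; note that x should be declared volatile, since the Wire callbacks modify it from interrupt context):

volatile int x = 0;  // written by dataReceive()/dataRequest(), read in loop()

void loop() {
    if (x > 0) {
        Serial.print("received bytes: ");  // debug output outside the I2C callbacks
        Serial.println(x);
        x = 0;
    }
}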

Related

Too much time taken to copy the data

I have a problem with the time a for loop takes: I don't understand why the loop needs so much time to copy such a small amount of data.
I am using a PIC24FJ256GL406 MCU and everything is fine, but when the UART operates, the microcontroller runs slowly because of a delay that occurs while copying the buffer data.
Let me explain with code and a debug log.
Here is the function for the UART transmission. It generally transfers only the FIRST character; the rest of the bytes are transferred in the interrupt service routine.
My clock frequency is 32 MHz, so the peripheral clock is 16 MHz.
I do not understand why the for loop takes almost 20 ms just to copy 16 bytes of data, and this time will increase with more data.
unsigned int UART1_WriteBuffer(const uint8_t *buffer, const unsigned int bufLen)
{
    // transmit first char
    U1TXREG = buffer[0];
    while (!U1STAbits.TRMT);

    numBytesWritten = bufLen - 1;
    totalByte = 0;

    // get the current timestamp
    WSTimestamp currentTimeStamp = WSGetCurrTimestamp();
    WMLogInfo(GEN_LOG, "current time stamp %ld", currentTimeStamp);

    uint16_t i = 0;
    memset(&uart1_txByteQ, 0x00, sizeof(uart1_txByteQ));
    for (i = 0; i < numBytesWritten; i++)
    {
        uart1_txByteQ[i] = buffer[i + 1]; // copy the data
    }
    _U1TXIE = 1;

    // get the last timestamp
    WSTimestamp lastTimeStamp = WSGetCurrTimestamp();
    WMLogInfo(GEN_LOG, "last time stamp %ld ", lastTimeStamp);
    // print the debug
    WMLogInfo(GEN_LOG, "total = %ld MS time taken fo copy the = %d byte", lastTimeStamp - currentTimeStamp, numBytesWritten);
    return bufLen;
}
My interrupt service routine:
void __attribute__((interrupt, no_auto_psv)) _U1TXInterrupt(void)
{
    if (totalByte < numBytesWritten)
    {
        U1TXREG = uart1_txByteQ[totalByte++];
        while (!U1STAbits.TRMT);
    }
    else
    {
        _U1TXIE = 0;
    }
}
Console log (this is the function's log; please note the timestamps):
GEN:main loop current time stamp 6503<\r><\n>
GEN:current time stamp 6506<\r><\n>
GEN:last time stamp 6526 <\r><\n>
GEN:total = 20 MS time taken fo copy the = 16 byte<\r><\n>
GEN:command "AT+QREFUSECS=1,1<\r>" send with len [17]
The long delays are caused by busy-waiting on status flags, combined with a use-case bug. Why are you using the Tx interrupt in the first place if you intend to busy-wait on a flag anyway? The ISR doesn't make sense as written; it would seem that you should simply drop Tx interrupts entirely (or at least fix the ISR, as sketched below).
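If you do keep the interrupt-driven approach, the ISR should only clear the flag and load the next byte, never poll TRMT. A sketch using the usual XC16/PIC24 register names and the same globals as in your code:

void __attribute__((interrupt, no_auto_psv)) _U1TXInterrupt(void)
{
    _U1TXIF = 0;  /* clear the Tx interrupt flag */

    if (totalByte < numBytesWritten)
    {
        U1TXREG = uart1_txByteQ[totalByte++];  /* hardware interrupts again when ready */
    }
    else
    {
        _U1TXIE = 0;  /* nothing left to send: disable Tx interrupts */
    }
}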
In addition, you have a naive implementation of buffer copies. It's a very common embedded-systems beginner bug to hard-copy RAM buffers needlessly. There are very few cases where you actually need a hard copy, and this isn't one. Instead, use a system with double buffers and simply swap a pointer between them:
static uint8_t buf1[n];
static uint8_t buf2[n];
static uint8_t *rx_buf  = buf1; // buffer used by the rx ISR
static uint8_t *app_buf = buf2; // buffer used by the application
...
if (rx_done)
{   // swap buffers
    uint8_t *tmp = rx_buf;
    rx_buf  = app_buf;
    app_buf = tmp;
}
This also solves the double-buffering problem, where a UART rx interrupt needs to store its incoming data somewhere at the same time as the main program is using the previous data. You'll need to protect against race conditions somehow, with a semaphore or similar, and you'll need to declare variables shared with ISRs volatile so the compiler cannot optimize the accesses away.
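Concretely, the shared declarations and the swap might look like this (a sketch refining the snippet above; _U1RXIE is the usual XC16 name for the UART1 rx interrupt enable bit, briefly cleared here as the simplest race guard):

static volatile uint8_t  rx_done = 0;      /* set by the rx ISR, cleared by main */
static uint8_t *volatile rx_buf  = buf1;   /* the pointers themselves are shared */
static uint8_t *volatile app_buf = buf2;
...
if (rx_done)
{
    _U1RXIE = 0;              /* keep the rx ISR away during the swap */
    uint8_t *tmp = rx_buf;
    rx_buf  = app_buf;
    app_buf = tmp;
    rx_done = 0;
    _U1RXIE = 1;
}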

Linkage problem with extern variable when compiling?

I'm using MikroC for PIC v7.2 to program a PIC18F67K40.
Within functii.h, I have the following variable declaration:
extern volatile unsigned char byte_count;
Within main.c, the following code:
#include <functii.h>
// ...
volatile unsigned char byte_count = 0;
// ...
void interrupt () {
    if (RC1IF_bit) {
        uart_rx = Uart1_read();
        uart_string[byte_count] = uart_rx;
        byte_count++;
    }
    // ...
}
Then, within command.c, I have the following code:
#include <functii.h>
void how_many_bytes () {
    // ...
    uart1_write(byte_count);
    // ...
}
In main.c, I process data coming through the UART using an interrupt. Once the end-of-transmission character is received, I call how_many_bytes(), which sends back the length of the received message (plus the data bytes themselves; I haven't included that code here, but it's all OK!!).
The problem is that in the uart1_write() call, byte_count is always 0, instead of holding the value incremented in the interrupt routine.
You probably need some synchronization between the interrupt handler and the main processing.
If you do something like this
if (byte_count != 0) {
    uart1_write(byte_count);
    byte_count = 0;
}
the interrupt can occur anywhere, for example
between if (byte_count != 0) and uart1_write(byte_count); or
during the processing of uart1_write(byte_count), which then uses a copy of the old value while the value gets changed, or
between uart1_write(byte_count); and byte_count = 0;.
With the code above, case 1 is no problem, but 2 and 3 are: you would lose all characters received after byte_count is read for the function call.
Maybe you can disable/enable interrupts at certain points.
A better solution might be not to reset byte_count outside of interrupt(), but instead to implement a ring buffer with separate read and write indices. The read index would be modified only by how_many_bytes() (or uart1_write()) and the write index only by interrupt(), as in the sketch below.
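A minimal single-producer/single-consumer ring buffer along those lines (a sketch, not MikroC-specific; the power-of-two size makes the index wrap a cheap mask, and each index is written from only one context):

#define RX_BUF_SIZE 64   /* power of two so the wrap is a cheap mask */

volatile unsigned char rx_buf[RX_BUF_SIZE];
volatile unsigned char rx_head = 0;   /* written only by the interrupt */
volatile unsigned char rx_tail = 0;   /* written only by the main loop */

/* called from interrupt() when RC1IF_bit is set */
void rx_store(unsigned char c) {
    rx_buf[rx_head] = c;
    rx_head = (rx_head + 1) & (RX_BUF_SIZE - 1);
}

/* called from the main loop: number of unread bytes */
unsigned char rx_available(void) {
    return (rx_head - rx_tail) & (RX_BUF_SIZE - 1);
}

/* pop one byte; call only when rx_available() > 0 */
unsigned char rx_get(void) {
    unsigned char c = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1) & (RX_BUF_SIZE - 1);
    return c;
}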

ARM USB functions take too long to execute

I just started working with an XMC4500 microcontroller. I'm currently implementing USB CDC communication and I ran into a problem.
After calling the function USBD_VCOM_SendData, the program waits for a frame to send the data.
More precisely, the program waits in the usbd_endpoint_stream_xmc4000.c file, in Endpoint_Write_Stream_LE, where it waits for the endpoint to become ready in the Endpoint_WaitUntilReady() function.
#define USB_STREAM_TIMEOUT_MS 100

uint8_t Endpoint_WaitUntilReady(void)
{
#if (USB_STREAM_TIMEOUT_MS < 0xFF)
    uint8_t TimeoutMSRem = USB_STREAM_TIMEOUT_MS;
#else
    uint16_t TimeoutMSRem = USB_STREAM_TIMEOUT_MS;
#endif
    uint16_t PreviousFrameNumber = USB_Device_GetFrameNumber();

    for (;;)
    {
        if (Endpoint_GetEndpointDirection() == ENDPOINT_DIR_IN)
        {
            if (Endpoint_IsINReady())
            {
                return ENDPOINT_READYWAIT_NoError;
            }
        }
        else
        {
            if (Endpoint_IsOUTReceived())
            {
                return ENDPOINT_READYWAIT_NoError;
            }
        }

        uint8_t USB_DeviceState_LCL = USB_DeviceState;

        if (USB_DeviceState_LCL == DEVICE_STATE_Unattached)
        {
            return ENDPOINT_READYWAIT_DeviceDisconnected;
        }
        else if (USB_DeviceState_LCL == DEVICE_STATE_Suspended)
        {
            return ENDPOINT_READYWAIT_BusSuspended;
        }
        else if (Endpoint_IsStalled())
        {
            return ENDPOINT_READYWAIT_EndpointStalled;
        }

        uint16_t CurrentFrameNumber = USB_Device_GetFrameNumber();

        if (CurrentFrameNumber != PreviousFrameNumber)
        {
            PreviousFrameNumber = CurrentFrameNumber;

            if (!(TimeoutMSRem--))
            {
                return ENDPOINT_READYWAIT_Timeout;
            }
        }
    }
}
This wait time is approximately 100-150 µs, which is too long. When I was working with STM32 microcontrollers, the execution time was significantly smaller.
Has anybody dealt with this problem before?
Is there a way to write the data to a buffer and then let the peripheral take care of the transaction without needing processor time?
Or at least trigger an interrupt when the endpoint is ready for the transaction?
The loop terminates instantly if Endpoint_IsINReady() returns true, so perhaps you should evaluate that macro in your own code and only send messages to the USB host when it is true.
Also, to ensure that the library does not fall into its blocking behavior, you'll have to think about how much data the endpoint buffers can hold at once, and make sure you never send more than that at a time. (You can keep your own buffer and feed it to the USB library piecemeal if you need to.)
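A sketch of that approach (USBD_VCOM_SendData and Endpoint_IsINReady are the names from your stack; the chunk size and the SendData signature are assumptions to check against your library, and Endpoint_IsINReady() assumes the CDC IN endpoint is the currently selected one, as in the library's own send path):

#define EP_CHUNK 64  /* assumed IN endpoint buffer size - check your descriptors */

/* Hand the stack as much data as the endpoint will take right now, without
 * blocking. Returns the number of bytes consumed; call again from the main
 * loop with the remainder. */
uint32_t vcom_send_nonblocking(const uint8_t *data, uint32_t len)
{
    uint32_t sent = 0;

    while (sent < len && Endpoint_IsINReady())  /* only send when no wait is needed */
    {
        uint32_t n = (len - sent > EP_CHUNK) ? EP_CHUNK : (len - sent);
        USBD_VCOM_SendData((int8_t *)&data[sent], n);  /* signature assumed */
        sent += n;
    }
    return sent;
}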

Displaying 2D arrays of pixel values in a console application

I'm continuously sending 2D arrays of pixel values (uint32) from LabVIEW to a C program through TCP/IP, with a resolution of 160x120. The purpose of the C program is to display the received pixel values as 2D arrays in the console application. I'm sending the pixels as a stream of bytes and using the recv function in Ws2_32.lib to receive the bytes in the C program. Then I convert the bytes to uint32 values and display them in the console application using 2D arrays, so every 2D array represents an image.
I have an issue with the frame rate though. I'm able to send 30 frames per second in LabVIEW, but when I open the TCP/IP connection to the C program, the frame rate goes down to 1 frame per second. It must be an issue with the C program, since I managed to send the desired frame rate from the same LabVIEW program to a corresponding C# program.
The C code:
#define DEFAULT_BUFLEN 256
#define IMAGEX 120
#define IMAGEY 160

WSADATA wsa;
SOCKET s, new_socket;
struct sockaddr_in server, client;
int c;
int iResult;
char recvbuf[DEFAULT_BUFLEN];
int recvbuflen = DEFAULT_BUFLEN;

typedef unsigned int uint32_t;
unsigned int x = 0, y = 0, i, n;
uint32_t image[IMAGEX][IMAGEY];
size_t len;
uint32_t *p;
p = (uint32_t *)recvbuf;

do
{
    iResult = recv(new_socket, recvbuf, recvbuflen, 0);
    len = iResult / sizeof(uint32_t);
    for (i = 0; i < len; i++)
    {
        image[x][y] = p[i];
        x++;
        if (x >= IMAGEX)
        {
            x = 0;
            y++;
        }
        if (y >= IMAGEY)
        {
            y = 0;
            x = 0;
            // print image
            for (n = 0; n < IMAGEX * IMAGEY; n++)
            {
                printf("%d", image[n % IMAGEX][n / IMAGEX]); /* x was incremented fastest, so n / IMAGEX recovers the column */
                if (n % IMAGEX)
                {
                    printf(" ");
                }
                else
                {
                    printf("\n");
                }
            }
        }
    }
} while (iResult > 0);
Try reducing the prints. Since you are reading and printing in the same thread, the data in the TCP connection will fill up and back-pressure the other end (LabVIEW), and LabVIEW will stop sending data until it gets the green light from the other end (your C program).
To start with, you can debug by replacing this:
for (n = 0; n < IMAGEX * IMAGEY; n++)
{
    printf("%d", image[n % IMAGEX][n / IMAGEX]);
    if (n % IMAGEX)
    {
        printf(" ");
    }
    else
    {
        printf("\n");
    }
}
with
printf("One frame recv\n");
and see if it makes any difference. I am assuming your TCP connection has ample bandwidth.
Very hard to diagnose without further information, but I can give a few suggestions.
First of all, your recv call is using a small buffer, so you are spending a lot of time calling it. Why not read a whole frame at a time? Also, you read in the data and then copy it to the image array; wouldn't it be simpler to just use the image array itself? Combining those two suggestions would have recv reading a full frame directly into the image array, saving a lot of time (see the sketch below).
Another source of the problem could be the console. With the sample code you provided, you are attempting to write 30*120*160 = 576,000 integer values per second to the terminal. If the average value, with delimiter, takes up 8 characters, that's over 4 million characters per second. It's entirely possible that the display just can't go that fast, in which case things would back up and slow down all the way to the server writing to the socket.
There are several ways to handle this, but it's too much to go into here.
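For instance (a sketch; recv_all is a hypothetical helper that deals with TCP's partial reads, and image is the 120x160 uint32 array from the question, received in one piece per frame):

/* Receive exactly len bytes; TCP is a stream and may return partial reads. */
static int recv_all(SOCKET s, char *buf, int len)
{
    int got = 0;
    while (got < len) {
        int r = recv(s, buf + got, len - got, 0);
        if (r <= 0)
            return r;   /* 0: connection closed, <0: error */
        got += r;
    }
    return got;
}

...

/* one whole frame straight into the image array - no per-chunk copying */
while (recv_all(new_socket, (char *)image, sizeof(image)) > 0)
{
    printf("One frame recv\n");   /* keep console output minimal */
}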

Issue with SPI (Serial Port Comm), stuck on ioctl()

I'm trying to access an SPI sensor using the spidev driver, but my code gets stuck on ioctl.
I'm running embedded Linux on the SAM9X5-EK (mounting an AT91SAM9G25). The device is connected to SPI0. I enabled CONFIG_SPI_SPIDEV and CONFIG_SPI_ATMEL in menuconfig and added the proper code to the BSP file:
static struct spi_board_info spidev_board_info[] = {
    {
        .modalias     = "spidev",
        .max_speed_hz = 1000000,
        .bus_num      = 0,
        .chip_select  = 0,
        .mode         = SPI_MODE_3,
    },
    ...
};
spi_register_board_info(spidev_board_info, ARRAY_SIZE(spidev_board_info));
1 MHz is the maximum accepted by the sensor; I tried 500 kHz but I get an error during Linux boot (too slow, apparently). .bus_num and .chip_select should be correct (I also tried all the other combinations). As for SPI_MODE_3, I checked the datasheet.
I get no errors while booting, and the devices appear correctly as /dev/spidevX.X. I manage to open the file and obtain a valid file descriptor. I'm now trying to access the device with the following code (inspired by examples I found online).
#define MY_SPIDEV_DELAY_USECS 100
// #define MY_SPIDEV_SPEED_HZ 1000000
#define MY_SPIDEV_BITS_PER_WORD 8

int spidevReadRegister(int fd,
                       unsigned int num_out_bytes,
                       unsigned char *out_buffer,
                       unsigned int num_in_bytes,
                       unsigned char *in_buffer)
{
    struct spi_ioc_transfer mesg[2] = { {0}, };
    uint8_t num_tr = 0;
    int ret;

    // Write data
    mesg[0].tx_buf = (unsigned long)out_buffer;
    mesg[0].rx_buf = (unsigned long)NULL;
    mesg[0].len = num_out_bytes;
    // mesg[0].delay_usecs = MY_SPIDEV_DELAY_USECS;
    // mesg[0].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[0].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[0].cs_change = 0;
    num_tr++;

    // Read data
    mesg[1].tx_buf = (unsigned long)NULL;
    mesg[1].rx_buf = (unsigned long)in_buffer;
    mesg[1].len = num_in_bytes;
    // mesg[1].delay_usecs = MY_SPIDEV_DELAY_USECS;
    // mesg[1].speed_hz = MY_SPIDEV_SPEED_HZ;
    mesg[1].bits_per_word = MY_SPIDEV_BITS_PER_WORD;
    mesg[1].cs_change = 1;
    num_tr++;

    // Do the actual transmission
    if (num_tr > 0)
    {
        ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), mesg);
        if (ret == -1)
        {
            printf("Error: %d\n", errno);
            return -1;
        }
    }
    return 0;
}
Then I'm using this function:
#define OPTICAL_SENSOR_ADDR "/dev/spidev0.0"
...
int fd;
fd = open(OPTICAL_SENSOR_ADDR, O_RDWR);
if (fd < 0) {   /* open() returns -1 on error */
    printf("Device not found\n");
    exit(1);
}

uint8_t buffer1[1] = {0x3a};
uint8_t buffer2[1] = {0};
spidevReadRegister(fd, 1, buffer1, 1, buffer2);
When I run it, the code gets stuck in ioctl!
I did it this way because, in order to read a register on the sensor, I need to send a byte with the register address in it and then get the answer back without releasing CS (however, when I tried the write() and read() functions while learning, I got the same result: stuck on them).
I'm aware that specifying .speed_hz causes an ENOPROTOOPT error on Atmel (I checked spidev.c), so I commented that part out.
Why does it get stuck? I thought it might be that the device is created but doesn't actually "feel" any hardware. As I wasn't sure whether hardware SPI0 corresponds to bus_num 0 or 1, I tried both, but still no success (by the way, which one is it?).
UPDATE: I managed to get the SPI partly working: MOSI is transmitting the right data, but CLK doesn't start. Any idea?
When I'm working with SPI I always use an oscilloscope to look at the I/O lines. If you have a 4-channel scope you can easily debug the issue and find out whether you're accessing the right I/Os, using the right speed, etc. I usually compare the signal I get to the timing diagram in the datasheet.
I think there are several issues here. First of all, SPI is bidirectional: if you want to send something over the bus, you also receive something. Therefore you always have to provide valid buffers for rx_buf and tx_buf.
Second, all members of the struct spi_ioc_transfer have to be initialized with valid values; otherwise they contain arbitrary data, the underlying code accesses arbitrary memory, and you get unknown behavior.
Third, why do you use a for loop with ioctl? You already tell ioctl that you have an array of spi_ioc_transfer structs, so all the defined transfers are performed with one ioctl call.
Fourth, ioctl needs a pointer to your struct array, so the call should look like this:
ret = ioctl(fd, SPI_IOC_MESSAGE(num_tr), &mesg);
You see, there is room for improvement in your code.
This is how I do it in a C++ library for the Raspberry Pi. The whole library will soon be on GitHub; I'll update my answer when it is done.
void SPIBus::spiReadWrite(std::vector<std::vector<uint8_t> > &data, uint32_t speed,
                          uint16_t delay, uint8_t bitsPerWord, uint8_t cs_change)
{
    struct spi_ioc_transfer transfer[data.size()];
    std::memset(transfer, 0, sizeof(transfer)); // zero all members, per point two above
    int i = 0;
    for (std::vector<uint8_t> &d : data)
    {
        // see <linux/spi/spidev.h> for details!
        transfer[i].tx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].rx_buf = reinterpret_cast<__u64>(d.data());
        transfer[i].len = d.size(); // number of bytes in vector
        transfer[i].speed_hz = speed;
        transfer[i].delay_usecs = delay;
        transfer[i].bits_per_word = bitsPerWord;
        transfer[i].cs_change = cs_change;
        i++;
    }
    int status = ioctl(this->fileDescriptor, SPI_IOC_MESSAGE(data.size()), &transfer);
    if (status < 0)
    {
        std::string errMessage(strerror(errno));
        throw std::runtime_error("Failed to do full duplex read/write operation "
                                 "on SPI Bus " + this->deviceNode + ". Error message: " +
                                 errMessage);
    }
}
