Reading / writing over I2C on Linux - C

I'm trying to read/write to a FM24CL64-GTR FRAM chip that is connected over an I2C bus at address 0b1010011.
When I try to write 3 bytes (a 2-byte data address plus one data byte), I get a kernel message ([12406.360000] i2c-adapter i2c-0: sendbytes: NAK bailout.) and the write returns a value other than 3. See the code below:
#include <linux/i2c-dev.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int file;
char filename[20];
int addr = 0x53;            // 0b1010011 /* The I2C address */
uint16_t dataAddr = 0x1234;
uint8_t val = 0x5c;
uint8_t buf[3];

sprintf(filename, "/dev/i2c-%d", 0);
if ((file = open(filename, O_RDWR)) < 0)
    exit(1);
if (ioctl(file, I2C_SLAVE, addr) < 0)
    exit(2);

buf[0] = dataAddr >> 8;     // data address, high byte first
buf[1] = dataAddr & 0xff;
buf[2] = val;               // data byte
if (write(file, buf, 3) != 3)
    exit(3);
...
However, when I write 2 bytes and then write another byte, I get no kernel error, but when I try to read from the FRAM I always get back 0. Here is the code that reads from the FRAM:
uint8_t val;

if ((file = open(filename, O_RDWR)) < 0)
    exit(1);
if (ioctl(file, I2C_SLAVE, addr) < 0)
    exit(2);
if (write(file, &dataAddr, 2) != 2)   // write the 16-bit data address
    exit(3);
if (read(file, &val, 1) != 1)         // read one byte back
    exit(3);
None of the functions return an error value, and I have also tried it with:
#include <linux/i2c.h>

struct i2c_rdwr_ioctl_data work_queue;
struct i2c_msg msg[2];
uint8_t ret;

work_queue.nmsgs = 2;
work_queue.msgs = msg;

work_queue.msgs[0].addr = addr;
work_queue.msgs[0].len = 2;
work_queue.msgs[0].flags = 0;
work_queue.msgs[0].buf = (uint8_t *)&dataAddr;

work_queue.msgs[1].addr = addr;
work_queue.msgs[1].len = 1;
work_queue.msgs[1].flags = I2C_M_RD;
work_queue.msgs[1].buf = &ret;

if (ioctl(file, I2C_RDWR, &work_queue) < 0)
    exit(3);
This also succeeds, but the data read back is always 0. Does this indicate a hardware issue, or am I doing something wrong?
Are there any FRAM drivers for the FM24CL64-GTR over I2C on Linux, and what would the API be? Any link would be helpful.

I do not have experience with that particular device, but in our experience many I2C devices have "quirks" that require a work-around, typically above the driver level.
We use Linux (CELinux) with an I2C device driver as well, but our application code also has a non-trivial I2C module that contains the work-around logic for all the various devices we have experience with.
Also, when dealing with I2C issues, I often find that I need to re-acquaint myself with the source spec:
http://www.nxp.com/acrobat_download/literature/9398/39340011.pdf
as well as the usage of a decent oscilloscope.
Good luck,
The above link is dead; here are some other links:
http://www.nxp.com/documents/user_manual/UM10204.pdf
and of course wikipedia:
http://en.wikipedia.org/wiki/I%C2%B2C

The NAK was a big hint: the WriteProtect pin was externally pulled up and had to be driven to ground. After that, a single write of the data address followed by the data bytes succeeds (first code segment).
For reading, the data address can be written out first (using write()), and then sequential data can be read starting from that address.
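For reference, a minimal sketch of that read sequence, reusing the file, addr and dataAddr variables from the first code segment (error handling kept to the bare minimum; the 16-byte read length is just an example):

uint8_t addrBuf[2];
uint8_t data[16];

addrBuf[0] = dataAddr >> 8;                 /* data address, high byte first */
addrBuf[1] = dataAddr & 0xff;
if (write(file, addrBuf, 2) != 2)           /* set the FRAM's internal address latch */
    exit(3);
if (read(file, data, sizeof(data)) != (ssize_t)sizeof(data))   /* sequential read from dataAddr */
    exit(4);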

Note that the method using struct i2c_rdwr_ioctl_data and struct i2c_msg (that is, the last code fragment you've given) is more efficient than the others, since it uses the repeated-start feature of I2C.
This means you avoid a STA-WRITE-STO -> STA-READ-<data>...-STO sequence, because the communication becomes STA-WRITE-RS-READ-<data>...-STO (RS = repeated start). So it saves you a redundant STO-STA transition.
Not that it makes a big difference in time, but if it's not needed, why waste time on it...
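For completeness, here is roughly what that combined transaction looks like with the address bytes packed explicitly (a sketch along the lines of your last snippet, same 7-bit address in addr and 16-bit data address sent high byte first; not tested against the actual chip):

uint8_t addrBuf[2] = { dataAddr >> 8, dataAddr & 0xff };
uint8_t data;
struct i2c_msg msgs[2] = {
    { .addr = addr, .flags = 0,        .len = 2, .buf = addrBuf },  /* write data address */
    { .addr = addr, .flags = I2C_M_RD, .len = 1, .buf = &data   },  /* repeated start, then read */
};
struct i2c_rdwr_ioctl_data xfer = { .msgs = msgs, .nmsgs = 2 };

if (ioctl(file, I2C_RDWR, &xfer) < 0)
    exit(3);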
Just my 2 ct.
Best rgds,

You had some mistakes!
The address of the IC is 0xAx in hex; the lower bits can vary, but the upper 4 bits must be A = 1010!
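In other words, the 0x53 from the question and an "Ax"-style address are the same thing in different notations: Linux i2c-dev expects the 7-bit form, and the kernel appends the R/W bit itself. Roughly:

#define FRAM_ADDR_7BIT   0x53                         /* 0b1010011, passed to ioctl(file, I2C_SLAVE, ...) */
#define FRAM_ADDR_WRITE  ((FRAM_ADDR_7BIT << 1) | 0)  /* 0xA6, address byte on the wire for a write */
#define FRAM_ADDR_READ   ((FRAM_ADDR_7BIT << 1) | 1)  /* 0xA7, address byte on the wire for a read */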

Related

PCIe DMA into system memory without a kernel module

I have a PCIe device with DMA functionality residing under Ubuntu Linux 14.04. I can create a DMA transfer coming from the device (confirmed with an analyzer and a target application DMAing into another device's memory). However, I am struggling to understand how to receive the data from the device into Linux system memory.
I boot with maxcpus=1 to make sure I do not run into cache issues (https://bakhi.github.io/devmem/) and with mem=2048M (https://www.oreilly.com/library/view/linux-device-drivers/0596005903/ch15.html) to make sure the kernel does not use the memory I would like to use for my DMA buffer. The Ubuntu PC has 16 GB of RAM in total, and I am trying to target physical address 0x90000000.
I followed this answer https://stackoverflow.com/a/41713401 to try to map the physical memory. The mmap appears to succeed (it returns an address). When I try to mmap memory below 2048M, it fails as expected, since that range is allocated to the kernel. Before attempting a DMA transfer, I tried simply reading and writing the memory from the C program. When I read the memory, I get some values back; when I read it again, I get the same values, which makes me think the reads might actually be working (not a very strong argument...). But when I read the memory, write to it and then read it again, I see the original values, as if the write never happened. I also tried to start a DMA transfer, but I do not see the data arriving in physical memory as expected.
Here is the code of my exercise:
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main()
{
    static volatile uint32_t *gpio = NULL;
    int fd;

    if ((fd = open("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC)) < 0) return -1;

    gpio = (volatile uint32_t *)mmap(0, 0x8000000, PROT_READ | PROT_WRITE,
                                     MAP_SHARED, fd, 0x90000000);
    if (gpio == MAP_FAILED) return -1;

    int i;
    for (i = 0; i < 32; i++)
    {
        printf("0x%08x\n", *(gpio + i));
    }

    printf("Press any key when ready to continue...\n");
    getchar();

    *(gpio + 0) = 0x11223344;
    *(gpio + 1) = 0x55667788;
    printf("---\n");

    for (i = 0; i < 32; i++)
    {
        printf("0x%08x\n", *(gpio + i));
    }

    munmap((void *)gpio, 0x8000000);
    close(fd);
    return 0;
}
My goal is to write some data through a PCIe DMA transfer from an end-point to the system memory of the host computer without a kernel module and read the data in the host computer once I know the DMA transfer has occurred. Is this even possible?

I can't receive more than 64 bytes on a custom USB CDC class based STM32 device

Currently I am trying to send 720 bytes from a Windows application to a custom STM32 device (for testing purposes I am using a Blue Pill, STM32F103xxx). I should mention that I am a complete newbie at programming :). On the device side I have 1000-byte buffers for receiving and sending (thanks to STM32Cube for this). Testing the device with a terminal program (packets smaller than 64 bytes) works. I then reworked one of the Microsoft examples to be able to send more data to the device. The device driver used on Windows is "usbser.sys". In short, my console program does the following:
Calculate sine wave samples (360 of them), 16 bits each
Send them to the USB device as 720 bytes (a byte-oriented protocol over the COM port)
My problem is that no more than 64 bytes arrive at the device.
Somewhere I read that the reason for this can be the built-in Rx/Tx Windows buffers (64 bytes long, according to something mentioned on the internet), and because of this I inserted:
SetupComm(hCom, 1000, 1000)
into the code below, in the hope that this would solve my troubles, but no. Below is "my" code; any ideas how I can fix this?
#include <windows.h>
#include <tchar.h>
#include <stdio.h>
#include <math.h>

#define PI 3.14159265

void PrintCommState(DCB dcb)
{
    // Print some of the DCB structure values
    _tprintf(TEXT("\nBaudRate = %d, ByteSize = %d, Parity = %d, StopBits = %d\n"),
             dcb.BaudRate,
             dcb.ByteSize,
             dcb.Parity,
             dcb.StopBits);
}

int _tmain(int argc, TCHAR* argv[])
{
    DCB dcb;
    HANDLE hCom;
    BOOL fSuccess;
    const TCHAR* pcCommPort = TEXT("COM3"); // Most systems have a COM1 port
    unsigned __int8 aOutputBuffer[720];     // Data that will be sent to the device
    unsigned __int16 aCalculatedWave[360];  // Calculated wave samples
    int iCnt;                               // temp counter to use everywhere

    for (iCnt = 0; iCnt < 360; iCnt = iCnt + 1)
    {
        aCalculatedWave[iCnt] = (unsigned short)(0xFFFF * sin(iCnt * PI / 180));
        if (iCnt > 180) aCalculatedWave[iCnt] = 0 - aCalculatedWave[iCnt];
    }

    // 16-bit aCalculatedWave to 8-bit aOutputBuffer
    for (int i = 0, j = 0; i < 720; i += 2, ++j)
    {
        aOutputBuffer[i] = aCalculatedWave[j] >> 8;       // Hi byte
        aOutputBuffer[i + 1] = aCalculatedWave[j] & 0xFF; // Lo byte
    }

    // Open a handle to the specified com port.
    hCom = CreateFile(pcCommPort,
                      GENERIC_READ | GENERIC_WRITE,
                      0,             // must be opened with exclusive-access
                      NULL,          // default security attributes
                      OPEN_EXISTING, // must use OPEN_EXISTING
                      0,             // not overlapped I/O
                      NULL);         // hTemplate must be NULL for comm devices
    if (hCom == INVALID_HANDLE_VALUE)
    {
        // Handle the error.
        printf("CreateFile failed with error %d.\n", GetLastError());
        return (1);
    }

    if (SetupComm(hCom, 1000, 1000) != 0)
        printf("Windows In/Out serial buffers changed to 1000 bytes\n");
    else
        printf("Buffers not changed with error %d.\n", GetLastError());

    // Initialize the DCB structure.
    SecureZeroMemory(&dcb, sizeof(DCB));
    dcb.DCBlength = sizeof(DCB);

    // Build on the current configuration by first retrieving all current
    // settings.
    fSuccess = GetCommState(hCom, &dcb);
    if (!fSuccess)
    {
        // Handle the error.
        printf("GetCommState failed with error %d.\n", GetLastError());
        return (2);
    }
    PrintCommState(dcb); // Output to console

    // Fill in some DCB values and set the com state:
    // 9,600 bps, 8 data bits, no parity, and 1 stop bit.
    dcb.BaudRate = CBR_9600;     // baud rate
    dcb.ByteSize = 8;            // data size, xmit and rcv
    dcb.Parity = NOPARITY;       // parity bit
    dcb.StopBits = ONESTOPBIT;   // stop bit
    fSuccess = SetCommState(hCom, &dcb);
    if (!fSuccess)
    {
        // Handle the error.
        printf("SetCommState failed with error %d.\n", GetLastError());
        return (3);
    }

    // Get the comm config again.
    fSuccess = GetCommState(hCom, &dcb);
    if (!fSuccess)
    {
        // Handle the error.
        printf("GetCommState failed with error %d.\n", GetLastError());
        return (2);
    }
    PrintCommState(dcb); // Output to console
    _tprintf(TEXT("Serial port %s successfully reconfigured.\n"), pcCommPort);

    if (WriteFile(hCom, aOutputBuffer, 720, NULL, 0) != 0)
        _tprintf(TEXT("720 bytes successfully written to Serial port %s \n"), pcCommPort);
    else
        _tprintf(TEXT("Fail on write 720 bytes to Serial port %s \n"), pcCommPort);
    return (0);
}
USB bulk endpoints implement a stream-based protocol, i.e. an endless stream of bytes. This is in contrast to a message-based protocol. So USB bulk endpoints have no concept of messages, message start or end. This also applies to USB CDC as it is based on bulk endpoints.
At the lower USB level, the stream of bytes is split into packets of at most 64 bytes. As per USB full-speed standard, packets cannot be larger than 64 bytes.
If the host sends small chunks of data that are more than 1ms apart, they will be sent and received in separate packets and it looks as if USB is a message-based protocol. However, for chunks of more than 64 bytes, they are split into smaller packets. And if small chunks are sent with less than 1ms in-between, the host will merge them into bigger packets.
Your design seems to require that data is grouped, e.g. the group of 720 bytes mentioned in the question. If this is a requirement, the grouping must be implemented, e.g. by first sending the size of the group and then the data.
Since larger groups are split into chunks of 64 bytes and the receive callback is called for every packet, the packets must be joined until the full group is available.
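A rough sketch of such reassembly inside the receive callback could look like the following. The callback name, the USBD_CDC_* calls and the hUsbDeviceFS handle follow the STM32Cube-generated usbd_cdc_if.c template; the fixed 720-byte group size is simply taken from the question, and this is an untested sketch rather than drop-in code (memcpy needs <string.h>):

#define GROUP_SIZE 720                          /* expected message size, from the question */

static uint8_t  groupBuf[GROUP_SIZE];           /* reassembled message */
static uint32_t groupFill = 0;                  /* bytes collected so far */

static int8_t CDC_Receive_FS(uint8_t *Buf, uint32_t *Len)
{
    uint32_t n = *Len;
    if (n > GROUP_SIZE - groupFill)             /* guard against overflowing the group buffer */
        n = GROUP_SIZE - groupFill;

    memcpy(&groupBuf[groupFill], Buf, n);       /* append this packet to the group */
    groupFill += n;

    if (groupFill >= GROUP_SIZE) {
        /* a full 720-byte group is available: hand groupBuf to the application here */
        groupFill = 0;
    }

    USBD_CDC_SetRxBuffer(&hUsbDeviceFS, Buf);   /* keep using the same packet buffer */
    USBD_CDC_ReceivePacket(&hUsbDeviceFS);      /* only now allow the next packet */
    return USBD_OK;
}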
Also note a few problems in your current code (see usbd_cdc_if.c, line 264):
USBD_CDC_SetRxBuffer(&hUsbDeviceFS, &Buf[0]);
USBD_CDC_ReceivePacket(&hUsbDeviceFS);
NewDataFromUsb = *Len;
USBD_CDC_SetRxBuffer sets the buffer for the next packet to be received. If you always use the same buffer – as in this case – it's not needed. The initial setup is sufficient. However, it could be used to set a new buffer if the current packet does not contain a full group.
Despite its name, USBD_CDC_ReceivePacket does not receive a packet. Instead, it gives the OK to receive the next packet. It should only be called once the data in the buffer has been processed and the buffer is ready to receive the next packet. Your current implementation runs the risk that the buffer is overwritten before it is processed, in particular if you send a group of more than 64 bytes, which will likely result in a quick succession of packets.
Note that Windows hasn't been mentioned here. The Windows code seems to be okay. And changing to Winusb.sys will just make your life harder but not get you packets bigger than 64 bytes.

How to fix a segmentation fault in an ANSI C TCP client?

I'm trying to expand an example of a TCP client written in ANSI C, following the book "TCP/IP Sockets in C". The client connects to a TCP server that returns strings of different lengths depending on the request sent by the client (I developed my own simple protocol). When the returned strings are short, everything works fine. When they exceed a certain length (it happens, for example, at 4 KB), the client crashes with a segmentation fault.
The socket is handled using a wrapper for stream I/O:
FILE *str = fdopen(sock, "r+"); // Wrap for stream I/O
And the transmission and reception are handled using fwrite() and fread().
This is the call that generates the error in my project (the caller):
uint8_t inbuf[MAX_WIRE_SIZE];
size_t respSize = GetNextMsg(str, inbuf, MAX_WIRE_SIZE); // Get the message
And this is the implementation of the GetNextMsg() function, which is used to receive the data and unframe it:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <netinet/in.h>
#include "Practical.h"

/* Read 4-byte length and place in big-endian order.
 * Then read the indicated number of bytes.
 * If the input buffer is too small for the data, truncate to fit and
 * return the negation of the *indicated* length. Thus a negative return
 * other than -1 indicates that the message was truncated.
 * (Ambiguity is possible only if the caller passes an empty buffer.)
 * Input stream is always left empty.
 */
uint32_t GetNextMsg(FILE *in, uint8_t *buf, size_t bufSize)
{
    uint32_t mSize = 0;
    uint32_t extra = 0;

    if (fread(&mSize, sizeof(uint32_t), 1, in) != 1)
        return -1;
    mSize = ntohl(mSize);

    if (mSize > bufSize)
    {
        extra = mSize - bufSize;
        mSize = bufSize; // Truncate
    }

    if (fread(buf, sizeof(uint8_t), mSize, in) != mSize)
    {
        fprintf(stderr, "Framing error: expected %d, read less\n", mSize);
        return -1;
    }

    if (extra > 0)
    { // Message was truncated
        uint32_t waste[BUFSIZE];
        fread(waste, sizeof(uint8_t), extra, in); // Try to flush the channel
        return -(mSize + extra); // Negation of indicated size
    }
    else
        return mSize;
}
I suspect this could be related to the fact that with TCP the sender and receiver treat the data as a stream, so it is not guaranteed that the receiver gets all of the data at once, as the simple example I started from probably assumed. In fact, with short strings everything works; with longer strings it doesn't.
As a simple debugging step I inserted a printf as the very first statement inside the function, but when the crash occurs it doesn't even get printed.
It seems like an issue with the FILE *str passed as an argument to the function when a longer-than-usual message is received over the socket.
The buffers are sized far larger than the message causing the issue (1 MB vs 4 KB).
I've even tried to increase the size of the socket receive buffer via setsockopt:
int rcvBufferSize;

// Retrieve and print the default buffer size
int sockOptSize = sizeof(rcvBufferSize);
if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvBufferSize, (socklen_t *)&sockOptSize) < 0)
    DieWithSystemMessage("getsockopt() failed");
printf("Initial Receive Buffer Size: %d\n", rcvBufferSize);

// Increase the buffer size tenfold
rcvBufferSize *= 10;
if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvBufferSize,
               sizeof(rcvBufferSize)) < 0)
    DieWithSystemMessage("setsockopt() failed");
but this didn't help.
Any ideas about the cause and how I could fix it?
This code:
{ // Message was truncated
uint32_t waste[BUFSIZE];
fread(waste, sizeof(uint8_t), extra, in); // Try to flush the channel
reads extra bytes into a buffer of size 4*BUFSIZE (4 because you intended to make the buffer uint8_t, but accidentally made it uint32_t instead).
If extra is larger than 4*BUFSIZE, then you will have a local buffer overflow and stack corruption, possibly resulting in a crash.
To do this correctly, something like this is needed:
uint32_t remaining = extra;
while (remaining > 0) {
    char waste[BUFSIZE];
    size_t to_read = remaining < BUFSIZE ? remaining : BUFSIZE;  // C has no standard min()
    size_t got = fread(waste, 1, to_read, in);
    if (got == 0) break;
    remaining -= got;
}

Unable to write the complete script onto a device on the serial port

The script file has over 6000 bytes, which are copied into a buffer. The contents of the buffer are then written to the device connected to the serial port. However, the write function only returns 4608 bytes, whereas the buffer contains 6117 bytes. I'm unable to understand why this happens.
{
    FILE *ptr;
    long numbytes;
    int i;

    ptr = fopen("compass_script(1).4th", "r");   // Opening the script file
    if (ptr == NULL)
        return 1;

    fseek(ptr, 0, SEEK_END);
    numbytes = ftell(ptr);                       // Number of bytes in the script
    printf("number of bytes in the calibration script %ld\n", numbytes);
    // Number of bytes in the script is 6117.
    fseek(ptr, 0, SEEK_SET);

    char writebuffer[numbytes];                  // Creating a buffer to copy the file
    if (writebuffer == NULL)
        return 1;
    int s = fread(writebuffer, sizeof(char), numbytes, ptr);
    // Transferring contents into the buffer
    perror("fread");
    fclose(ptr);

    fd = open("/dev/ttyUSB3", O_RDWR | O_NOCTTY | O_NONBLOCK);
    // Opening serial port
    speed_t baud = B115200;
    struct termios serialset;                    // Setting a baud rate for communication
    tcgetattr(fd, &serialset);
    cfsetispeed(&serialset, baud);
    cfsetospeed(&serialset, baud);
    tcsetattr(fd, TCSANOW, &serialset);

    long bytesw = 0;
    tcflush(fd, TCIFLUSH);
    printf("\nnumbytes %ld", numbytes);
    bytesw = write(fd, writebuffer, numbytes);
    // Writing the script to the device connected to the serial port
    printf("bytes written %ld\n", bytesw);       // Only 4608 bytes are written
    close(fd);
    return 0;
}
Well, that's how it is specified. When you write to a regular file, your process normally blocks until the whole of the data is written, which really means until all of the data has been copied into the disk buffers. This is not true for devices, where the device driver is responsible for determining how much data is written in one pass. So, depending on the device driver, you may get all of the data written, only part of it, or even none at all. It simply depends on the device and on how the driver implements its control.
Under the hood, device drivers normally have a limited amount of buffer memory and can only accept a limited amount of data at a time. There are two policies here: the driver can block the process until more buffer space is available, or it can return with a partial write.
It is your program's responsibility to accept a partial write and continue writing the rest of the buffer, or to pass the problem back to the calling module and itself return only a partial write. This approach is the most flexible one, and it is the one implemented everywhere. So now you have a reason for your partial write, but the ball is in your court: you have to decide what to do next.
Also, be careful: you use long for the ftell() return value and int for the fread() return value. Although your amount of data is not huge and these values are unlikely to overflow, the actual return types are size_t for fread() and ssize_t for write() (much like the speed_t type you use for the baud rate); long can be a 32-bit type while size_t can be 64-bit.
The best thing you can do is to ensure the whole buffer is written by some code snippet like the next one:
char *p = buffer;
while (numbytes > 0) {
ssize_t n = write(fd, p, numbytes);
if (n < 0) {
perror("write");
/* driver signals some error */
return 1;
}
/* writing 0 bytes is weird, but possible, consider putting
* some code here to cope for that possibility. */
/* n >= 0 */
/* update pointer and numbytes */
p += n;
numbytes -= n;
}
/* if we get here, we have written all numbytes */

Why is I2C_SMBUS_BLOCK_MAX limited to 32 bytes?

I'm trying to configure a SAA6752HS chip (an MPEG-2 encoder) over the I2C bus, using a Raspberry Pi as a development kit. It was a piece of cake until I had to write to address 0xC2 of the chip. For this task, I have to use an I2C command that expects a payload of 189 bytes. I then stumbled upon a 32-byte limit inside the I2C driver, defined by I2C_SMBUS_BLOCK_MAX in /usr/include/linux/i2c.h, and it is not possible to force a different maximum. Everything in the I2C library ends up in the function i2c_smbus_access, and any request with more than 32 bytes makes ioctl return -1. I have no idea how to debug this so far.
static inline __s32 i2c_smbus_access(int file, char read_write, __u8 command,
                                     int size, union i2c_smbus_data *data)
{
    struct i2c_smbus_ioctl_data args;

    args.read_write = read_write;
    args.command = command;
    args.size = size;
    args.data = data;
    return ioctl(file, I2C_SMBUS, &args);
}
I can't understand why there is such a limitation, considering that there are devices that require more than 32 bytes of payload data to work (the SAA6752HS is one example).
Is there a way to overcome this limitation without writing a new driver?
Thank you in advance.
Here's the documentation for the Linux i2c interface: https://www.kernel.org/doc/Documentation/i2c/dev-interface
At the simplest level you can use ioctl(I2C_SLAVE) to set the slave address and the write system call to write the command. Something like:
void i2c_write(int file, int address, int subaddress, int size, char *data) {
    char buf[size + 1];               // note: variable-length array
    ioctl(file, I2C_SLAVE, address);  // real code would need to check for an error
    buf[0] = subaddress;              // need to send everything in one call to write,
    memcpy(buf + 1, data, size);      // so copy subaddress and data into one buffer
    write(file, buf, size + 1);
}
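A hypothetical call for the 189-byte payload from the question might then look like this. The 7-bit address 0x61 assumes the 0xC2 mentioned in the question is the 8-bit write address, and the 0x00 subaddress is only a placeholder:

char payload[189];
/* ... fill payload ... */
int fd = open("/dev/i2c-1", O_RDWR);
i2c_write(fd, 0x61, 0x00, sizeof payload, payload);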
If the write call is returning -1, make sure you open the fd using
int fd = open("/dev/i2c-1", O_RDWR);
not
int fd = open("/dev/i2c-1", I2C_RDWR);
