I've recently been given the task of writing drivers for some of the I2C devices in our product. I was a complete beginner at this, but I've managed to use a mixture of i2cset and i2cget along with smbus to control some LEDs.
My latest task is to read a 7-byte serial number from the EEPROM of an Atmel ATSHA204 chip. My first job is to wake up the chip. The data sheet says this is done as follows:
The Wake condition requires either that the system processor manually drive
the SDA pin low for tWLO, or that a data byte of 0x00 is transmitted at a
clock rate sufficiently slow that SDA is low for a minimum period of tWLO.
When the device is awake, the normal processor I2C hardware and/or software
can be used for device communications up to and including the I/O sequence
required to put the device back into low-power (sleep) mode.
So it seems I have to manually hold one of the I2C pins low for a time of tWLO, which is apparently at least 60 microseconds, before I can use conventional I2C calls. I'm really not sure how this is done. Ideally I'd do this in C, so would some variation of the following work?
int file;
file = open("/dev/i2c-0", O_RDWR);
if (file < 0)
{
exit(1);
}
int addr = 0x64; // Address of LED driver
if (ioctl(file, I2C_SLAVE, addr) < 0)
{
return 1;
}
write(file, 0x00, 1); // write a 0x00 to the chip to wake it up
I guess I'm not sure about the last bit: how do I keep writing until the device has woken up? I'd appreciate any help with this, as low-level programming like this is new to me.
You don't want to pass 0x00 to the write function. That param isn't a value, it's a pointer to a buffer containing the data. You also need to know what clock speed your I2C bus is running at: that determines how many bytes of 0x00 need to be written to satisfy the required duration for wakeup.
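As a rough sanity check, you can work out how many 0x00 bytes are needed from the bus clock before writing any code. This is just back-of-the-envelope arithmetic (the helper name is mine, and tWLO = 60 µs is taken from the datasheet):

```c
#include <stdio.h>

// Estimate how many 0x00 bytes must be clocked out so that SDA stays low
// for at least twlo_us microseconds. One 0x00 byte holds SDA low for
// 8 consecutive bit times.
int zero_bytes_needed(long scl_hz, long twlo_us)
{
    long bit_us = 1000000L / scl_hz;    // duration of one bit, microseconds
    long low_us_per_byte = 8 * bit_us;  // SDA-low time contributed per byte
    long bytes = (twlo_us + low_us_per_byte - 1) / low_us_per_byte;
    return bytes < 1 ? 1 : (int)bytes;
}
```

At a 100 kHz bus clock one bit is 10 µs, so a single 0x00 byte already keeps SDA low for about 80 µs, comfortably past the 60 µs tWLO; at 400 kHz you would need several bytes.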
The following could help you wake the chip:
#include <stdbool.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

bool wakeup(int fd)
{
    unsigned char buf[4] = {0};
    bool awake = false;

    if (fcntl(fd, F_GETFD) < 0)
        perror("Invalid fd");

    while (!awake) {
        if (write(fd, buf, sizeof(buf)) > 1)
            printf("Device is awake!\n");

        if (read(fd, buf, sizeof(buf)) == 4) {
            printf("Awake done!\n");
            awake = true;
        } else {
            printf("Failed to wake the device!\n");
            break;
        }
    }
    return awake;
}
Related
I am looking at the following function which retargets stdout to UART in the STM32 Std peripheral library.
int _write(int fd, char *ptr, int len) {
uint32_t primask = __get_PRIMASK();
__disable_irq();
for (int i = 0; i < len; i++) {
while (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET) {
}
USART_SendData(RETARGET_CFG_UART, (uint8_t)ptr[i]);
}
if (!primask) {
__enable_irq();
}
return len;
}
Before transmitting over UART it masks exceptions which can have a priority set via __disable_irq() (which I understand includes all peripheral and GPIO interrupts).
My question is, why is this UART tx implemented this way?
Can the disable/enable irq calls be removed so that other functionality in the program is not delayed due to potentially lengthy data transactions?
I suspect the author of that code is disabling all interrupts just because there might be some interrupts that write bytes to the UART, and the author wants to ensure that all the bytes sent to the _write function get written consecutively, instead of having other, unrelated bytes in the middle.
I'm assuming all the ISRs defined in your system are relatively quick compared to the serial transmission, and the receiver doesn't care if there are some small delays in the transmission. So it should be OK to leave any interrupts enabled if those interrupts are not related to the UART.
You should get a firm grasp of all the ISRs in your system and what they do, and then you should use that to decide which specific IRQs should be disabled during this operation (if any).
It's a quasi-bad design.
For one thing, always calling __disable_irq but not __enable_irq is suspicious.
It prevents nested calls where the caller is also doing __disable_irq/__enable_irq.
It assumes that a Tx ISR is not involved. That is, the UART is running in polled mode. But, if that were true, why did the code disable/enable interrupts?
If there is more data to send to the UART Tx than can be put into the Tx FIFO, the code [as you've seen] will block.
This means that the base/task level can't do other things.
When I write such UART code, I use a ring queue. I define one that is large enough to accommodate any bursts (e.g. a queue size of 10,000).
The design assumes that if the _write is called and the ring queue becomes full before all data is added, the ring queue size should be increased (i.e. ring queue full is a [fatal] design error).
Otherwise, the base/task level is trying to send too much data. That is, it's generating more data than can be sent at the Tx baud rate.
The _write process:
Copy bytes to the ring queue.
Copy as many bytes from the queue to the UART as space in the UART Tx FIFO allows.
Tx ISR will be called if space becomes available. It repeats step (2)
With this, _write will not block if the UART Tx FIFO becomes full.
The Tx ISR will pick up the slack. That is, when there is more space available in the FIFO, and the Tx is "ready", the Tx ISR will be called.
Here is some pseudo code to illustrate what I mean:
// push_tx_data -- copy data from Tx ring queue into UART Tx FIFO
// push_tx_data -- copy data from Tx ring queue into UART Tx FIFO
void
push_tx_data(void)
{
    // fill Tx FIFO with as much data as it can hold
    while (1) {
        // stop if no free space in the Tx data register (TXE not set)
        if (USART_GetFlagStatus(RETARGET_CFG_UART, USART_FLAG_TXE) == RESET)
            break;

        // dequeue byte to transmit (stop if queue empty)
        int i = tx_queue_dequeue();
        if (i < 0)
            break;

        // put this data byte into the UART Tx FIFO
        USART_SendData(RETARGET_CFG_UART, (uint8_t) i);
    }
}
// tx_ISR -- handle interrupt from UART Tx
void
tx_ISR(void)
{
    // presumably interrupts are already disabled in the ISR ...

    // copy all the data we can
    push_tx_data();

    // clear the interrupt pending flag in the interrupt controller if
    // necessary (i.e. if USART_SendData didn't clear it when the FIFO was full)
}
// _write -- send data
int
_write(int fd, char *ptr, int len)
{
    uint32_t primask = __get_PRIMASK();

    // running from task level so disable interrupts
    if (! primask)
        __disable_irq();

    int sent = 0;

    // add to [S/W] tx ring queue
    for (int i = 0; i < len; i++) {
        if (! tx_queue_enqueue(ptr[i]))
            break;
        ++sent;
    }

    // send data from queue into FIFO
    push_tx_data();

    // reenable interrupts (task level)
    if (! primask)
        __enable_irq();

    return sent;
}
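The pseudo code above leaves tx_queue_enqueue and tx_queue_dequeue undefined. Here is a minimal single-producer/single-consumer ring queue sketch matching those names (interrupt safety relies on the __disable_irq/__enable_irq bracketing already shown, not on the queue itself):

```c
#include <stdbool.h>

#define TX_QUEUE_SIZE 10000   // sized so that "queue full" is a design error

static volatile unsigned char tx_queue[TX_QUEUE_SIZE];
static volatile int tx_head;  // next slot to write (producer: _write)
static volatile int tx_tail;  // next slot to read (consumer: push_tx_data)

// Returns false if the queue is full.
bool tx_queue_enqueue(unsigned char c)
{
    int next = (tx_head + 1) % TX_QUEUE_SIZE;
    if (next == tx_tail)
        return false;                   // full -- enlarge TX_QUEUE_SIZE
    tx_queue[tx_head] = c;
    tx_head = next;
    return true;
}

// Returns the next byte to transmit, or -1 if the queue is empty.
int tx_queue_dequeue(void)
{
    if (tx_tail == tx_head)
        return -1;                      // empty
    unsigned char c = tx_queue[tx_tail];
    tx_tail = (tx_tail + 1) % TX_QUEUE_SIZE;
    return c;
}
```

One slot is always left unused so that full and empty states are distinguishable without a separate counter.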
I have a microcontroller and an embedded PC. These two communicate via a serial connection, and both receive and send data to each other. The baud rate between them is 38400. The microcontroller and the PC both use the same configuration (8 data bits, 1 stop bit, even parity).
Communication works fine until the microcontroller starts sending messages around every 10 ms. At this point, the sending queue of the microcontroller gets full and overruns. It then sends an error message to the PC (this is how I know that it is the sending queue of the microcontroller that overruns, not that of the PC).
Prior to the embedded Linux version of the PC program, the microcontroller ran with a DOS version of the PC program without causing this error. In DOS, single bytes are directly read from and written to the serial port (there is no kernel buffer like in Linux). Since most of the C code is portable to Linux, I am trying to replicate the DOS behavior of serial port reading and writing in Linux, to keep the rest of the code which processes these single bytes.
I open and initialize the serial port in the PC as follows.
fd_mc = open("/dev/ttyS1", O_NOCTTY | O_RDWR /*| O_NONBLOCK*/ /*| O_SYNC*/);
if(fd_mc == -1)
{
perror("Could not open µc port.");
}
else
{
struct termios tty;
memset(&tty, 0, sizeof(tty));
if ( tcgetattr ( fd_mc, &tty ) != 0 )
{
perror("Error getting termios attributes");
}
cfsetospeed (&tty, B38400);
cfsetispeed (&tty, B38400);
tty.c_lflag &= ~(ICANON | ECHO | ECHOE | ISIG); //raw input
tty.c_oflag &= ~OPOST; //raw output
tty.c_cflag |= PARENB; //even parity
tty.c_cflag &= ~PARODD;
tty.c_cflag &= ~CSTOPB;
tty.c_cflag &= ~CSIZE;
tty.c_cflag |= CS8;
tty.c_cflag |= (CLOCAL | CREAD);
tcsetattr(fd_mc, TCSANOW, &tty);
}
The code above is a snippet from a function which initializes two serial ports (one of them is the one to the microcontroller).
Edit:
Here is the setting of the flow control, there is none.
tty.c_cflag &= ~CRTSCTS; //No hardware based flow control
tty.c_iflag &= ~(IXON | IXOFF | IXANY); //no software based flow control
Reading from the serial port happens inside a thread, with the help of a ring buffer and poll(). The thread is created inside the main loop. Code below:
void *thread_read(void *arg)
{
struct sched_param param;
param.sched_priority = 97;
int ret_par = 0;
ret_par = sched_setscheduler(3, SCHED_FIFO, &param);
if (ret_par == -1) {
perror("sched_setscheduler");
return 0;
}
struct pollfd poll_fd[2];
int ret;
extern struct fifo mc_fifo, serial_fifo;
ssize_t t;
char c;
poll_fd[0].fd = fd_mc;
poll_fd[0].events = POLLIN;
poll_fd[1].fd = fd_serial;
poll_fd[1].events = POLLIN;
while(1) {
ret = poll(poll_fd, 2, 10000);
if(ret == -1) {
perror("poll");
}
if(poll_fd[0].revents & POLLIN) {
t = read(fd_mc, &c, 1);
if(t>0) {
fifo_in(&mc_fifo, c);
}
}
if(poll_fd[1].revents & POLLIN) {
t = read(fd_serial, &c, 1);
if(t>0) {
fifo_in(&serial_fifo, c);
}
}
}
pthread_exit(NULL);
}
Call inside the main loop.
pthread_t read;
pthread_create(&read, NULL, thread_read, NULL);
The function for writing inside the buffer (fifo_in) is
int fifo_in(struct fifo *f, char data)
{
if( (f->write + 1 == f->read) || (f->read == 0 && f->write + 1 == FIFO_SIZE) ) //checks if write one before read or at the end
{
printf("fifo in overflow\n");
return 0; //fifo full
}
else {
f->data[f->write] = data;
f->write++;
// printf("Byte in: Containing %4d\tData:\t%4d\n", BytesInReceiveBuffer(1), data); //Bytes contained in fifo of mc
if(f->write >= FIFO_SIZE) {
f->write = 0;
}
return 1; //success
}
}
What this function basically does is check where the read and write positions are, and write the data into the buffer only if the write position is not about to catch up with the read position.
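The two-part fullness check can also be written with modulo arithmetic, which some find easier to verify. A sketch (small FIFO_SIZE just for illustration; one slot is always left unused, so the capacity is FIFO_SIZE - 1):

```c
#define FIFO_SIZE 8   // deliberately small here, just for illustration

struct fifo { char data[FIFO_SIZE]; int read, write; };

// The buffer is full when advancing the write index (with wraparound)
// would land on the read index.
static int fifo_full(const struct fifo *f)
{
    return (f->write + 1) % FIFO_SIZE == f->read;
}

static int fifo_in2(struct fifo *f, char data)
{
    if (fifo_full(f))
        return 0;                        // full
    f->data[f->write] = data;
    f->write = (f->write + 1) % FIFO_SIZE;
    return 1;                            // success
}
```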
When another function needs the bytes inside the ring buffer it calls GetByte which reads the bytes from the ring buffer.
GetByte
int GetByte(int port)
{
char c;
switch(port) {
case 0: //COM1
fifo_out(&serial_fifo,&c);
break;
case 1: //COM2
fifo_out(&mc_fifo,&c);
break;
}
return (int)c;
}
fifo_out
int fifo_out(struct fifo *f, char *data) {
if(f->read == f->write) {
printf("fifo out underflow\n");
*data = 0;
return 0;
}
else {
*data = f->data[f->read];
f->read++;
// printf("Byte out: Containing %4d\tData:\t%4d\n", BytesInReceiveBuffer(1), *data);
if(f->read >= FIFO_SIZE) {
f->read = 0;
}
return 1;
}
}
Prior to the Linux port, everything was sequential in the DOS version.
At the moment my best guess is that read() is too slow and at some point starts slowing down the reading from the buffer, which in turn blocks the thread. Maybe I am wrong; at the moment I am rather clueless about what exactly the bug is, or how to fix it.
Any advice is appreciated.
Are you sure you need a FIFO in the application? Your serial driver most likely already has quite a large buffer in the kernel (often a page, which is usually 4 kB). If this suffices, you could radically simplify the implementation by having GetByte do a non-blocking read against the serial device.
If you want to stick with this design, consider reworking your read loop to read more than one byte at a time. As it is now, you need two syscalls for every byte read.
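A sketch of that chunked read, with a minimal stand-in for the question's ring buffer so it is self-contained (the real code would call the existing fifo_in):

```c
#include <unistd.h>

// Minimal stand-in for the question's ring buffer.
#define FIFO_SIZE 1024
struct fifo { char data[FIFO_SIZE]; int read, write; };

static int fifo_in(struct fifo *f, char c)
{
    int next = (f->write + 1) % FIFO_SIZE;
    if (next == f->read)
        return 0;                        // full
    f->data[f->write] = c;
    f->write = next;
    return 1;
}

// On each poll() wakeup, pull as many bytes as are available with one
// read() call instead of one syscall per byte.
static ssize_t drain_port(int fd, struct fifo *f)
{
    char chunk[256];
    ssize_t n = read(fd, chunk, sizeof(chunk));
    for (ssize_t i = 0; i < n; i++)
        fifo_in(f, chunk[i]);
    return n;
}
```

In the thread, the per-port `read(fd, &c, 1)` calls would then become `drain_port(fd_mc, &mc_fifo)` and `drain_port(fd_serial, &serial_fifo)`.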
You're changing the scheduling class for PID 3, always. This is probably not what you want. Also, this only means that your thread will run once the bytes have landed in the kernel's internal buffer (i.e. when poll returns). If the bytes are transferred from the hardware FIFO to that buffer by a job on a workqueue, and that workqueue runs in SCHED_OTHER (which most of them do), changing the scheduler of your thread will not have the desired effect. This is the classic priority inversion problem. You might want to audit the kernel driver used by your particular board. Though, if you empty the entire buffer on every read, this should be less of a problem.
If this code is ever used on an SMP system, you're most likely going to want to guard the read and write pointers with a lock, since they are not updated atomically. Threads are hard to get right; have you considered using an event loop instead? Something like libev.
I am using a 3.12 kernel on an ARM-based Linux board (imx233 CPU). My goal is to detect a pin change on a GPIO (1 to 0).
I can read the pin value by constantly calling the function below (in a while(1) loop):
int GPIO_read_value(int pin){
    int gpio_value = 0;
    char path[35] = {'\0'};
    FILE *fp;
    sprintf(path, "/sys/class/gpio/gpio%d/value", pin);
    if ((fp = fopen(path, "rb+")) == NULL){ // direction was set with: echo in > direction
        return -1; // error opening the value file
    }
    fscanf(fp, "%d", &gpio_value);
    fclose(fp);
    return gpio_value;
}
But it causes too much CPU load. I don't use usleep or nanosleep, because the pin change lasts only a very short time, and sleeping would cause me to miss the event.
As far as I can tell, it is not possible to use poll() here. Is there any poll()-like function I can use to detect a pin change on a GPIO?
EDIT: Just in case I am doing something wrong, here is my poll() usage that does not detect the pin change:
struct pollfd pollfds;
int fd;
int nread, result;
pollfds.fd = open("/sys/class/gpio/gpio51/value", O_RDWR);
int timeout = 20000; /* Timeout in msec. */
char buffer[128];
if( pollfds.fd < 0 ){
printf(" failed to open gpio \n");
exit (1);
}
pollfds.events = POLLIN;
printf("fd opens..\n");
while (1)
{
result = poll (&pollfds, 0, timeout);
switch (result)
{
case 0:
printf ("timeout\n");
break;
case -1:
printf ("poll error \n");
exit (1);
default:
printf("something is happening..\n");
if (pollfds.revents & POLLIN)
{
nread = read (pollfds.fd, buffer, 8);
if (nread == 0) {
printf ("result:%d\n", nread);
exit (0);
} else {
buffer[nread] = 0;
printf ("read %d from gpio: %s", nread, buffer);
}
}
}
}
close(fd);
EDIT2: the code at https://developer.ridgerun.com/wiki/index.php/Gpio-int-test.c works fine with poll(). I needed to define the rising/falling edge for the interrupt, plus a small fix to a definition. That solves my problem; however, it might still be good for me and other people to hear about alternative methods.
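For reference, the essence of the working sysfs approach can be sketched like this (pin number and paths are examples; on sysfs GPIOs an edge shows up as POLLPRI, not POLLIN, and the nfds argument to poll() must be 1):

```c
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Select which transitions generate POLLPRI: "rising", "falling", or "both".
// base is e.g. "/sys/class/gpio/gpio51" (the pin number is yours to fill in).
int gpio_set_edge(const char *base, const char *edge)
{
    char path[80];
    snprintf(path, sizeof(path), "%s/edge", base);
    int fd = open(path, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, edge, strlen(edge));
    close(fd);
    return n == (ssize_t)strlen(edge) ? 0 : -1;
}

// Block until an edge fires (or timeout_ms elapses); returns the pin value
// (0 or 1), -1 on timeout, -2 on error. fd is an open <base>/value file.
int gpio_wait_for_edge(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLPRI | POLLERR };
    char buf[4];

    int ret = poll(&pfd, 1, timeout_ms);   // nfds must be 1, not 0
    if (ret < 0)
        return -2;
    if (ret == 0)
        return -1;                         // timeout

    lseek(fd, 0, SEEK_SET);                // rewind before rereading the value
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n <= 0)
        return -2;
    buf[n] = '\0';
    return buf[0] == '1';
}
```

After `gpio_set_edge("/sys/class/gpio/gpio51", "falling")`, open the value file once and loop on `gpio_wait_for_edge(fd, 20000)`; the thread sleeps in the kernel between edges instead of burning CPU.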
I have never seen this board before; however, I guess the PIC (interrupt controller) is fully implemented for this board (it usually is), but you additionally have to configure the interrupt in the GPIO controller (also usually the case).
Some part should be done as a kernel module; then you have to pass the information about the interrupt to your application.
Example way to do this is to implement following thing as a kernel module:
setup GPIO controller to enable interrupt on particular port and level
(how to do this you can find here: http://cache.freescale.com/files/dsp/doc/ref_manual/IMX23RM.pdf 37.2.3.3 Input Interrupt Operation)
enable GPIO interrupt in PIC (how to do this: http://lwn.net/images/pdf/LDD3/ch10.pdf Chapter10)
implement interrupt handling routine (I will describe a little bit below)
implement ioctl interfaces for your module.
and a rest in your application:
a function that can cooperate with the interrupt.
The simplest way of passing information about an interrupt from the kernel to the app is a semaphore on the kernel side:
in the module, implement an ioctl that sleeps until the interrupt happens.
The application calls this ioctl, and its thread is blocked until the interrupt occurs.
Inside the module, the interrupt routine should check whether the application thread is currently blocked, and if so, up() the semaphore.
EDIT:
This CPU has an SSP block that has a working mode for SPI. Why not use that?
I'm trying to implement a timeout for some hardware transmissions, to make a big project more robust. I already implemented a timeout using select() for UART transmission, but I don't know how to add a timeout to an SPI transmission.
This is my reading code:
int spi_read(int fd, char command, char* buffer, int size, int timeout)
{
    struct spi_ioc_transfer xfer[2];
    int status;

    memset(buffer, 0, size);          // sizeof(buffer) would only clear the pointer's size
    memset(xfer, 0, sizeof(xfer));

    xfer[0].tx_buf = (unsigned long)(&command);
    xfer[0].len = 1;
    xfer[1].rx_buf = (unsigned long)buffer;
    xfer[1].len = size;

    status = ioctl(fd, SPI_IOC_MESSAGE(2), xfer);
    if(status < 0)
        return EHWFAULT1;
    else
        return NOERROR;
}
It sends a one-byte command and receives a response of a certain size (in half-duplex mode). How can I implement a timeout for the response? Can it be implemented using select()? Should I separate both transfers and use select(), or better use an alarm?
Then, I have the same question for a full duplex mode, which is implemented too using ioctl. Can you give me any hints?
In hardware the SPI master does not 'wait' for a response. By definition, the SPI master provides the clock cycles and the slave must reply. The concept of waiting for a response doesn't apply to the SPI bus. (I'm assuming you're operating the SPI master)
(deeper in the protocol, the SPI might poll the hardware to see if it's done/ready; but the SPI bus itself is getting an immediate answer every time).
To clarify: the SPI clocks in whatever is on the SPI MISO pin. Whatever level is on the MISO pin is the reply, even if the slave is not explicitly driving a level. The only way to detect a non responsive slave is to pullup/pulldown the MISO in a way that can not be interpreted as a valid message.
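If you adopt that pull-up convention, the "timeout" check after the ioctl reduces to inspecting the received bytes. A sketch (the all-0xFF convention is an assumption about your protocol, not something the SPI layer gives you):

```c
#include <stdbool.h>
#include <stddef.h>

// With MISO pulled up, a slave that never drives the line clocks back all
// ones. If no valid reply can be all 0xFF, this detects a silent slave.
bool spi_reply_absent(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        if (buf[i] != 0xFF)
            return false;   // at least one byte was driven low somewhere
    return true;
}
```

Call it on the rx buffer right after SPI_IOC_MESSAGE returns; a pull-down on MISO works the same way with 0x00 as the "no answer" pattern.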
I am running into a timing issue with a serial port in C. Long story short, I send a command to a device at one baud rate (say 9600) and expect a response from it at another (say 38400). I have no control over this protocol. A prototype of the situation would look like the following:
open() serial port
set baud rate to 9600 using termios struct
write() command
change baud rate to 38400 using termios struct
read() response
I am running into a problem where the device does not understand the command I sent because the baud rate is changing to 38400 before it completes the write. I am pretty certain write works fine because it returns the number of bytes I intend to write. I tried adding a usleep(100000) after the call to write and while that works sometimes, I cannot guarantee the entire message will be transmitted at 9600 before I change the baud and read a response. I also tried flushing the buffer with tcflush(fd, TCOFLUSH) but I do not want to discard any data so this is also not the correct way.
How can I force write all the serial data and be guaranteed it is written before the next call to change the baud rate to 38400? This seems to be happening at the chip level so is my only hope to include the FTDI libraries (it is an FTDI chip) and access the registers to see when the data is done being transmitted? Thanks.
On Windows, use WaitCommEvent with a mask that includes EV_TXEMPTY to wait for the message to be sent out by the driver.
Why not just set the transmitter to 9600 and the receiver to 38400? Most common serial port hardware support this.
// fd = file descriptor of serial port. For example:
// int fd = open ("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
int
somefunction (int fd)
{
struct termios tty;
memset (&tty, 0, sizeof tty);
if (tcgetattr (fd, &tty) != 0)
{
error ("error %d from tcgetattr: %s", errno, strerror (errno));
return -1;
}
cfsetospeed (&tty, B9600); // set tty transmitter to 9600
cfsetispeed (&tty, B38400); // set receiver to 38400
if (tcsetattr (fd, TCSADRAIN, &tty) != 0) // waits until pending output done before changing
{
error ("error %d from tcsetattr", errno);
return -1;
}
return 0;
}
I've amended my code to use TCSADRAIN instead of TCSANOW so that the rate change does not occur until after all pending output has been sent.
You should be using tcdrain(), or the TCSADRAIN optional action for tcsetattr(), instead of tcflush().
I used the Windows FTDI API and found their entire model to be annoyingly asynchronous. With a normal serial port I would expect you could do the math on your message length (in bits, including start, stop, parity) and baud rate and have a reasonable estimate of how long to sleep until your message is transmitted. However, with the FTDI part you're dealing with USB latencies and unknown queuing in the FTDI part itself. If your device's replies come in under 1ms it might not even be possible to turn the FTDI around reliably between your TX and RX phases.
Would it be possible to hook up the RX to a different UART? That would greatly simplify your problem.
If not, you might consider using a special cable that connects your TX back to your RX so that you can see your message go out before you cut over. With a diode you could avoid the device also seeing its own transmissions. 3/4 duplex?