How to get silabs Si1141 out of suspended mode - c

Trying to read the PS1 values, but when I run the following code, "chip_stat" keeps reporting that the chip is suspended.
int main(void){
    init();                                       // Configuration initialization
    si1141_init();                                // Si1141 sensor initialization
    __delay_ms(30);                               // Delay to ensure the Si1141 is fully booted, must be at least 25 ms
    si1141_WriteToRegister(REG_IRQ_STATUS, 0xFF); // Clear interrupt source

    signed int status;
    while(1){
        WriteToI2C(0x5A << 1);                    // Slave address
        PutByteI2C(0x30);                         // chip_stat
        ReadFromI2C(0x5A << 1);                   // Slave address
        if((status = GetByteI2C(0x30)) == Sw_I2C_ERROR) // chip_stat
        {
            return Sw_I2C_ERROR;
        }
        Stop_I2C();
        status++;
    }
}
The code I'm using to read the PS1 values is the following. I'm reading the value 16705, which stays the same on every measurement.
The value should move up and down between 0 and 32767 as it measures more or less movement.
signed int si1141_ReadFromRegister(unsigned char reg){
    signed int data;
    WriteToI2C(0x5A << 1);  // Slave address
    ReadFromI2C(0x5A << 1); // Slave address
    if((data = GetByteI2C(Sw_I2C_LAST)) == Sw_I2C_ERROR)
    {
        return Sw_I2C_ERROR;
    }
    Stop_I2C();
    return data;
}
int main(void){
    init();                                       // Configuration initialization
    si1141_init();                                // Si1141 sensor initialization
    __delay_ms(30);                               // Delay to ensure the Si1141 is fully booted, must be at least 25 ms
    si1141_WriteToRegister(REG_IRQ_STATUS, 0xFF); // Clear interrupt source

    signed int PS1;
    while(1){
        PS1 = si1141_ReadFromRegister(REG_PS1_DATA0) + (256 * si1141_ReadFromRegister(REG_PS1_DATA1)); // Proximity CH1
    }
}
I have linked the files for the I2C communication:
https://www.dropbox.com/s/q41vw444gjvj0qa/swi2c.c?dl=0
https://www.dropbox.com/s/1mshyz88o15hz8c/swi2c.h?dl=0

Rule out I2C errors first. Your software I2C library is no help at all.
Make sure you read the PART_ID, REV_ID and SEQ_ID registers first and that the values match the datasheet or your expected values. This is to rule out I2C errors.
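A minimal sketch of that sanity check, assuming your register-read routine works and using the register addresses from the Si114x datasheet (PART_ID should read back 0x41 on an Si1141):

unsigned char part_id = si1141_ReadFromRegister(0x00); // PART_ID, expect 0x41
unsigned char rev_id  = si1141_ReadFromRegister(0x01); // REV_ID
unsigned char seq_id  = si1141_ReadFromRegister(0x02); // SEQ_ID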
You have to take quite a few steps just to get a single reading.
Reset the Si114x. Program the HW_KEY. Program PS_LED21 to a sensible value. The application notes (ANs) tell you how. Do not program a higher value than what your components can handle and what your design can support. This might destroy something if done incorrectly. Do not get any funny ideas about PS_ADC_GAIN either, or you will fry your device. Read the AN. Do not program PS_ADC_GAIN at this point.
Clear PSLED21_SELECT -- only PS2_LED, keep PS1_LED set for LED1, obviously -- and PSLED3_SELECT. This is probably optional, but the datasheet tells you to do it, so do it.
Next, program CH_LIST to PS1_EN, then send a PS_FORCE command.** Now read PS1 data from PS1_DATA0 and PS1_DATA1. Done.
It may be easier to test with ALS first to rule out saturating your sensor with some stray infrared (think setting sun as you work through the night).
** For the command protocol, you have to implement the command/response protocol laid out in the datasheet. I suggest you test with reset and nop first to verify your code.
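Putting the steps above together, a rough sketch only, not a drop-in implementation: the register and command values are taken from the Si114x datasheet, so double-check them against your own header, and pick a PS_LED21 current your hardware can actually tolerate. It builds on the question's si1141_WriteToRegister()/si1141_ReadFromRegister() helpers, i.e. it assumes those work.

#define REG_HW_KEY     0x07
#define REG_PS_LED21   0x0F
#define REG_PARAM_WR   0x17
#define REG_COMMAND    0x18
#define REG_RESPONSE   0x20
#define CMD_PS_FORCE   0x05
#define CMD_PARAM_SET  0xA0   /* OR with the parameter offset */
#define PARAM_CHLIST   0x01
#define CHLIST_EN_PS1  0x01

void si1141_command(unsigned char cmd)
{
    si1141_WriteToRegister(REG_COMMAND, 0x00);          /* NOP clears RESPONSE */
    si1141_WriteToRegister(REG_COMMAND, cmd);
    while (si1141_ReadFromRegister(REG_RESPONSE) == 0); /* wait for completion */
}

void si1141_param_set(unsigned char param, unsigned char value)
{
    si1141_WriteToRegister(REG_PARAM_WR, value);
    si1141_command(CMD_PARAM_SET | param);
}

/* ...after reset and the >25 ms boot delay: */
si1141_WriteToRegister(REG_HW_KEY, 0x17);              /* mandatory key value */
si1141_WriteToRegister(REG_PS_LED21, 0x03);            /* modest LED1 current - adjust for your design */
si1141_param_set(PARAM_CHLIST, CHLIST_EN_PS1);         /* enable only the PS1 channel */
si1141_command(CMD_PS_FORCE);                          /* one forced PS measurement */
PS1 = si1141_ReadFromRegister(REG_PS1_DATA0)
    + 256 * si1141_ReadFromRegister(REG_PS1_DATA1);

si1141_command() is the minimal form of the command/response protocol mentioned in the footnote: clear RESPONSE with a NOP, write the command, then wait for RESPONSE to become non-zero.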

Related

avr-gcc: jump to arbitrary address after ISR has been invoked

I'm working with an ATmega168p and compiling with avr-gcc.
Specifically, I have an RS485 slave that receives bytes via UART and writes them to a buffer in an ISR. If an end character is received, a flag is set in the ISR. In my main loop this flag is checked and the input buffer is processed if necessary. However, there is the problem that some time can pass between the arrival of the end byte and the time when the handler in the main loop processes the input buffer, because of the other "stuff".
This results in a latency which can be up to several milliseconds, because, for example, sensors are only read every n-th iteration.
ISR(UART_RX_vect) {
    write_byte_to_buffer();
    if (byte == endbyte) // return to <HERE>
}

void main(){
    init();
    for(;;){
        // <HERE> I want my program to continue after the ISR received an end byte
        handle_buffer();
        do_stuff(); // "stuff" may take a while
    }
}
I want to get rid of this latency, as it is the bottleneck for the higher-level system.
I would like the program to return to the beginning of my main loop after the ISR has received the end byte, so that the input buffer is processed immediately. I could of course process the input buffer directly in the ISR, but I am aware that this is not good practice. It would also mean packets get overwritten when the ISR is invoked while a packet is still being processed.
So, is there a way to overwrite an ISR's return address? Does C include such a feature, maybe something like goto?
Or am I completely on the wrong track?
Edit: Below is a reduced version of my code which also causes the described latency.
#define F_CPU 8000000UL
#define BAUD 38400
#define BUFFER_LENGTH 64

#include <avr/io.h>          // register definitions (UDR0, ADCSRA, ...)
#include <util/setbaud.h>
#include <avr/interrupt.h>
#include <stdbool.h>

volatile char input_buffer[BUFFER_LENGTH + 1] = "";
volatile uint8_t input_pointer = 0;
volatile bool packet_started = false;
volatile bool packet_available = false;

ISR (USART_RX_vect) {
    unsigned char nextChar;
    nextChar = UDR0;
    if (nextChar == '<') {
        input_pointer = 0;
        packet_started = true;
    }
    else if (nextChar == '>' && packet_started) {
        packet_started = false;
        packet_available = true;
    }
    else {
        if (input_pointer >= BUFFER_LENGTH) {
            input_pointer = 0;
            packet_started = false;
            packet_available = false;
        }
        else {
            input_buffer[input_pointer++] = nextChar;
        }
    }
}

bool ADC_handler () {
    ADCSRA = (1<<ADEN)|(1<<ADPS2)|(1<<ADPS1)|(1<<ADPS0);
    ADCSRA |= (1<<ADSC);
    while (ADCSRA & (1<<ADSC)); // this loop blocks and causes latency
    // assigning conversion result to a variable (not shown)
}

void ADC_init(void) {
    ADMUX = (1<<REFS1)|(1<<REFS0)|(1<<MUX3);
    ADCSRA = (1<<ADEN)|(1<<ADPS2)|(1<<ADPS1)|(1<<ADPS0);
}

void process_buffer() {
    // this function does something with the buffer
    // but it takes "no" time and is not causing latency
    return;
}

void UART_handler () {
    if (packet_available) process_buffer();
}

void UART_init (void) {
    UBRR0H = UBRRH_VALUE;
    UBRR0L = UBRRL_VALUE;
    UCSR0B |= (1<<RXCIE0)|(1<<RXEN0)|(1<<TXEN0);
    UCSR0C |= (1<<UCSZ01)|(1<<UCSZ00);
}

int main(void){
    UART_init();
    ADC_init();
    // initializing some other things
    sei();
    for(;;){
        UART_handler();
        ADC_handler();
        // other handlers like the ADC_handler follow
    }
    return 0;
}
I'm aware that the latency is due to blocking code, in this case the while loop in ADC_handler() that waits for the conversion to finish. I could check for packet_available in the ADC handler and make this function return if the flag is set, or I could even retrieve the conversion result with an ADC interrupt. That's all nice because I'm the one who implements ADC_handler(). But if I wanted to use third-party libraries (e.g. sensor libraries provided by manufacturers) I would depend on how those libraries are implemented. So what I'm looking for is a way to handle the problem "on my side", in the UART implementation itself.
Don't try to use setjmp()/longjmp() to re-enter a main-level function from an ISR. This is a recipe for disaster, because the ISR never finishes correctly. You might be able to work around that in assembly, but this is really fragile. I'm not sure that it works at all on AVRs.
Since your baud rate is 38400, one byte needs at least some 250 µs to transfer. Assuming that your message has a minimum of 4 bytes, the time to transfer a message is at least 1 ms.
There are multiple possible solutions; your question might be closed because they are opinion-based...
However, here are some ideas:
Time-sliced main tasks
Since a message can arrive only once per millisecond or less often, your application doesn't need to be much faster than that.
Divide your main tasks into separate steps, each running faster than 1 ms. You might like to use a state machine, for example to allow slower I/O to finish.
After each step, check for a completed message. Using a loop avoids code duplication.
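As an illustration only, reusing the register names from the question's ADC_handler(): the blocking wait can be split into a "start" step and a "poll" step, so each pass through the main loop returns quickly and UART_handler() gets its turn in between.

void ADC_handler(void) {
    static bool conversion_running = false;
    if (!conversion_running) {
        ADCSRA |= (1 << ADSC);              // start a conversion and return immediately
        conversion_running = true;
    } else if (!(ADCSRA & (1 << ADSC))) {
        // conversion finished: read the ADC result here (not shown),
        // then allow a new conversion to be started on a later pass
        conversion_running = false;
    }
    // while the conversion is still running, fall straight through
}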
Completely interrupt-based application
Use a timer interrupt to do the repeated work. Divide it into short tasks; a state machine works magic here, too.
Use an otherwise unused interrupt to signal the end of the message. Its ISR may run a bit longer, because it will not be called often. This ISR can handle the message and change the state of the application.
You need to think about interrupt priorities with much care.
The endless loop in main() will effectively be empty, like for (;;) {}.
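A rough sketch of the timer-driven part on the ATmega168P, assuming the 8 MHz clock from the question; the work done in each step is just a placeholder:

#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint8_t work_state = 0;

void timer0_init(void)
{
    TCCR0A = (1 << WGM01);              // CTC mode
    TCCR0B = (1 << CS01) | (1 << CS00); // prescaler 64
    OCR0A  = 124;                       // 8 MHz / 64 / 125 = 1 kHz -> 1 ms tick
    TIMSK0 = (1 << OCIE0A);             // enable compare-match A interrupt
}

ISR(TIMER0_COMPA_vect)
{
    switch (work_state) {               // one short step per tick
        case 0: /* e.g. start an ADC conversion */ work_state = 1; break;
        case 1: /* e.g. fetch the ADC result    */ work_state = 0; break;
    }
}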

Problem utilizing the SPI module of the BeagleBone Black

I have the following weird problem.
I have set up the BBB to activate the spi1 module. The module is connected to an F-RAM chip (FM25CL64B). I have done all the necessary configuration. The /dev/spidev1.0 device is present, and I wrote a small program to write to and read from the chip by opening /dev/spidev1.0 and using ioctl with the SPI_IOC_MESSAGE macro command.
Using that program I managed to successfully write 32 bytes of text into the F-RAM chip. Reading also seemed to be successful... How do I know they both were successful? I used a logic analyzer with an SPI decoder activated to actually see what's going on over all four of the SPI lines. Monitoring all SPI lines I could see that the write and read operations generate the correct signals with the correct timings, and all signals are in sync. The CS enables the chip during the transaction, CLK clocks every byte in 8-bit words (as configured), and the data lines show the correct values, which I could see thanks to the SPI decoder which shows the byte value right above each 8-bit signal sequence on both the MOSI and MISO lines.
The problem is that even though I can see that correct information is sent over the MISO line during read operation, the buffer I provide to the ioctl(iSPIR, SPI_IOC_MESSAGE(2), xfer) is filled with zeros.
I intentionally initialized that buffer with other values, so I can see if the ioctl even writes into it. And it does. Zeros.
Now, the fact that I could see all the bytes sent over the MISO line during the read operation, proves that the writing operation not only looked right in the analyzer but actually wrote the intended data during the previous write operation.
I checked several times whether the MISO line is configured correctly, in the dts file (which I can rebuild and reinstall on demand). I checked if it's the correct pin, and if it's configured as input. Everything seemed to be correctly configured.
I ran the program as root in case there are permission issues - no difference.
I also implemented the SPI communication in GPIO mode, i.e. the SPI module is disabled and all lines are configured as GPIO: CE, CLK and MOSI configured as outputs and MISO configured as input. This way I could implement the entire communication in software and have full control over the lines. Doing that, this time, I was able to successfully fill a buffer with the correct data from the F-RAM chip, i.e. the sequential read operation went fine from the F-RAM chip all the way to my user-space buffer, and I was able to print the data to the console. However, that worked far too slowly. I also find it inefficient to use a purely software implementation of SPI communication when there is a hardware module available.
In order to write my sample program I used the spi_test.c open source example available online.
I also built and ran spi_test.c itself with no modification, same result.
Here is a listing of my program (relevant snippets):
// SPI config ...
int InitSPIReadMode(const char* pstrDeviceF)
{
    int file;
    __u8 wr_mode = SPI_MODE_0, rd_mode = SPI_MODE_0, lsb = 0, bits = 8;
    __u32 speed = CLOCK_FREQ_HZ; // 500 kHz

    if((file = open(pstrDeviceF, O_RDWR)) < 0)
    {
        printf("Failed to open the bus.");
        /* ERROR HANDLING; you can check errno to see what went wrong */
        exit(1);
    }
    if(ioctl(file, SPI_IOC_RD_MODE, &rd_mode) < 0)
    {
        printf("SPI rd_mode\n");
        return -1;
    }
    if(ioctl(file, SPI_IOC_RD_LSB_FIRST, &lsb) < 0)
    {
        printf("SPI rd_lsb_fist\n");
        return -1;
    }
    if(ioctl(file, SPI_IOC_RD_BITS_PER_WORD, &bits) < 0)
    {
        printf("SPI bits_per_word\n");
        return -1;
    }
    if(ioctl(file, SPI_IOC_RD_MAX_SPEED_HZ, &speed) < 0)
    {
        printf("SPI max_speed_hz\n");
        return -1;
    }

    printf("%s: spi wr-mode=%d, spi rd-mode=%d, %d bits per word, %s, %d Hz max\n",
           pstrDeviceF, wr_mode, rd_mode, bits, lsb ? "(lsb first) " : "(msb first)", speed);

    xfer[0].cs_change = 0;              /* Keep CS activated */
    xfer[0].delay_usecs = 0;            // delay in us
    xfer[0].speed_hz = CLOCK_FREQ_HZ;   // speed
    xfer[0].bits_per_word = 8;          // bits per word: 8
    xfer[1].cs_change = 0;              /* Keep CS activated */
    xfer[1].delay_usecs = 0;
    xfer[1].speed_hz = CLOCK_FREQ_HZ;
    xfer[1].bits_per_word = 8;

    return file;
}
In the main function (as the logic analyzer shows, this code correctly sends the command and address, and clocks out the 32 bytes of data afterwards):
int iSPIR = InitSPIReadMode("/dev/spidev1.0"); //open("/dev/spidev1.0", O_RDWR | O_SYNC);
char arrInstruct[3] = { OPCO_READ, 0x00, 0x00 };
char arrFRamData[512];
for(int pos = 0; pos < 512; pos++) arrFRamData[pos] = pos;
xfer[0].tx_buf = (unsigned long)arrInstruct;
xfer[0].len = 3;
xfer[1].rx_buf = (unsigned long)arrFRamData;
xfer[1].len = 32;
if(ioctl(iSPIR, SPI_IOC_MESSAGE(2), xfer) < 0) printf("ioctl write error %s.\n", strerror(errno));
// hex dumping of the arrFRamData buffer.
xfer is a global variable defined as:
struct spi_ioc_transfer xfer[2];
Thanks a lot in advance! :)
I found what the problem was with my setup. But even though the whole thing works now, the solution I applied raises more questions than answers.
So I found this solution in an article (https://elinux.org/BeagleBone_Black_Enable_SPIDEV) on the beaglebone wiki.
There I noticed that, in the device-tree overlay, they set the CLK pin as an input. Reading the entire article turned up nothing as to why the clock on the BBB's side has to be an input, even though it's the master... The article only explains how to build and install the new DTBO in order to activate the SPI1 module.
So, even though it didn't make any sense to me, I thought it wouldn't hurt to try changing the CLK line in my DTBO file from output to input to see what happens... And it worked! :O
Now, here is why it is so strange that the setup works like this. The BBB is supposed to be the SPI master, so its clock line should be an output in order to drive this synchronous communication. That means the FM25CL64B chip must act as the SPI slave. I just checked the datasheet and yes, the CLK on the chip's side is an input. So the CLK on the BBB must be an output. That's how I had configured the CLK pin on the BBB, as an output, and it didn't work.
This is why I'm so puzzled. So it works even though both ends of the CLK line are inputs?! Looking at the logic analyzer's output, I can clearly see that the CLK line is being driven correctly. But if both the master and the slave have inputs on that line, there shouldn't be anything to generate those impulses?! There is nothing else connected to that line....

Arduino Drone project, when output is "loaded" (even cap to gnd), input "steering" command starts to glitch

Arduino drone project: whenever the output is "loaded" ("open-circuit" signal-in pin to the ESC, or even just a cap to ground), the input "steering" command starts to glitch (it drops to really low values, << 1000).
The motor speed is a function of both the steering command, as well as the throttle. (In this test-case with just one motor as seen in the code below, unMotorSpeed = unThrottleIn +/- unSteeringIn)
When hooked up to a scope, the physical input signals (steering and throttle coming from the receiver) are great, and I even swapped the input pins to make sure there wasn't a problem between the receiver and the arduino. The problem seems to be coming from the software, but it just doesn't make sense, since when there isn't a "load" attached, the input and output values are all fine and clean. (I put "load" in quotes because sometimes it's essentially an open circuit --> super high impedance input signal to the electronic speed controller (ESC), which I don't even ground to complete a circuit).
Would anyone be able to check out the code and see if I'm missing something?
At this point, a somewhat quick workaround would be to simply not write those new glitchy values to the motor and keep the old speed values whenever the new speed is significantly lower than the old speed (and these are at over 50 kHz, so clearly a huge jump in just one step is a bit crazy).
Note: In the code, the total output of unMotorSpeed leaves from the servoThrottle pin. Just the original naming that I didn't end up changing... it's clear if you read the code and see all the variables.
UPDATE: Weird thing is, I just ran my setup without any changes and EVERYTHING WORKED...I reflashed the arduino multiple times to make sure it wasn't some lucky glitch, and it kept on working, with everything set up. Then I dropped my remote on the ground and moved a few of the wires on the breadboard around, and things went back to their wonky ways after putting the setup back together. Idk what to do!
// --> starting code found at: rcarduino.blogspot.com
// See related posts -
// http://rcarduino.blogspot.co.uk/2012/01/how-to-read-rc-receiver-with.html
#include <Servo.h>
// Assign your channel in pins
#define THROTTLE_IN_PIN 3
#define STEERING_IN_PIN 2
// Assign your channel out pins
#define THROTTLE_OUT_PIN 9
//#define STEERING_OUT_PIN 9
// Servo objects generate the signals expected by Electronic Speed Controllers and Servos
// We will use the objects to output the signals we read in
// this example code provides a straight pass through of the signal with no custom processing
Servo servoThrottle;
//Servo servoSteering;
// These bit flags are set in bUpdateFlagsShared to indicate which
// channels have new signals
#define THROTTLE_FLAG 1
#define STEERING_FLAG 2
// holds the update flags defined above
volatile uint8_t bUpdateFlagsShared;
// shared variables are updated by the ISR and read by loop.
// In loop we immediately take local copies so that the ISR can keep ownership of the
// shared ones. To access these in loop
// we first turn interrupts off with noInterrupts
// we take a copy to use in loop and then turn interrupts back on
// as quickly as possible, this ensures that we are always able to receive new signals
volatile uint16_t unThrottleInShared;
volatile uint16_t unSteeringInShared;
// These are used to record the rising edge of a pulse in the calcInput functions
// They do not need to be volatile as they are only used in the ISR. If we wanted
// to refer to these in loop and the ISR then they would need to be declared volatile
uint32_t ulThrottleStart;
uint32_t ulSteeringStart;
//uint32_t ulAuxStart;
void setup()
{
  Serial.begin(9600);

  // attach servo objects, these will generate the correct
  // pulses for driving Electronic Speed Controllers, servos or other devices
  // designed to interface directly with RC receivers
  servoThrottle.attach(THROTTLE_OUT_PIN);

  // using the PinChangeInt library, attach the interrupts
  // used to read the channels
  attachInterrupt(digitalPinToInterrupt(THROTTLE_IN_PIN), calcThrottle, CHANGE);
  attachInterrupt(digitalPinToInterrupt(STEERING_IN_PIN), calcSteering, CHANGE);
}
void loop()
{
  // create local variables to hold local copies of the channel inputs
  // these are declared static so that their values will be retained
  // between calls to loop.
  static uint16_t unThrottleIn;
  static uint16_t unSteeringIn;
  static uint16_t difference;
  static uint16_t unMotorSpeed; // variable that stores overall motor speed
  static uint8_t bUpdateFlags;  // local copy of update flags

  // check shared update flags to see if any channels have a new signal
  if(bUpdateFlagsShared)
  {
    noInterrupts(); // turn interrupts off quickly while we take local copies of the shared variables

    // take a local copy of which channels were updated in case we need to use this in the rest of loop
    bUpdateFlags = bUpdateFlagsShared;

    // in the current code, the shared values are always populated
    // so we could copy them without testing the flags
    // however in the future this could change, so let's
    // only copy when the flags tell us we can.
    if(bUpdateFlags & THROTTLE_FLAG)
    {
      unThrottleIn = unThrottleInShared;
    }
    if(bUpdateFlags & STEERING_FLAG)
    {
      unSteeringIn = unSteeringInShared;
    }

    // clear shared copy of updated flags as we have already taken the updates
    // we still have a local copy if we need to use it in bUpdateFlags
    bUpdateFlagsShared = 0;

    interrupts(); // we have local copies of the inputs, so now we can turn interrupts back on
    // as soon as interrupts are back on, we can no longer use the shared copies, the interrupt
    // service routines own these and could update them at any time. During the update, the
    // shared copies may contain junk. Luckily we have our local copies to work with :-)
  }

  //Serial.println(unSteeringIn);

  // do any processing from here onwards
  // only use the local values unAuxIn, unThrottleIn and unSteeringIn, the shared
  // variables unAuxInShared, unThrottleInShared, unSteeringInShared are always owned by
  // the interrupt routines and should not be used in loop

  // the following code provides a simple pass-through
  // this is a good initial test, the Arduino will pass through
  // receiver input as if the Arduino is not there.
  // This should be used to confirm the circuit and power
  // before attempting any custom processing in a project.

  // we are checking to see if the channel value has changed, this is indicated
  // by the flags. For the simple pass-through we don't really need this check,
  // but for a more complex project where a new signal requires significant processing
  // this allows us to only calculate new values when we have new inputs, rather than
  // on every cycle.

  ///// if-else chain commented out to determine/prove problem with steering signal --> buggy!
  if(unSteeringIn < 1400) // if steering joystick moved left
  {
    difference = 1400 - unSteeringIn;
    if(unThrottleIn - difference >= 0)
      unMotorSpeed = unThrottleIn - difference;
  }
  else if(unSteeringIn > 1550) // if steering joystick moved right (needs to be tweaked, but works for now)
  {
    difference = unSteeringIn - 1600;
    if(unThrottleIn + difference < 2000)
      unMotorSpeed = unThrottleIn + difference;
  }
  else
  {
    unMotorSpeed = unThrottleIn;
  }

  //Serial.println(unMotorSpeed);
  //Serial.println(unSteeringIn);
  //Serial.println(unThrottleIn);

  if(bUpdateFlags)
  {
    //Serial.println(servoThrottle.readMicroseconds());
    if(servoThrottle.readMicroseconds() != unMotorSpeed)
    {
      servoThrottle.writeMicroseconds(unMotorSpeed);
      Serial.println(unMotorSpeed);
    }
  }

  bUpdateFlags = 0;
}
// simple interrupt service routine
void calcThrottle()
{
  // if the pin is high, it's a rising edge of the signal pulse, so let's record its value
  if(digitalRead(THROTTLE_IN_PIN) == HIGH)
  {
    ulThrottleStart = micros();
  }
  else
  {
    // else it must be a falling edge, so let's get the time and subtract the time of the rising edge
    // this gives us the time between the rising and falling edges, i.e. the pulse duration.
    unThrottleInShared = (uint16_t)(micros() - ulThrottleStart); // pulse duration
    // set the throttle flag to indicate that a new throttle signal has been received
    bUpdateFlagsShared |= THROTTLE_FLAG;
  }
}

void calcSteering()
{
  if(digitalRead(STEERING_IN_PIN) == HIGH)
  {
    ulSteeringStart = micros();
  }
  else
  {
    unSteeringInShared = (uint16_t)(micros() - ulSteeringStart); // pulse duration
    bUpdateFlagsShared |= STEERING_FLAG;
  }
}
You should read the documentation of attachInterrupt() - in the section "About Interrupt Service Routines" it gives information on how certain functions behave when called from an interrupt. For micros() it states:
micros() works initially, but will start behaving erratically after 1-2 ms.
I believe that means after the ISR has been running for more than 1 ms, rather than 1 ms after start-up in general, so it may not apply in this case, but you might need to consider how you are doing the timing in the ISR. That's a problem with Arduino - terrible documentation!
One definite problem which may be a cause is the fact that access to unSteeringInShared is non-atomic. It is a 16-bit value on 8-bit hardware, so it requires multiple instructions to read and write, and that sequence can be interrupted. It is therefore possible to read one byte of the value in the loop() context and then have both bytes changed by the interrupt context before you read the second byte, so you end up with two halves of two different values.
To resolve this problem you could either disable interrupts while reading:
noInterrupts() ;
unSteeringIn = unSteeringInShared ;
interrupts() ;
Or you can spin-lock the read:
do
{
    unSteeringIn = unSteeringInShared ;
} while( unSteeringIn != unSteeringInShared ) ;
You should do the same for unThrottleInShared too, although why you do not see any problem with that is unclear - this is perhaps not the problem you are currently observing, but is definitely a problem in any case.
Alternatively if 8 bit resolution is sufficient you could encode the input as an atomic 8 bit value thus:
uint8_t unSteeringInShared ;
...
int32_t timeus = micros() - ulSteeringStart - 1000 ;
if( timeus < 0 )
{
    unSteeringInShared = 0 ;
}
else if( timeus > 1000 )
{
    unSteeringInShared = 255 ;
}
else
{
    unSteeringInShared = (uint8_t)(timeus * 255 / 1000) ;
}
Of course, changing your scale from 1000-2000 to 0-255 will need changes to the rest of the code. For example, to convert a value x in the range 0 to 255 to a servo pulse width:
pulsew = (x * 1000 / 256) + 1000 ;

Bit banging for SPI in ARM

I am trying to read the data from the FXLS8471Q 3-axis linear accelerometer using SPI. I am using a bit-banging method to read the data from the accelerometer on an LPC 2184 ARM processor. I used the following code:
unsigned char spiReadReg (const unsigned char regAddr)
{
    unsigned char SPICount;
    unsigned char SPIData;

    SPI_CS = 1;
    SPI_CK = 0;
    SPIData = regAddr;
    SPI_CS = 0;

    for (SPICount = 0; SPICount < 8; SPICount++)
    {
        if (SPIData & 0x80)
            SPI_MOSI = 1;
        else
            SPI_MOSI = 0;
        SPI_CK = 1;
        SPI_CK = 0;
        SPIData <<= 1;
    }

    SPI_MOSI = 0;
    SPIData = 0;

    for (SPICount = 0; SPICount < 8; SPICount++)
    {
        SPIData <<= 1;
        SPI_CK = 1;
        SPIData += SPI_MISO;
        SPI_CK = 0;
        SPIData &= (0xFE);
    }

    SPI_CS = 1;
    return ((unsigned char)SPIData);
}
But instead of getting the expected value 0x6A, I am getting a garbage value.
Please help me solve this problem.
SPIData &= (0xFE);, as pointed out in another answer, is definitely wrong, as it erases the bit you just received.
However, there are other major issues with your code.
An SPI slave device sends you data by setting the value of MISO on a rising or falling clock, depending on the type of device. However, you didn't wait in your code for the value to appear on MISO.
You control the communication by setting the clock to 1 and 0. The datasheet of the accelerometer says on page 19, that
Data is sampled during the rising edge of SCLK and set up during the falling edge of SCLK.
This means that in order to read from it, your processor needs to change the clock from one to zero, thereby signalling the accelerometer to send the next bit to MISO. In other words, you did the reverse: in your code you read on a rising edge, while you should be reading on the falling edge. After setting the clock to zero, you have to wait a little while until the value appears on MISO, and only then should you read it and add it to your SPIData variable. Table 9, SPI timing, indicates how long you have to wait: at least 500 nanoseconds. That's not much, but if your CPU runs faster than 2 MHz and you don't use a delay, you will try to read MISO before the accelerometer has had time to set it up properly.
You also forgot the slave select, which is actually required to indicate the beginning and the end of a datagram.
Check the Figure 7. SPI Timing Diagram in the datasheet, it indicates what you are required to do and in what order, to communicate with the device using SPI.
I also suggest reading about how the rotating (shift) registers of SPI work, because it seems from its datasheet that the accelerometer needs to receive a well-specified number of bits before useful data appears on its output. Don't forget, as you send a single bit to the device, it also has to send a bit back to you, so if it hasn't decoded a command yet, it can only send gibberish. Your code, as the master, can only "push" bits in and collect the bits which "pop out" on the other side. This means you have to send a command, and then send further bits until the whole answer has been pushed out to you bit by bit.
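For illustration only, here is a sketch of the second loop with the changes described above; delay_500ns() is a stand-in for whatever short delay your platform provides, per the >= 500 ns figure from Table 9:

SPIData = 0;
for (SPICount = 0; SPICount < 8; SPICount++)
{
    SPI_CK = 1;
    SPI_CK = 0;                          /* falling edge: device sets up the next bit */
    delay_500ns();                       /* wait for MISO to settle before sampling */
    SPIData = (SPIData << 1) | SPI_MISO; /* keep the sampled bit (no &= 0xFE) */
}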
If you get stuck, I think you will have much more luck getting help on https://electronics.stackexchange.com/, but instead of just posting this same code there (which seems to be blindly copied from www.maximintegrated.com and has absolutely nothing to do with the problem you are trying to solve), I strongly recommend trying to understand the "Figure 7. SPI Timing Diagram" I suggested before, and altering your code accordingly.
Without understanding how the device you are trying to communicate with works, you will never succeed by blindly copying code from a completely different project just because it says "SPI" in the title.
SPIData &=(0xFE);
The above line is causing the problem. Here the LSB (which contained the data bit just taken from MISO) is reset to 0 -- basically you are destroying the bit you just read. Omitting the line should correct the problem.
Be sure to compile your function with NO optimization, as optimization can corrupt the bit-bang operation.
This is the code from the Maxim site for the spiReadReg function (which looks like where you got your code). However, it is just a guide for the 'general' sequence of operations for communicating with the Maxim 1481 part.
The accelerometer part needs several setup commands and reads completely differently.
Suggest reading the app notes and white papers available at freescale.com for the specific part number.
Those app notes/white papers will indicate the sequence of commands needed for setting up a specific mode of operation and how to request/interpret the resulting data.
There are a number of device specifics that your code has not taken into account.
Per the spec sheet, the first bit transmitted is a read/write indicator, followed by 8 bits of register address, followed by 7 trash bits (I suggest sending all 0's for the trash bits), followed by the data bits.
Depending on the setup commands, those data bits could be 8 bits or 14 bits or multiple registers of 8 or 14 bits per register.

PIC16F1459 I2C Master Ack Issue with 24LC32

I'm facing a weird issue. I've always used bit-banged I2C functions on my PIC16F1459, but now I want to use the MSSP (Master Synchronous Serial Port, the SPI/I2C peripheral). So I've started writing the functions according to the datasheet: Start, Stop, etc. The problem I have is my PIC won't ACK the data I send to the I2C EEPROM. The datasheet clearly says that the ACK status can be found in SSPCON2.ACKSTAT. So my guess was to poll this bit until the slave responds to my data, but the program hangs in the while loop.
void vReadACK (void)
{
    while (SSPCON2.ACKSTAT != 0);
}
And here's my write function, my I2CCheck function and I2C Master Initialization function
void vI2CEcrireOctet (UC ucData, UC ucRW)
{
    vI2CCheck();

    switch (ucRW)
    {
        case READ:
            SSPBUF = ucData + 1;
            break;

        case WRITE:
            SSPBUF = ucData + 0;
            break;
    }

    vReadACK();
}

void vI2CCheck (void)
{
    while (SSPCON2.ACKEN);    //ACKEN not cleared, wait
    while (SSPCON2.RCEN);     //RCEN not cleared, wait
    while (SSPCON2.PEN);      //STOP not cleared, wait
    while (SSPCON2.SEN);      //Start not cleared, wait
    while (SSPCON2.RSEN);     //Rep start not cleared, wait
    while (SSP1STAT.R_NOT_W); //TX not done, wait
}
void vInitI2CMaster (void)
{
    TRISB4_bit = 1;       //SDA IN
    TRISB6_bit = 1;       //SCL IN
    SSP1STAT.SMP = 1;     //No slew rate control
    SSP1STAT.CKE = 0;     //Disable SMBus inputs
    SSPADD = 0x27;        //100 kHz
    SSPCON1 = 0b00101000; //I2C Master mode
    SSPCON3 = 0b00000000; //Nothing slave-related
}
Just so you know: the 24LC32A's WriteProtect pin is tied to VSS, A2-A1-A0 are tied to GND, so the address is 0xA0. 4k7 pull-ups are on the I2C lines. The PIC16F1459 runs at 16 MHz from the internal oscillator.
I'm completely stuck. I've gone through the MSSP section of the datasheet 5 or 6 times without finding any issue. Can you guys help?
And here's my logic analyzer preview (with the while loop inside vReadACK() removed):
Well, it looks like I've found the answer to my question. What I was doing was basically the right way of doing it. The problem seemed to be the bus free time required before the slave can handle a new transfer. At 16 MHz, my I2C transactions were probably coming too fast for the EEPROM memory. So I added a small delay right after the stop operation, so the write sequences are spaced out, and BAM, it worked.
Bus free time: time the bus must be free before a new transmission can start.
Despite the fact that you "totally know" the "PIC won't ACK the data I send to the I2C EEPROM" because it's not supposed to, you still seem to misunderstand how I2C acknowledgements are supposed to work. They're called acknowledgements because they can be both positively (ACK) and negatively (NAK) acknowledged. If you look at the analyzer screenshot you posted, you'll find that it quite clearly labels each byte being sent as having been NAKed.
To properly check for I2C ACKs you should be polling the trailing edge of the ACKTIM bit, and then checking the ACKSTAT bit to find out whether the slave transmitted an ACK or a NAK bit. Something like this:
int vReadACK() {
    while (!SSPCON3.ACKTIM);
    while (SSPCON3.ACKTIM);
    return SSPCON2.ACKSTAT;
}
As for why your slave device is apparently NAKing each byte, it isn't clear from the code you've posted, but there are a couple of notable omissions. You need to generate start and stop conditions, but you've shown no code to do this.
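For reference, a minimal sketch of the missing start/stop generation on the MSSP, using the same SSPCON2 bits your vI2CCheck() already polls; hardware clears SEN/PEN once the condition has completed:

void vI2CStart (void)
{
    SSPCON2.SEN = 1;     // request a START condition
    while (SSPCON2.SEN); // cleared by hardware when the START has completed
}

void vI2CStop (void)
{
    SSPCON2.PEN = 1;     // request a STOP condition
    while (SSPCON2.PEN); // cleared by hardware when the STOP has completed
}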
