I2C master to slave communication problem - C

I am using a TC237 and the board does not provide hardware I2C, so I implemented it by bit-banging GPIO.
It reads and writes the GPIO registers directly, but master-slave communication does not work.
Following the I2C protocol, I wrote the start, stop, ACK and NACK functions.
I also wrote 1-byte write and read routines, and on top of them code to read and write the slave's registers.
I do not know how to upload a picture, but when I check SDA and SCL with an oscilloscope during a read, there are two bytes to read and everything else looks OK; however, the first byte reads 0x00 and the next byte reads 0xEF.
I2C_Start();
waitTime(1*TimeConst_100us);
I2C_WriteByte((uint8)(Slave_addr|0x01));  // LIDAR: 0xC5 / BH1750: 0x27
I2C_ACK();                                // I2C_GetACK();
I2C_ReadData_H = I2C_ReadByte();
I2C_ACK();
waitTime(1*TimeConst_100us);
I2C_ReadData_L = I2C_ReadByte();
waitTime(1*TimeConst_100us);
if(I2C_NACK() == BUSY)
{
    return RESET;
}
I2C_Stop();
return SET;
The result should be the value sent by the IC, but a strange value is received instead.
The suspicious part is that there seems to be no ACK from the device after reading the first byte. What should I do?

Related

How to save only certain bytes I need instead of all in an array?

I am receiving 10 bytes from a sensor via UART every second, but I don't need all of them, only certain bytes. What I do now is save all bytes into an array, create two new uint8_t variables, and assign them the bytes from the buffer array that I need.
Is there a way to receive and save only the bytes I need in the first place, instead of all 10?
uint8_t buffer[10];

HAL_UART_Receive_DMA(&huart4, (uint8_t*)buffer, 10);

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart4)
{
    uint8_t value1 = buffer[4];
    uint8_t value2 = buffer[5];
    ...
}
In DMA mode you need to provide the full-size buffer. There is no other way, as reception is not controlled by the core (it is done in the background) and the DMA controller only signals the end of the transaction (and, if you want, the half-transfer and error conditions).
It is possible with just raw interrupt handling, without any DMA and (mostly) without fancy HAL functions.
You'll have to write a manual UART interrupt handler for the RXNE flag, which is set whenever the UART receives a single character. Read it from the DR register and decide whether to save or discard it. Of course it's now up to you to count the received bytes and detect the "end of message" condition; a sketch follows.
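As a rough sketch (register names are for an STM32F1/F4-style USART; newer families use ISR/RDR instead of SR/DR, and the byte positions come from the question):

/* Sketch only: adjust SR/DR and the IRQ name for your exact part. */
volatile uint8_t value1, value2;

void UART4_IRQHandler(void)
{
    static uint8_t count = 0;

    if (UART4->SR & USART_SR_RXNE)          /* a byte has been received */
    {
        uint8_t byte = (uint8_t)UART4->DR;  /* reading DR clears RXNE   */

        if (count == 4) value1 = byte;      /* keep only bytes 4 and 5  */
        if (count == 5) value2 = byte;

        if (++count >= 10)                  /* "end of message" after   */
            count = 0;                      /* 10 bytes: start over     */
    }
}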
OR
If your code has nothing else to do while receiving this message, use
HAL_UART_Receive
to read the message byte by byte.
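For example, a minimal blocking sketch that keeps only bytes 4 and 5 (huart4 as in the question; HAL_MAX_DELAY blocks until a byte arrives):

uint8_t byte, value1 = 0, value2 = 0;

for (uint8_t i = 0; i < 10; i++)            /* message is 10 bytes long */
{
    if (HAL_UART_Receive(&huart4, &byte, 1, HAL_MAX_DELAY) != HAL_OK)
        break;                              /* bail out on UART error   */
    if (i == 4) value1 = byte;              /* keep only what is needed */
    if (i == 5) value2 = byte;
}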

How does I2C Protocol work

I have visited some links and looked at some example programs for I2C programming. I want to write my own code for the I2C protocol. Suppose a DS1307 RTC and an LCD are connected to an 8051. I am using Keil software to write a C program. It's very difficult for me to write the whole I2C program, so I tried to break the program into small parts:
Module 1: define and set pins for LCD and DS1307 RTC
Module 2: write C code for DS1307 (make functions for DS1307 such as read, write)
Module 3: write C code for LCD (data, command initialize, etc)
Module 4: main function
I understand module 1, but I am looking for help understanding module 2, so again I want to break module 2 into small parts.
How do I break module 2 into small parts for easy understanding? How many functions should be in module 2?
Module 2 is essentially an I2C driver using bit banging of the 8051 port pins. The I2C protocol follows a sequence: it is begun by a start sequence and ended by a stop sequence. Communication is always started by the master, and each slave has an address. You can split this into separate functions; in module 2 you will write all of the functions below.
For example, the I2C read sequence will be the following (note that the last byte gets a NAK instead of an ACK):
I2C_Start();                  // generate I2C start sequence
I2C_Send(slave_address|1);    // send I2C slave address in read mode
I2C_read_ACK();               // master checks that the slave got the address
while(number_of_bytes > 1){
    I2C_Read();
    I2C_send_ACK();           // ACK: master wants more data
    number_of_bytes--;
}
I2C_Read();                   // last byte...
I2C_NAK();                    // ...gets a NAK so the slave will not prepare more data
I2C_Stop();                   // stop communication
Again, a write to the slave will have the steps below:
I2C_Start();                  // generate I2C start sequence
I2C_Send(slave_address);      // send I2C slave address in write mode
I2C_read_ACK();               // master checks that the slave got the address
while(number_of_bytes){
    I2C_Write();
    I2C_read_ACK();           // master checks that the slave got the data
    number_of_bytes--;
}
I2C_Stop();                   // stop communication
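Each of those functions is a short bit-bang routine. A minimal sketch of some of them in Keil C51, assuming SDA on P2.0 and SCL on P2.1 (the pin choice and the delay loop are placeholders; writing 1 to an 8051 quasi-bidirectional port pin releases the line so the slave can drive it):

#include <reg51.h>

sbit SDA = P2^0;                 /* assumed pins, pick your own */
sbit SCL = P2^1;

static void i2c_delay(void)      /* crude half-bit delay, tune for your clock */
{
    unsigned char i;
    for (i = 0; i < 10; i++);
}

void I2C_Start(void)             /* SDA falls while SCL is high */
{
    SDA = 1; SCL = 1; i2c_delay();
    SDA = 0; i2c_delay();
    SCL = 0;
}

void I2C_Stop(void)              /* SDA rises while SCL is high */
{
    SDA = 0; SCL = 1; i2c_delay();
    SDA = 1; i2c_delay();
}

bit I2C_read_ACK(void)           /* returns 0 when the slave pulls SDA low */
{
    bit nack;
    SDA = 1;                     /* release SDA so the slave can drive it */
    SCL = 1; i2c_delay();
    nack = SDA;
    SCL = 0;
    return nack;
}

void I2C_Send(unsigned char byte)  /* clock out 8 bits, MSB first */
{
    unsigned char i;
    for (i = 0; i < 8; i++)
    {
        SDA = (byte & 0x80) ? 1 : 0;
        SCL = 1; i2c_delay();
        SCL = 0;
        byte <<= 1;
    }
}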
I also found a driver at
https://circuitdigest.com/microcontroller-projects/digital-clock-using-8051-microcontroller
The official I2C specification is here:
https://www.nxp.com/documents/user_manual/UM10204.pdf

Implement UART frame controller

I'm programming on an STM32 board and I'm confused about how to use my peripherals: polling, interrupt, DMA, DMA interrupt...
So far I have coded a UART module which sends basic data, and it works in polling, interrupt and DMA mode.
But I'd like to be able to send and receive specific frames with variable lengths, for example:
[ START | LGTH | CMD_ID | DATA(LGTH) | CRC ]
I also have sensors, and I'd like to use the DATA received in these UART frames to interact with them.
So, what I don't understand is:
how do I program the UART module to work in "frame" mode? (buffer? circular DMA? interrupt? where, when...)
when I'm able to send or receive frames with my UART, what is the best way to interact with the sensors? (inside a timer interrupt? in a state machine? with extern variables? ...)
Here is my libraries tree (attached as an image).
In the future, the idea is to port this application to FreeRTOS.
Thank you!
Definitely use DMA when it is available.
You have one big buffer (a cyclic buffer is a good solution) and you just write data into it from one side. If the DMA is not already running, you start it with your buffer.
If the DMA is already running, you just write your data to the buffer and wait for the DMA transfer-complete interrupt.
In that interrupt you advance the buffer's read pointer (as some data has already been sent) and check whether any data is still waiting to be sent; if so, set the DMA memory address and the number of bytes to send.
When the DMA TC IRQ fires again, repeat the process. A sketch of this follows.
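A rough sketch of that idea (uart_tx_dma_start() is a made-up helper standing in for the chip-specific DMA setup; overflow checks and interrupt locking are omitted):

#include <stdint.h>

#define TXBUF_SIZE 256

static uint8_t txbuf[TXBUF_SIZE];          /* cyclic TX buffer            */
static volatile uint16_t rd, wr;           /* read / write positions      */
static volatile uint16_t dma_len;          /* length of the running block */
static volatile uint8_t  dma_busy;

/* Hypothetical helper: program the DMA with a linear block and start it. */
extern void uart_tx_dma_start(const uint8_t *src, uint16_t len);

static void try_start_dma(void)
{
    if (dma_busy || rd == wr)
        return;                            /* already running, or empty   */
    /* Send only up to the physical end of the buffer; the remainder
       goes out in the next transfer after the TC interrupt.              */
    dma_len  = (wr > rd) ? (uint16_t)(wr - rd) : (uint16_t)(TXBUF_SIZE - rd);
    dma_busy = 1;
    uart_tx_dma_start(&txbuf[rd], dma_len);
}

void uart_write(const uint8_t *data, uint16_t len)
{
    while (len--) {                        /* no overflow check in sketch */
        txbuf[wr] = *data++;
        wr = (uint16_t)((wr + 1) % TXBUF_SIZE);
    }
    try_start_dma();
}

void uart_dma_tc_irq(void)                 /* call from the DMA TC interrupt */
{
    rd       = (uint16_t)((rd + dma_len) % TXBUF_SIZE);
    dma_busy = 0;
    try_start_dma();                       /* more data waiting? go again */
}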
The UART has no support for frames, only plain bytes. That means you have to "invent" your own frame protocol and use it in the application.
Later, when you want to send such a FRAME over UART, you have to:
Write the start byte to the buffer
Write the other header bytes
Write the actual data
Write the stop bytes/CRC/whatever
Check whether the DMA is running; if it is not, start it.
Normally, I use this frame concept:
[START, ADDRESS, CMD, LEN, DATA, CRC, STOP]
START: Start byte indicating start of frame
ADDRESS: Address of device when multiple devices are in use on bus
CMD: Command ID
LEN: 2 bytes for data length
DATA: Actual data in bytes of variable length
CRC: 2 bytes of CRC covering address, cmd, len and data
STOP: Stop byte indicating end of frame
This is how I do it in every project where it's necessary. It does not use the CPU to send data; it just sets up the DMA and starts the transmission.
From the app's perspective, you just have to create a send_frame(data, len) function which builds the frame and puts it into the buffer for transmission, for example:
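Building on the layout above and the uart_write() sketch earlier, such a function could look like this (FRAME_START, FRAME_STOP, MAX_FRAME and the crc16() routine are placeholders; the caller must keep len + 8 within MAX_FRAME):

#include <stdint.h>

#define FRAME_START 0xAA                   /* placeholder values          */
#define FRAME_STOP  0x55
#define MAX_FRAME   72                     /* max DATA length + 8 framing */

extern uint16_t crc16(const uint8_t *p, uint16_t n);       /* your CRC   */
extern void uart_write(const uint8_t *data, uint16_t len); /* see above  */

void send_frame(uint8_t addr, uint8_t cmd, const uint8_t *data, uint16_t len)
{
    uint8_t frame[MAX_FRAME];
    uint16_t i = 0, crc;

    frame[i++] = FRAME_START;
    frame[i++] = addr;
    frame[i++] = cmd;
    frame[i++] = (uint8_t)(len >> 8);      /* LEN, 2 bytes, MSB first     */
    frame[i++] = (uint8_t)(len & 0xFF);
    while (len--) frame[i++] = *data++;

    crc = crc16(&frame[1], (uint16_t)(i - 1)); /* over addr, cmd, len, data */
    frame[i++] = (uint8_t)(crc >> 8);
    frame[i++] = (uint8_t)(crc & 0xFF);
    frame[i++] = FRAME_STOP;

    uart_write(frame, i);                  /* queue for DMA transmission  */
}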
The buffer size must be big enough to fit your requirements:
how much data arrives at a particular time (is it continuous, or a lot of data in a short burst)
the UART baudrate
For specific questions, ask and maybe I can provide some code examples from my libraries as a reference.
In this case, where you need to implement that protocol, I would probably use plain interrupts and, in the handler, use a byte-by-byte state machine to parse the incoming bytes into a frame buffer.
Only when a complete, valid frame has been received is it necessary to signal some semaphore/event and request a scheduler run; otherwise, you can handle any protocol error as you require - maybe tx some 'error-repeat' message and reset the state machine to await the next start-of-frame byte.
If you use DMA for this, then the variable frame length is going to be awkward and you STILL have to iterate over the received data to validate your protocol :(
DMA doesn't sound like a good fit for this, to me...
EDIT: if there is no preemptive multitasker, then forget about all that semaphore gunge above :) Still, it's easier to check a boolean 'validFrameRx' flag than to parse DMA block data.
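For reference, a bare-bones sketch of such a byte-by-byte state machine, using the frame format from the question and assuming (for brevity) a single CRC byte whose check is left out; FRAME_START and MAX_DATA are placeholders:

#include <stdint.h>

#define FRAME_START 0xAA                   /* placeholder start byte     */
#define MAX_DATA    64                     /* placeholder size limit     */

enum rx_state { WAIT_START, GET_LEN, GET_CMD, GET_DATA, GET_CRC };

static enum rx_state state = WAIT_START;
static uint8_t cmd_id, buf[MAX_DATA];
static uint8_t len, pos;
static volatile uint8_t validFrameRx;      /* polled by the main loop    */

void rx_byte(uint8_t b)                    /* call for every RX byte     */
{
    switch (state) {
    case WAIT_START:
        if (b == FRAME_START) state = GET_LEN;
        break;
    case GET_LEN:
        if (b > MAX_DATA) { state = WAIT_START; break; }  /* bad length  */
        len = b; pos = 0;
        state = GET_CMD;
        break;
    case GET_CMD:
        cmd_id = b;
        state = (len > 0) ? GET_DATA : GET_CRC;
        break;
    case GET_DATA:
        buf[pos++] = b;
        if (pos == len) state = GET_CRC;
        break;
    case GET_CRC:
        /* verify CRC here; on mismatch just reset and await next start */
        validFrameRx = 1;                  /* complete frame received    */
        state = WAIT_START;
        break;
    }
}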

read() and write() using spi device driver

I would like to use read() and write() on /dev/spiB.C, which is created by the user-mode SPI device driver (spidev.c). Now, the SPI transaction message follows a certain format (e.g. 24 bits, with some bits for the address and some bits for the data) which is defined by the chip-vendor-specific SPI controller driver. How does the message format fit into the read() and write() transactions? Where and how should I define the format in the code, before or after the write() or read()?
Thanks!
You need to go through the spidev_ioctl() path mentioned in spidev.c.
E.g. just check the switch case SPI_IOC_RD_BITS_PER_WORD, which handles the bits-per-word setting (line 410); the write counterpart finally stores the value in the bits_per_word member of the SPI device structure (line 415).
This spi pointer is the pointer to the SPI device you are communicating with, and it was allocated during spidev_probe().
You definitely need to set the configuration before read/write. You also need to set the speed and mode of the SPI bus; a user-space sketch follows the link below.
I have referred to the following link for the spidev.c file:
http://lxr.free-electrons.com/source/drivers/spi/spidev.c
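From user space, that configuration boils down to a few ioctl() calls on the device file before read()/write(). A minimal sketch (the device path, mode, speed and the content of the 24-bit message are examples only; the meaning of those 3 bytes comes from your chip's datasheet):

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

int main(void)
{
    uint8_t  mode  = SPI_MODE_0;           /* clock polarity/phase       */
    uint8_t  bits  = 8;                    /* bits per word              */
    uint32_t speed = 1000000;              /* 1 MHz max clock            */
    uint8_t  tx[3] = { 0x01, 0x23, 0x45 }; /* 24-bit message: addr+data  */

    int fd = open("/dev/spidev0.0", O_RDWR);   /* your /dev/spiB.C       */
    if (fd < 0) return 1;

    ioctl(fd, SPI_IOC_WR_MODE, &mode);
    ioctl(fd, SPI_IOC_WR_BITS_PER_WORD, &bits);
    ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

    write(fd, tx, sizeof tx);              /* format of these bytes is   */
                                           /* defined by you / the chip  */
    close(fd);
    return 0;
}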

Flash Memory writing and reading through SPI

This is the first time I've tried to use the SPI protocol. I am trying to understand an example code that came with my development kit (which has an STM32F207VCT6 microcontroller). This code implements communication (reading and writing) with an AT45DB041D flash memory.
Every time this example code reads the memory, it not only sends information about what is to be read but also receives data right away. This received data isn't used for any purpose (apparently). The real data to be read is requested again later by the receive routine, which first sends a 0x00 byte. The following code shows that:
void AT45DBXX_Read_ID(u8 *IData){
    u8 i;
    AT45DBXX_BUSY();
    AT45DBXX_Enable;            //Chip Select driven low
    SPIx_Send_byte(Read_ID);
    for(i=0;i<4;i++)
    {
        IData[i] = SPIx_Receive_byte();
    }
    AT45DBXX_Disable;           //Chip Select driven high
}
Definitions:
void SPIx_Send_byte(u16 data){
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx,data);
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    SPI_I2S_ReceiveData(Open207V_SPIx);
}

u16 SPIx_Receive_byte(void){
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx,0x00);
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    return SPI_I2S_ReceiveData(Open207V_SPIx);
}
As you can see, the SPIx_Send_byte code not only sends the command but also receives information that won't be used.
Can someone help me understand why that needs to be done, and why it's necessary to send the 0x00 byte to actually receive the data?
Thanks!
SPI is a full-duplex, bidirectional bus where data is sent to the slave and received from the slave at the same time. Your SPI controller doesn't know whether a given byte is going from the master, from the slave, or both. Therefore, whenever you send a byte, you must also read a byte, if only to throw it away. By the same token, you cannot receive a byte without sending one, even if the slave will throw it away.
Take a look at Wikipedia.
So, what your code is doing is:
Sending Read_ID to the slave.
Reading and throwing away the byte simultaneously shifted out of the slave.
Writing 0x00 to the slave to clock a byte of data out of the slave.
Reading the data byte that was simultaneously shifted out of the slave.
Looping back to #3.
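Put differently, the two helpers in the question are really one full-duplex exchange; a merged sketch using the same calls as the question's code makes that explicit:

u16 SPIx_Transfer_byte(u16 tx)
{
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_TXE)==RESET);
    SPI_I2S_SendData(Open207V_SPIx, tx);           /* tx shifts out...   */
    while(SPI_I2S_GetFlagStatus(Open207V_SPIx, SPI_I2S_FLAG_RXNE)==RESET);
    return SPI_I2S_ReceiveData(Open207V_SPIx);     /* ...as rx shifts in */
}

/* SPIx_Send_byte(d)   is SPIx_Transfer_byte(d) with the result ignored.
   SPIx_Receive_byte() is SPIx_Transfer_byte(0x00): the dummy 0x00 only
   generates the clock pulses the slave needs to shift its data out.    */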
By the way, such questions would be better suited to the EE Stack Exchange, as this is more about the hardware protocol than about programming.
